Niko Matsakis: Claiming, auto and otherwise

This blog post proposes adding a third trait, Claim, that would live alongside Copy and Clone. The goal of this trait is to improve Rust’s existing split, where types are categorized as either Copy (for “plain old data”1 that is safe to memcpy) or Clone (for types that require executing custom code or which have destructors). This split has served Rust fairly well but also has some shortcomings that we’ve seen over time, including maintenance hazards, performance footguns, and (at times quite significant) ergonomic pain and user confusion.

TL;DR

The proposal in this blog post has three phases:

  1. Adding a new Claim trait that refines Clone to identify “cheap, infallible, and transparent” clones (see below for the definition, but it explicitly excludes allocation). Explicit calls to x.claim() are therefore known to be cheap and easily distinguished from calls to x.clone(), which may not be. This makes code easier to understand and addresses existing maintenance hazards (obviously we can bikeshed the name).
  2. Modifying the borrow checker to insert calls to claim() when using a value from a place that will be used later. So given e.g. a variable y: Rc<Vec<u32>>, an assignment like x = y would be transformed to x = y.claim() if y is used again later. This addresses the ergonomic pain and user confusion of reference-counted values in rust today, especially in connection with closures and async blocks.
  3. Finally, disconnect Copy from “moves” altogether, first with warnings (in the current edition) and then errors (in Rust 2027). In short, x = y would move y unless y: Claim. Most Copy types would also be Claim, so this is largely backwards compatible, but it would let us rule out cases like y: [u8; 1024] and also extend Copy to types like Cell<u32> or iterators without the risk of introducing subtle bugs.

For some code, automatically calling Claim may be undesirable. For example, some data structure definitions track reference count increments closely. I propose to address this case by creating an “allow-by-default” automatic-claim lint that crates or modules can opt into so that all “claims” can be made explicit. This is more-or-less the profile pattern, although I think it’s notable here that the set of crates which would want “auto-claim” do not necessarily fall into neat categories, as I will discuss.

Step 1: Introducing an explicit Claim trait

Quick, reading this code, can you tell me anything about its performance characteristics?

tokio::spawn({
    // Clone `map` and store it into another variable
    // named `map`. This new variable shadows the original.
    // We can now write code that uses `map` and then go on
    // using the original afterwards.
    let map = map.clone();
    async move { /* code using map */ }
});

/* more code using map */

Short answer: no, you can’t, not without knowing the type of map. The call to map.clone() may be cloning a large map or merely incrementing a reference count; you can’t tell.

One-clone-fits-all creates a maintenance hazard

When you’re in the midst of writing code, you tend to have a good idea whether a given value is “cheap to clone” or “expensive”. But this property can change over the lifetime of the code. Maybe map starts out as an Rc<HashMap<K, V>> but is later refactored to HashMap<K, V>. A call to map.clone() will still compile but with very different performance characteristics.

In fact, clone can have an effect on the program’s semantics as well. Imagine you have a variable c: Rc<Cell<u32>> and a call c.clone(). Currently this creates another handle to the same underlying cell. But if you refactor c to Cell<u32>, that call to c.clone() is now creating an independent cell. Argh. (We’ll see this theme, of the importance of distinguishing interior mutability, come up again later.)
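
Here is that hazard in miniature (a self-contained sketch you can run as-is):

use std::cell::Cell;
use std::rc::Rc;

fn main() {
    let c: Rc<Cell<u32>> = Rc::new(Cell::new(0));
    let d = c.clone(); // another handle to the *same* cell
    d.set(1);
    assert_eq!(c.get(), 1); // the write is visible through both handles

    let c: Cell<u32> = Cell::new(0); // after the refactor...
    let d = c.clone(); // ...this now creates an *independent* cell
    d.set(1);
    assert_eq!(c.get(), 0); // the original is unaffected
}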

Proposal: an explicit Claim trait distinguishing “cheap, infallible, transparent” clones

Now imagine we introduced a new trait Claim. This would be a subtrait of Clone that indicates that cloning is:

  • Cheap: Claiming should complete in O(1) time and avoid copying more than a few cache lines (64-256 bytes on current architectures).
  • Infallible: Claim should not encounter failures, even panics or aborts, under any circumstances. Memory allocation is not allowed, as it can abort if memory is exhausted.
  • Transparent: The old and new value should behave the same with respect to their public API.

The trait itself could be defined like so:2

trait Claim: Clone {
    fn claim(&self) -> Self {
        self.clone()
    }
}
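
To give a sense of which types would opt in, here is an illustrative sketch (the exact set of impls would be part of the bikeshed):

impl<T> Claim for std::rc::Rc<T> {}    // refcount bump: cheap, infallible, transparent
impl<T> Claim for std::sync::Arc<T> {} // likewise, with an atomic refcount
impl Claim for u32 {}                  // small scalars qualify too
// ...but not Vec<T>, String, or HashMap<K, V>, since cloning those allocates.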

Now when I see code calling map.claim(), even without knowing what the type of map is, I can be reasonably confident that this is a “cheap clone”. Moreover, if my code is refactored so that map is no longer ref-counted, I will start to get compilation errors, letting me decide whether I want to clone here (potentially expensive) or find some other solution.

Step 2: Claiming values in assignments

In Rust today, values are moved when accessed unless their type implements the Copy trait. This means (among other things) that given a ref-counted map: Rc<HashMap<K, V>>, using the value map will mean that I can’t use map anymore. So e.g. calling some_operation(map) gives my handle to some_operation, preventing me from using it again.
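
Here is that situation as a minimal, self-contained sketch (some_operation is a hypothetical stand-in):

use std::collections::HashMap;
use std::rc::Rc;

fn some_operation(map: Rc<HashMap<String, u32>>) {
    // imagine this stores `map` somewhere
    let _ = map;
}

fn main() {
    let map = Rc::new(HashMap::new());
    some_operation(map); // `map` is moved into the call here
    // some_operation(map); // error[E0382]: use of moved value: `map`
}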

Not all memcopies should be ‘quiet’

The intention of this rule is that something as simple as x = y should correspond to a simple operation at runtime (a memcpy, specifically) rather than something extensible. That, I think, is laudable. And yet the current rule in practice has some issues:

  • First, x = y can still result in surprising things happening at runtime. If y: [u8; 1024], for example, then a few simple calls like process1(y); process2(y); can easily copy large amounts of data (you probably meant to pass that by reference).
  • Second, seeing x = y.clone() (or even x = y.claim()) is visual clutter, distracting the reader from what’s really going on. In most applications, incrementing ref counts is simply not interesting enough to need to be called out so explicitly.

Some things that should implement Copy do not

There’s a more subtle problem: the current rule means adding Copy impls can create correctness hazards. For example, many iterator types like std::ops::Range<u32> and std::slice::Iter<'_, u32> could well be Copy, in the sense that they are safe to memcpy. And that would be cool, because you could put them in a Cell and then use get/set to manipulate them. But we don’t implement Copy for those types because it would introduce a subtle footgun:

let mut iter0 = vec.iter();
let mut iter1 = iter0;
iter1.next(); // does not affect `iter0`

Whether this is surprising or not depends on how well you know Rust – but definitely it would be clearer if you had to call clone explicitly:

let mut iter0 = vec.iter();
let mut iter1 = iter0.clone();
iter1.next();

Similar considerations are the reason we have not made Cell<u32> implement Copy.
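
The Cell connection is that Cell::get is only available when the contents are Copy, so a Copy iterator could be stored in a Cell and manipulated through get/set. A sketch of the mechanism, using a type that is Copy today:

use std::cell::Cell;

fn main() {
    let n = Cell::new(0u32); // u32: Copy, so `get` is available
    n.set(n.get() + 1);
    assert_eq!(n.get(), 1);

    let _r = Cell::new(0..10u32); // storing a `Range` is allowed, but...
    // let _ = _r.get(); // ...error: the trait bound `Range<u32>: Copy` is not satisfied
}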

The clone/copy rules interact very poorly with closures

The biggest source of confusion when it comes to clone/copy, however, is not about assignments like x = y but rather closures and async blocks. Combining ref-counted values with closures is a big stumbling block for new users. This has been true for as long as I can remember. Here for example is a 2014 talk at Strangeloop in which the speaker devotes considerable time to the “accidental complexity” (their words, but I agree) they encountered navigating cloning and closures (and, I will note, how the term clone is misleading because it doesn’t mean a deep clone). I’m sorry to say that the situation they describe hasn’t really improved much since then. And, bear in mind, this speaker is a skilled programmer. Now imagine a novice trying to navigate this. Oh boy.

But it’s not just beginners who struggle! In fact, there isn’t really a convenient way to manage the problem of having to clone a copy of a ref-counted item for a closure’s use. At the RustNL unconf, Jonathan Kelley, who heads up Dioxus Labs, described how at Cloudflare they spent significant time trying to find the most ergonomic way to thread context through their codebase (and these are not Rust novices).

In that setting, they had a master context object cx that had a number of subsystems, each of which was ref-counted. Before launching a new task, they would hand out handles to the subsystems that task required (they didn’t want every task to hold on to the entire context). They ultimately landed on a setup like this, which is still pretty painful:

let _io = cx.io.clone();
let _disk = cx.disk.clone();
let _health_check = cx.health_check.clone();
tokio::spawn(async move {
    do_something(_io, _disk, _health_check)
})

You can make this (in my opinion) mildly better by leveraging variable shadowing, but even then, it’s pretty verbose:

tokio::spawn({
    let io = cx.io.clone();
    let disk = cx.disk.clone();
    let health_check = cx.health_check.clone();
    async move {
        do_something(io, disk, health_check)
    }
})

What you really want is to just write something like this, like you would in Swift or Go or most any other modern language:3

tokio::spawn(async move {
    do_something(cx.io, cx.disk, cx.health_check)
})

“Autoclaim” to the rescue

What I propose is to modify the borrow checker to automatically invoke claim as needed. So e.g. an expression like x = y would be automatically converted to x = y.claim() if y will be used again later. And closures that capture variables in their environment would respect auto-claim as well, so move || process(y) would become { let y = y.claim(); move || process(y) } if y were used again later.
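
Applied to the Cloudflare-style example from earlier, the ergonomic version would effectively expand to the explicit shadowing version, with the compiler inserting the claims (a sketch, assuming cx is used again after the spawn):

tokio::spawn({
    let io = cx.io.claim();     // inserted automatically
    let disk = cx.disk.claim(); // inserted automatically
    let health_check = cx.health_check.claim();
    async move {
        do_something(io, disk, health_check)
    }
})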

Autoclaim would not apply to the last use of a variable. So x = y only introduces a call to claim if it is needed to prevent an error. This avoids unnecessary reference counting.

Naturally, if the type of y doesn’t implement Claim, we would give a suitable error explaining that this is a move and the user should insert a call to clone if they want to make a cloned value.

Support opt-out with an allow-by-default lint

There is definitely some code that benefits from having the distinction between moving an existing handle and claiming a new one made explicit. For these cases, what I think we should do is add an “allow-by-default” automatic-claim lint that triggers whenever the compiler inserts a call to claim on a type that is not Copy. This is a signal that user-supplied code is running.

To aid in discovery, I would consider an automatic-operations lint group for these kinds of “almost always useful, but sometimes not” conveniences; effectively adopting the profile pattern I floated at one point, but just by making it a lint group. Crates could then add automatic-operations = "deny" (bikeshed needed) in the [lints] section of their Cargo.toml.

Step 3. Stop using Copy to control moves

Adding “autoclaim” addresses the ergonomic issues around having to call clone, but it still means that anything which is Copy can be, well, copied. As noted before, that implies performance footguns ([u8; 1024] is probably not something to be copied lightly) and correctness hazards (neither is an iterator).

The real goal should be to disconnect “can be memcopied” and “can be automatically copied”4. Once we have “autoclaim”, we can do that, thanks to the magic of lints and editions:

  • In Rust 2024 and before, we warn when x = y copies a value that is Copy but not Claim.
  • In the next Rust edition (Rust 2027, presumably), we make it a hard error so that the rule is just tied to the Claim trait.

At codegen time, I would still expect us to guarantee that x = y will memcpy and will not invoke y.claim(), since technically the Clone impl may not have the same behavior; it’d be nice if we could extend this guarantee to any call to clone, but I don’t know how to do that, and it’s a separate problem. Furthermore, the automatic_claims lint would only apply to types that don’t implement Copy.5

Frequently asked questions

All right, I’ve laid out the proposal, let me dive into some of the questions that usually come up.

Are you ??!@$!$! nuts???

I mean, maybe? The Copy/Clone split has been a part of Rust for a long time6. But from what I can see in real codebases and daily life, the impact of this change would be a net-positive all around:

  • Most code gets less clutter and less confusing error messages but the same great Rust taste (i.e., no impact on reliability or performance).
  • Where desired, projects can enable the lint (declaring that they care about performance as a side benefit). Furthermore, they can distinguish calls to claim (cheap, infallible, transparent) from calls to clone (anything goes).

What’s not to like?

What kind of code would #[deny(automatic_claims)]?

That’s actually an interesting question! At first I thought this would correspond to the “high-level, business-logic-oriented code” vs “low-level systems software” distinction, but I am no longer convinced.

For example, I spoke with someone from Rust For Linux who felt that autoclaim would be useful, and it doesn’t get more low-level than that! Their basic constraint is that they want to track carefully where memory allocation and other fallible operations occur, and incrementing a reference count is fine.

I think the real answer is “I’m not entirely sure”; we have to wait and see! I suspect it will be a fairly small, specialized set of projects. This is part of why I think this is a good idea.

Well my code definitely wants to track when ref-counts are incremented!

I totally get that! And in fact I think this proposal actually helps your code:

  • By setting #![deny(automatic_claims)], you declare up front the fact that reference counts are something you track carefully. OK, I admit not everyone will consider this a pro. Regardless, it’s a one-time setup cost.
  • By distinguishing claim from clone, your project avoids surprising performance footguns (this seems inarguably good).
  • In the next edition, when we no longer make Copy implicitly copy, you further avoid the footguns associated with that (also inarguably good).

Is this revisiting RFC 936?

Ooh, deep cut! RFC 936 was a proposal to split Pod (memcopyable values) from Copy (implicitly memcopyable values). At the time, we decided not to do this.7 I am even the one who summarized the reasons. The short version is that we felt it better to have a single trait and lints.

I am definitely offering another alternative aiming at the same problem identified by the RFC. I don’t think this means we made the wrong decision at the time. The problem was real, but the proposed solutions were not worth it. This proposal solves the same problems and more, and it has the benefit of ~10 years of experience.8 (Also, it’s worth pointing out that this RFC came two months before 1.0, and I definitely felt it was right to avoid derailing 1.0 with last-minute changes – stability without stagnation!)

Doesn’t having these “profile lints” split Rust?

A good question. Certainly on a technical level, there is nothing new here. We’ve had lints since forever, and we’ve seen that many projects use them in different ways (e.g., customized clippy levels or even – like the linux kernel – a dedicated custom linter). An important invariant is that lints define “subsets” of Rust; they don’t change it. Any given piece of code that compiles always means the same thing.

That said, the profile pattern does lower the cost to adding syntactic sugar, and I see a “slippery slope” here. I don’t want Rust to fundamentally change its character. We should still be aiming at our core constituency of programs that prioritize performance, reliability, and long-term maintenance.

How will we judge when an ergonomic change is “worth it”?

I think we should write up some design axioms. But it turns out we already have a first draft! Some years back Aaron Turon wrote an astute analysis in the “ergonomics initiative” blog post. He identified three axes to consider:

  • Applicability. Where are you allowed to elide implied information? Is there any heads-up that this might be happening?
  • Power. What influence does the elided information have? Can it radically change program behavior or its types?
  • Context-dependence. How much do you have to know about the rest of the code to know what is being implied, i.e. how elided details will be filled in? Is there always a clear place to look?

Aaron concluded that "implicit features should balance these three dimensions. If a feature is large in one of the dimensions, it’s best to strongly limit it in the other two." In the case of autoclaim, the applicability is high (could happen a lot with no heads up) and the context dependence is medium-to-large (you have to know the types of things and traits they implement). We should therefore limit power, and this is why we put clear guidelines on who should implement Claim. And of course for the cases where that doesn’t suffice, the lint can limit the applicability to zero.

I like this analysis. I also want us to consider “who will want to opt-out and why” and see if there are simple steps (e.g., ruling out allocation) we can take which will minimize that while retaining the feature’s overall usefulness.

What about explicit closure autoclaim syntax?

In a recent lang team meeting Josh raised the idea of annotating closures (and presumably async blocks) with some form of syntax that means “they will auto-claim the things they capture”. I find the concept appealing because I like having an explicit version of automatic syntax; also, projects that deny automatic_claims should have a lightweight alternative for cases where they want to be more explicit. However, I’ve not seen any actual specific proposal and I can’t think of one myself that seems to carry its weight. So I guess I’d say “sure, I like it, but I would want it in addition to what is in this blog post, not instead of”.

What about explicit closure capture clauses?

Ah, good question! It’s almost like you read my mind! I was going to add to the previous question that I do like the idea of having some syntax for “explicit capture clauses” on closures.

Today, we just have || $body (which implicitly captures paths in $body in some mode) and move || $body (which implicitly captures paths in $body by value).

Some years ago I wrote a draft RFC in a hackmd that I still mostly like (I’d want to revisit the details). The idea was to expand move to let it be more explicit about what is captured. So move(a, b) || $body would capture only a and b by value (and error if $body references other variables). But move(&a, b) || $body would capture a = &a. And move(a.claim(), b) || $body would capture a = a.claim().

This is really attacking a different problem, the fact that closure captures have no explicit form, but it also gives a canonical, lighter-weight pattern for “claiming” values from the surrounding context.

How did you come up with the name Claim?

I thought Jonathan Kelley suggested it to me, but reviewing my notes I see he suggested Capture. Well, that’s a good name too. Maybe even a better one! I’ve already written this whole damn blog post using the name Claim, so I’m not going to go change it now. But I’d expect a proper bikeshed before taking any real action.


  1. I love Wikipedia (of course), but using the name passive data structure (which I have never heard before) instead of plain old data feels very… well, very Wikipedia↩︎

  2. In point of fact, I would prefer if we could define the claim method as “final”, meaning that it cannot be overridden by implementations, so that we would have a guarantee that x.claim() and x.clone() are identical. You can do this somewhat awkwardly by defining claim in an extension trait, like so, but it’d be a bit embarrassing to have that in the standard library. ↩︎
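
For reference, a sketch of that extension-trait trick (using the names from this post): the claim method lives in a blanket-implemented extension trait, so no other impl can exist and x.claim() is guaranteed to be x.clone().

trait Claim: Clone {}

trait ClaimExt: Claim {
    // The blanket impl below is the only impl of `ClaimExt`,
    // so this default body can never be overridden.
    fn claim(&self) -> Self {
        self.clone()
    }
}

impl<T: Claim> ClaimExt for T {}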

  3. Interestingly, when I read that snippet, I had a moment where I thought “maybe it should be async move { do_something(cx.io.claim(), ...) }?”. But of course that won’t work, that would be doing the claim in the future, whereas we want to do it before. But it really looks like it should work, and it’s good evidence for how non-obvious this can be. ↩︎

  4. In effect I am proposing to revisit the decision we made in RFC 936, way back when. Actually, I have more thoughts on this, I’ll leave them to a FAQ! ↩︎

  5. Oooh, that gives me an idea. It would be nice if in addition to writing x.claim() one could write x.copy() (similar to iter.copied()) to explicitly indicate that you are doing a memcpy. Then the compiler rule is basically that it will insert either x.claim() or x.copy() as appropriate for types that implement Claim↩︎

  6. I’ve noticed I’m often more willing to revisit long-standing design decisions than others I talk to. I think it comes from having been present when the decisions were made. I know most of them were close calls and often began with “let’s try this for a while and see how it feels…”. Well, I think it comes from that and a certain predilection for recklessness. 🤘 ↩︎

  7. This RFC is so old it predates rfcbot! Look how informal that comment was. Astounding. ↩︎

  8. This seems to reflect the best and worst of Rust decision making. The best because autoclaim represents (to my mind) a nice “third way” in between two extreme alternatives. The worst because the rough design for autoclaim has been clear for years but it sometimes takes a long time for us to actually act on things. Perhaps that’s just the nature of the beast, though. ↩︎

Frédéric Wang: My recent contributions to Gecko (1/3)

Introduction

Igalia has been contributing to the web platform implementations of different web engines for a long time. One of our goals is ensuring that these implementations are interoperable, by relying on various web standards and web platform tests. In July 2023, I happily joined a project that focuses on this goal, and I worked more specifically on the Gecko web engine. One year later, three new features I contributed to are being shipped in Firefox. In this series of blog posts, I’ll give an overview of those features (namely registered custom properties, content visibility, and fetch priority) and my journey to make them “ride the train” as Mozilla people say.

Let’s start with registered custom properties, an enhancement of traditional CSS variables.

Registered custom properties

You may already be familiar with CSS variables, these “dash dash” names that facilitate the maintenance of a large web site by allowing author-defined CSS properties. In the example below, the :root selector defines a variable --main-theme-color with value “blue”, which is used for the style applied to other elements via the var() CSS function. As you can see, this makes the usage of the main theme color in different places more readable and makes customizing that color much easier.

:root { --main-theme-color: blue; }
p { color: var(--main-theme-color); }
section {
  padding: 1em;
  border: 1px solid var(--main-theme-color);
}
.progress-bar {
  height: 10px;
  width: 100%;
  background: linear-gradient(white, var(--main-theme-color));
}
<section>
  <p>Loading...</p>
  <div class="progress-bar"></div>
</section>

In browsers supporting CSS variables, you should see a frame containing the text “Loading” and a progress bar, all of these components being blue:

Loading...

Having such CSS variables available is already nice, but they are lacking some features available to native CSS properties… For example, there is (almost) no syntax checking on specified values, they are always inherited, and their initial value is always the guaranteed invalid value. In order to improve on that situation, the CSS Properties and Values specification provides some APIs to register custom properties with further characteristics:

  • An accepted syntax for the property; for example, igalia | <url> | <integer>+ means either the custom identifier “igalia”, or a URL, or a space-separated list of integers.
  • Whether the property is inherited or non-inherited.
  • An initial value.

Custom properties can be registered via CSS or via a JS API, and these ways are equivalent. For example, to register --main-theme-color as a non-inherited color with initial value blue:

@property --main-theme-color {
  syntax: "<color>";
  inherits: false;
  initial-value: blue;
}
window.CSS.registerProperty({
  name: "--main-theme-color",
  syntax: "<color>",
  inherits: false,
  initialValue: "blue",
});

Interpolation of registered custom properties

By having custom properties registered with a specific syntax, we open up the possibility of interpolating between two values of the properties when performing an animation. Consider the following example, where the width of the animated div depends on the custom property --my-length. Defining this property as a length allows browsers to interpolate it continuously between 10px and 200px when it is animated:

 @property --my-length {
   syntax: "<length>";
   inherits: false;
   initial-value: 0px;
 }
 @keyframes test {
   from {
     --my-length: 10px;
   }
   to {
     --my-length: 200px;
   }
 }
 div#animated {
   animation: test 2s linear both;
   width: var(--my-length, 10px);
   height: 200px;
   background: lightblue;
 }

With non-registered custom properties, we can instead only animate discretely; --my-length would suddenly jump from 10px to 200px halfway through the duration of the animation, which is generally not what is desired for lengths.

Custom properties in the cascade

If you check the Interop 2023 Dashboard for custom properties, you may notice that interoperability was really bad at the beginning of the year, and this was mainly due to Firefox’s low score. Consequently, when I joined the project, I was asked to help with improving that situation.

Graph showing the 2023 evolution of scores and interop for custom properties

While the two registration methods previously mentioned had already been implemented, the main issue was that the CSS cascade was always treating custom properties as inherited and initialized with the guaranteed invalid value. This is indeed correct for unregistered custom properties, but it’s generally incorrect for registered custom properties!

In bug 1840478, bug 1855887, and others, I made registered custom properties work properly in the cascade, including non-inherited properties and registered initial values. But in the past, with the previous assumptions around inheritance and initial values, it was possible to store the computed values of custom properties on an element as a “cheap” map, considering only the properties actually specified on the element or an ancestor and (in most cases) only taking shallow copies of the parent’s map. As a result, when generalizing the cascade for registered custom properties, I had to be careful to avoid introducing performance regressions for existing content.

Custom properties in animations

Another area where the situation was pretty bad was animations. Not only was Firefox unable to interpolate registered custom properties between two values — one of the main motivations for the new spec — but it was actually unable to animate custom properties at all!

The main problem was that the existing animation code referred to CSS properties using an enum nsCSSPropertyID, with all custom properties represented by the single value nsCSSPropertyID::eCSSPropertyExtra_variable. To make this work for custom properties, I had to essentially replace that value with a structure containing the nsCSSPropertyID and the name of the custom property.

I uploaded patches to bug 1846516 to perform that change throughout the whole codebase, and with a few more tweaks, I was able to make registered custom properties animate discretely, but my patches still needed some polish before they could be reviewed. I had to move onto other tasks, but fortunately, some Mozilla folks were kind enough to take over this task, and more generally, complete the work on registered custom properties!

Conclusion

This was an interesting task to work on, and because a lot of the work happened in Stylo, the CSS engine shared by Servo and Gecko, I also had the opportunity to train more on the Rust programming language. Thanks to help from folks at Mozilla, we were able to get excellent progress on registered custom properties in Firefox in 2023, and this feature is expected to ship in Firefox 128!

As I said, I’ve since moved onto other tasks, which I’ll describe in subsequent blog posts in this series. Stay tuned for content-visibility, enabling interesting layout optimizations for web pages.

The Mozilla Blog: Natalia Domagala on fighting for transparent AI, the power of algorithms, climate change and more

At Mozilla, we know we can’t create a better future alone. That is why each year we highlight the work of 25 digital leaders using technology to amplify voices, effect change, and build new technologies globally through our Rise 25 Awards. These storytellers, innovators, activists, advocates, builders and artists are helping make the internet more diverse, ethical, responsible and inclusive.

This week, we chatted with Natalia Domagala, an advocate and global digital policy specialist fighting to make technology work for people and societies. We talked with Natalia about the power of algorithms, her favorite work projects, climate change, fighting misinformation and more.

The first question that I kind of wanted to ask you about was algorithms. I know you do a lot of work in that space. What do you think people overlook the most when it comes to knowing about algorithms on the internet?

Domagala: I think it’s perhaps the fact that these algorithms actually exist, because we know about this, but most people actually don’t. I think an average internet user never actually questions, what happens at the back end? How does the internet actually function? I don’t think many people ask themselves those questions. And then when they suddenly browse for a new item that they want to buy, and then suddenly they go online the next day, and they see a list of similar items suggested within their browser, or they open their social media account, they suddenly see all the relevant ads. I think a lot of people just think that this is some sort of magic, that suddenly the computer knows what they need and what they want. I think it’s a very key point to educate people about how algorithms are being used and to actually tell them that they are being used and what you see online doesn’t just magically appear in there. It’s actually there, because there are systems that are scraping your data and then analyzing your data and then feeding it back to you in a way that, for the most part, actually encourages you to buy something or give up more of your data as well.

What do you think are some easy ways that people can become more knowledgeable and become a little bit smarter about algorithms, and also data?

So I think the first thing is using a secure browser and using a browser that doesn’t necessarily store your data. I think the same goes for tools and apps that we’re using to communicate. So using apps that have higher standards of privacy, using apps that don’t actually store your data, that don’t use your data for anything. Another big one is not linking your accounts. I know this is quite inconvenient, because the way that the internet has developed is that now you can just log into so many services using one login from one social media portal, one website to everything — and that again creates that kind of feedback loop with our data that’s not very privacy-friendly. I think also using incognito mode, that could be a quick solution. The one that I think is really sort of a bit annoying, I think, to people, but is good is actually reading all the privacy policies. And if you go on a website and you’re prompted about cookies, instead of clicking accept all — which is the easiest way because that’s how the user experience is structured — actually going through it, unchecking all those cookies and saying “no, I don’t want you to store this data. I do not want you to collect this information.” There is something really important to be said about how our online experience is structured for convenience. But overall, I think just getting into the habit of not just closing those windows, but actually saying, “reject all.”

You’ve done a lot of work with algorithms and this type of privacy protection. What has been your favorite project to work on in the work that you do?

I think my favorite project was the algorithmic transparency standard, which I worked on when I was at the U.K. government. It was all about creating a standardized way for government agencies to share information about their use of AI. It’s all about making sure that this information is easily accessible and that you as a member of the public can actually go on a website and find out how the government uses algorithms about you, how this could affect you, or what kind of decisions, what kind of policy areas, what kind of contexts those algorithms are being used in. At the time when we were working on it, that was something that hadn’t actually been done on a national scale, so it was a very interesting, very exciting project because we got to create something for the first time. It was very much public facing. It was all about actually asking the people: what kind of information would you like to see from the government? How would you like this information to be presented? Is there anything else you would like to know? What kind of feedback loops should be put in place as well? So to me, that was really a way to educate the people about how and why government uses AI, but also a fantastic way for government departments to compare how they’re using this technology, and if there are any similarities, any areas for improvement, any kind of ways to actually involve external researchers into their work as well. I think it’s a win-win sort of project which I would love to see in other countries as well, but also in the private sector, because algorithms are everywhere, but we don’t actually know about this. We don’t have enough transparency when it comes to that.

Natalia Domagala at Mozilla’s Rise25 award ceremony in October 2023.

What do you think is the biggest challenge that we face in the world this year on and offline, and how do we combat it?  

I think many of the challenges that we are facing, not just this year, but in the years to come, are intertwined. For example, for me personally, one of the most pressing challenges that we’re facing is climate change. And this is something that we can actually see unraveling in front of us. Already we see all the wildfires, we see the floods, we see the droughts, hurricanes and all that. You might ask how is this connected with the challenges we’re facing in the digital world? Well, I actually think in many ways, because there is an immense environmental impact of AI, especially training and running AI systems or any advanced computing systems on the internet. They all require a great deal of power and electricity, and this intensifies greenhouse gas emissions. This leads to an increase in energy consumption as well, and eventually, that also requires more natural resources. I think as the world gets more digitized, but at the same time our resources are becoming more scarce, this is something that we will absolutely have to address. Also, in the digital world right now, there’s so much AI-powered misinformation and disinformation. I think to continue with this climate example, I think there is a lot of content out there, a lot of lobbying from groups and parties that actually have no interest in reducing emissions, no interest at all in taking environmental action, and thanks to AI, it’s actually really easy for them. It’s possible to produce and spread misinformation and disinformation at the kind of scale that we hadn’t really seen before — scale and speed as well. AI makes it very easy, and this is not just related to climate, but we can take that pattern and look at it in every aspect of our lives really, including politics with things like election related misinformation and current affairs reporting and anything really. What we see on the internet shapes human behaviors on a large and a global scale, so it’s powerful and can be of interest as well. I think the second issue related to AI is deep fakes. Generative AI creates a whole range of new challenges that we need to address, and we need to address quickly because this technology is growing, and it’s being developed again at unprecedented scale. Things like how to distinguish fake content from authentic content, or there are challenges related to intellectual property protection. There are challenges to consent. There are challenges related to things like using someone’s voice or someone’s image or someone’s creative outputs to train or develop AI without their knowledge. There are so many stories in the media about writers whose work has been used without their consent, or musicians that had their voices taken to create songs that are actually not theirs. I think this is partially due to the insufficient governance of AI and the lack of appropriate regulations to manage the digital sphere overall. In terms of how to convert these challenges, I think they are too complex, I’m afraid, to have an easy solution. One step would be to start introducing regulation of AI and regulation of digital markets that’s actually fit for purpose, that has a specific emphasis on fighting misinformation and disinformation, that has specific areas that talk about creative outputs, intellectual property, deepfakes and how to deal with them as well. Another step is education and raising public awareness, especially when it comes to AI and how it can be used, how it can be misused, how it can be manipulated. 
A very simple thing is raising the public awareness of what we are seeing online and sort of trying to build this critical thinking and the ability to challenge what we’re seeing and question the content that we’ve been given. I think this is really important, especially in the era when it’s so easy to put out anything online that looks really credible. 

Where do you draw inspiration from in the work that you are currently doing? 

I think the world around me and just understanding what’s going on, in terms of what are some of the bigger trends that are happening globally. I think AI was something that I got into relatively early in the policy sphere, just because I found it just through research from talking to a lot of people. The same with transparency. Transparency has always been there, but I think now it’s more appreciated because people are understanding the risks and mistakes. For me, personally as well, I read a lot. I read fiction, nonfiction. Everything really. A lot of the inspiration for my work and for my life comes from just going to a library or bookshop and walking around, and sort of seeing what draws my attention, and then, thinking how I can relate that into my life or my work. Also, big conferences and gatherings, but especially the ones that bring in people from different areas. I think that’s where a lot of creativity and a lot of productive collaborations can actually happen if you just have a group of people who are passionate about something that come from completely different areas and just put them in one room, those kinds of meetups or conferences were something that I definitely benefited from in terms of shaping my ideas, or even bouncing ideas off other people. Traveling and looking at different parts of the world, I think, especially in the AI policy space. It’s really interesting to see how different countries are approaching that, but also just from a cultural perspective, what’s the approach to data and privacy? What’s the approach to sharing your information? What’s your level of trust in the government? What’s your level of trust in corporations? And I think a lot of that you can really observe when you travel. I did anthropology as my first as my bachelor’s degree, so I have a lot of curiosity in terms of exploring other parts of the world, exploring other cultures and trying to understand how people live, and what is it that we can learn from them as well.

We started Rise25 to celebrate Mozilla’s 25th anniversary. What do you hope people are celebrating in the next 25 years?

I hope that we are celebrating an internet that’s democratic and serves the interests of people and communities rather than big corporations. I also hope we are celebrating the existence of the kind of AI that makes our lives easier by eliminating the burdensome, repetitive, time-consuming tasks that no one wants to do; the kind of AI that’s actually safe, well regulated, transparent, and built and deployed with the highest ethical principles in mind; a positive part of our lives that makes our everyday experience smoother and frees our time to do things that we actually want to do without compromising our data, privacy or cybersecurity.

What gives you hope about the future of our world?

Mainly people. I feel like as the challenges that we are facing in the world are getting worse, the grassroots solutions that come from the people are getting more radical or getting more innovative and effective, and that gives me a lot of hope. Initiatives like Rise25 as well give me a lot of hope. You can see all of those wonderful people making things happen against all odds, really driving positive change in the kind of conditions that are not actually set up for them to succeed and people that are challenging the status quo in the work that they’re doing, even if it’s unpopular. That gives me a lot of hope. I’m also very impressed by the younger generation and their activism, the way they refuse to submit, and the way they unapologetically decide to fight for what they believe is right. I think that’s definitely something that millennials didn’t have the courage to do, and it’s incredible to see that now the generations that come after us are a little bit more ready to change the world the way that perhaps we didn’t. That gives me a lot of hope as well, the way that they just go for it and take action instead of waiting for governments or corporations or anyone else to fix it. They just believe that they can fix it themselves, and that’s really optimistic and really reassuring.



The Mozilla Blog: Firefox tips and tricks for online shopping

My relationship with online shopping is ever evolving. It’s either a little too convenient, extremely gratifying or entirely too much fun. I’m an eBay hawk, a casual Amazon browser and a Sephora VIB member. I recently joined the team at Fakespot though, which changed the game for my online shopping habits. Suddenly, browsing these retailers became professional, not just personal. I now second guess a product with a poor product grade rating (more on that below) and am filled with glee at the sight of a Fakespot-approved Shopify site. 

Working at Mozilla has taken my shopping habits to new heights. I feel armed with information and shortcuts and have a world of add-ons at my fingertips. Keep reading for more on Fakespot and other hacks to shop smarter on Firefox.

Fakespot

Download Fakespot’s add-on to find out if product reviews are reliable. Let Fakespot’s AI technology sort through hundreds and thousands of reviews and detect unreliable reviews and potential scammers. Fakespot is available on all my favorite retailers — Amazon, eBay, Sephora, Best Buy, Home Depot, and more. If you’re shopping on Amazon, don’t forget to sort products by review reliability — or cut through the clutter altogether and hide products with potential fake reviews. Don’t sleep on Fakespot’s technology to detect the reliability of reviews on TripAdvisor and Yelp as well.

Three easy ways to use Fakespot today: 

  1. Download Fakespot on desktop and let it get to work. 
  2. Go to fakespot.com and copy and paste your product into the Fakespot analyzer. 
  3. Download the Fakespot app on your mobile device and start shopping securely.

Shopping extensions

Add your favorite shopping extension for deals and savings. There are nearly 2,000 shopping extensions available, offering cash back and finding not-so-obvious savings across a variety of sites. They’re perfect for anyone looking to stretch their budget without the hassle of manually searching for coupon codes. (As is always the case with third-party software, make sure you trust the developer before installing.)

Credit card autofill

For those moments when you’re shopping from bed and you’re too exhausted to get up and grab your wallet from across the room (just me?) — Firefox lets you automatically fill in your saved information for payment methods. Don’t worry, your CVV number is not saved so keep that safe in your mind. Follow the steps here to use this feature.

Private browsing mode

Looking for gifts? Firefox’s private browsing mode with enhanced tracking protection has you covered. It erases your browsing history and any tracking cookies from websites once you close the window, so your gift ideas stay hidden from those you’re shopping for. 

Firefox private browsing window with a purple background, featuring the Firefox logo and a search bar that reads "Search with Google or enter address."

There are endless ways to make Firefox your own, whether you’re a shopper, a gamer, a creative, a minimalist, a (tab) maximalist or however you choose to navigate the internet. We want to know how you customize Firefox. Let us know and tag us on X or Instagram at @Firefox. 



Mozilla Thunderbird: Maximize Your Day: Treat Your Email Like Laundry

Imagine for a moment if we treated email the same way we treat our laundry. It might look something like this: At least ten times an hour, we’d look in the dryer, sigh at the mix of wet and dry clothes, wonder where the shirt we needed was, and then close the dryer door again without emptying a thing. Laura Mae Martin, author of Uptime: A Practical Guide to Personal Productivity and Wellbeing, has a better approach. Treat your email like you would ideally treat your laundry.

How do we put this metaphor to work in our inboxes? Martin has some steps for getting the most out of this analogy, and the first is to set aside a specific time in your day to tackle your inbox. This is the email equivalent of emptying your dryer, not just looking in it, and sorting the clothes into baskets. You’re already setting future you up for a better day with this first step!

The Process

At this set time, you’ll have a first pass at everything in your inbox, or as much as you can, sorting your messages into one of four ‘baskets’ – Respond, To Read, Revisit, and Relax (aka, the archive where the email lives once you’ve acted on it from a basket, and the trash for deleted emails). Acting on those messages comes after the sorting is done. So instead of ‘touching’ your email a dozen times with your attention, you only touch it twice: sorting it, and acting on it.

Let’s discuss those first three baskets in a little more detail.

First, the ‘Respond’ basket is for emails that require a response from you, which need you and your time to complete. Next, the ‘To Read’ basket is for emails that you’d like to read for informative purposes, but don’t require a response. Finally, the ‘Revisit’ basket is for emails where you need to respond but can’t right now because you’re waiting for the appropriate time, a response from someone, etc.

Here’s more info on how treating your email like laundry looks in your inbox. You don’t have separate dryers for work clothes and personal clothes, so ideally you want your multiple inboxes in one place, like Thunderbird’s Unified Folders view. The baskets (Respond, To Read, Revisit) are labels, tags, or folders. Unread messages should not be in the same place with sorted email; that’s like putting in wet clothes with your nice, dry laundry!

Baskets and Batch Tasking

You might be wondering “why not just use this time to sort AND respond to messages?” The answer is that this kind of multitasking saps your focus, thanks to something called attention residue. Hopping between sorting and replying – and increasing the chance of falling down attention rabbit holes doing the latter – makes attention residue thicker, stickier, and ultimately harder to shake. Batch tasking, or putting related tasks together for longer stretches of time, keeps potentially distracting tasks like email in check. So, sorting is one batch, responding is another, etc. No matter how much you’re tempted, don’t mix the tasks!

Putting It Into Practice

You know why you should treat your email like laundry, and you know the process. Here’s some steps for day one and beyond to make this efficient approach a habit.

One-time Setup:

  • Put active emails in your inbox in one of the first three baskets (Respond, To Read, Revisit)
  • If an email doesn’t need one of these baskets, archive or delete it

Daily Tasks

  • Remember the 4 Baskets are tasks to be done separately
  • Pick a time to sort your email each day – at least once, and hopefully no more than two or three more times. Remember, this is time ONLY to sort emails into your baskets.
  • Give future you the gift of a sorted inbox
  • Find and schedule time during the day to deal with the baskets – but only one basket at a time! Have slots just for responding, reading, or checking on the progress of your Revisit emails. Think of your energy flow during the day, and assign your most mentally strenuous baskets to your peak energy times.

One Last Fold

Thanks for joining us in our continuing journey to turn our inboxes, calendars, and tasks lists into inspiring productivity tools instead of burdens. We know opening our inboxes can sometimes feel overwhelming, which makes it easier for them to steal our focus and our time. But if you treat your email like laundry, this chore can help make your inbox manageable and put you in control of it, instead of the other way around.

We’re excited to try this method, and we hope you are too. We’re also eager to try this advice with our actual laundry. Watch out, inboxes and floor wardrobes. We’re coming for you!

Until next time, stay productive!

Want more email productivity tips? Read this:


This Week In Rust: This Week in Rust 552

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is yazi, a blazing fast terminal file manager based on async I/O.

Despite a lamentable lack of suggestions, llogiq is content with his choice.

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

470 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team RFCs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-06-19 - 2024-07-17 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If there’s a backdoor attack lurking in the crates ecosystem, then it’s lurking pretty deep at present. The popular crates that we all rely on day to day generally appear to be what they say they are.

Adam Harvey on his blog

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Nightly: CSS Rules in your Firefox DevTools – These Weeks in Firefox: Issue 163

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Pier Angelo Vendrame
  • Sebastian Zartner [:sebo]
  • Sukhmeet [:sukh]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • As part of the cross-browser compatibility improvements for Manifest Version 3 extensions landed in Firefox 128:
    • Content scripts can now be executed in the webpage global using the execution world MAIN (which is now supported by the scripting API and content scripts declared in the manifest.json file) and not be blocked by a strict webpage CSP (Bug 1736575)
      • NOTE: content scripts executed in the MAIN world do not have access to any WebExtensions API.
    • Added support for domainType (“firstParty”, “thirdParty”) DNR rule conditions (Bug 1797408)
    • Performance-related improvement when evaluating DNR rules using requestDomains and initiatorDomains conditions (Bug 1853569)
    • Event pages will not be suspended if API calls that require user actions (e.g. permissions.request API calls) are still pending (Bug 1844044 / Bug 1874406)
    • Event pages’ persisted listeners removed through the removeListener method will stay persisted and can respawn the event page after it has been suspended (Bug 1869125)
      • NOTE: API events persisted listeners will instead be completely removed (not persisted anymore and not respawning the event page anymore) if the extension event page scripts do not add the listeners again (by not calling addListener) when the event page is started again.

Developer Tools

DevTools
  • Sebastian Zartner [:sebo] added a warning when properties that only apply to replaced elements are used on non-replaced elements (#1583903), and when column-span is used on elements outside of multi-column containers (#1848705)
  • Thanks to Valentin Gosu [:valentin] for fixing an issue that caused service worker requests to fail when Responsive Design Mode was enabled (#1885308)
  • Thanks to James Teh [:Jamie] for fixing an accessibility issue in the DevTools accessibility tree (#1898661)
  • Alex fixed an issue that could prevent DevTools from opening (#1898490)
  • Julian fixed an issue that was preventing DevTools from consuming source map files when they required credentials (#1899389)
  • Nicolas tweaked the filters button colors in Console and Netmonitor so their states should be more explicit (#1590432)
    • (Screenshots: the Network Monitor filter bar in the light and dark themes, filtering for HTML, CSS, JS, XHR and Images. Filter to your heart's content!)

  • Nicolas added @property rules (MDN) information in the var() tooltip (#1899489)
    • (Screenshot: the var() tooltip in the Rules view showing the value of a CSS variable, "gold".)
    • And we now indicate when a custom property declaration is invalid because its value does not match the registered custom property definition (#1866712)
      • (Screenshot: an error icon with a tooltip reading 'Property value does not match expected "" syntax'.)
  • Nicolas added support for @starting-style rules (MDN) in the Rules view (#1892192)
  • Nicolas added support for @scope rules (MDN) in the Rules view (#1893593)
    • (Screenshot: the Rules view using @scope at-rules to style li::marker and li elements differently in different scopes.)
WebDriver BiDi
  • External:
    • Thanks to James Hendry who removed the deprecated desiredCapabilities and requiredCapabilities from geckodriver (#1823907)
  • Related to that, Henrik updated the default value of the remote.active-protocols preference to “1”, which means that CDP is now disabled by default (#1882089)
  • Henrik implemented support for the http and bidi flags on the WebDriver Session, which make it possible to know whether a specific session is using classic, BiDi or both (#1884090 and #1898719)
  • Julian added support for several arguments of the network.continueRequest command. Clients can now update headers, cookies, method and post body of an intercepted request. This also fixes a bug where intercepted requests in the beforeRequestSent phase could still be sent to the server (#1850680)
  • Sasha fixed the order in which we emit network events in case of redirects. Our behavior now correctly matches the specifications (#1879580)
  • Sasha implemented the userContext argument for the permissions.setPermission command, which allows updating a permission only for a specific user context (#1894217)
  • Henrik improved the way we handle error pages in the navigation helpers used by WebDriver BiDi (#1878690)
  • Sasha updated the exception thrown when the input.setFiles command is used with a file which doesn’t exist. (#1887644)
  • Sasha updated our vendored version of puppeteer to v22.9.0. As usual we try to keep up to date with Puppeteer releases to benefit from their latest test changes and improvements in BiDi support. (#1897183)

Lint, Docs and Workflow

Migration Improvements

Performance

Profile Management

  • Initial work on the toolkit profile service and profile database is in review. Engineering work is pausing for two weeks to free up engineers for some Review Checker work.

Search and Navigation

  • HTTPS trimming in the address bar
    • Marco fixed a bug related to displaying the scheme for RTL (right-to-left) domains (1862404)
  • Google account signed-in status
    • Stephanie landed patches enabling telemetry indicating whether the client was signed in to a Google account at the time of a SERP load (1877494, 1892332)
  • Search Config v2
    • Mark & Mandy have been hard at work on the new search config over the past several months, and it is now permanently enabled (1900638)
    • Standard8 resolved an incident where one of our Glean pings wasn’t being sent due to the new search config (1901057, 1901208)
  • Bug fixes, clean up and intermittents

Storybook/Reusable Components

The Talospace Project: Chromium Power ISA patches ... from Solid Silicon

It appears that some of the issues observed by me and others with Chromium on Fedora ppc64le may in fact be due to an incomplete patch set, which is now available on Solid Silicon's Gitlab. If your distro doesn't support this, now you have an upstream to point them at or build your own. They include the Ungoogled changes as well, even though I retain my philosophical objections to Chromium, and still use Firefox personally (I've got to get back on the horse and resume maintaining my personal builds now that I've got Plasma 6 back running on Xorg again).

Oh, yeah, it really is that Solid Silicon. You can make your own speculations from the commit log, though regardless of whether Solid Silicon is truly a separate concern or a Raptor subsidiary, it wouldn't be surprising that Raptor resources are assisting since they've kind of bet the store on the S1.

Timothy Pearson's comments in the Electron Github suggest that Google has been pretty resistant to incorporating support for architectures outside of their core platforms. This is not a wholly unreasonable position on Google's part but it's not a particularly charitable one, and unlike Mozilla, the Chrome team doesn't really have the concept of a tier-3 build nor any motivation to. That kind of behaviour is all the more reason not to encourage browser monocultures because it's not just the layout engine that causes vendor lock-in. Fortunately V8, the JavaScript engine, is maintained separately, and reportedly has been more accommodating presumably because of things like Node.js on IBM hardware (even IBM i via PASE!).

Mozilla is much more accepting of this as long as regressions aren't introduced. This is why TenFourFox patches were largely not upstreamed, since they would potentially cause problems with Cocoa widgets in later versions of macOS, though I would upstream what patches were generally applicable. The main reason I'm still maintaining the Firefox ppc64le JIT patches outside the tree is that I still can't solve these recent startup crashes deep within Wasm code, which largely limits me to the Baseline Compiler and thus is not suitable for landing yet (we'd have to also upstream pref changes that would adversely affect tier-1 until this is fixed). I still intend to pull these patches up to the next ESR, especially since Github is glacially slow now without a JIT and it's affecting my personal ability to do other tasks. Maybe I should be working on something like rr for ppc64le at the same time, because stepping through deeply layered code in gdb is a great way to go stark raving mad.

Firefox Developer Experience: Firefox DevTools Newsletter — 127

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 127 Nightly release cycle.

Performance project

If you’ve been reading us for a bit, you are now well aware that we’re focusing on performance for a few months to make our tools as fast as they can be.

We made displaying rules in the Inspector 5% faster for the common case, and even 600 times faster on pages with very large stylesheets (going from ~3 seconds to ~5 milliseconds in a page using Tailwind)! This was made possible by moving away from our DevTools-specific, JS-written CSS lexer to a Rust-based implementation. In various places of the codebase, we need to know the different “parts” of a CSS selector or a property declaration. To have a reliable way of analyzing a given CSS snippet, we use a CSS lexer which computes a sequence of tokens describing the different parts of the snippet. This tokenization is also done at the CSS engine level when a stylesheet is parsed, as described in the CSS Syntax Module Level 3 specification. Since we were trying to do the same thing as the engine, and given that we do have access to the engine machinery, it felt silly not to share the same code. This performance project was a nice opportunity to integrate with the Rust-based implementation the engine is using and ditch our JS implementation.
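
As a rough illustration of what tokenizing CSS with engine-side machinery looks like, here is a minimal sketch using the cssparser crate (the Rust CSS tokenizer from Servo that Gecko builds on). This shows only the crate's public API on a toy snippet, not the actual DevTools integration:

// Minimal sketch; assumes cssparser (e.g. cssparser = "0.31") in Cargo.toml.
use cssparser::{Parser, ParserInput};

fn main() {
    let css = "color: gold; margin: 0 auto";
    let mut input = ParserInput::new(css);
    let mut parser = Parser::new(&mut input);
    // next() yields one token at a time until the input is exhausted.
    while let Ok(token) = parser.next() {
        println!("{:?}", token);
    }
}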

Oh my bugs

As temperatures rise in the Northern hemisphere, we’re entering bug season, and unfortunately, our project isn’t immune to that. First, we identified and addressed a pretty severe race condition that could result in the toolbox not opening at all (#1898490). We also got reports of the Debugger crashing (#1891699), as well as issues in the Console when displaying wasm stacktraces (#1888645). Hopefully everything is now working correctly.

If those could be thought of as “killer bee” bugs, we also tackled some annoying “midge” bugs:

  • The Network panel could be missing requests made from iframes at the very end of their lifecycle, for example in the unload event (#1887852)
  • When using the node picker, you can hold the Shift key to be able to retrieve elements that are not receiving mouse events (e.g. having pointer-events: none declaration). When using this feature, our heuristic should now better pick the “deepest” element under the mouse (#1889500)
  • Did you know that you can nest @keyframes rules in other at-rules? In such cases, we’re now properly detecting those rules and displaying them in the Rules view, like non-nested keyframes rules (#1894603)
  • Firefox 125 added support for the Popover API, but it wasn’t possible to inspect their ::backdrop pseudo-element; it’s now fixed.
  • Finally, last year, on OSX, we changed the location for screenshots taken in DevTools from Downloads to Pictures. This was confusing for some people, as Firefox Screenshots still put them in the Downloads folder, so we reverted our change.

And that’s it for this month folks, thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

The Mozilla Blog: Introducing Anonym: Raising the bar for privacy-preserving digital advertising

Mozilla has acquired Anonym, a trailblazer in privacy-preserving digital advertising. This strategic acquisition enables Mozilla to help raise the bar for the advertising industry by ensuring user privacy while delivering effective advertising solutions.

The online advertising industry is undergoing a significant transformation. With growing consumer concerns and increasing scrutiny from regulators, it’s evident that current data practices are excessive and unsustainable. We are at the forefront of a pivotal shift in how privacy and advertising coexist, reshaping the digital landscape for advertisers, platforms, and consumers.

Amidst this moment of change, Anonym stands out for its unique privacy-preserving technology. By securely combining encrypted data sets from platforms and advertisers, Anonym enables scalable, privacy-safe measurement and optimization of advertiser campaigns, thereby leading a shift toward a more sustainable advertising ecosystem.  

Here’s how it works:

  • Secure Environment: Data sets are matched in a highly secure environment, ensuring advertisers, publishers, and Anonym don’t access any user level data.
  • Anonymized Analytics: The process results in anonymized insights and models, helping advertisers measure and improve campaign performance while safeguarding consumer privacy.
  • Differential Privacy Algorithms: These algorithms add “noise” to the data, protecting it from being traced back to individual users.

This acquisition marks a significant step in addressing the urgent need for privacy-preserving advertising solutions. By combining Mozilla’s scale and trusted reputation with Anonym’s cutting-edge technology, we can enhance user privacy and advertising effectiveness, leveling the playing field for all stakeholders.

Anonym was founded with two core beliefs: First, that people have a fundamental right to privacy in online interactions and second, that digital advertising is critical for the sustainability of free content, services and experiences. Mozilla and Anonym share the belief that advanced technologies can enable relevant and measurable advertising while still preserving user privacy.

As we integrate Anonym into the Mozilla family, we are excited about the possibilities this partnership brings. While Anonym will continue to serve its customer base, together, we are poised to lead the industry toward a future where privacy and effective advertising go hand in hand, supporting a free and open internet.

About Anonym: Anonym was founded in 2022 by former Meta executives Brad Smallwood and Graham Mudd. The company was backed by Griffin Gaming Partners, Norwest Venture Partners, Heracles Capital as well as a number of strategic individual investors.

The post Introducing Anonym: Raising the bar for privacy-preserving digital advertising appeared first on The Mozilla Blog.

Don Marti: happy Father’s Day, here’s a Dad joke

Ready? Joke time. Here’s an old one.

What’s the difference between a donut and a turd?

I don’t know.

Remind me never to send you out for donuts.

What reminded me of that joke is all the surveillance advertising companies going on about how surveillance advertising is so good for small businesses. But if they have so much trouble telling small businesses and fraud apart, how can they know? Maybe surveillance ads are just better for fraud. The interesting comparison is not between a legit business’s results with surveillance advertising turned on or off, because the scammers competing to reach the same customers are leaving the surveillance ads on. IMHO you have to look from the customer side. If surveillance advertising helps legit companies reach people who can benefit from their products, then people who use ad blockers or privacy tools should be less happy with the stuff they buy.

Instead, people who installed ad blockers for a study turned out to be less likely to regret their recent purchases, and that’s surprising enough to be worth digging into. Maybe it’s not fraud, just drop-shippers. Lots of drop-shippers/social media advertisers are finding existing cheap products, marking them up, and selling using surveillance ads. It’s not illegal, but the people who click the ads end up paying more money for the same stuff. Maybe the reason that the ad blocker users are happier as shoppers is that they search out and buy, say, a $20 product for $20 instead of paying a drop-shipper $99? Or maybe ad blocker users are just making fewer but better thought out purchases?

Don Marti: links for 15 June 2024

Just some reading material, more later. I did mess with the CSS on this blog a little, so pages with code on them should look a little better on small screens even if you have to scroll horizontally to see the code.

The Eclipse of the Russian Arms Market China is entering the market for traditional Russian products.

‘Devastating’ potential impact of Google AI Overviews on publisher visibility revealed (This is strange. Right at the time Google needs all the support they can get for their unpopular privacy and antitrust positions, they’re taking action against everyone else on the web. Not sure what the plan is here.)

Which top sites block AI crawlers? All in all, most sites I looked at don’t care to have their content used to train AI. (IMHO this will be a big issue with the Fediverse—currently the only way to pass a noai signal is to defederate. I made a FEP (fep-5e53) so will see what happens.)

Why First Party Data May Not Save Digital Advertising (This is why it’s going to be better to get real consent, later, from fewer people than bogus consent based on zero information about the brand or publisher.)

AI won’t kill ad agencies. Here’s why. Why? Because an agency can amortize the cost of expertise across multiple different paying clients.

United Airlines wants to show you personalized seatback ads: Here’s how to opt out (Meanwhile, other airlines are getting rid of heavy seatback entertainment systems to save fuel, since passengers are bringing devices with better screens anyway.)

“Your personal information is very important to us.” (XScreenSaver for Android has a privacy policy now.)

Economic Termites Are Everywhere [E]conomic termites…are instances of monopolization big enough to make investors a huge amount of money, but not noticeable enough for most of us. An individual termite isn’t big enough to matter, but the existence of a termite is extremely bad news, because it means there are others. Add enough of them up, and you get our modern economic experience.

Tesla may be in trouble, but other EVs are selling just fine (How much of this is the brand personality and how much is the problem that Teslas are expensive to insure? I think every car I have ever owned ended up costing a lot more in car insurance than its price.)

Facebook’s Taylor Swift Fan Pages Taken Over by Animal Abuse, Porn, and Scams (Moderation is the hard part of running any online forum, and AI moderators are the new self-driving cars.)

You Can Still Die From World War I Dangers in France’s Red Zones (This is why Europe has an AI Act. They have more important problems than building robots to take people’s art. Putting limits on luxury and counterproductive uses of AI will free up money and developer time for the stuff they really need. Before people in the USA get mad about this, remember we did it too. There’s no such thing as a 1943 Cadillac Coupe de Ville.)

We need to rewild the internet For California residents, GPC automates the request to “accept” or “reject” sales of your data, such as cookie-based tracking, on its websites. However, it isn’t yet supported by major default browsers like Chrome and Safari. Broad adoption will take time, but it’s a small step in changing real-world outcomes by driving antimonopoly practices deep into the standards stack — and it’s already being adopted elsewhere.

Frederik Braun: How I got a new domain name

Welcome! If you're reading this, you might have noticed that my blog and this post is on my new domain name frederikbraun.de.

And here is the story. The story of a young nerd in the 1990s. The story of my aunt, who went to the Miniatur Wunderland, left the …

Frederik Braun: What is mixed content?

In web security, you may have heard of "mixed content". Maybe you saw a DevTools message like this one.

Mixed Content: Upgrading insecure display request ‘http://...’ to use ‘https’.

This blog post is going to explain what "mixed content" means, its implications for your website and how to handle mixed …

Firefox UX: Coming Back to Firefox as a User Researcher

Reflecting on two years of working on the browser that first showed me the internet

Firefox illustration by UX designer Gabrielle Lussier

Last week marked two years of working on Firefox. For me, this was a return to the browser I fervently used in my early internet days (circa 2004–2011). I don’t recall exactly when I left, and whether it was abrupt or gradual, but at some point Firefox was out and Chrome was the browser on my screen. Looking back, I’m pretty sure it was notifications telling me Gmail would work better on Chrome that led me there. Oof.

I certainly wasn’t alone. The storied history of browsers (including not one, but two browser wars) is marked by intense competition and shifting landscapes. Starting around 2010–2011, as Chrome’s market share went up, Firefox’s went down.

A doorway to the internet

When I started working on Firefox, a colleague likened a browser to a doorway — you walk through several a day, but don’t think much about them. It’s a window to the internet, but it’s not the internet. It helps you search the web, but it’s not a search engine. It’s a universal product, but many struggle to describe it.

So what is it, then, and why am I so happy I get to spend my days thinking about it?

A browser is an enabler, facilitating online exploration, learning, work, communication, entertainment, shopping, and more. More technically, it renders web pages, uses code to display content, and provides navigation and organization tools that allow people to explore, interact with, and retrieve information on the web.

With use cases galore, there are challenges. It’s a product that needs to be good at many things.

To help our design, product and engineering stakeholders meet these challenges, the Firefox User Research team tackles topics including managing information in the browser (what’s your relationship to tabs?), privacy in the browser, when and how people choose browsers (if they choose at all), and why they stay or leave. Fascinating research topics feel endless in the browser world.

My introduction to browser users

For my first project at Mozilla, I conducted 17 hour-long, in-depth interviews with browser users. It was a formative introduction to how people think about and use browsers. When I look back on that study, I recall how much I learned about a product that I previously hadn’t given much thought to. Here I summarize some of those initial learnings.

Browser adoption on desktop vs mobile: Firefox is a browser that people opt-in to. Unlike other mainstream browsers, it doesn’t come pre-installed on devices. This means that users must actively choose Firefox, bypassing the default. While many people do this — close to 200 million monthly on Firefox — using the default is common, and even more so on mobile. When talking to users of various browsers, the sentiment that “I just use what came on the device” is particularly prevalent for mobile.

Why is this so? For one, people have different needs on their desktop and mobile browsers (e.g. conducting complex work vs quick searches), leading to different behaviors. The presence of stand-alone apps on mobile that help people accomplish some of the tasks they might have otherwise done in their browser (e.g. email, shopping) also differentiate the experience.

That’s not the whole story, though. Gatekeeping practices by large tech companies, such as self-preferencing and interoperability, play a role. These practices, which Europe’s Digital Markets Act and related remedies like browser choice screens aim to address, limit consumer choice and are especially potent on mobile. In my in-depth interviews, for example, I spoke with a devoted Firefox desktop user. When explaining to me why she used the default browser on her mobile phone, she held up her phone, pointing to the dock at the bottom of her home screen. She wanted quick access to her browser through this dock, and didn’t realize she could replace the default browser that came there with one of her choosing.

Online privacy dilemmas: Having worked on privacy and the protection of personal information in the past, I was keen to learn about users’ attitudes and behaviors towards online privacy. What were their stances? How did they protect themselves? My in-depth interviews revealed that attitudes and feelings range vastly: protective, indifferent, disempowered, resigned. And often, attitudes and values towards privacy don’t align with behaviors. In today’s online world, acting on your values can be hard.

The intention-action gap speaks to the many cases when our attitudes, values or goals are at odds with our behavior. While the draw of convenience and other tradeoffs are certainly at play in the online privacy gap, so too are deceptive digital designs that make it all too difficult to use the internet on your own terms. These include buried privacy settings, complex opt-out processes, and deceptive cookie banners.

Navigating online privacy risks can feel daunting and confusing — and for good reason. One participant in the interviews described it as something that she didn’t have the time or esoteric knowledge for, even though she cared about it:

“It’s so big and complicated for a user like me, you really have to put in the time to figure it out, to understand it. And I don’t have the time for that, I honestly don’t. But that doesn’t stop me from doing things online, because, how, if being online is such an important part of my day?”

On the browser side, the technical aspects of online privacy present a perennial challenge for communicating our protective measures to users. How do we communicate the safeguards we offer users in ways that are accessible and effective?

Browser recommendations: For a product that isn’t top of mind for most people, many are steered to their browsers by word of mouth and other types of recommendations. In fact, we consistently find that around one-third of our users report having recommended Firefox in the past month. That’s more people talking about browsers than I would have imagined.

The people I interviewed spoke about recommendations from family members (“Mom, you need to step up your browser game!” one participant recalled her son saying as he guided her to a new browser), tech-oriented friends, IT departments at work, computer repair shops, and online forums and other communities.

One factor behind personal recommendations is likely that most people are satisfied with their browser. Our quantitative user research team finds high levels of browser satisfaction among not only Firefox users, but the users of other popular browsers examined in their work.

Wrapping up

Coming back to Firefox involved a process of piecing together what had happened to the browser with the little fox. In doing so, I’ve learned a lot about what brings people to browsers, and away from them, and the constrained digital landscape in which these dynamics occur. The web has changed a great deal since Firefox 1.0 was released in 2004, but Mozilla’s goal of fostering an open and accessible internet remains constant.

Thank you for reviewing a draft of this post, Laura Lopez and Rosanne Scholl.

Mozilla Addons Blog: Developer Spotlight: Dedalium — turn the entire web into an RPG game

You might be scrolling through your morning news, checking email, or any other routine online moment when suddenly you notice a small winged beast slowly glide across your screen. It’s a challenge. A chance to earn more crystals. A fight to the finish, should you choose to accept the duel. Since you’re not super busy and battles only take a few seconds — and you sure could use more crystals to upgrade gear — you click the angry creature and next thing you know your Network Guardian (avatar) and opponent appear on floating battle stations exchanging blows. It’s a close contest, but soon your nemesis succumbs to his injuries. The thrill of victory is fleeting, though. Gotta get back to those emails.

 

Customize the skills, gear (and fashion!) of your own Network Guardian.

Dedalium is a novel game concept. There are a lot of browser games out there, but nothing quite like Dedalium, which turns the entire internet into a role-playing game, or RPG. You start by customizing the look and skills of your Network Guardian and then you’re ready to wait for battle invites to emerge; or you can go on the offensive and seek out challengers. Beyond battles, you’ll occasionally find crystals or loot boxes on the edges of your screen.

There’s also a solo Adventure mode featuring 100+ levels that lead to a final battle against the big boss Spamicus Wildpost, who has never been defeated since Dedalium’s beta launch last year.

“We’ve created something new and innovative,” says Dedalium co-creator Joel Corominas. “We call this concept ‘augmented web’ akin to augmented reality but within the web environment. While it may take time for players and browser users to fully appreciate, we strongly believe it will become a significant trend in the future. We are proud to have pioneered this concept and believe it adds a fun, interactive layer to web browsing.”

Dedalium is the debut title from Loycom Games, which Corominas co-founded in 2021 with his game development partner Adrián Quevedo. Loycom’s mission is to “gamify internet browsing.”

Still in beta, Dedalium is growing quickly. About 4,000 players currently engage with the game daily across various browsers. If you’re looking for an entirely unique browser gaming experience, Dedalium is definitely that. At first I was worried random game prompts would get annoying as I went about my business on the web, but to my delight I usually found myself eager to engage in a quick Dedalium detour. The game does a great job of never feeling intrusive. But even so, you can pause the game anytime and set specific websites as no-play zones.

If turning the entire web into an RPG sounds like a good time, give Dedalium a shot and good luck gathering those crystals!


Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: Dedalium — turn the entire web into an RPG game appeared first on Mozilla Add-ons Community Blog.

The Mozilla Blog: Firefox tips and tricks for creatives

On my way to the airport last week, my driver asked what I do for a living. “I’m a content creator,” I replied. “I’m the video lead at Mozilla.”

“Wow, that sounds fun,” he said.

It can be! But, like lots of other creative professions: It’s not as glamorous as it sounds. Between the emails, meetings, docs, slides, more emails, contracts, spreadsheets, more meetings… alas, our days are not filled with boundless artistry.

Thankfully, Firefox has a ton of built-in features that save me time, so I can focus on my creative work. Here are some of my favorites.

Picture-in-picture

I produce video content, so naturally my job involves watching a LOT of videos. I try to keep up with what’s trending, particularly on YouTube. With the PiP tool, I can easily pin the latest gadget reviews and tech podcasts anywhere on my screen while I’m working. Best of all, the video window floats over any app, not just Firefox—and you can even mute it and turn on captions. Look, I’m not saying you should sneak in a couple episodes of Firefox Presents during your next Zoom meeting; I’m just saying you can.

Eyedropper tool

This one frequently comes in handy for my designer colleagues. Just select the tool (under “More Tools” in the Firefox toolbar menu or under “Browser Tools” in the Tools menu), highlight any color on any website, and voilà: instant hexcode.

(Screenshot: the Firefox Eyedropper tool highlighting hex code #ff4a74 on a gradient background.)

PDF editor

Proposals, SOWs, and contracts, oh my! Creative work involves a lot of documents—which, thankfully, you can edit right inside the browser with ease.

(Screenshot: editing a PDF in the Firefox PDF viewer, with highlighted text and a fox image.)

Screenshot tool

Screenshots have never been easier with the Screenshot tool right in your toolbar. Customize your Firefox toolbar by dragging the Screenshot button (an icon of a pair of scissors cutting a dotted rectangle) from the Customize Toolbar menu. Once it’s set up, you can download or copy any part of your screen with a single click. For full instructions on adding the Screenshot tool to your browser, check out the guide here.

(Screenshot: the Firefox YouTube channel page, with options to Copy or Download a selected video thumbnail.)

Dark mode

As a night owl, I can’t live without this. Even better, it works on both mobile and desktop. So, whether you’re doomscrolling in bed or catching up on email in a dark airplane cabin, you won’t fry your poor retinas. Here’s how to turn on dark mode in Firefox.

Firefox Color

Speaking of UI: Did you know you can create your own browser themes in Firefox? Not only can you customize the colors to your heart’s content; you can even upload your own background patterns. It might not hack your productivity, but the soothing feng shui is priceless.

There are endless ways to make Firefox your own, whether you’re a creative, a gamer, a shopper, a minimalist, a (tab) maximalist or however you choose to navigate the internet. We want to know how you customize Firefox. Let us know and tag us on X or Instagram at @Firefox. 

Get Firefox

Get the browser that protects what’s important

The post Firefox tips and tricks for creatives appeared first on The Mozilla Blog.

Mozilla Addons Blog: Manifest V3 updates landed in Firefox 127

Welcome, add-on developers! Below is the next installment in our series of community updates designed to provide clarity and transparency as we continue to deliver Manifest V3 related improvements with each new Firefox release.

The engineering team continues to build upon previous MV3 Chrome compatibility related work available in Firefox 126 with several additional items that landed in Firefox 127, which was released on June 11. Beginning in the 127 release, the following improvements have launched:

  • Customized keyboard shortcuts associated with the _execute_browser_action command for MV2 extensions will be automatically associated with the _execute_action command when migrating the same extension to MV3 (see the sketch after this list). This allows the custom keyboard shortcuts to keep functioning as expected from an end user perspective.
  • declarativeNetRequest getDynamicRules and getSessionRules API methods now accept the additional ruleIds filter as a parameter, and the rule limits have been increased to match the limits enforced by other browsers.
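
To make the shortcut migration concrete, here is a minimal sketch of the MV3 side of a manifest declaring such a shortcut. The extension name, version, and key combination are illustrative, not from this post:

{
  "manifest_version": 3,
  "name": "Shortcut demo",
  "version": "1.0",
  "action": {},
  "commands": {
    "_execute_action": {
      "suggested_key": { "default": "Ctrl+Shift+Y" }
    }
  }
}

Under MV2 the same shortcut would have been declared on _execute_browser_action; Firefox carries any user-customized key over to _execute_action automatically.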

The team will land more Chrome compatibility enhancements in Firefox 128 in addition to delivering other Manifest V3 improvements, at which time MV3 will be supported on Firefox for Android.

And to reiterate a couple of important points we’ve communicated in our previous updates published in March and May:

  • The webRequest API is not on a deprecation path in Firefox
  • Mozilla has no plans to deprecate MV2

For more information on adopting MV3, please refer to our migration guide. If you have questions or feedback on our MV3 plans we would love to hear from you in the comments section below or if you prefer, drop us an email. Thanks for reading and happy coding!

The post Manifest V3 updates landed in Firefox 127 appeared first on Mozilla Add-ons Community Blog.

The Rust Programming Language Blog: Announcing Rust 1.79.0

The Rust team is happy to announce a new version of Rust, 1.79.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.79.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.79.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.79.0 stable

Inline const expressions

const { ... } blocks are now stable in expression position, permitting explicitly entering a const context without requiring extra declarations (e.g., defining const items or associated constants on a trait).

Unlike const items (const ITEM: ... = ...), inline consts are able to make use of in-scope generics, and have their type inferred rather than written explicitly, making them particularly useful for inline code snippets. For example, a pattern like:

const EMPTY: Option<Vec<u8>> = None;
let foo = [EMPTY; 100];

can now be written like this:

let foo = [const { None }; 100];

Notably, this is also true of generic contexts, where previously a verbose trait declaration with an associated constant would be required:

fn create_none_array<T, const N: usize>() -> [Option<T>; N] {
    [const { None::<T> }; N]
}

This makes this code much more succinct and easier to read.

See the reference documentation for details.

Bounds in associated type position

Rust 1.79 stabilizes the associated item bounds syntax, which allows us to put bounds in associated type position within other bounds, i.e. T: Trait<Assoc: Bounds...>. This avoids the need to provide an extra, explicit generic type just to constrain the associated type.

This feature allows specifying bounds in a few places that previously either were not possible or imposed extra, unnecessary constraints on usage:

  • where clauses - in this position, this is equivalent to breaking up the bound into two (or more) where clauses. For example, where T: Trait<Assoc: Bound> is equivalent to where T: Trait, <T as Trait>::Assoc: Bound.
  • Supertraits - a bound specified via the new syntax is implied when the trait is used, unlike where clauses. Sample syntax: trait CopyIterator: Iterator<Item: Copy> {}.
  • Associated type item bounds - This allows constraining the nested rigid projections that are associated with a trait's associated types. e.g. trait Trait { type Assoc: Trait2<Assoc2: Copy>; }.
  • opaque type bounds (RPIT, TAIT) - This allows constraining associated types that are associated with the opaque type without having to name the opaque type. For example, impl Iterator<Item: Copy> defines an iterator whose item is Copy without having to actually name that item bound.
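
As a small, self-contained sketch of the where-clause form (the function and values are illustrative, not from the announcement), this constrains an iterator's items without introducing a second generic parameter:

fn doubled<I>(iter: I) -> Vec<I::Item>
where
    I: Iterator<Item: Clone>, // previously: I: Iterator, I::Item: Clone
{
    let mut out = Vec::new();
    for item in iter {
        out.push(item.clone()); // push a copy...
        out.push(item);         // ...and the original
    }
    out
}

fn main() {
    // Arrays iterate by value on the 2021 edition.
    assert_eq!(doubled([1, 2].into_iter()), vec![1, 1, 2, 2]);
}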

See the stabilization report for more details.

Extending automatic temporary lifetime extension

Temporaries which are immediately referenced in construction are now automatically lifetime extended in match and if constructs. This has the same behavior as lifetime extension for temporaries in block constructs.

For example:

let a = if true {
    ..;
    &temp() // used to error, but now gets lifetime extended
} else {
    ..;
    &temp() // used to error, but now gets lifetime extended
};

and

let a = match () {
    _ => {
        ..;
        &temp() // used to error, but now gets lifetime extended
    }
};

are now consistent with prior behavior:

let a = {
    ..;
    &temp() // lifetime is extended
};

This behavior is backwards compatible since these programs used to fail compilation.

Frame pointers enabled in standard library builds

The standard library distributed by the Rust project is now compiled with -Cforce-frame-pointers=yes, enabling downstream users to more easily profile their programs. Note that the standard library also continues to come with line-level debug info (e.g., DWARF), though that is stripped by default in Cargo's release profiles.
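
If you also want frame pointers in your own code, not just the prebuilt standard library, one way (assuming a Unix-like shell; the flag is the same one named above) is to pass it via RUSTFLAGS when building:

$ RUSTFLAGS="-Cforce-frame-pointers=yes" cargo build --release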

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.79.0

Many people came together to create Rust 1.79.0. We couldn't have done it without all of you. Thanks!

This Week In Rust: This Week in Rust 551

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X(formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is hydra, an actor framework inspired by Erlang/Elixir.

Thanks to DTZxPorter for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (Formerly twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (Formerly twitter) or Mastodon!

Updates from the Rust Project

409 pull requests were merged in the last week

Rust Compiler Performance Triage

This week saw more regressions than wins, caused mostly by code being reorganized within the compiler and a new feature being implemented. There have also been some nice improvements caused by better optimizing spans.

Triage done by @kobzol. Revision range: 1d52972d..b5b13568

Summary:

(instructions:u)              mean     range             count
Regressions ❌ (primary)      0.6%     [0.2%, 2.7%]      105
Regressions ❌ (secondary)    1.0%     [0.1%, 6.9%]      74
Improvements ✅ (primary)     -0.5%    [-1.0%, -0.2%]    20
Improvements ✅ (secondary)   -1.4%    [-8.8%, -0.2%]    32
All ❌✅ (primary)            0.5%     [-1.0%, 2.7%]     125

5 Regressions, 3 Improvements, 4 Mixed; 5 of them in rollups. 59 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
Language Team
  • No Language Team RFCs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-06-12 - 2024-07-10 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I like explicit but I hate noise...

dlevac discussing error handling on /r/golang

Thanks to robin for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Don Marti: X-Robots-Tag for GPC

It’s easy to mock generative AI for weird stuff like telling people to put glue on pizza, inspiring a reporter to write a story about making glue pizza, then training on the story. But there is a serious side to this stuff. Protecting the content of a web site from AI training is not just about trying to avoid market competition with copied and scrambled versions of your own content. It’s not fair use, seriously, just read the actual four factors of fair use or ask a librarian. It’s just not a thing. MSN boosted an AI-generated article stating that an Irish DJ and talk-show host was on trial over alleged sexual misconduct. When you put parts of your personal life on your web site, the blurry compressed version of it that AI spews out has other, more personal, risks too. Nonconsensual Nude Apps are just the beginning.

AI-specific laws are still in progress, and copyright cases are still making their way through the court system. I still don’t know if all the stuff I did to block AI training on a web site is going to be enforceable. But in the meantime we do have a tool that is already in place and tested. Global Privacy Control is an opt-out preference signal (OOPS), a way to signal, in a legally enforceable way, that you opt out of the sale or sharing of your personal information.

GPC already protects residents of California, Colorado, Connecticut, and other states in the USA, and enforcement is coming on line in other jurisdictions as well. Sounds like a useful tool, right? One missing piece. The current GPC standard covers a signal sent from the client to the server. When you visit a site as a user, this is just fine. But the missing piece is what happens when your personal info is on a server, but the company looking to exploit it is running a client—a crawler or scraper. That’s where we need to borrow from the methods for blocking AI training on a web site and add a GPC meta tag and HTTP header.

The header is pretty easy. I just did it. Have a look at this site’s HTTP headers in developer tools or do a

curl -I -q https://blog.zgp.org/ | grep X-Robots-Tag

and there it is. Same with the meta tag.

<meta name="robots" content="noai, noimageai, GPC">
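
How the header itself gets set depends on your server. As a sketch, with Apache's mod_headers a site could send it with a one-line directive like the following (the token list is this site's; adjust to your own policy):

Header set X-Robots-Tag "noai, noimageai, GPC"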

TODO items

  • Colorado has a process for registering OOPSs, so I will need to write this up and submit it so it’s valid there, but in other jurisdictions the OOPS is valid as long as it expresses the deliberate opt-out of the user, which mine does.

  • Just to make it extra clear, I need to put something in my Web Site User Agreement, the way a lot of sites do for noai

  • continue to GPC all the things!

Remember that laws are downstream of norms here. People generally believe in moral rights and some kind of copyrights for people who do creative work, and people generally believe in some kind of privacy right to control use of your personal information. The details will get worked out. Big AI will probably be able to make bogus legal arguments, delay, and lobby for a while, but in the long run the law will reflect norms more than it reflects billable hours spent trying to push a disliked business model uphill. Comments and suggestions welcome.

Related

GPC all the things!

Block AI training on a web site

Bonus link

AI chatbots are intruding into online communities where people are trying to connect with other humans (not with personal stories based on mine they’d better not)

Firefox Developer Experience: Firefox WebDriver Newsletter — 127

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 127 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. We are always grateful to receive external contributions; here are the ones which made it into 127:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette.

General

Bug fixes

  • Fixed a bug with the "wheel" action, which can be used both in WebDriver BiDi and WebDriver classic. We now correctly handle modifier keys such as Shift, Control, etc. With this, you can simulate a user action scrolling the wheel while holding a modifier.

WebDriver BiDi

New: Support for the “permissions.setPermission” command

The permissions module is an extension to the WebDriver BiDi specification, defined in the Permissions specification. It is the first extension for WebDriver BiDi to be implemented in Firefox, with the permissions.setPermission command. This command allows you to grant, deny or prompt for a given permission, such as “geolocation”. The permission will be set for a provided origin, and optionally for a specific user context.

The descriptor argument should be a Permission Descriptor, which is basically an object with a name string property set to the name of the permission to update. The state argument should be one of "granted", "denied" or "prompt". The origin argument should be the origin for which the permission setting will be set. And finally the optional argument userContext should be the user context id where the permission should be applied ("default" if omitted).

Below is an example of setting the "geolocation" permission to "prompt" for the "https://www.google.com" origin:

-> {
  "method": "permissions.setPermission",
  "params": {
    "descriptor": {
      "name": "geolocation",
    },
    "state": "prompt",
    "origin": "https://www.google.com"
  },
  "id": 2
}

<- { "type": "success", "id": 2, "result": {} }

Afterwards, trying to use a geolocation feature on a website with the “https://www.google.com” origin such as Google Maps will trigger the permission prompt as shown below:

(Screenshot: Google Maps showing the "geolocation" permission prompt.)

New: Support for accessibility locator in the “browsingContext.locateNodes” command

The accessibility locator allows you to find elements matching a specific computed role or accessible name. This locator has the type "accessibility", and for the value it expects an object with a "name" property (for accessible name) and/or a "role" property (for computed role). You may provide one or both properties at the same time. Note that the start nodes (startNodes argument) can contain elements, documents and document fragments.

For instance, consider the following markup, which assigns the checkbox role to a span that is labelled by another span element:

<!DOCTYPE html>
<html>
  <body>
    <span role="checkbox" aria-checked="false" tabindex="0" aria-labelledby="tac"
    ></span>
    <span id="tac">Checkbox name</span>
  </body>
</html>

You can find the checkbox element either by using the “role” accessibility locator:

{
  "method": "browsingContext.locateNodes",
  "params": {
    "locator": {
      "type": "accessibility",
      "value": {
        "role": "checkbox"
      }
    },
    "context": "2a22b1c6-6fa8-4e62-b4af-32ed2ff1ced7"
  },
  "id": 19
}

Or by using the accessible name, which is the text of the element referenced by aria-labelledby:

{
  "method": "browsingContext.locateNodes",
  "params": {
    "locator": {
      "type": "accessibility",
      "value": {
        "name": "Checkbox name"
      }
    },
    "context": "2a22b1c6-6fa8-4e62-b4af-32ed2ff1ced7"
  },
  "id": 20
}
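
And since one or both properties may be provided at the same time, here is a sketch combining role and name in a single locator (the context and message ids are illustrative, reusing those above):

{
  "method": "browsingContext.locateNodes",
  "params": {
    "locator": {
      "type": "accessibility",
      "value": {
        "role": "checkbox",
        "name": "Checkbox name"
      }
    },
    "context": "2a22b1c6-6fa8-4e62-b4af-32ed2ff1ced7"
  },
  "id": 21
}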

All of these commands will return the span with role="checkbox":

{
  "type": "success",
  "id": 20,
  "result": {
    "nodes": [
      {
        "type": "node",
        "sharedId": "16d8d8ab-7404-4d4b-83e9-203fd9801f0a",
        "value": {
          "nodeType": 1,
          "localName": "span",
          "namespaceURI": "http://www.w3.org/1999/xhtml",
          "childNodeCount": 0,
          "attributes": {
            "role": "checkbox",
            "aria-checked": "false",
            "tabindex": "0",
            "aria-labelledby": "tac"
          },
          "shadowRoot": null
        }
      }
    ]
  }
}

New: Support for “devicePixelRatio” parameter in the “browsingContext.setViewport” command

We now support the devicePixelRatio parameter in the browsingContext.setViewport command, which allows emulating the behavior of screens with different device pixel ratios (such as high density displays). The devicePixelRatio is expected to be a positive number.
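
As a sketch in the same command format as the examples above (the context and message ids are illustrative), emulating a high-density display could look like this:

{
  "method": "browsingContext.setViewport",
  "params": {
    "context": "2a22b1c6-6fa8-4e62-b4af-32ed2ff1ced7",
    "devicePixelRatio": 2
  },
  "id": 22
}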

Bug fixes

Marionette (WebDriver classic)

Bug fixes

Cameron Kaiser: macOS Sequoia

Do you like your computers to be big, fire-prone and inflexible? Then you'll love macOS Sequoia, another missed naming opportunity from the company that should have brought you macOS Mettler, macOS Bolinas (now with no support for mail), or macOS Weed. Plus, now you'll have to deal with pervasive ChatGPT integration, meaning you won't have to watch the next Mission: Impossible to find out what the Entity AI will do to you.

Now that I've had my cup of snark, though, Intel Mac users beware: this one almost uniformly requires a T2 chip, the Apple A10 derivative used as a security controller in the last generation of Intel Macs, and even at least one Mac that does have one isn't supported (the 2018 MacBook Air, presumably because of its lower-powered CPU-GPU, which is likely why the more powerful 2019 iMac without one is supported, albeit incompletely). It would not be a stretch to conclude that this is the final macOS for Intel Macs, though Rosetta 2's integration to support x86_64 in VMs means Intel Mac software will likely stay supported on Apple silicon for a while. But that shouldn't be particularly surprising. What I did find a little more ominous is that only the 2020 MacBook Air and up is supported in their price segment, and since those Macs are about four years old now, it's possible some M1 Macs might not make the jump to macOS 16 either — whatever Apple ends up calling it.

The Mozilla Blog: Uncovering GenAI trends: Using local language models to explore 35 organizations

(To read the complete analysis as well as the results of each language model, visit the mozilla.ai blog here.)

Over the past few months, Mozilla.ai has engaged with several organizations to learn how they are using language models in practice. We spoke with 35 organizations across various sectors, including finance, government, startups, and large enterprises. Our interviewees ranged from machine learning engineers to CTOs, capturing a diverse range of perspectives. 

To analyze these interviews, we used open-source local language models running on our laptops. The analysis confirmed the trends we had anticipated during our interviews and shed light on the differences in each model’s presentation of said trends.

Objective: Help shape our product vision

Our primary aim was to identify patterns and trends that could inform our product development strategy. Despite the unique nature of each discussion, we usually focused on four critical areas:

  1. LLM use cases being explored
  2. Technology, methodologies, and approaches employed
  3. Challenges in developing and delivering LLM solutions
  4. Desired capabilities and features

Data collection & model selection

After each conversation, we wrote up summary notes. In total, these notes for the 35 conversations amounted to 18,481 words (approximately 24,600 tokens), almost the length of a novella. To avoid confirmation bias and subjective interpretation, we decided to leverage language models for a more objective analysis of the data. By providing the models with the complete set of notes, we aimed to uncover patterns and trends without our pre-existing notions and biases.

Given privacy concerns, we decided to keep the information local. Therefore, I selected a set of models that I could run on my MacBook Pro M3 (36GB) locally. Here’s an overview of the models and configurations used:

Model | Parameters | Quantization | Size
Llama-3-8B-Instruct-Gradient-1048k | 8B | Q5_0 | 5.6GB
Phi-3-medium-128k-instruct | 14B | IQ3_M | 6.47GB
Qwen1.5-7B-Chat | 7B | 1_5 | 5.53GB

There are a number of options to run LLMs locally, such as ollama, lm-studio, and llamafile. I used both lm-studio and llamafile (an in-house solution by the Mozilla Innovation Team).

Summarizing with local language models

The prompt used to generate model outputs was: “Summarize the following information to get the key takeaways about developing LLM solutions in 10 bullet points. Take the full information from start to finish into account. Never use company names or an individual’s name. [Full notes]”
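
Tools like lm-studio and llamafile can expose a local OpenAI-compatible HTTP endpoint, so a prompt like this can also be driven from code. Here is a rough sketch in Rust (my example, not Mozilla.ai's actual pipeline; the port, path, and model name are assumptions, and it needs the serde_json crate plus reqwest with its blocking and json features):

use serde_json::{json, Value};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumption: a local OpenAI-compatible server (as lm-studio or
    // llamafile can provide) is listening on localhost:8080.
    let body = json!({
        "model": "local-model", // placeholder model name
        "messages": [{
            "role": "user",
            "content": "Summarize the following information to get the key \
                        takeaways about developing LLM solutions in 10 bullet \
                        points. [Full notes]"
        }]
    });
    let resp: Value = reqwest::blocking::Client::new()
        .post("http://localhost:8080/v1/chat/completions")
        .json(&body)
        .send()?
        .json()?;
    println!("{}", resp["choices"][0]["message"]["content"]);
    Ok(())
}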

To read the complete analysis as well as the results of each language model, visit the mozilla.ai blog here.

Key takeaways

I was impressed by the quality of the responses from these models, which were all capable of running locally on my laptop. They identified the majority of trends and patterns among the 35 organizations we studied. Each model also highlighted unique insights and communicated in different styles:

  • Llama-3-8B-Instruct-Gradient-1048k emphasized the main LLM use-cases that were discussed and the difficulties moving from prototype to production. The sentences it generates can be quite long.
  • Phi-3-medium-128k-instruct picked up on the reluctance of many organizations to finetune models. Its style feels more conversational than the other models.
  • Qwen1.5-7B-Chat highlighted the lack of technical expertise many organizations suffer from. Its style is more concise and straightforward, similar to the style of ChatGPT.

Across all the models, three key takeaways stood out:

  1. Evaluation: Many organizations highlight the challenges of evaluating LLMs, finding it time-consuming.
  2. Privacy: Data privacy and security are major concerns influencing tool and platform choices.
  3. Reusability and customization: Organizations value reusability and seek customizable models for specific tasks.

This exercise showcased how well local language models can extract valuable insights from large text datasets. The discussion and feedback from our network and end-users will continue to guide our efforts at Mozilla.ai, helping us develop tools that support diverse use cases and make LLM solutions more accessible and effective for organizations of all sizes.

The post Uncovering GenAI trends: Using local language models to explore 35 organizations  appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird: The Build and Release Process Explained

Our Community Office Hours session for May 2024 has concluded, and it was quite informative (especially for non-developers like me)! Wayne and Daniel shed light on Thunderbird’s build and release process, ran through a detailed presentation, answered questions, and treated us to live demos showing how a new Thunderbird build gets pushed and promoted to release.

Below you’ll find a lightly edited recording of the session, and the presentation slides in PDF format.

We’ll be announcing the topic of our June Office Hours session soon, so keep an eye on the Thunderbird blog.

Links and Further Reading


ORIGINAL ANNOUNCEMENT

Have you ever wondered what the release process of Thunderbird is like? Wanted to know if a particular bug would be fixed in the next release? Or how long release support lasts? Or just how many point releases there are?

In the May Office Hours, we’ll demystify the current Thunderbird release process as we get closer to the next Extended Security Release on July 10, 2024. 

May Office Hours: The Thunderbird Release Process

One of our guests you may know already: Wayne Mery, our release and community manager. Daniel Darnell, a key release engineer, will also join us. They’ll answer questions about what roles they play, how we stage releases, and when they know if releases are ready. Additionally, they’ll tell us about the future of Thunderbird releases, including working with add-on developers and exploring a monthly release cadence.

Join us as our guests answer these questions and more in the next edition of our Community Office Hours! You can also submit your own questions about this topic beforehand and we’ll be sure to answer them: officehours@thunderbird.net

Catch Up On Last Month’s Thunderbird Community Office Hours

While you’re thinking of questions to ask, watch last month’s office hours where we chatted with three key developers bringing Rust and native Microsoft Exchange support into Thunderbird. You can find the video on our TILvids page.

Join The Video Chat

We’ll be back in our Big Blue Button room, provided by KDE and the Linux Application Summit. We’re grateful for their support and to have an open source web conferencing solution for our community office hours.

Date and Time: Friday, May 31 at 17:30 UTC

Direct URL to Join: https://meet.thunderbird.net/b/hea-uex-usn-rb1

Access Code: 964573

The post Thunderbird: The Build and Release Process Explained appeared first on The Thunderbird Blog.

Don MartiBlock AI training on a web site

(Update 14 Jun 2024: Add darkvisitors.com API and GPC.)

I’m going to start with a warning. You can’t completely block “AI” training from a web site. Underground AI will always get through, and it might turn out that the future of AI-based infringement is bot accounts so that the sites that profit from it can just be shocked at what one of their users was doing—kind of like how big companies monetize copyright infringement.

But there are some ways to tell the halfway crooks of the AI business to go away. Will update if I find others.

robots.txt

Dark Visitors - A List of Known AI Agents on the Internet is a good source of an up-to-date set of lines to add to your robots.txt file.

This site uses the API to catch up on the latest. So if I fall behind on reading the technology news, the Makefile has me covered.

# update AI crawlers blocking list from darkvisitors.com
tmp/robots.txt :
	curl -X POST "https://api.darkvisitors.com/robots-txts" \
	-H "Authorization: Bearer $(shell pass darkvisitors-token)" \
	-H "Content-Type: application/json" \
	-d '{"agent_types": ["AI Data Scraper", "AI Assistant", "Undocumented AI Agent", "AI Search Crawler"], "disallow": "/"}' \
	> $@

# The real robots.txt is built from the local lines
# in the conf directory, with the
# darkvisitors.com lines added
public/robots.txt : conf/robots.txt tmp/robots.txt
	cat conf/robots.txt tmp/robots.txt > $@

One of my cleanup scripts gets rid of the tmp/robots.txt fetched from Dark Visitors if it gets stale, and I use Pass to store the token.

X-Robots-Tag HTTP header

DeviantArt covers how to set the X-Robots-Tag header (which also has other uses to help control how search engines crawl your site) to express an opt-out.

On Apache httpd (I know, I’m old school) it’s something like this:

Header Set X-Robots-Tag "noai"

You can check it under “network” in browser developer tools. It should show up in response headers.
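
If you would rather check from a script than from browser dev tools, something like the following also works. This is a sketch using Rust's reqwest crate (with its blocking feature); the URL is a placeholder for your own site:

use reqwest::blocking::get;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder URL; substitute the site you want to check.
    let resp = get("https://example.com/")?;
    match resp.headers().get("x-robots-tag") {
        Some(value) => println!("X-Robots-Tag: {}", value.to_str()?),
        None => println!("no X-Robots-Tag header set"),
    }
    Ok(())
}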

noai meta tag

Raptive Support covers the noai meta tag. Pretty easy, just put this in the HTML head with any other meta and link elements.

<meta name="robots" content="noai, noimageai">

That support FAQ includes a good point that applies to all of these—the opt out is stronger if it’s backed up with the site Terms of Service or User Agreement. Big companies have invested hella lawyer hours in making these things more enforceable, and if they wanted to override ToS they would be acting against their other interests in keeping their sites in company town mode.

new: GPC

This is the first site to include the new meta tag and X-Robots-Tag header for Global Privacy Control. Basically you have legally enforceable rights in your personal information, blogs have personal information, but regular GPC only works from your browser (client) to company on the server. This goes the other way, and sends a legally enforceable* privacy signal from a personal blog on the server to an AI scraper on the client side. (*Yes, I know, this has not yet been tested in court, but give it a minute, we’re just getting started here.)

So the new header on here is

X-Robots-Tag: noai, noimageai, GPC

So we’re up to four, somebody send me number five?

Related

Google Chrome ad features checklist covers the client side of this— how to protect your personal info, and other people’s, from being fed to AI (among other abuses)

remove AI from Google Search on Firefox: how to remove the “AI”-generated material from Google search results

How to Stop Your Data From Being Used to Train AI | WIRED covers much other software including Adobe, Slack, and others. The list below only includes companies currently with an opt-out process. For example, Microsoft’s Copilot does not offer users with personal accounts the option to have their prompts not used to improve the software.

Bonus links

The Internet is a Series of Webs The future of the internet seems up in the air. Consumed by rotting behemoths. What we have now is failing, but it is also part of our every-day life, our politics, our society, our communities and our friendships. All of those are at risk, in part because the ways we communicate are under attack. (So if Google search ads are scammy enough to get an FBI warning, Meta is a shitshow, and Amazon is full of fake and stolen stuff, what do you do? Make a list of legit companies on your blog and hope others do the same?)

For tech CEOs, the dystopia is the point The CEOs obviously don’t much care what some flyby cultural critics think of their branding aspirations, but beyond even that, we have to bear in mind that these dystopias are actively useful to them.

Apple Removes Nonconsensual AI Nude Apps Following 404 Media Investigation (think of how bad the Internet would be without independent sites covering the big companies…then go subscribe to 404 Media.)

Amazon is filled with garbage ebooks. Here’s how they get made. The biographer in question was just one in a vast, hidden ecosystem centered on the production and distribution of very cheap, low-quality ebooks about increasingly esoteric subjects. Many of them gleefully share misinformation or repackage basic facts from WikiHow behind a title that’s been search-engine-optimized to hell and back again. Some of them even steal the names of well-established existing authors and masquerade as new releases from those writers. (I’m going to the real bookstore.)

“Pink slime” local news outlets erupt all over US as election nears Kathleen Carley, a computer science professor at Carnegie Mellon University, said her research suggests that following the 2022 midterms “a lot more money” is being poured into pink slime sites, including advertising on Meta.

Don Martibusiness recommendations

Since there’s a search quality crisis on, a lot of the companies you might find on social media are scams, and a lot of the stuff sold on big retail sites is fake, here are some real businesses I can recommend in several categories. Will fill in some more.

I personally know about all of these and would be happy to answer questions.

art, crafts, gifts

Modern Mouse (A place for local artists and artisans to sell their work.)

books

Books Inc (Several Bay Area locations including SFO. If they don’t have it they can order it.)

burritos

Island Taqueria 1313 Park St., Alameda. (Bay Area’s best burritos. El Gran Taco in San Francisco would have been a contender but they’re gone now.)

car repair

Fred’s Wrenchouse

delicatessen

Zingerman’s Deli (mail order available)

earbuds

JVC Gumy HAFX7 These really sound good and come with a set of silicone ear pieces in different sizes, so in real-world listening situations they sound better than more expensive options that don’t fit as well. (In my experience most drama and waste from electronic devices are caused by apps, firmware, Terms of Service, radios, and batteries. These have none of those.)

electrician

sotelectric dot com memo to self: check and fix link

hardware

Encinal True Value Hardware

Paganos Hardware

Internet service

monkeybrains.net

pharmacy

Versailles Pharmacy 2801 Encinal Ave., Alameda.

plants

Annie’s Annuals and Perennials

plumbing

Gladiator Plumber 1752 Timothy Drive, San Leandro.

roofing

Planchon Roofing & Siding Co

sidewall shingling

Nica Sidewall Shingling

stereo repair

Champlifier

Bonus links

Microsoft is reworking Recall after researchers point out its security problems (Maybe this is downstream of extreme economic inequality? When so many decisions are made by an out-of-touch management class that shares few of the problems of regular people, new product news turns into an endless stream of weird shit that makes regular people’s problems worse.)

New York to ban ‘addictive’ suggested posts on social media feeds for kids In practice, the bill would stop platforms from showing suggested posts to people under the age of 18, content the legislation describes as addictive. Instead, children would get posts only from accounts they follow. A minor could still get the suggested posts if he or she has what the bill defines as verifiable parental consent.

We’re unprepared for the threat GenAI on Instagram, Facebook, and Whatsapp poses to kids Waves of Child Sexual Abuse Material (CSAM) are inundating social media platforms as bad actors target these sites for their accessibility and reach. (The other issue is labor organizing among social site moderators. The people who run social platforms seem to really think they can AI their way out of dealing with the moderators’ union.)

I turned in my manuscript! (Looks like Evan’s ActivityPub book is coming soon. I put in a purchase request at the library already.)

The Mozilla BlogFirefox tips and tricks for gamers

Once my work day is over and my baby is asleep, there’s nothing I love more than settling in with my weighted blanket, grabbing some pillows, and playing video games. I don’t get to play video games as much as I’d like to anymore, so I need every tool at my disposal working for me to make sure I can maximize my time. I reached out to my fellow gamers here at Mozilla, and here’s how we use Firefox to help us game.

Fakespot 

I have a deep love of Animal Crossing that extends to buying physical Amiibo Cards that allow me to invite villagers to my island for coffee in-game. Cards from the first sets are really hard to find locally, so I use Fakespot to examine reviews on Amazon, Walmart, and Best Buy and give me a seller rating so that I can buy my cards with confidence.

I always use Fakespot when researching every game or accessory to ensure what I am buying has reliable reviews and comes from reputable sellers. Nothing can ruin the gaming experience more than if a product is used, damaged, repackaged or counterfeit. 

Cloud gaming 

One of my coworkers uses Firefox for Xbox Cloud Gaming, and has reported that Firefox works pretty great for this. You can play on the console, save your progress and continue your game on Firefox from anywhere.

Picture-in-Picture

I don’t have as much free time to game anymore, so when I’m looking for walkthroughs for how to find all of the Korok seeds, I find myself often using Picture in Picture. It allows me to keep a walkthrough video playing while I’m looking up other locations and maps at the same time in The Legend of Zelda: Breath of the Wild. The perfect companion for the completionist gamers out there.

Steam add-ons

I love that Firefox has such an extensive library of add-ons to customize Firefox for what you need. Another coworker mentioned using some great extensions with the gaming platform Steam.

Sync

During the day I use my laptop, but during the evening I almost exclusively use my phone. Firefox’s mobile sync allows me to find guides and tips during the day. Later, when I’m deeply nestled in my blanket cocoon, I can sync the tabs I want from my laptop to my phone, and I don’t have to get up from my game to find that resource I was looking for.

Dark mode

While I’m working, I’m one of the few people I know in tech who actually prefers light mode. However, at night, I am all about dark mode on my phone. Nothing ruins your comfy gaming experience more than being temporarily blinded by your phone in the middle of the night. In Firefox on desktop and mobile it’s super easy to switch modes by going to Settings > General > Language and Appearance.

There are endless ways to make Firefox your own, whether you’re a gamer, a creative, a shopper, a minimalist, a (tab) maximalist or however you choose to navigate the internet. We want to know how you customize Firefox. Let us know and tag us on X or Instagram at @Firefox. 

Get Firefox

Get the browser that protects what’s important

The post Firefox tips and tricks for gamers appeared first on The Mozilla Blog.

Mozilla ThunderbirdOur First Thunderbird Contributor Highlight!

A stylized graphic with the Thunderbird logo and the words 'Contributor Highlight' in the upper right corner, with a large capital A and the name 'Arthur' centered.

Thunderbird wouldn’t be here today without its incredible and dedicated contributors. The people developing Thunderbird and all of its add-ons, testing new releases, and supporting fellow users, for example, are the wind beneath our wings. It’s time to give them the spotlight in our new Contributor Highlight series.

We kick things off with Arthur, who contributes to Thunderbird by triaging and filing bug reports at Bugzilla, as well as assisting others.

Arthur, Chicago USA

Why do you like using Thunderbird?

Thunderbird helps me organize my life and I could not function in this world without its Calendar feature. It syncs well with things I do on my Android device and I can even run a portable version of it on my USB drive when I don’t have physical access to my home or office PC. Try doing that with that “other” email client.

What do you do in the Thunderbird community and why do you enjoy it? What motivates you to contribute?

Being a user myself, I can help other users because I know where they’re coming from. Also, having a forum like Bugzilla allows regular users to bring bugs to the attention of the Devs and for me to interface with those users to see if I can reproduce bugs or help them resolve issues. Having a direct line to Mozilla is an amazing resource. If you don’t have skin in the game, you can’t complain about the direction in which a product goes.

How do you relate your professional background and volunteerism to your involvement in Thunderbird?

As an IT veteran of 33+ years, I am very comfortable in user-facing support and working with app vendors to resolve app problems, but volunteering takes on many forms and is good for personal growth. Some choose to volunteer at their local Food Pantry or Homeless shelter. I’ve found my comfort zone in leveraging my decades of IT experience to make something I know millions of users use and help make it better.

Share Your Contributor Highlight (or Get Involved!)

A big thanks to Arthur and all our Thunderbird contributors who have kept us alive and are helping us thrive! We’ll be back soon with more contributor highlights to spotlight more of our community.

If you’re a contributor who would like to share your story, get in touch with us at community@thunderbird.net. If you’re reading this and want to know more about getting involved with Thunderbird, check out our new and improved guide to learn about all the ways to contribute your skills to Thunderbird.

The post Our First Thunderbird Contributor Highlight! appeared first on The Thunderbird Blog.

Don Martisome good recent links

Just in case you have a script for finding interesting links, here are some links from mine…

Parable of the Sofa It seems blindingly obvious that an economy with a higher proportion of lifestyle businesses is going to be more resilient, more humane, and immensely more pleasant than the one that the Leaders Of Industry are trying to build. How would we get there from here?

Lord Kelvin and His Analog Computer On Thomson’s tide-predicting machine, each of 10 components was associated with a specific tidal constituent and had its own gearing to set the amplitude. The components were geared together so that their periods were proportional to the periods of the tidal constituents. A single crank turned all of the gears simultaneously, having the effect of summing each of the cosine curves.

Solar Passes 100% of Power Demand in California! [UPDATED] (electricity prices going negative regularly is going to be a big opportunity)

What is the Cara app, and why are artists deleting Instagram for it? (nifty image sharing site with built-in poisoning for ML training)

Online Privacy and Overfishing What seems normal to us in the security community is whatever was commonplace at the beginning of our careers.

One Facebook Ad Promotes a For-Profit College; Another a State School. Which Ad Do Black Users See? (algorithmic discrimination is already a hard problem to track down—and so-called privacy-enhancing ad personalization systems are just making it harder.)

The Moral Economy of the Shire From everything we’re told, the Shire is a very agriculturally productive region, which helps explain the lack of debt-peonage or other forms of unfree labor. It also explains the relative “looseness” of the system we’re looking at here; the gap between the lower gentry and upper yeomanry isn’t very large, and most families are able to support themselves with only minimal assistance.

New zine: How Git Works! (memo to self: order this)

Amazon Sold Fake Copies of Hotly Anticipated UFO Book (more news from the Big Tech #pivotToCrime. When Microsoft needed support in its antitrust courtroom drama, the MS-Windows OEMs and ISVs were right there. Amazon, Meta, and Google need support now—but they’re telling the content business to go eat a turd.)

Video Shows China’s Rifle-Equipped Robot Dog Opening Fire on Targets (If you thought wow, AI means we don’t have to hire as many content moderators! was big news, get ready for wow, AI means our country’s army will be able to get by without conscription! stories)

Origins of the Lab Mouse The early supply of mice for research depended on a late-19th century community of hobbyists—fanciers—who collected, bred, and sold unusual mice varieties. These “fancy” varieties were then standardized in the 1920s…

WTf Quora: how a platform eats itself As Quora pursued AI-driven enhancements, things got weird. (At the very beginning of Quora, they tried populating it with bot-written questions. Then they cut back, it went through a human user phase, now back to crap.)

HouseFresh has virtually disappeared from Google Search results. Now what? (hard to swallow pills for Google management: employee hoarding and union contracts cost money, but are cheaper than trying to run a company in a layoffs-scheming-quality-revenue-layoffs doom spiral)

The Tigers at the Gate: Moving Privacy Forward Through Proactive Transparency GPC is easy to set up and listen for because it is a simple HTTP header. Perhaps too simple as it only conveys whether the choice mechanism is turned on or off (GPC=1 or GPC=0). Unlike the more complex IAB EU’s Transparency and Consent signal (TC String), the signal itself does not encode information about the source of the opt-out signal, or provide details about how the signaling mechanism was implemented or presented to users. (imho this is a win. You have to respect GPC, but you can’t trust a sketchy site or adtech intermediary to set GPC correctly, therefore you can’t deal with sketchy sites or adtech intermediaries.)

The deskilling of web dev is harming the product but, more importantly, it’s damaging our health – this is why burnout happens – Baldur Bjarnason You’re expected to follow half-a-dozen different specialties, each relatively fast-paced and complex in its own right, and you’re supposed to do it without cutting into the hours where you do actual paid web development. Worse yet, you’re not actually expected to use any of it directly. Instead you’re also supposed to follow the developments of framework abstractions that are layered on top of the foundation specialties, at least doubling the number of complex fields a web dev has to follow and understand, right out of the gate. (I don’t know about you, but this site has a basic HTML template, Pandoc, and a Makefile. That’s about it.)

A (tiny, incomplete, single user, write-only) ActivityPub server in PHP (lots of good news from the Fediverse. If it didn’t remind me of the original web already, it now has me pre-ordering from O’Reilly like it’s 1995 or something. Real 1995, not The Radiant Future! (Of 1995))

Google’s Protected Audience Protects Advertisers (and Google) More Than It Protects You (If you have Google Chrome, you can still turn this stuff off: Google Chrome ad features checklist)

Why I went back to buying CDs (and you should too) The integrity of my audio library had been corrupted, at least in small ways. Horns were easy to spot, but how many other songs or albums had been messed with, without my knowledge? It turns out, way more than I had thought.

Google Researchers Say AI Now Leading Disinformation Vector (and Are Severely Undercounting the Problem) As bad as the AI-generated images problem is according to the paper, it is very possibly much worse because the paper’s data sample is relying on public fact checker data, who are not selecting AI-generated image-based disinformation randomly. Sites like Snopes and Politifact, which have limited resources, are focusing on fact checking images that have earned some degree of virality or news coverage, so their fact checks serve a purpose or an audience.

Personal Blocklist (not by Google) (useful browser extension to remove sites from search results when they’re better at SEO than actual content)

Elon Musk’s Gifts to Web Scrapers (Guest Blog Post) [B]y providing a foil in litigation against both the Center for Countering Digital Hate (“CCDH”) and Bright Data (the world’s largest seller of scraped data), he’s given judges in the most important district court in the country for tech legal issues, the Northern District of California, plenty of motivation to rule against him. As a result, judges have provided two landmark opinions in the last 45 days in favor of web scrapers. This creates powerful new precedent that will make it easier for web scrapers to prevail in litigation and will make it much harder for websites to prevent scraping.

Mozilla Security BlogFirefox will upgrade more Mixed Content in Version 127

Most of the web already supports HTTPS: In fact, 93% of requests made by Firefox are already HTTPS. As a reminder, HTTP over TLS (HTTPS) fixes the security shortcoming of HTTP by creating a secure and encrypted connection. Oftentimes, when web applications enable encryption with HTTPS on their servers, legacy content may still contain references using HTTP, even though that content would also be available over a secure and encrypted connection. When such a document gets loaded over HTTPS but subresources like images, audio and video are loaded using HTTP, it is referred to as “mixed content”.

Starting with version 127, Firefox is going to automatically upgrade audio, video, and image subresources from HTTP to HTTPS.

Background

When introducing the notion of “mixed content” a long while ago, browsers used to make a fairly sharp distinction between active and passive mixed content: Loading scripts or iframes over HTTP can be really detrimental to the whole document’s security and has long since been blocked as “active mixed content”. Images and other resources were otherwise called “passive” or “display” mixed content. If a network attacker could modify them, they would not gain full control over the document. So, in hope of supporting most existing content, passive content had been allowed to load insecurely, albeit with a warning in the address bar.

Previous behavior, without upgrading: Degraded lock icon, with a warning sign in the lower right corner.

With the web platform supporting many new and exciting forms of content (e.g., responsive images), that notion became a bit blurry: Responsive images are not active in the sense that a malicious responsive image could take over the whole web page. However, with an impetus toward a more secure web, since 2018 we have required that new features be available only when using HTTPS.

Upgradable and blockable mixed content

Given these blurry lines between active and passive mixed content, the latest revision of the Mixed content standard distinguishes between blockable and upgradable content, where scripts, iframes, responsive images and really all other features are considered blockable. The formerly-called passive content types (<img>, <audio> and <video> elements) are now being upgraded by the browser to use HTTPS and are not loaded if they are unavailable via HTTPS.
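
Conceptually, the upgrade amounts to a scheme rewrite on qualifying subresource loads before they reach the network. Here is a toy sketch of the rule in Rust (my illustration only; Firefox's actual implementation lives in its networking stack and handles far more cases):

fn upgrade_mixed_content(document_is_https: bool, subresource_url: &str) -> String {
    // Upgradable mixed content: http:// images, audio and video inside an
    // https:// document are rewritten to https:// before loading; if the
    // https:// load then fails, the resource is simply not loaded.
    if document_is_https && subresource_url.starts_with("http://") {
        subresource_url.replacen("http://", "https://", 1)
    } else {
        subresource_url.to_owned()
    }
}

fn main() {
    assert_eq!(
        upgrade_mixed_content(true, "http://example.com/cat.jpg"),
        "https://example.com/cat.jpg"
    );
}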

This also introduces a behavior change in our security indicators: Firefox will no longer make use of the tiny warning sign in the lower right corner of the lock icon:

After our change. A fully secure lock icon. The image load was successfully upgraded or failed (e.g., Connection Reset).

With Firefox 127, all mixed content will either be blocked or upgraded, making sure that documents transferred with HTTPS remain fully secure and encrypted.

Enterprise Users

Enterprise users that do not want Firefox to perform an upgrade can change the following existing preferences:

  • Set security.mixed_content.upgrade_display_content to false, such that Firefox will continue displaying mixed content insecurely (including the degraded lock icon from the first picture).
  • Set security.mixed_content.block_display_content to true, such that Firefox will block all mixed content (including upgradable).

Reasons for changing these preferences might include legacy infrastructure that does not support a secure HTTPS experience. We want to note that neither of these options is recommended, because with them Firefox would deviate from an interoperable web platform. Furthermore, these preferences do not receive the same amount of support, scrutiny and quality assurance as the settings available in our built-in settings page.

Outlook

Continuing our mission of making privacy and security non-optional, we will keep bringing more HTTPS to the web: next up, addresses typed into the URL bar will default to HTTPS, with a fallback to HTTP if the site does not load securely. This feature is already available in Firefox Nightly.

We are also working on another iteration that upgrades more page loads with a fallback called “HTTPS-First” that should be in Firefox Nightly soon. Lastly, security-conscious users with a higher desire to not expose any of their traffic to the network over HTTP can already make use of our strict HTTPS-Only Mode, which is available through Firefox settings. It requires all resource loads to happen over HTTPS or else be blocked.

The post Firefox will upgrade more Mixed Content in Version 127 appeared first on Mozilla Security Blog.

The Mozilla BlogKeeping GenAI technologies secure is a shared responsibility

Generative artificial intelligence (GenAI) is reshaping our world, from streamlining work tasks like coding to helping us plan summer vacations. As we increasingly adopt GenAI services and tools, we also face the emerging risks of their malicious use. Security is crucial, as even one vulnerability can jeopardize users’ information or worse. However, securing GenAI is too vast and complex for a single entity to handle alone. Mozilla believes sharing this responsibility is essential to successfully keep people safe. 

The evolution of bug bounty programs

To combat both bugs and vulnerabilities, the concept of the bug bounty program – which incentivizes a community of independent participants to identify flaws and report them – was first launched in the mid-1990s by Netscape to crowdsource bug discovery in the Netscape Navigator web browser. Fast forward to 2002 and the next generation of bounty programs was born when iDefense rolled out the Vulnerability Contributor Program (VCP), the first security-specific all-vendor public bounty program. Later, in 2005, TippingPoint introduced the Zero Day Initiative (ZDI), which follows the same model, allowing researchers from anywhere in the world to profit from their auditing research on nearly any technology vendor.

More recently, companies like HackerOne and BugCrowd have commoditized bounty programs, allowing participating companies to incentivize the community to report directly to them, versus going through an intermediary like the VCP or ZDI. Some GenAI companies are enrolled in these programs, providing bounties for defects found in supporting software, but not the models themselves. Others have hosted temporary model bounties while rapidly building their GenAI applications. However, this approach benefits their own models rather than the foundational technologies. As companies move at light speed to be the first to market, can we trust that they’ll work with the same scrutiny on security and consider future implications? History has demonstrated that this usually is an afterthought.

0Din, the next generation bug bounty program 

As the technology landscape continues to evolve, we see the need for the next evolution in bug bounty programs to further advance the GenAI ecosystem and address the flaws within the models themselves. These vulnerability classes include Prompt Injection, Training Data Poisoning, Denial of Service, and more. Today, we are investing in the next generation of GenAI security with the 0Day Investigative Network (0Din) by Mozilla, a bug bounty program for large language models (LLMs) and other deep learning technologies. 0Din expands the scope of GenAI security work beyond the application layer, with a focus on emerging classes of vulnerabilities and weaknesses in these new generations of models.

At Mozilla, we believe openness and collective participation are important in solving the emerging security challenges that lie ahead of us for GenAI. We have a long history of protecting users on the internet by building a secure and open-source browser, Firefox. We also have one of the first and longest-standing bug bounty programs on the web in order to encourage security researchers to report security vulnerabilities in the open. We know full well the power of working together as a community is one of the many ways to protect people. It’s been a part of our mission and we want to continue to advance this work. 

Our hope is that this program will give independent researchers an opportunity to contribute to the development of new security frameworks and best practices tailored for large language models, attention-based systems and generative models. They will play a key role in defining and strengthening AI security standards, thus shaping the future of secure GenAI technologies and how we use them in our daily lives. By addressing these challenges, Mozilla aims to protect users and inspire future generations of developers and researchers to make security and privacy a priority right from the start.

Join our team to advance AI security

Researchers interested in submitting their findings to the program are welcome to start writing to us at 0din@mozilla.com (GPG key). If you’re looking to join the team, we are hiring! We’re looking for:

Advance GenAI security with us—apply now!

The post Keeping GenAI technologies secure is a shared responsibility appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 550

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Foundation
RustNL 2024
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is layoutparser-ort, a simplified port of LayoutParser for ML-based document layout element detection.

Despite there being no suggestions, llogiq is reasonably happy with his choice. Are you?

No matter what your answer is, please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation in projects were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (Formerly twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (Formerly twitter) or Mastodon!

Updates from the Rust Project

308 pull requests were merged in the last week

Rust Compiler Performance Triage

A quiet week; we did have one quite serious regression (#115105, "enable DestinationPropagation by default"), but it was shortly reverted (#125794). The only other PR identified as potentially problematic was rollup PR #125824, but even that is relatively limited in its effect.

Triage done by @pnkfelix. Revision range: a59072ec..1d52972d

3 Regressions, 5 Improvements, 6 Mixed; 4 of them in rollups
57 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team RFCs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2024-06-05 - 2024-07-03 🦀

Virtual
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Every PR is Special™

Hieyou Xu describing being on t-compiler review rotation

Sadly, there was no suggestion, so llogiq came up with something hopefully suitable.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: May 2024 Progress Report

Featured graphic for "Thunderbird for Android May 2024 Progress Report" with stylized Thunderbird logo and K-9 Mail Android icon, resembling an envelope with dog ears.

Welcome reader! This is the place where we, the Thunderbird for Android team, inform you about what we worked on in May 2024.

We’ve been publishing monthly progress reports for quite a while now. If you haven’t subscribed to the RSS feed yet, now would be a good time to start. You can even use your favorite desktop app to do so – see Thunderbird + RSS: How To Bring Your Favorite Content To The Inbox.

And if you need a reminder on where we left off last month, head over to April’s progress report.

Material 3

The most noticeable development effort going on right now is the conversion of the user interface to the design system Material 3. You can follow our progress by becoming a beta tester and installing the K-9 Mail 6.9xx beta versions.

The first step consisted of changing the theme to Material 3. That changes things like the style of buttons and dialogs. 

Next, we replaced the many icons used throughout the app. But when using the beta version we — and some of you — noticed that not all of the new icons are a good fit. So we’ll update those icons in the next design iteration.

One of the main reasons for switching to Material 3 is the ability to support dynamic colors. It will allow the app to (optionally) use the system color scheme e.g. derived from the wallpaper. But in order for this to work properly, we need to update many places in the app that currently use fixed theme colors. This is an ongoing effort.

Targeting Android 14

As mentioned in April’s progress report, we’ve included the changes necessary to target Android 14 in the latest beta versions. So far we haven’t seen any crashes or bug reports related to these changes. So we plan to include them in the next maintenance release – K-9 Mail 6.804.

F-Droid metadata (part 3)

Unfortunately, this topic was part of the last two progress reports. So we’re very happy to report that the app description is now finally available again on our F-Droid app listing.

Other things we’ve worked on

Developer documentation

We’ve done some work on making our developer documentation more accessible. There’s now a table of contents and we have the capability to render it to HTML using mdbook. However, we haven’t set up automatic publishing yet. Until that happens, the documentation can be browsed on GitHub: K-9 Mail developer documentation.

Small IMAP improvements

We took some time to have a closer look at the communication between the app and the server when using the IMAP protocol and noticed a few places where the app could be more efficient. We’ve started addressing some of these inefficiencies. The result is that K-9 Mail can now perform some actions with fewer network packets going back and forth between the app and the server.

Support for predictive back

Google is working on improving the user experience of the back gesture in Android. This effort is called predictive back. The idea is to reveal (part of) the screen to which a successful back gesture will navigate while the swipe gesture is still in progress.

In order for this to work properly, apps that currently intercept the back button/gesture will have to make some changes. We’ve started making the necessary modifications. But it’s still a work in progress.

Community Contributions

GitHub user Silas217209 added support for mailto: URIs on NFC tags (#7804). This was a feature a user requested in April.

Thank you for the contribution! ❤

Releases

In May 2024 we published the following stable release:

… and the following beta versions:

Thanks for reading, testing, and participating. We’ll see you next month!

The post Thunderbird for Android / K-9 Mail: May 2024 Progress Report appeared first on The Thunderbird Blog.

Firefox Add-on ReviewsWhat’s the best ad blocker for you?

So you’ve decided to do something about all those annoying ads you’re barraged with online. What pushed you over the edge? Auto-play video ads? Blaring banners? Tired of your music interrupted by a sudden sponsorship? Was it the realization they intentionally make the ‘Close’ buttons [x] on ads super tiny so you accidentally click the very thing you’re trying to avoid? 

There are a number of approaches you can take to blocking ads with a browser extension—it just depends on what you’re trying to achieve. Here are some of the best ad blockers based on different goals…

I just want an awesome, all-purpose ad blocker.

Keep in mind a benefit of any decent ad blocker is that you should experience a faster web, since fewer ads means there’s less content for your browser to load. It’s a win-win: ditch awful ads while speeding up your experience. 

Also know, however, that ad blockers can occasionally break web pages when innocent content gets caught in the ad blocking crossfire. Some websites will even detect ad blockers and restrict access until you disable the blocker.

uBlock Origin

By any measure uBlock Origin is one of the gold standards in ad blocking. Not only is it an elite ad blocker that stops nearly every type of ad by default—including video and pop-ups—it’s also lightweight, so it doesn’t consume much CPU and memory.

Not much setup required. Works brilliantly out of the box with a matrix of built-in filters (though you can import your own), including a few that block not just ads but hidden malware sources as well. Clicking its toolbar icon activates the extension’s minimalist pop-up menu where at a glance you can see blocked tracking sources and how much of the overall page would have been impacted by advertising.

Unlike some ad blockers that allow what they consider “non-intrusive” ads through their filters, uBlock Origin has no advertising whitelist by default and tries to block all ads, unless you tell it otherwise.

AdBlock for Firefox

Refined extension design and strong content filters make AdBlock for Firefox a solid choice for people who don’t necessarily despise all ads (just the super annoying, invasive kind) and perhaps recognize that advertising, however imperfect it may be, provides essential compensation for your favorite content creators and platforms. 

AdBlock blocks all types of ads by default, but lets users opt in to Acceptable Ads by choice. Acceptable Ads is an independent vetting program where advertisers can participate to have their ads pass through content filters if they meet certain criteria, like only allowing display ads that fit within strict size parameters, or text ads that adhere to tight aesthetic restrictions. 

AdBlock also makes it easy for you to elect to accept certain niche types of advertising, like ads that don’t use third party tracking, or ads on your favorite YouTube and Twitch channels. 

AdBlock makes it easy to allow ads on your favorite YouTube and Twitch channels.

AdBlock’s free tier works great, but indeed some of our favorite features—like device syncing and the ability to replace ads with custom pics of adorable animals!—sit behind a paid service.

I want ad blocking with a privacy boost.  

Arguably all ad blockers enhance your privacy and security, simply by virtue of the fact they block ads that have tracking tools embedded into them. Even scarier than secretive corporate tracking is malvertising—ads maliciously infected with malware, unbeknownst to even the advertising companies themselves, until it’s too late.

So while all good ad blockers are privacy protective by nature, here are some that take additional steps…

AdGuard AdBlocker

Highly effective ad blocker and anti-tracker that even works well on Facebook and YouTube. AdGuard also smartly allows certain types of ads by default—like search ads (since you might be looking for a specific product or service) and “self promotion” ads (e.g. special deals on site-specific shopping platforms like “50% off today only!” sales, etc.)

AdGuard goes to great lengths to not only block the ads you don’t want, but the trackers trying to profile you. It automatically knows to block more than two million malicious websites and has one of the largest tracking filters in the game. 

Sick of social media ‘Like’ and ‘Share’ buttons crowding your page? Enable AdGuard’s social media filter and all social widgets are scrubbed away.

Ghostery

Block ads and hidden browser trackers by default. Ad blocking is but a part of Ghostery’s utility. 

Ghostery is quite powerful as a “privacy ad blocker,” but it also scores big points for being user-friendly and easy to operate. It’s simple to configure Ghostery’s various core features, like enabling/disabling Enhanced Ad Blocking and Anti-Tracking.

YouTube ads are out of control.

AdBlocker for YouTube

If you don’t want to bother with any ad blocking other than YouTube, AdBlocker for YouTube is the choice. 

It very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster and more focused YouTube experience. 

I want pop-up ads to go away forever. 

Popup Blocker (strict)

This lightweight extension simply stops pop-ups from deploying. Popup Blocker (strict) conveniently holds them for you in the background—giving you the choice to interact with them if you want. 

You’ll see a notification alert when pop-ups are blocked. Just click the notification for options. 

My webmail is bloated with ads.

Webmail Ad Blocker

Tired of ads thrown in your face when all you want to do is check email? 

Remove ads and get more breathing room in and around your inbox. Webmail Ad Blocker not only blocks all the box ads crowding the edges of your webmail, it also obliterates those sneaky ads that appear as unread messages. Ugh, gross. 

These are some of our favorite ad blockers. Feel free to explore more privacy & security extensions on addons.mozilla.org.

Firefox NightlyIn a nutshell – These Weeks in Firefox: Issue 162

Highlights

  • The final patch for the new and improved text and layout menu in Reader Mode has landed. Try out the full feature by flipping the improved_text_menu.enabled pref to true.

A panel in Firefox's Reader Mode is shown for controlling layout and text on the page. The panel lets users control the content width, line spacing, character spacing, word spacing, and text alignment of the text in reader mode.

  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions, starting from Firefox 128:
    • Context menus created by MV3 and MV2 WebExtensions with an event page are now persisted and preserved across browser updates – Bug 1771328
    • MV3 extensions can use optional_host_permissions to specify a set of additional host permissions that are expected to not be granted automatically as part of the install flow (unlike the ones specified in the host_permissions which are now optional but granted automatically at install time) – Bug 1766026
    • Many improvements to content script matching and support for the match_origin_as_fallback content script option:
    • Fixed an issue with content scripts being unable to attach to sandboxed http/file pages – Bug 1411641
    • Added support for match_origin_as_fallback, and as a side effect of that the ability to inject content scripts into webpages loaded from data URLs – Bug 1475831 / Bug 1853411 / Bug 1897113
    • declarativeNetRequest API improvements:
      • accepts rules with unrecognized keys to aid cross-browser and backward compatibility of DNR rules – Bug 1886608
      • New API methods getDisabledRules / updateStaticRules to allow extensions to individually disable/enable rules part of static rulesets – Bug 1810762
    • webRequest chrome compatibility improvement:
      • Added support for asyncBlocking listeners for webRequest.onAuthRequired – Bug 1889897
    • Thanks to Dave Vandyke for following up with adding test coverage for the runtime.onPerformanceWarning event on Android builds (in addition to having previously implemented this new WebExtensions API event in Firefox) – Bug 1884584

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

New contributors (🌟 = first patch)

Project Updates

Accessibility

Lint, Docs and Workflow

Migration Improvements

  • Work is ongoing for settings / preferences UI
  • mconley is tackling bits and pieces for encrypting the backup file (Bug 1897278) and preparing the archive file (Bug 1896715 and Bug 1897498)

New Tab Page

  • Wallpapers v2 work is ongoing, adding categories and new wallpaper options
  • Weather location picker work is also ongoing, working with the DISCO team to expand the AccuWeather API (Bug 1893007)

Search and Navigation

  • Daisuke enabled the history flooding prevention feature in Nightly. Bug 1895831
  • Daisuke modified the favicons service when using setFaviconForPage to throw an error if the favicon is too large and can’t be optimized. Bug 1894821
  • Marco fixed a bug where opening the address bar would not hide already open menu popups. Bug 1626741
  • Marco also fixed a bug related to the trim https feature where selecting a partial string from the beginning of the URL would also select the protocol that appears when the input field is focused. Bug 1893871
  • Karandeep updated the onEngagement event listener in UrlbarProviders to have this event triggered for its own results. Bug 1857236

The Mozilla BlogBuilding open, private AI with the Mozilla Builders Accelerator

AI tools are more accessible than ever. Big tech companies have made this possible, but their focus on growth and monetization prioritizes large-scale products. This leaves smaller AI projects in the shadows, despite their potential to better serve individual needs.

That’s why we’re excited to introduce the Mozilla Builders Accelerator. This program is designed to empower independent AI and machine learning engineers with the resources and support they need to thrive. It aims to cultivate a more innovative AI ecosystem, and it’s one of Mozilla’s key initiatives to make AI meaningfully impactful — alongside efforts like Mozilla.ai, the Responsible AI Challenge and the Rise25 Awards.

The Mozilla Builders Accelerator’s inaugural theme is local AI, which involves running AI models and applications directly on personal devices like laptops, smartphones, or edge devices rather than depending on cloud-based services.

Up to $100,000 in funding 

Projects selected for the Mozilla Builders Accelerator are eligible to receive up to $100,000 in funding. We’re also creating an environment where independent developers can flourish. So in addition to financial backing, the Mozilla Builders Accelerator will offer mentorship, foster community engagement and provide increased visibility for projects. 

Participants will engage in a structured 12-week program focused on the design, build and test phases of their projects, followed by an alumni phase for ongoing support. The program will include instructional sessions, guided workshops and practical assignments. Additionally, there will be opportunities to showcase projects through Mozilla’s channels and events, culminating in a demo day.

Why local AI?

We chose Local AI as the theme for the Accelerator’s first cohort because it aligns with our core values of privacy, user empowerment, and open source innovation. This method offers several benefits including:

  • Privacy: Data stays on the local device, minimizing exposure to potential breaches and misuse.
  • Agency: Users have greater control over their AI tools and data.
  • Cost-effectiveness: Reduces reliance on expensive cloud infrastructure, lowering costs for developers and users.
  • Reliability: Local processing ensures continuous operation even without internet connectivity.

Mozilla’s commitment to open source and AI innovation

For over 25 years, Mozilla has championed the internet as a global public resource — open and accessible to everyone. This dedication has fostered a thriving community committed to innovation and collaboration. We believe the future of AI should be open, transparent and inclusive.

With initiatives like Mozilla.ai and Llamafile, we’ve made significant strides in advancing open source AI. The Mozilla Builders Accelerator is our next step in this journey. 

We invite AI and ML engineers dedicated to open source and local AI solutions to apply for the Mozilla Builders Accelerator. Join us in shaping the future of AI with openness and innovation at its core. Applications are open through Aug. 1, 2024. For more information and to apply, visit here.

The post Building open, private AI with the Mozilla Builders Accelerator appeared first on The Mozilla Blog.

Niko MatsakisThe borrow checker within

This post lays out a 4-part roadmap for the borrow checker that I call “the borrow checker within”. These changes are meant to help Rust become a better version of itself, enabling patterns of code which feel like they fit within Rust’s spirit, but run afoul of the letter of its law. I feel fairly comfortable with the design for each of these items, though work remains to scope out the details. My belief is that a-mir-formality will make a perfect place to do that work.

Rust’s spirit is mutation xor sharing

When I refer to the spirit of the borrow checker, I mean the rules of mutation xor sharing that I see as Rust’s core design ethos. This basic rule—that when you are mutating a value using the variable x, you should not also be reading that data through a variable y—is what enables Rust’s memory safety guarantees and also, I think, contributes to its overall sense of “if it compiles, it works”.
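
To make the rule concrete, here is a minimal example (names mine, not from the post) of code the borrow checker rejects for violating mutation xor sharing:

fn main() {
    let mut v = vec![1, 2, 3];
    let y = &v[0];   // read `v`'s data through `y`...
    v.push(4);       // 💥 error[E0502]: cannot borrow `v` as mutable
    println!("{y}"); // ...because the shared borrow `y` is still live here
}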

Mutation xor sharing is, in some sense, neither necessary nor sufficient. It’s not necessary because there are many programs (like every program written in Java) that share data like crazy and yet still work fine1. It’s also not sufficient in that there are many problems that demand some amount of sharing – which is why Rust has “backdoors” like Arc<Mutex<T>>, AtomicU32, and—the ultimate backdoor of them all—unsafe.

But to me the biggest surprise from working on Rust is how often this mutation xor sharing pattern is “just right”, once you learn how to work with it2. The other surprise has been seeing the benefits over time: programs written in this style are fundamentally “less surprising” which, in turn, means they are more maintainable over time.

In Rust today though there are a number of patterns that are rejected by the borrow checker despite fitting the mutation xor sharing pattern. Chipping away at this gap, helping to make the borrow checker’s rules a more perfect reflection of mutation xor sharing, is what I mean by the borrow checker within.

I saw the angel in the marble and carved until I set him free. — Michelangelo

OK, enough inspirational rhetoric, let’s get to the code.

Ahem, right. Let’s do that.

Step 1: Conditionally return references easily with “Polonius”

Rust 2018 introduced “non-lexical lifetimes” — this rather cryptic name refers to an extension of the borrow checker so that it understood the control flow within functions much more deeply. This change made using Rust a much more “fluid” experience, since the borrow checker was able to accept a lot more code.

But NLL does not handle one important case3: conditionally returning references. Here is the canonical example, taken from Remy’s Polonius update blog post:

fn get_default<'r, K: Hash + Eq + Copy, V: Default>(
    map: &'r mut HashMap<K, V>,
    key: K,
) -> &'r mut V {
    match map.get_mut(&key) {
        Some(value) => value,
        None => {
            map.insert(key, V::default());
            //  ------ 💥 Gets an error today,
            //            but not with polonius
            map.get_mut(&key).unwrap()
        }
    }
}  

Remy’s post gives more details about why this occurs and how we plan to fix it. It’s mostly accurate except that the timeline has stretched on more than I’d like (of course). But we are making steady progress these days.

Step 2: A syntax for lifetimes based on places

The next step is to add an explicit syntax for lifetimes based on “place expressions” (e.g., x or x.y). I wrote about this in my post Borrow checking without lifetimes. This is basically taking the formulation that underlies Polonius and adding a syntax.

The idea would be that, in addition to the abstract lifetime parameters we have today, you could reference program variables and even fields as the “lifetime” of a reference. So you could write 'x to indicate a value that is “borrowed from the variable x”. You could also write 'x.y to indicate that it was borrowed from the field y of x, and even '(x.y, z) to mean borrowed from either x.y or z. For example:

struct WidgetFactory {
    manufacturer: String,
    model: String,
}

impl WidgetFactory {
    fn new_widget(&self, name: String) -> Widget {
        let name_suffix: &'name str = &name[3..];
                      // ------ borrowed from “name”
        let model_prefix: &'self.model str = &self.model[..2];
                       // ----------- borrowed from “self.model”
        ...
    }
}
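
The “or” form might look like this (a sketch of the proposed syntax, with a function of my own invention):

fn shorter(x: &str, z: &str) -> &'(x, z) str {
    // The returned reference is borrowed from either `x` or `z`;
    // today this requires a named lifetime parameter on both arguments.
    if x.len() <= z.len() { x } else { z }
}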

This would make many of the lifetime parameters we write today unnecessary. For example, the classic Polonius example where the function takes a parameter map: &mut HashMap<K, V> and returns a reference into the map can be written as follows:

fn get_default<K: Hash + Eq + Copy, V: Default>(
    map: &mut HashMap<K, V>,
    key: K,
) -> &'map mut V {
    //---- "borrowed from the parameter map"
    ...
}

This syntax is more convenient — but I think its bigger impact will be to make Rust more teachable and learnable. Right now, lifetimes are in a tricky place, because

  • they represent a concept (spans of code) that isn’t normal for users to think explicitly about and
  • they don’t have any kind of syntax.

Syntax is useful when learning because it allows you to make everything explicit, which is a critical intermediate step to really internalizing a concept — what boats memorably called the dialectical ratchet. Anecdotally I’ve been using a “place-based” syntax when teaching people Rust and I’ve found it is much quicker for them to grasp it.

Step 3: View types and interprocedural borrows

The next piece of the plan is view types, which are a way to have functions declare which fields they access. Consider a struct like WidgetFactory

struct WidgetFactory {
    counter: usize,
    widgets: Vec<Widget>,
}

…which has a helper function increment_counter

impl WidgetFactory {
    fn increment_counter(&mut self) {
        self.counter += 1;
    }
}

Today, if we want to iterate over the widgets and occasionally increment the counter with increment_counter, we will encounter an error:

impl WidgetFactory {
    fn increment_counter(&mut self) {...}
    
    pub fn count_widgets(&mut self) {
        for widget in &self.widgets {
            if widget.should_be_counted() {
                self.increment_counter();
                // ^ 💥 Can't borrow self as mutable
                //      while iterating over `self.widgets`
            }
        }    
    }
}

The problem is that the borrow checker operates one function at a time. It doesn’t know precisely which fields increment_counter is going to mutate, so it conservatively assumes that self.widgets may be changed, and that’s not allowed. There are a number of workarounds today, such as writing a “free function” that doesn’t take &mut self but rather takes references to the individual fields (e.g., counter: &mut usize), or even collecting those references into a “view struct” (e.g., struct WidgetFactoryView<'a> { widgets: &'a [Widget], counter: &'a mut usize }). But these workarounds are non-obvious, annoying, and non-local (they require changing significant parts of your code), as the sketch below shows.
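
For concreteness, here is a sketch of the free-function workaround (the helper name is mine). It compiles today because the disjoint field borrows now happen inside a single function body:

fn increment_counter_raw(counter: &mut usize) {
    *counter += 1;
}

impl WidgetFactory {
    pub fn count_widgets(&mut self) {
        for widget in &self.widgets {
            if widget.should_be_counted() {
                // Borrow only the field we need; the iteration over
                // `self.widgets` is unaffected.
                increment_counter_raw(&mut self.counter);
            }
        }
    }
}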

View types extend struct types so that instead of just having a type like WidgetFactory, you can have a “view” on that type that includes only a subset of the fields, like {counter} WidgetFactory. We can use this to modify increment_counter so that it declares that it will only access the field counter:

impl WidgetFactory {
    fn increment_counter(&mut {counter} self) {
        //               -------------------
        // Equivalent to `self: &mut {counter} WidgetFactory`
        self.counter += 1;
    }
}

This allows the compiler to compile count_widgets just fine, since it can see that iterating over self.widgets while modifying self.counter is not a problem.4
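
Putting the pieces together, the original example now compiles as written (still a sketch, since this syntax is proposed rather than implemented):

impl WidgetFactory {
    fn increment_counter(&mut {counter} self) {
        self.counter += 1;
    }

    pub fn count_widgets(&mut self) {
        for widget in &self.widgets {
            if widget.should_be_counted() {
                // OK: `increment_counter` declares it touches only
                // `counter`, so the borrow of `self.widgets` is undisturbed.
                self.increment_counter();
            }
        }
    }
}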

View types also address phased initialization

There is another place where the borrow checker’s rules fall short: phased initialization. Rust today follows the functional programming language style of requiring values for all the fields of a struct when it is created. Mostly this is fine, but sometimes you have structs where you want to initialize some of the fields and then invoke helper functions, much like increment_counter, to create the remainder. In this scenario you are stuck, because those helper functions cannot take a reference to the struct since you haven’t created the struct yet. The workarounds (free functions, intermediate struct types) are very similar.
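
A sketch of the situation (constructor and helper names are mine): the helper must become a free function taking individual values, because there is no self to borrow yet.

impl WidgetFactory {
    pub fn new() -> WidgetFactory {
        let counter = 0;
        // We would like to call a `&mut self` helper here to build
        // `widgets`, but `self` does not exist yet, so we fall back
        // to a free function:
        let widgets = build_widgets(counter);
        WidgetFactory { counter, widgets }
    }
}

fn build_widgets(_counter: usize) -> Vec<Widget> {
    // ...
    Vec::new()
}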

Start with private functions, consider scaling to public functions

View types as described here have limitations. Because the types involve the names of fields, they are not really suitable for public interfaces. They could also be annoying to use in practice, because sets of fields that go together would have to be manually copied and pasted between declarations. All of this is true, but I think it is something that can be addressed later (e.g., with named groups of fields).

What I’ve found is that the majority of times that I want to use view types, it is in private functions. Private methods often do little bits of logic and make use of the struct’s internal structure. Public methods in contrast tend to do larger operations and to hide that internal structure from users. This isn’t a universal law – sometimes I have public functions that should be callable concurrently – but it happens less.

There is also an advantage to the current behavior for public functions in particular: it preserves forward compatibility. Taking &mut self (versus some subset of fields) means that the function can change the set of fields that it uses without affecting its clients. This is not a concern for private functions.

Step 4: Internal references

Rust today cannot support structs whose fields refer to data owned by another field of the same struct. This gap is partially closed by crates like rental (no longer maintained), though more often such internal references are modeled with indices. We also have Pin, which covers the related (but even harder) problem of immobile data.
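
For contrast, the index-based modeling in common use today looks something like this (a sketch; the struct and field layout are mine):

use std::ops::Range;

struct IndexedMessage {
    text: String,
    // (name, value) byte ranges into `text`, standing in for references
    headers: Vec<(Range<usize>, Range<usize>)>,
    body: Range<usize>,
}

impl IndexedMessage {
    fn body(&self) -> &str {
        &self.text[self.body.clone()]
    }
}

This works, but the indices are unchecked at the type level: nothing stops a stale Range from outliving an edit to text.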

I’ve been chipping away at a solution to this problem for some time. I won’t be able to lay it out in full in this post, but I can sketch what I have in mind, and lay out more details in future posts (I have done some formalization of this, enough to convince myself it works).

As an example, imagine that we have some kind of Message struct consisting of a big string along with several references into that string. You could model that like so:

struct Message {
    text: String,
    headers: Vec<(&'self.text str, &'self.text str)>,
    body: &'self.text str,
}

This message would be constructed in the usual way:

let text: String = parse_text();
let (headers, body) = parse_message(&text);
let message = Message { text, headers, body };

where parse_message is some function like

fn parse_message(text: &str) -> (
    Vec<(&'text str, &'text str)>,
    &'text str
) {
    let mut headers = vec![];
    // ...
    (headers, body)
}

Note that Message doesn’t have any lifetime parameters – it doesn’t need any, because it doesn’t borrow from anything outside of itself. In fact, Message: 'static is true, which means that I could send this Message to another thread:

// A channel of `Message` values:
let (tx, rx) = std::sync::mpsc::channel();

// A thread to consume those values:
std::thread::spawn(move || {
    for message in rx {
        // `message` here has type `Message`
        process(message.body);
    }
});

// Produce them:
loop {
    let message: Message = next_message();
    tx.send(message);
}

How far along are each of these ideas?

Roughly speaking…

  • Polonius – ‘just’ engineering
  • Syntax – ‘just’ bikeshedding
  • View types – needs modeling, one or two open questions in my mind5
  • Internal references – modeled in some detail for a simplified variant of Rust, have to port to Rust and explain the assumptions I made along the way6

…in other words, I’ve done enough work to convince myself that these designs are practical, but plenty of work remains. :)

How do we prioritize this work?

Whenever I think about investing in borrow checker ergonomics and usability, I feel a bit guilty. Surely something so fun to think about must be a bad use of my time.

Conversations at RustNL shifted my perspective. When I asked people about pain points, I kept hearing the same few themes arise, especially from people trying to build applications or GUIs.

I now think I had fallen victim to the dreaded “curse of knowledge”, forgetting how frustrating it can be to run into a limitation of the borrow checker and not know how to resolve it.

Conclusion

This post proposes four changes attacking some very long-standing problems:

  • Conditionally returned references, solved by Polonius
  • No or awkward syntax for lifetimes, solved by an explicit lifetime syntax
  • Helper methods whose body must be inlined, solved by view types
  • Can’t “package up” a value and references into that value, solved by interior references

You may have noticed that these changes build on one another. Polonius remodels borrowing in terms of “place expressions” (variables, fields). This enables an explicit lifetime syntax, which in turn is a key building block for interior references. View types in turn let us expose helper methods that can operate on ‘partially borrowed’ (or even partially initialized!) values.

Why these changes won’t make Rust “more complex” (or, if they do, it’s worth it)

You might wonder about the impact of these changes on Rust’s complexity. Certainly they grow the set of things the type system can express. But in my mind they, like NLL before them, fall into that category of changes that will actually make using Rust feel simpler overall.

To see why, put yourself in the shoes of a user today who has written any one of the “obviously correct” programs we’ve seen in this post – for example, the WidgetFactory code we saw in view types. Compiling this code today gives an error:

error[E0502]: cannot borrow `*self` as mutable
              because it is also borrowed as immutable
  --> src/lib.rs:14:17
   |
12 | for widget in &self.widgets {
   |               -------------
   |               |
   |               immutable borrow occurs here
   |               immutable borrow later used here
13 |     if widget.should_be_counted() {
14 |         self.increment_counter();
   |         ^^^^^^^^^^^^^^^^^^^^^^^^
   |         |
   |         mutable borrow occurs here

Despite all our efforts to render it well, this error is inherently confusing. It is not possible to explain why WidgetFactory doesn’t work from an “intuitive” point-of-view because conceptually it ought to work, it just runs up against a limit of our type system.

The only way to understand why WidgetFactory doesn’t compile is to dive deeper into the engineering details of how the Rust type system functions, and that is precisely the kind of thing people don’t want to learn. Moreover, once you’ve done that deep dive, what is your reward? At best you can devise an awkward workaround. Yay 🥳.7

Now imagine what happens with view types. You still get an error, but now that error can come with a suggestion:

help: consider declaring the fields
      accessed by `increment_counter` so that
      other functions can rely on that
 7 | fn increment_counter(&mut self) {
   |                      ---------
   |                      |
   |      help: annotate with accessed fields: `&mut {counter} self`

You now have two choices. First, you can apply the suggestion and move on – your code works! Next, at your leisure, you can dig in a bit deeper and understand what’s going on. You can learn about the semver hazards that motivate an explicit declaration here.

Yes, you’ve learned a new detail of the type system, but you did so on your schedule and, where extra annotations were required, they were well-motivated. Yay 🥳!8

Reifying the borrow checker into types

There is another theme running through here: moving the borrow checker analysis out from the compiler’s mind and into types that can be expressed. Right now, all types always represent fully initialized, unborrowed values. There is no way to express a type that captures the state of being in the midst of iterating over something or having moved one or two fields but not all of them. These changes address that gap.9

This conclusion is too long

I know, I’m like Peter Jackson trying to end “The Return of the King”, I just can’t do it! I keep coming up with more things to say. Well, I’ll stop now. Have a nice weekend y’all.


  1. Well, every program written in Java does share data like crazy, but they do not all work fine. But you get what I mean. ↩︎

  2. And I think learning how to work with mutation xor sharing is a big part of what it means to learn Rust. ↩︎

  3. NLL as implemented, anyway. The original design was meant to cover conditionally returning references, but the proposed type system was not feasible to implement. Moreover, and I say this as the one who designed it, the formulation in the NLL RFC was not good. It was mind-bending and hard to comprehend. Polonius is much better. ↩︎

  4. In fact, view types will also allow us to implement the “disjoint closure capture” rules from RFC 2229 in a more efficient way. Currently a closure using self.widgets and self.counter will store 2 references, kind of an implicit “view struct”. Although we found this doesn’t really affect much code in practice, it still bothers me. With view types they could store 1. ↩︎

  5. To me, the biggest open question for view types is how to accommodate “strong updates” to types. I’d like to be able to do let mut wf: {} WidgetFactory = WidgetFactory {} to create a WidgetFactory value that is completely uninitialized and then permit writing (for example) wf.counter = 0. This should update the type of wf to {counter} WidgetFactory. Basically I want to link the information found in types with the borrow checker’s notion of what is initialized, but I haven’t worked that out in detail. ↩︎

  6. As an example, to make this work I’m assuming some kind of “true deref” trait that indicates that Deref yields a reference that remains valid even as the value being deref’d moves from place to place. We need a trait much like this for other reasons too. ↩︎

  7. That’s a sarcastic “Yay 🥳”, in case you couldn’t tell. ↩︎

  8. This “Yay 🥳” is genuine. ↩︎

  9. I remember years ago presenting Rust at some academic conference and a friendly professor telling me, “In my experience, you always want to get that state into the type system”. I think that professor was right, though I don’t regret not prioritizing it (always a million things to do, better to ask what is the right next step now than to worry about what step might’ve been better in the past). Anyway, I wish I could remember who that was! ↩︎

Don Marticheese or woodstain?

It has come to my attention that any blog that mentions advertising must do a post including the expression Does Exactly What it Says on the Tin, so here is mine. Following up on the 30-40-30 rule, why are some people so fired up about personalized advertising, while others aren’t? Maybe it goes back to what kind of shopping use cases they’re optimizing for.

Phillip Nelson, in Advertising as Information, divides brand qualities into search qualities and experience qualities. A search quality is something you can check before buying the product, like tasting a sample of cheese. An experience quality is something you have to spend more time figuring out, like seeing if your woodstain dries in the time printed on the tin. Shopping for cheese and shopping for woodstain are a lot different.

Cheese shopping

  • Cheeses are similar as far as nutrition goes, so picking one is a matter of personal preference.

  • Cheese is easy to evaluate at the point of purchase. Mmm, sample cheese on a toothpick.

  • My own cheese-tasting palate is a better guide for me than the opinions of a cheese expert.

  • Cost of a mistake is low.

  • Top priority: getting the best-matched product among a set of alternatives in a narrow quality range.

Woodstain shopping

  • Has quality metrics that do not differ from person to person.

  • Hard to evaluate at the point of purchase. You have to do your project and wait for it to dry (or not?)

  • The knowledge of a woodstain expert is more valuable to me than how I might feel about a certain brand of woodstain at the hardware store.

  • Cost of a mistake is high.

  • Top priority: avoiding a low-quality or deceptively sold product.

If you’re shopping for parts to build a PC, the mouse is cheese, the power supply is woodstain, and the video card is somewhere in the middle. If you’re buying a car, or a bike, or a pair of boots, it kind of depends on the ratio of your net worth and your budget for the item. Buyers who have a lot of money relative to the price of the product are more likely to be buying cheese, buyers who are sinking a lot of their assets into the purchase are buying woodstain.

Andrew Chen says that AI will reinvent marketing because it makes it possible to do a personalized, automated sales call for every possible purchase. The cost of personalization relative to the cost of the actual product goes down. This might be great for cheese shoppers. Imagine an AI that understands my cheese palate so well that it will suggest the yummiest possible cheese for me, every time. But automated, personalized communications sound terrible for woodstain shoppers. When it’s harder to evaluate the product, personalization just facilitates more kinds of deception, and public, signaling-based advertising might be more appropriate.

Related

privacy economics sources

improving web advertising

you’re on a customer journey, they’re on a marketer journey

Bonus links

Behind the Blog: Google’s Excuses and Facing Reality (fwiw, New Coke won a scientific taste test, too.)

‘Know Your Customer’ Law Comes For Ad Data Licensors (this #federalPrivacyLaw might be a bigger deal than it looks like. If some North Korean spies get busted for something else, and the FBI finds records of a company’s data sales/sharing to a North Korean owned shell company, they could be in big trouble. Publicly traded companies will have to disclose this as a risk, and put more gatekeeping around data sales/sharing.)

Washington State’s My Health My Data Act (coming into effect this summer, another area of compliance costs and friction. Get those Meta Pixels off your health-related content, or learn how good paralegals are at browser dev tools and word processor mail merge.)

The Mozilla BlogCassidoo, meme-maker and software developer, on her corner of the internet

A woman smiling against a blue and pink backdrop. Cassidy Williams is a Chicago-based software developer building the AI-powered talk-out-loud app Brainstory at Contenda.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and what reclaiming the internet really looks like.

This month we chat with Cassidy Williams, known as Cassidoo on X and TikTok, and the CTO of the AI company Contenda. We talk about the forums that shaped her career, building community online and off, and her favorite subreddit for niche drama.

What is your favorite corner of the internet? 

I have a Discord group through my Patreon [and] Twitch that I originally made for resume reviews and interview prep, but it’s turned into my absolute favorite spot to just chat with friends on the daily! I think in the pandemic it was a really good “third place” for myself and a bunch of other folks and we’ve become really good friends talking about tech but also just helping each other out and learning together!

What is an internet deep dive that you can’t wait to jump back into?

There’s a subreddit called r/HobbyDrama where folks share detailed stories about hobby communities and the drama that happens within them, and I absolutely love it. It’s usually really specific hobbies and communities that I rarely knew existed, sharing dramatic stories of people being jerks, or some twist of fate changing things, or something along those lines. So many times I’ve ended up going deep into learning about a hobby purely because I have a bunch of specific information now that makes it more entertaining!

What is the one tab you always regret closing?

The MDN docs! I feel like for all the years I’ve been a developer, I always find myself checking on specific syntax there.

What can you not stop talking about on the internet right now?

Walkability and mixed-use housing. I have been going *very* deep on that lately because I think countries outside of the U.S. are pretty good at building community outside of individual homes by not being too car-centric and by having it be the norm to walk everywhere, have easy access to public transit, and live in a place that has everything really close by. I sincerely think that it would improve nearly everything about our country in general if we focused on that more, and… I will not shut up about it, ha!

What was the first online community you engaged with?

Waaaay back in the early-to-mid 2000s, I was really active on some forums and message boards that taught me a ton about web design and tech in general. I don’t even remember how I initially discovered them, and most of them aren’t on the internet anymore, but those early forums of folks sharing knowledge totally changed the trajectory of my future career!

Also… Neopets, heh.

If you could create your own corner of the internet, what would it look like?

Outside of the Discord group I mentioned, I might have more of a content-sharing hub. I think we’re in a point on the internet where folks are very scattered, not all on Twitter, not all on Facebook, not all on Instagram, etc, and I would love to have a hub or feed of folks sharing with each other. RSS does fill that gap, a bit, so maybe a combo of chat + RSS? It sounds very old school!

What articles and/or videos are you waiting to read/watch right now?

The Manifesto for a Humane Web, and Maya Rudolph’s SNL episode!

In a recent blog post you compared living in Chicago to the internet of the past, where you made random but lasting friendships. What parts of the internet now make you optimistic about its future?

I do think that Discord servers right now are the closest things I’ve seen to tight-knit communities like that. Also, I *love* the series People & Blogs, where I’ve learned a ton about cool topics and writers I didn’t know before, and also the software and content from the folks at Good Enough!


Cassidy Williams is a Chicago-based software developer building the AI-powered talk-out-loud app Brainstory at Contenda. She’s also a startup advisor and investor, developer experience expert, and meme-maker on the internet. She enjoys building mechanical keyboards, playing music and teaching in her free time. You can subscribe to her newsletter about the world of web development and play her word game, Jumblie.


The post Cassidoo, meme-maker and software developer, on her corner of the internet appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgExperimenting with local alt text generation in Firefox Nightly

As discussed on Mozilla Connect, Firefox 130 will introduce an experimental new capability to automatically generate alt-text for images using a fully private on-device AI model. The feature will be available as part of Firefox’s built-in PDF editor, and our end goal is to make it available in general browsing for users with screen readers.

Why alt text?

Web pages have a fundamentally simple structure, with semantics that allow the browser to interpret the same content differently for different people based on their own needs and preferences. This is a big part of what we think makes the Web special, and what enables the browser to act as a user agent, responsible for making the Web work for people.

This is particularly useful for assistive technology such as screen readers, which are able to work alongside browser features to reduce obstacles for people to access and exchange information. For static web pages, this generally can be accomplished with very little interaction from the site, and this access has been enormously beneficial to many people.

But even for a simple static page there are certain types of information, like alternative text for images, that must be provided by the author to provide an understandable experience for people using assistive technology (as required by the spec). Unfortunately, many authors don’t do this: the Web Almanac reported in 2022 that nearly half of images were missing alt text.

Until recently it has not been feasible for the browser to infer reasonably high-quality alt text for images without sending potentially sensitive data to a remote server. However, recent developments in AI have enabled this type of image analysis to happen efficiently, even on a CPU.

We are adding a feature within the PDF editor in Firefox Nightly to validate this approach. As we develop it further and learn from the deployment, our goal is to offer it for users who’d like to use it when browsing to help them better understand images which would otherwise be inaccessible.

Generating alt text with small open source models

We are using Transformer-based machine learning models to describe images. These models are getting good at describing the contents of an image, yet are compact enough to operate on devices with limited resources. While they can’t outperform a large language model like GPT-4 Turbo with Vision, or LLaVA, they are sufficiently accurate to provide valuable insights on-device across a diversity of hardware.

Model architectures like BLIP or even ViT that were trained on datasets like COCO (Common Objects in Context) or Flickr30k are good at identifying objects in an image. When combined with a text decoder like OpenAI’s GPT-2, they can produce alternative text with 200M or fewer parameters. Once quantized, these models can be under 200MB on disk and run in a couple of seconds on a laptop – a big reduction compared to the gigabytes and resources an LLM requires.

Example Output

The image below (pulled from the COCO dataset) is described by:

  • FIREFOX – our 182M parameters model using a Distilled version of GPT-2 alongside a Vision Transformer (ViT) image encoder.
  • BASELINE MODEL – a slightly bigger ViT+GPT-2 model
  • HUMAN TEXT – the description provided by the dataset annotator.

A person is standing in front of a cake with candles.

Both small models lose accuracy compared to the description provided by a person, and the baseline model is confused by the position of the hands. The Firefox model does slightly better in that case, and captures what is important.

What matters can be subjective in any case. Notice how the person did not write about the office setting or the cherries on the cake, and specified that the candles were long.

If we run the same image on a model like GPT-4o, the results are extremely detailed:

The image depicts a group of people gathered around a cake with lit candles. The focus is on the cake, which has a red jelly topping and a couple of cherries. There are several lit candles in the foreground. In the background, there is a woman smiling, wearing a gray turtleneck sweater, and a few other people can be seen, likely in an office or indoor setting. The image conveys a celebratory atmosphere, possibly a birthday or a special occasion.

But such level of detail in alt text is overwhelming and doesn’t prioritize the most important information. Brevity is not the only goal, but it’s a helpful starting point, and pithy accuracy in a first draft allows content creators to focus their edits on missing context and details.

So if we ask the LLM for a one-sentence description, we get:

A group of people in an office celebrates with a lit birthday cake in the foreground and a smiling woman in the background.

This has more detail than our small model, but can’t be run locally without sending your image to a server.

Small is beautiful

Running inference locally with small models offers many advantages:

  1. Privacy: All operations are contained within the device, ensuring data privacy. We won’t have access to your images, PDF content, generated captions, or final captions. Your data will not be used to train the model.
  2. Resource Efficiency: Small models eliminate the need for high-powered GPUs in the cloud, reducing resource consumption and making it more environmentally friendly.
  3. Increased Transparency: In-house management of models allows for direct oversight of the training datasets, offering more transparency compared to some large language models (LLMs).
  4. Carbon Footprint Monitoring: Training models in-house facilitates precise tracking of CO2 emissions using tools such as CodeCarbon.
  5. Ease of Improvement: Since retraining can be completed in less than a day on a single piece of hardware, it allows for frequent updates and enhancements of the model.

Integrating Local Inference into Firefox

Extending the Translations inference architecture

Firefox Translations uses the Bergamot project powered by the Marian C++ inference runtime. The runtime is compiled into WASM, and there’s a model file for each translation task.

For example, if you run Firefox in French and visit an English page, Firefox will ask if you want to translate it to French and download the English-to-French model (~20MiB) alongside the inference runtime. This is a one-shot download: translations will happen completely offline once those files are on disk.

The WASM runtime and models are both stored in the Firefox Remote Settings service, which allows us to distribute them at scale and manage versions.

The inference task runs in a separate process, which prevents the browser or one of its tabs from crashing if the inference runtime crashes.

ONNX and Transformers.js

We’ve decided to embed the ONNX runtime in Firefox Nightly along with the Transformers.js library to extend the translation architecture to perform different inference work.

Like Bergamot, the ONNX runtime has a WASM distribution and can run directly in the browser. The ONNX project has recently introduced WebGPU support, which will eventually be activated in Firefox Nightly for this feature.

Transformers.js provides a JavaScript layer on top of the ONNX inference runtime, making it easy to add inference for a huge list of model architectures. The API mimics the very popular Python transformers library. It does all the tedious work of preparing the data that is passed to the runtime and converting the output back to a usable result. It also deals with downloading models from Hugging Face and caching them.

From the project’s documentation, this is how you can run a sentiment analysis model on a text:

import { pipeline } from '@xenova/transformers';

// Allocate a pipeline for sentiment-analysis
let pipe = await pipeline('sentiment-analysis');
let out = await pipe('I love transformers!');

// [{'label': 'POSITIVE', 'score': 0.999817686}]

Using Transformers.js gives us confidence when trying out a new model with ONNX. If its architecture is listed in the Transformers.js documentation, that’s a good indication it will work for us.

To vendor it into Firefox Nightly, we’ve slightly changed its release to distribute ONNX separately from Transformers.js, dropped Node.js-related pieces, and fixed those annoying eval() calls the ONNX library ships with. You can find the build script here which was used to populate that vendor directory.

From there, we reused the Translation architecture to run the ONNX runtime inside its own process, and have Transformers.js run with a custom model cache system.

Model caching

The Transformers.js project can use local and remote models and has a caching mechanism using the browser cache. Since we are running inference in an isolated web worker, we don’t want to provide access to the file system or store models inside the browser cache. We also don’t want to use Hugging Face as the model hub in Firefox, and want to serve model files from our own servers.

Since Transformers.js provides a callback for a custom cache, we have implemented a specific model caching layer that downloads files from our own servers and caches them in IndexedDB.

As the project grows, we anticipate the browser will store more models, which can take up significant space on disk. We plan to add an interface in Firefox to manage downloaded models so our users can list them and remove some if needed.

Fine-tuning a ViT + GPT-2 model

Ankur Kumar released a popular model on Hugging Face to generate alt text for images and blogged about it. This model was also published as ONNX weights by Joshua Lochner so it could be used in Transformers.js; see https://huggingface.co/Xenova/vit-gpt2-image-captioning.

The model does a good job – even if in some cases we had better results with https://huggingface.co/microsoft/git-base-coco – but the GIT architecture is not yet supported by ONNX converters, and with less than 200M params, most of the accuracy is obtained by focusing on good training data. So we picked ViT for our first model.

Ankur used the google/vit-base-patch16-224-in21k image encoder and the GPT-2 text decoder and fine-tuned them using the COCO dataset, which is a dataset of over 120k labeled images.

In order to reduce the model size and speed it up a little bit, we’ve decided to replace GPT-2 with DistilGPT-2 — which is 2 times faster and 33% smaller according to its documentation.

Using that model in Transformers.js gave good results (see the training code at https://github.com/mozilla/distilvit).

We further improved the model for our use case with an updated training dataset and some supervised learning to simplify the output and mitigate some of the biases common in image to text models.

Alt text generation in PDF.js

Firefox can add an image to a PDF using our popular open source pdf.js library:

A screenshot of the PDF.js alt text modal window

Starting in Firefox 130, we will automatically generate alt text and let the user validate it. Every time an image is added, we get an array of pixels that we pass to the ML engine, and a few seconds later we get a string corresponding to a description of the image (see the code).

The first time the user adds an image, they’ll have to wait a bit for the model to download (which can take up to a few minutes depending on your connection), but subsequent uses will be much faster since the model will be stored locally.

In the future, we want to be able to provide alt text for any existing image in PDFs, except images that just contain text (as is usually the case for PDFs containing scanned books).

Next steps

Our alt text generator is far from perfect, but we want to take an iterative approach and improve it in the open. The inference engine has already landed in Firefox Nightly as a new ml component along with an initial documentation page.

We are currently working on improving the image-to-text datasets and model with what we’ve described in this blog post, which will be continuously updated on our Hugging Face page.

The code that produces the model lives on GitHub at https://github.com/mozilla/distilvit, and the web application we’re building for our team to improve the model is located at https://github.com/mozilla/checkvite. We want to make sure the models and datasets we build, and all the code used, are made available to the community.

Once the alt text feature in PDF.js has matured and proven to work well, we hope to make the feature available in general browsing for users with screen readers.

The post Experimenting with local alt text generation in Firefox Nightly appeared first on Mozilla Hacks - the Web developer blog.

Mozilla ThunderbirdThunderbird Monthly Development Digest: May 2024

Graphic with text "Thunderbird Dev Digest May 2024," featuring abstract ASCII art of a dark Thunderbird logo background.

Hello Thunderbird Community!

We’re tossing May behind our shoulders, which means we’re in the final sprint before the next ESR (Extended Support Release). During the next couple of weeks you can expect some official communication on all the things that are going in the next major release of Thunderbird. Until then, here are some appetizers on our most recent efforts.

Rust-enabled builds

Our build and release team is working hard to ship Rust-enabled builds by default. The first beta version of 128 will ship with Rust enabled by default, which will allow all of you to test experimental features without needing to compile the code locally.

Microsoft Exchange support

We’re very very very close!

So far we have the main flow completed, and we’re able to set up an account, fetch folders, fetch messages, and display messages. We’re finalizing the outgoing flow in order to send messages, and after that we will start an audit to ensure that all the usual features you expect from interacting with your email are working.

Expect some future calls to action to test things, and invites to switch the experimental pref ON.

Native Linux system tray support

Enabling Rust builds in Thunderbird also gives us the ability to implement some long awaited features much faster. We’re still testing and cleaning things up, but if you’re adventurous you can check out our GitHub repositories for Linux System Tray and DBus hooks and run them locally.

Folder multi-selection

Folder pane multi-selection is almost complete and should land soon. There are still some rough edges we need to tackle, mostly due to some C++ code not liking copy/move and undo actions across multiple folders, but we’re confident that we will have this done before the end of June.

You can check the code and follow the progress here.

Account color customization

Another requested feature we’re aiming to ship in 128 is the customization of account colors. This is the first patch of an upcoming stack that will add some nice visual cues in the message list and the compose window for users with multiple accounts.

Folder compaction

We shared this in our Daily mailing list, but in case you missed it, we rebuilt the Folder Compaction code from scratch. This should potentially solve all the issues of profiles bubbling up in size, or compact operations silently failing and piling up on each other.

These changes should be uplifted to Beta soon. Please test it as much as possible and report any bugs as soon as you encounter them.

Native Windows notifications

Another important achievement was the ability to completely support native Windows 10/11 notifications and make them fully functional.

You can already consume this feature on Daily, and moving forward Thunderbird will be using native OS notifications by default.

We plan to add some nice quick actions and improve the usefulness of native notifications in the future, so stay tuned!


As usual, if you want to see things as they land you can always check the pushlog and try running Daily, which would be immensely helpful for catching bugs early.

See ya next month.

Alessandro Castellani (he, him)
Director, Desktop and Mobile Apps

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: May 2024 appeared first on The Thunderbird Blog.

The Mozilla BlogHere’s what we’re working on in Firefox

Last week we shared a number of updates with our community of users, and now we want to share them here:

At Mozilla, we work hard to make Firefox the best browser for you. That’s why we’re always focused on building a browser that empowers you to choose your own path, that gives you the freedom to explore without worry or compromises. We’re excited to share more about the updates and improvements we have in store for you over the next year.

Bringing you the features you’ve been asking for

We’ve been listening to your feedback, and we’re prioritizing the features you want most.

  • Productivity boosters like
    • Tab Grouping, Vertical Tabs, and our handy Sidebar will help you stay organized no matter how many tabs you have open — whether it’s 7 or 7,500. 
    • Plus, our new Profile Management system will help keep your school, work, and personal browsing separate but easily accessible. 
  • Customizable new tab wallpapers that will let you choose from a diverse range of photography, colors, and abstract images that suits you most. 
  • Intuitive privacy settings that deliver all the power of our world-class anti-tracking technologies in a simplified, easy-to-understand way.
  • More streamlined menus that reduce visual clutter and prioritize top user actions so you can get to the important things quicker.

Continuous work on speed, performance and compatibility

Speed is everything when you’re online, so we’re continuing to work hard to make Firefox as fast and efficient as possible. You can expect even faster, smoother browsing on Firefox, thanks to quicker page loads and startup times – all while saving more of your phone’s battery life. We’ve already improved responsiveness by 20 percent as measured by Speedometer 3, a collaboration we’ve spearheaded with other leading tech companies. And in that collaborative spirit, we’re also working with the Interop project to make it easy for people to build sites that work great across all browsers. We value your support in our efforts to improve cross-browser compatibility which is why we’ve added new features to easily report when websites aren’t working quite right; this feedback is critical as we look to address even small functionality issues that affect your day-to-day online experience.

Making the most of your time online — without sacrifice

Ensuring your privacy is core to everything we do at Firefox. Unlike other companies, who ask you to exchange your data in order to do even basic, everyday things online — you don’t have to give up your personal information to get a faster, more efficient browser experience with Firefox. Reading a news story in a different language or signing a form for school or work shouldn’t require you to give up your privacy. So, we’ve worked hard to make things like translation and PDF editing in Firefox happen locally on your device, so you don’t have to ship off your personal data to a server farm for a company to use it how they see fit — to keep tabs on you, sell your information to the highest bidder, or train their AI. With Firefox, you have a lot of choice — but you don’t have to choose between utility and privacy. Your data is secure, and most importantly, just yours.

We are approaching the use of AI in Firefox — which many, many of you have been asking about — in the same way. We’re focused on giving you AI features that solve tangible problems, respect your privacy, and give you real choice.

We’re looking at how we can use local, on-device AI models – i.e., more private – to enhance your browsing experience further. One feature we’re starting with next quarter is AI-generated alt text for images inserted into PDFs, which makes them more accessible to visually impaired users and people with learning disabilities.

Join us on this journey

Our progress is driven by a vibrant community of users and developers like you. We encourage you to contribute to our open-source projects and to engage with us on Mozilla Connect or Discourse, and check out our recent AMA on Reddit. Your participation is crucial in shaping what Firefox becomes next.


The post Here’s what we’re working on in Firefox appeared first on The Mozilla Blog.

Mozilla ThunderbirdMaximize Your Day: Time Blocking with Thunderbird

This might be unexpected coming from an email app developer, but hear us out: we want you to spend the least amount of time possible in your inbox.

The Thunderbird Team wants to help you manage your most precious, nonrenewable resource: your time. This post kicks off a series of Thunderbird tips and tricks focusing on our favorite time management and productivity advice.

When we asked Director of Product, Ryan Sipes, what time management strategy he wanted to share first, he said time blocking. Time blocking is a favorite of author and productivity guru Cal Newport. This technique schedules your entire day to minimize focus-stealing activities and maximize deep work that requires your full attention.

This sounds daunting, but we’re on this productivity journey with you! All you need to start is your calendar or planner, whether it’s in Thunderbird, another app, or using pen and paper. Personally, we’re fans of having our notebook and laptop on hand when we schedule our day ahead.

Don’t worry that an all-day schedule won’t leave time for fun or impromptu plans. You’ll be more present when you’re off work, whether it’s cherished time with loved ones or working on that novel you always wanted to write. And since you’re adjusting your schedule as you go, you’ll be able to add plans without overwhelming yourself.

With that, let’s get started time blocking with Thunderbird!

Get Your Calendars in One Place

First, have all your calendars (work, personal, school, etc.) in one place. Thunderbird can combine online calendars from different accounts for you (and let you customize their colors)! This SUMO article explains — with screenshots — how to add your calendars and create new ones.

Suggestions for Getting Started

Example of a time-blocked Thunderbird Calendar. For other examples, see Todoist.

It’s hard to change some fixed blocks of time, like team meetings or scheduled personal obligations. But the spaces in between are a blank canvas waiting to be filled with everything you need and want to do. How you fill them is up to you, but here are some suggestions for time blocking with Thunderbird:

  1. Know when you do your best work. If you can focus more in the morning, or after a 30-minute walk, schedule blocks of deep work around that. If you don’t know when you do your best work, observe how you work for a week or two! Take notes on your energy and focus levels during the day.
  2. Use your professional and personal priorities to fill out your time blocks. Whether your planning exists in project management software or handwritten notes, identify your urgent and important tasks that need your focus and time.
  3. Use breaks between longer blocks for less urgent and potentially distracting tasks like checking your email or catching up on chats. If you limit the amount of time you spend on these tasks (a technique known as time boxing), and minimize or turn off their notifications, your day becomes a lot more productive.
  4. Adjust your schedule whenever you need it, not just at the end of the day or week. Move blocks, shorten or lengthen them, etc. As you learn to be more aware of how you use your time, you’ll become better at estimating how long you need for tasks, and the best time of day to do them.

Time Blocking and Beyond

Thanks for joining us for this first productivity newsletter. We hope this post and the ones to come help you reclaim the time for things you need and want to do. Next month we’ll share more advice, and techniques you can use in Thunderbird to maximize your valuable time.

We’re on this journey with you, learning new skills and working them into our lives until they become habits. Making changes, even for the better, is hard. If a day or two or seven go by and you’re losing track of your time again, it’s okay. Make this the cue to start your productivity training montage, and let us be the awesome 80s rock soundtrack to support you.

Until next time, stay productive!

The post Maximize Your Day: Time Blocking with Thunderbird appeared first on The Thunderbird Blog.

The Servo BlogThis month in Servo: new CSS units, color emoji, servoshell, and more!

servoshell with three boxes arranged around a large water radical (水), each the same size as that character by being 1ic × 1ic. ‘ic’ units are now supported.

Servo now supports several CSS features in its nightly builds:

  • as of 2024-04-29, ‘start’, ‘end’, and ‘space-evenly’ values in ‘align-content’ and ‘justify-content’ (@nicoburns, #31724), in flexbox layouts when the experimental feature is enabled (--pref layout.flexbox.enabled)
  • as of 2024-04-30, ‘white-space-collapse’, ‘text-wrap-mode’, and the new ‘white-space’ shorthand (@Loirooriol, #32146)
  • as of 2024-05-03, ‘ch’ and ‘ic’ font-relative units (@andreubotella, #32171)
  • as of 2024-05-19, basic support for ‘border-collapse’ (@mrobinson, @Loirooriol, #32309)
  • as of 2024-05-22, ‘empty-cells’ (@Loirooriol, #32331)
  • as of 2024-05-22, ‘visibility: collapse’ on table parts (@Loirooriol, @mrobinson, #32333)
Two pixel art smileys made from table cells, one with magenta eyes and background. Left: ‘empty-cells: show’. Right: ‘empty-cells: hide’.

Several DOM properties are now accessible, which should improve compatibility with scripts even though their effects are not yet implemented:

We’ve also landed the first patch towards making Servo’s event loop comply with the HTML spec (@gterzian, #31505). This will hopefully address some complex timing issues between the renderer and other kinds of tasks like requestAnimationFrame and ResizeObserver callbacks.

Together with correct sizing for floating tables (@Loirooriol, #32150) and empty list items (@mrobinson, @Loirooriol, #32152), as well as correct ‘line-height’ based on the first font (@mrobinson, #32165), Servo has made some big strides in the Web Platform Tests this month:

  • 90.8% (+1.6pp) in the CSS2 floats tests
  • 68.7% (+5.7pp) in the CSS2 and CSS tables tests
  • 53.3% (+4.0pp) in the CSS text tests
  • 48.8% (+3.3pp) in the CSS position tests

Font system changes

          sbix   CBDT   COLR
Windows   ❌     ❌     ❌
macOS     ✅     ❌     ❌
Linux     ❌     ✅     ❌
Overview of Servo’s current color emoji support by format and platform.

Servo now supports the ‘font-weight’, ‘font-style’, ‘font-stretch’, and ‘unicode-range’ descriptors in @font-face, correctly matching fonts split by ‘unicode-range’ (@mrobinson, @mukilan, #32164) and correctly selecting the nearest weights and styles (@mrobinson, @mukilan, #32366).

We also now support font fallback on OpenHarmony (@jschwe, #32141), and bitmap color emoji on Linux and macOS (@mrobinson, #32203, #32278). Note that the layered COLR format is not yet supported, and that on macOS, we currently only support sbix (like in Apple Color Emoji), not CBDT (like in Noto Color Emoji).

Our font system rework continues, saving up to 40 MB of memory when loading servo.org by sharing font data and metadata across threads (@mrobinson, @mukilan, #32205). We’ve fixed a bug where web fonts in one document can clobber fonts with the same name in other documents (@mrobinson, @mukilan, #32303), and a bug where the font cache leaks unused web fonts (@mrobinson, @mukilan, #32346).

servoshell changes

servoshell now shows the URL of hovered links near the bottom of the window.

servoshell now handles all known keycodes, passing them to Servo where appropriate (@Nopey, #32228), goes back and forward when pressing the mouse side buttons (@Nopey, #32283), and shows the link URL in a status tooltip when hovering over links (@iterminatorheart, @atbrakhi, #32011).

Adding support for the mouse side buttons required a winit upgrade, but we ultimately ended up embarking on a three-month overhaul to upgrade a bunch of other deps (@Nopey, @mrobinson, #31278), including egui, glow, nix, raqote, font-kit, harfbuzz-sys, core-graphics, core-text, raw-window-handle, and jni (@delan, @mrobinson, @mukilan, #32216)!

This in turn involved upgrading those deps in surfman (@Nopey, surfman#275, surfman#280, surfman#283), font-kit (@Nopey, font-kit#234), and webrender (@Nopey, webrender#4838), as well as several improvements contributed upstream.

Other changes

Servo for Android now builds on aarch64 (@mukilan, #32137), no longer crashes on startup (@mukilan, #32273), and now supports the SpiderMonkey JIT on 64-bit builds (@mukilan, #31134).

Servo should no longer cause intermittent errors and panics when exiting (@mrobinson, #32207), and ShowWebView no longer fails if sent too quickly after a webview is created (@wusyong, #32163).

We’ve also landed several dev changes:

  • You can now pass --skip-platform to mach bootstrap to install taplo and crown only (@mrobinson, #32176)
  • mach build no longer fails on Windows due to STATUS_DLL_NOT_FOUND in crown (@sagudev, #32301)
  • mach build no longer fails on Windows Server 2019 due to UnsupportedPlatform in notifypy (@delan, #32352)

Donations

Thanks again for your generous support! We are now receiving 1630 USD/month (+20.9% over April) in recurring donations. We are still receiving donations from 15 people on LFX, and we’re working on transferring the balance to our new fund, but we will stop accepting donations there soon — please move your recurring donations to GitHub or Open Collective.

As always, use of these funds will be decided transparently in the Technical Steering Committee. Our first proposal hopes to spend some of these funds on a dedicated CI server, which should make tryjobs and merge builds much faster!

We’ve also updated our Sponsorship page with advice about how to make your donations most effective. In short, donating via GitHub Sponsors is the best option, with 96% of the amount going to Servo in almost all cases. Donations on Open Collective give Servo around 80% to 90%, depending on the amount and payment method.


Conferences and events

Recordings are now available for three recent talks about Servo.

We’ll also be running a Servo breakout session at the Web Engines Hackfest 2024 in A Coruña on 4 June at 15:00 local time (13:00 UTC). Remote participation is welcome!

This Week in Rust: This Week in Rust 549

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request.

Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is pulso, a simple metrics collector for TCP/IP.

Thanks to guapodero for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

397 pull requests were merged in the last week

Rust Compiler Performance Triage

A relatively quiet week, with few large changes, the largest driven by further increasing the scope of unsafe precondition checking.

Triage done by @simulacrum. Revision range: 1d0e4afd..a59072ec

2 Regressions, 3 Improvements, 5 Mixed; 3 of them in rollups. 51 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team RFCs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline RFCs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-05-29 - 2024-06-26 🦀

Virtual
Africa
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I’ve said it before and I’ll say it again: as a child of OCaml and C++, Rust currently is the best language for production compiler-shaped things.

Alex Kladov on lobste.rs

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Localization (L10N): Transforming Translations: How LLMs Can Help Improve Mozilla’s Pontoon

A futuristic AI-generated city. Image generated by DALL-E 3.

Imagine a world where language barriers do not exist; a tool so intuitive that it can understand the subtleties of every dialect and the jargon of any industry.

While we’re not quite there yet, advancements in Large Language Models (LLMs) are bringing us closer to this vision.

What are LLMs: Beyond the Buzz

2024 is buzzing with talk about “AI,” but what does it actually mean? Artificial Intelligence, especially LLMs, isn’t just a fad — it’s a fundamental shift in how we interface with technology. You’ve likely interacted with AI without even realizing it — when Google auto-completes your searches, when Facebook suggests who to tag in a photo, or when Netflix recommends what you should watch next.

LLMs are a breed of AI designed to understand and generate human language by analyzing vast amounts of text. They can compose poetry, draft legal agreements, and yes, translate languages. They’re not just processing language; they’re understanding context, tone, and even the subtext of what’s being written or said.

The Evolution of Translation: From Machine Translation to LLMs

Remember the early days of Google Translate? You’d input a phrase in English and get a somewhat awkward French equivalent. This was typical of statistical machine translation, which relied on vast amounts of bilingual text to make educated guesses. It was magic for its time, but it was just the beginning.

As technology advanced, we saw the rise of neural machine translation, which used AI to better understand context and nuance, resulting in more accurate translations. However, even these neural models have their limitations.

Enter LLMs, which look at the big picture, compare multiple interpretations, and can even consider cultural nuances before suggesting a translation.

Pontoon: The Heart of Mozilla’s Localization Efforts

Pontoon isn’t just any translation tool; it’s the backbone of Mozilla’s localization efforts, where a vibrant community of localizers breathes life into strings of text, adapting Mozilla’s products for global audiences. However, despite integrating various machine translation sources, these tools often struggle with capturing the subtleties essential for accurate translation.

How do we make localizers’ jobs easier? By integrating LLMs to assist not just in translating text but in understanding the spirit of what’s being conveyed. And crucially, this integration doesn’t replace our experienced localizers who supervise and refine these translations; it supports and enhances their invaluable work.

Leveraging Research: Making the Case for LLMs

Our journey began with a question: How can we enhance Pontoon with the latest AI technologies? Diving into research, we explored various LLM applications, from simplifying complex translation tasks to handling under-represented languages with grace.

To summarize the research:

  • Performance in Translation: Studies like “Large Language Models Are State-of-the-Art Evaluators of Translation Quality” by Tom Kocmi and Christian Federmann demonstrated that LLMs, specifically GPT-3.5 and larger models, exhibit state-of-the-art capabilities in translation quality assessment. These models outperform other automatic metrics in quality estimation without a reference translation, especially at the system level.
  • Robustness and Versatility: The paper “How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation” by Amr Hendy et al. highlighted the competitive performance of GPT models in translating high-resource languages. It also discussed the limited capabilities for low-resource languages and the benefits of hybrid approaches that combine GPT models with other translation systems.
  • Innovative Approaches: Research on new trends in machine translation, such as “New Trends in Machine Translation using Large Language Models: Case Examples with ChatGPT” explored innovative directions like stylized and interactive machine translation. These approaches allow for translations that match specific styles or genres and enable user participation in the translation process, enhancing accuracy and fluency.

The findings were clear — LLMs present a significant opportunity to enhance Pontoon and improve translation quality.

Why We Chose This Path

Why go through this transformation? Because language is personal. Take the phrase “Firefox has your back.” In English, it conveys reliability and trust. A direct translation might miss this idiomatic expression, interpreting it literally as “someone has ownership of your back”, which could confuse or mislead users. LLMs can help maintain the intended meaning and nuance, ensuring that every translated phrase feels as though it was originally crafted in the user’s native language.

We can utilize the in-context learning of LLMs to help with this. In-context learning is a technique that informs the model about your data and preferences through an engineered prompt as it generates its responses.
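To make that concrete, here is a minimal sketch of what an engineered translation prompt might look like when sent to a chat-based LLM through the OpenAI chat completions API. The prompt wording, the model name, and the surrounding code are illustrative assumptions for this post, not Pontoon’s actual integration:

// Hedged sketch: an engineered prompt embedding instructions and preferences.
const prompt = [
  "Translate the following UI string from English to Bengali.",
  "Keep brand names such as \"Firefox\" untranslated.",
  "Preserve the idiomatic meaning rather than translating literally.",
  "Reply with the translation only, no explanations.",
  "",
  "String: Firefox has your back",
].join("\n");

const response = await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`, // assumes Node
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4", // illustrative; any chat-capable model would do
    messages: [{ role: "user", content: prompt }],
  }),
});

const data = await response.json();
console.log(data.choices[0].message.content); // the Bengali translation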

Experimenting: A Case Study with ChatGPT and GPT-4

To illustrate the effectiveness of our approach, I conducted a practical experiment with OpenAI’s ChatGPT, powered by GPT-4. I asked ChatGPT to translate the string “Firefox has your back” to Bengali. The initial translation roughly translates to “Firefox is behind you”, which doesn’t convey the original meaning of the string.

Screenshot of first interaction with ChatGPT: asking GPT-4 to translate the string “Firefox has your back” to Bengali.

Now, it seems our friendly ChatGPT decided to go rogue and translated “Firefox” despite being told not to! Additionally, instead of simply providing the translation as requested, it gave a verbose introduction and even threw in an English pronunciation guide. This little mishap underscores a crucial point: the quality of the output heavily depends on how well the input is framed. It appears the AI got a bit too eager and forgot its instructions.

This experiment shows that even advanced models like GPT-4 can stumble if the prompt isn’t just right. We’ll dive deeper into the art and science of prompt engineering later, exploring how to fine-tune prompts to guide the model towards more accurate and contextually appropriate translations.

Next, I asked ChatGPT to translate the same string to Bengali, this time specifying that it should keep the original meaning of the string.

Screenshot of second interaction with ChatGPT: asking GPT-4 to translate the string “Firefox has your back” to Bengali, while maintaining the original meaning of the string.

Adjusting the prompt, the translation evolved to “Firefox is with you”—a version that better captured the essence of the phrase.

I then used Google Translate to translate the same string.

Using Google Translate to translate the string “Firefox has your back” to Bengali.

For comparison, Google Translate offered a similar translation to the first attempt by GPT-4, which roughly translates to “Firefox is behind you”. This highlights the typical challenges faced by conventional machine translation tools.

This experiment underscores the potential of stylized machine translation to enhance translation quality, especially for idiomatic expressions or specific styles like formal or informal language.

The Essential Role of Prompt Engineering in AI Translation

Building on these insights, we dove deeper into the art of prompt engineering, a critical aspect of working with LLMs. This process involves crafting inputs that precisely guide the AI to generate accurate and context-aware outputs. Effective prompt engineering enhances the accuracy of translations, streamlines the translation process by reducing the need for revisions, and allows for customization to meet specific cultural and stylistic preferences.

Working together with the localization team, we tested a variety of prompts in languages like Italian, Slovenian, Japanese, Chinese, and French. We assessed each translation on its clarity and accuracy, categorizing them as unusable, understandable, or good. After several iterations, we refined our prompts to ensure they consistently delivered high-quality results, preparing them for integration into Pontoon’s Machinery tab.

How It Works: Bringing LLMs to Pontoon

LLM feature demonstration

Above is a demonstration of using the “Rephrase” option on the string “Firefox has your back” for the Italian locale. The original suggestion from Google’s Machine Translation meant “Firefox covers your shoulders”, while the rephrased version means “Firefox protects you”.

After working on the prompt engineering and implementation, we’re excited to announce the integration of LLM-assisted translations into Pontoon. For all locales utilizing Google Translate as a translation source, a new AI-powered option is now available within the ‘Machinery’ tab — the reason for limiting the feature to these locales is to gather insights on usage patterns before considering broader integration. Opening this dropdown will reveal three options:

  • REPHRASE: Generate an alternative to this translation.
  • MAKE FORMAL: Generate a more formal version of this translation.
  • MAKE INFORMAL: Generate a more informal version of this translation.

After selecting an option, the revised translation will replace the original suggestion. Once a new translation is generated, another option, SHOW ORIGINAL, will be available in the dropdown menu. Selecting it will revert to the original suggestion.

The Future of Translation is Here

As we continue to integrate Large Language Models (LLMs) into Mozilla’s Pontoon, we’re not just transforming our translation processes — we’re redefining how linguistic barriers are overcome globally. By enhancing translation accuracy, maintaining cultural relevance, and capturing the nuances of language through the use of LLMs, we’re excited about the possibilities this opens up for users worldwide.

However, it’s important to emphasize that the role of our dedicated community of localizers remains central to this process. LLMs and machine translation tools are not used without the supervision and expertise of experienced localizers. These tools are designed to support, not replace, the critical work of our localizers who ensure that translations are accurate and culturally appropriate.

We are eager to hear your thoughts. How do you see this impacting your experience with Mozilla’s products? Do the translations meet your expectations for accuracy? Your feedback is invaluable as we strive to refine and perfect this technology. Please share your thoughts and experiences in the comments below or reach out to us on Matrix, or file an issue. Together, we can make the web a place without language barriers.

Firefox Developer Experience: Deprecating CDP Support in Firefox: Embracing the Future with WebDriver BiDi

Starting with Firefox 129, support for the Chrome DevTools Protocol (CDP) in Firefox will be deprecated. Users of CDP in Firefox are encouraged to migrate to the W3C WebDriver BiDi protocol, which now offers a superset of the features that were provided by Firefox’s CDP implementation.

During the deprecation period, users have two choices to continue using CDP with Firefox (see the user.js sketch after the list):

  • Use the ESR 128 release, which will continue to have the same behavior as today throughout its entire support period of about a year.
  • Set the remote.active-protocols preference to 2 before starting Firefox.
  • If WebDriver BiDi should be enabled as well, set the remote.active-protocols preference to 3.
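For example, a minimal user.js sketch for the preference route might look like this. The preference name and values come from this post; placing them in a profile’s user.js file is just one standard way to set preferences before Firefox starts:

// user.js sketch: keep CDP enabled during the deprecation period.
user_pref("remote.active-protocols", 2); // CDP only
// or, to keep WebDriver BiDi enabled alongside CDP:
// user_pref("remote.active-protocols", 3);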

Our current expectation is to remove the CDP support entirely at the end of 2024.

Notably, Puppeteer users are not affected by the deprecation of CDP, as Puppeteer automatically sets the necessary preference based on the selected protocol (CDP or BiDi).

If you have a client or tool that currently uses CDP and you wish to continue supporting Firefox, we would appreciate a brief notification, even if you do not need any assistance. Having a list of such clients will help us reach out with important information ahead of any future protocol changes.

Also in case of any questions about transitioning to WebDriver BiDi, or concerns about this change, please reach out to us.

For more details about the background to this change, read on.

The History of CDP Support in Firefox

The implementation of the Chrome DevTools Protocol (CDP) in Firefox began as an initiative to provide access to modern web testing tools like Puppeteer, Cypress, and Browsertime, which are widely used and largely rely on the CDP protocol. The development kicked off in 2019 with the introduction of the Remote Agent component in Firefox, and by early 2020 experimental support for CDP was made available in Firefox Nightly builds. A significant milestone was reached in February 2021 with the release of Firefox 86, where CDP support was enabled by default, supporting 82 APIs – a subset of the full range of APIs available through CDP.

Enhancing Cross-browser Testing with Firefox

In a two-part series on Mozilla Hacks, we already explored the challenges and advancements in cross-browser testing.

The first post, Cross-browser Testing, Part 1: Web App Testing Today, discusses the complexities of ensuring web applications function seamlessly across different browsers. It highlights the need for improved testing tools to address inconsistencies caused by varied web standard implementations.

The second post, Improving Cross-browser Testing, Part 2: New Automation Features in Firefox Nightly, introduces new automation features in Firefox Nightly. These enhancements include improved WebDriver support and the new WebDriver BiDi protocol, aimed at providing more robust and reliable automation tools for developers. Our efforts in these areas are designed to simplify cross-browser testing and improve interoperability between browsers.

Adopting WebDriver BiDi in Puppeteer

With the rise of WebDriver BiDi, the Puppeteer team began integrating support for this new protocol. Today, WebDriver BiDi in Firefox offers a broader range of features than its experimental CDP implementation, making it a more powerful and flexible option for web automation and testing. This includes features like using pre-load scripts to monitor DOM updates, network interception, and better logging, which are critical for modern web applications.
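As a rough illustration, a BiDi client is just a WebSocket speaking JSON commands. The sketch below installs a pre-load script that monitors DOM updates; the endpoint URL is an assumption (clients normally obtain it from the session), while the script.addPreloadScript command and the id/method/params message shape come from the WebDriver BiDi specification:

// Hedged sketch: connect to a BiDi endpoint (URL is an assumption) and
// install a pre-load script that runs before any page script.
const ws = new WebSocket("ws://localhost:9222/session");

ws.addEventListener("open", () => {
  ws.send(JSON.stringify({
    id: 1,
    method: "script.addPreloadScript", // from the WebDriver BiDi spec
    params: {
      functionDeclaration: `() => {
        // Monitor DOM updates from inside the page.
        new MutationObserver(() => console.log("DOM updated"))
          .observe(document, { subtree: true, childList: true });
      }`,
    },
  }));
});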

We’ve been working with the Puppeteer team to enable BiDi-based support for Firefox, offering more features and higher quality than the previous CDP-based implementation. This shift reflects the growing industry preference for a standardized, cross-browser automation protocol. We believe this change will significantly improve automation workflows and ensure compatibility across different browser environments. Stay tuned as we finalize the implementation of the remaining required APIs. Our goal is to provide a seamless and enhanced automation experience for all users.

Deprecation of CDP

As Firefox moves towards fully embracing WebDriver BiDi, CDP support in Firefox was put into limited maintenance mode. This means that no new features, aside from the existing automation-related APIs, have been or will be added. Also, with tools like Selenium, Browsertime, and Puppeteer adopting WebDriver BiDi as their communication protocol, we see no reason to continue supporting CDP in Firefox. Therefore, starting with Firefox 129, the CDP protocol will no longer be enabled by default. However, Firefox 128 ESR will continue to have CDP enabled for another year, providing a transition period for users.

Please note that the deprecation of CDP will not impact Puppeteer users, as we will wait until Puppeteer uses WebDriver BiDi by default for Firefox.

However, if you are using a different CDP client, you can manually enable the protocol by setting the Firefox preference remote.active-protocols to 2 before starting Firefox or by restarting the browser. If WebDriver BiDi should be enabled as well, set the remote.active-protocols preference to 3. Nonetheless, we strongly recommend migrating to WebDriver BiDi, as support for CDP is scheduled to be fully removed by the end of 2024.

Support and Collaboration

If you have any questions about WebDriver BiDi, please feel free to reach out to us. You can contact us via the Google dev-webdriver mailing list or join the conversation on Matrix in the #webdriver channel. We are here to answer your questions and provide guidance on using WebDriver BiDi to communicate with Firefox effectively.

Don Marti: hey kids, site search

Just added Pagefind to this site. It’s a site search for sites built with a static site generator, and it runs in the browser. There should now be a search box under the similar posts list.

This site is built with make, so I just needed to

  • add a dependency to have it automatically rebuild when some content changes

  • add a little markup to the article template to have it know to use the post title and not the blog title as the title in the results.

That was about it. Does not add nearly as much to the site build time as some of my own code did. Still planning to pitch a talk for SCALE 2025 on fun small projects that people can string together to make a site.

Bonus links

Surviving the SEO Shake-Up: Publishers vs. Google’s New Game [Google] gained $12.6 billion in Search revenue YoY, and only lost $1.5 billion in the Google Network. Google footnotes that the overall growth [in Search & other revenue] was driven by interrelated factors including increases in search queries resulting from growth in user adoption and usage on mobile devices; growth in advertiser spending; and improvements we have made in ad formats and delivery. Got that last part? You’re not going crazy. Total search queries are up, traffic to publishers is down, and Search revenue is up. Google is here for the ads, their own ads.

Google just updated its algorithm. The Internet will never be the same AI Overviews are just one of a slew of dramatic changes Google has made to its core product over the past two years. The company says its recent effort to revamp Search will usher in an exciting new era of technology and help solve many of the issues plaguing the web. But critics say the opposite may be true. As Google retools its algorithms and uses AI to transition from a search engine to a search and answer engine, some worry the result could be no less than an extinction-level event for the businesses that make much of your favourite content.

Google AI said to put glue in pizza — so I made a pizza with glue and ate it (anybody can make a listicle of funny Google AI answers, this takes dedication. See also: Google Is Paying Reddit $60 Million for Fucksmith to Tell Its Users to Eat Glue)

You searched Google. The AI hallucinated an answer. Who’s legally responsible? (This is something I have been wondering about. If Google is the publisher or speaker of an AI search result, how do they get Section 230 protection?)

Does One Line Fix Google? (yes, but for how long? This is as if Coca-Cola came out with New Coke, but the Coke machines still had a secret button combo that would get you the real thing. Enjoy this for as long as they leave it up. The post remove AI from Google Search on Firefox has links to tools and advice for doing this on other platforms too.)

Don Marti: boring bots ftw

Scott Alexander writes, about bots on prediction markets,

Most of these bots are boring. They’re bots programmed to automatically buy some market once the price gets low enough, or to arbitrage basically-identical markets, or do some other technical finance maneuver.

From the point of view of an active prediction market, where a lot of well-informed traders are speculating about well-known events, those bots are not especially interesting.

The place where the boring bots do make a big difference, though, is in incentivization markets.

An incentivization market is like a prediction market except that one trading strategy is to make an event being traded on either happen or not happen.

Some markets can be both. (A use case for Policy Analysis Market would have been for someone with advanced knowledge of a terrorist attack to trade on their knowledge, and create a price signal that could prevent the attack.)

One problem for incentivization markets to get over is the large number of thinly traded contracts. So a boring bot would be just what you need for things like:

  • trade across likely duplicate and dependent issues to create fewer and more lucrative opportunities for human experts (can be within or across projects)

  • bid up the price of FIXED based on encouraging CI results, enabling developers to get out of all or part of a position early

  • front-run issues on behalf of a developer based on their interests and available time

The market helps compensate for erratic LLM behavior—an unproductive bot will lose its stake and get shut down. A bot doesn’t have to be run for max earnings, either. An arbitrage bot, for example, could break even or get subsidized to lose a little to keep the market smooth.
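To illustrate with the FIXED example above, a hedged sketch of such a bot might look like the following. The Market interface and every name in it are invented for illustration; the post does not specify any incentivization-market API:

// Hypothetical sketch of one of the "boring bots" listed above.
interface Market {
  price(issue: string, outcome: "FIXED" | "WONTFIX"): Promise<number>;
  buy(issue: string, outcome: "FIXED" | "WONTFIX", qty: number): Promise<void>;
}

// Bid up the price of FIXED on encouraging CI results, so a developer
// holding the position can get out of all or part of it early.
async function ciFollowerBot(market: Market, issue: string, ciPassing: boolean) {
  const price = await market.price(issue, "FIXED");
  if (ciPassing && price < 0.8) {
    await market.buy(issue, "FIXED", 10);
  }
}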

Incentivization market needs more noisy traders, LLMs need a cheap way to evaluate whether they’re doing something sensible. Seems like a cookies and milk situation.

Bonus links

Making steel with electricity

Uber and Lyft will stay in Minnesota in an ‘amazing victory for drivers’

We can have a different web

Monopoly Round-Up: Did Texas Join OPEC?

This building wants to be the Swiss army knife of urban living

Here are 13 other explanations for the adolescent mental health crisis. None of them work.

xz, Tidelift, and paying the maintainers

Redis’ license change and forking are a mess that everybody can feel bad about

Daniel Stenberg: A history of a logo with a colon and two slashes

In the 2015 time frame I had come to the conclusion that the curl logo could use modernization and I was toying with ideas of how it could be changed. The original had served us well, but it definitely had a 1990s era feel to it.

On June 11th 2015, I posted this image in the curl IRC channel as a proof of concept for a new curl logo idea I had, based on the fact that curl works with URLs and all the URLs curl supports have the colon slash slash separator. Obviously I am not a designer so it was rough. This was back in the day when we still used this logo:

Frank Gevarts had a go at it. He took it further and tried to make something out of the idea. He showed us his tweaked take.

When we met up at the following FOSDEM at the end of January 2016, we sat down together and discussed the logo idea a bit to see if we could make it work somehow. What is left from that exercise is this version below. As you can see, it is basically the same one. It was hard to make it work.

Later that spring, I was contacted by Soft Dreams, a design company, who offered to help us design a new logo at no cost to us. I showed them some of these rough outlines of the colon slash slash idea and we did some back-and-forthing to see if we could make something work with it, but we could not figure out a way to get the colon slash slash sequence into the word curl in a way that would look good. It just kept on looking different kinds of weird. Eventually we gave that up and ended up putting it after the word, making it look like curl is a URL scheme. That ended up much easier and was ultimately the better and right choice for us. The new curl logo was made public in May 2016. Made by Adrian Burcea.

Just months later in 2016, Mozilla announced that they were working on a revamp of their logo. They made several different versions and there was a voting process during which they would eventually pick a winner. One of the options used colon slash slash embedded in the name, and during the process a number of people highlighted the fact that the curl project had just recently changed its logo to use the colon slash slash.

In the Mozilla all-hands meeting in Hawaii in December 2016, I was approached by the Mozilla logo design team who asked me if I (we?) would have any issues with them moving forward with the logo version using the colon slash slash.

I had no objections. I think that was the coolest of the new logo options they had, and I also thought that it sort of validated our idea of using the symbols in our logo. I was perhaps a bit jealous of how Mozilla is a better word for actually integrating the symbol into the name… the way we tried so hard to do for curl, but had to give up.

In January 2017, Mozilla announced their new logo. With the colon slash slash.

And now you too know how this happened.

Cameron Kaiser: Donnie Darko uses OS X

I think it's been previously commented upon, but we were watching Donnie Darko over the weekend (controversial opinion: we prefer the director's cut, we think it's an improvement) and noticed that Donnie's reality is powered by a familiar processor and operating system. These are direct grabs from the Blu-ray.
The entirety of the crash dump can't be seen and the scenes in which it/they appear are likely a composite of several unrelated traces, but the first two shots have a backtrace showing symbols from Unsanity Application Enhancer (APE), used for adding extra functionality to the OS like altering the mouse cursor and system menus. However, its infamous in-memory monkeypatching technique could sometimes make victim applications unstable and was unsurprisingly a source of some early crash reports in TenFourFox. (I never supported it for that reason, refused to even use it on principle, and still won't.) As a result, it wouldn't have been difficult for the art department to gin up a genuine crash backtrace as an insert. The second set of grabs appears when the Artifact returns to the Primary Universe and the Tangent Universe is purged (not a spoiler because it will make no sense to anyone who hasn't seen the movie).

All four are specific to the director's cut that premiered theatrically in May 2004. While APE was available at least as far back as Puma, i.e., OS X 10.1, Puma didn't come out until September 2001, months after the movie premiered in January of that year. In fact, the original movie is too early even for the release of Cheetah (10.0) in March. The first two images don't give an obvious version number but the second set shows a Darwin kernel version of 6.1, which corresponds to Jaguar 10.2.1 from September 2002. Although Panther 10.3 came out in October 2003, the recut movie would have moved to post-production (in its fashion) by then, and the shots may well have been done near the beginning of production when early versions of Jag remained current.

I'm waiting on the next Firefox ESR (128) in July, and there will be at least some maintenance updates then, so watch for that.

Wil Clouser: Retiring BrowserID on Mozilla Accounts

The tl;dr here is that Mozilla Accounts is turning off SyncStorage BrowserID support and it probably doesn’t affect you at all.

A little history

In 2011, when Mozilla Accounts (called “Firefox Accounts” back then) was first built, it used BrowserID identity certificates in its authentication model. The BrowserID protocol never took off and Mozilla’s work on it ended in 2016. However, the sync service in Firefox continued to use BrowserID even as OAuth support was added to Mozilla Accounts as an alternative for all other relying parties.

Over time, we recognized BrowserID was becoming a maintenance liability. As a non-standard protocol it created significant complexity in our codebase. Therefore, we decided to migrate the Firefox clients off of it in favor of OAuth.

This was an enormous effort, and while much more could be written about this transition, the main takeaway is that Firefox Sync’s BrowserID support ended with Firefox 78, which shipped in June 2020 and reached its end of life in November 2021.

Present day

We’ve been waiting a long time for the usage of Firefox 78 to drop.

Aside from being an ESR version, there are a couple of other reasons for its extra longevity:

  • It was the last version of the browser to support Flash
  • It was the last version of the browser to support OS X versions < 10.12

With Flash now largely obsolete on the web and traffic from older operating systems becoming rarer, we’ve decided that now is the appropriate time to turn off support for this legacy protocol.

To avoid surprises and not leave anyone behind, we attempted to email anyone still using that endpoint earlier this year. We didn’t receive any feedback and we continued with the plan.

Our method

Our plan is simple: BrowserID requests are the only traffic hitting our /v1/certificate/sign endpoint. We’ll begin returning HTTP 404 replies to a small percentage of traffic from that endpoint and monitor for any issues. Our testing showed no concerns but it’s challenging to be comprehensive with so many combinations of browser versions and operating systems. Over the next few weeks we’ll continue to ramp up the percentage of 404s until we can remove the endpoint completely and let the traffic bounce off the front-end like any other 404.
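A minimal sketch of that ramp-up logic, as generic pseudocode rather than the actual Mozilla Accounts server code, could look like this (the legacy handler name is assumed):

// Hedged sketch: return HTTP 404 for a configurable slice of traffic to
// the /v1/certificate/sign endpoint, ramping the percentage up over time.
declare function legacyBrowserIdSign(req: Request): Response; // assumed existing handler

const ROLLOUT_404_PERCENT = 66; // e.g. the ~66% mentioned below

function handleCertificateSign(req: Request): Response {
  if (Math.random() * 100 < ROLLOUT_404_PERCENT) {
    return new Response("Not Found", { status: 404 });
  }
  return legacyBrowserIdSign(req);
}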

Current status

Surprise! I’m a few weeks late with this post. We started returning 404s on May 1 and are currently up to ~66% of traffic on that endpoint. So far there haven’t been any unexpected complications. We’ll continue to increase over the next few weeks and aim to have all the code removed this summer.

Don Marti: remove AI from Google Search on Firefox

Update: There is an easier way to do this now.

  1. Install the udm14 extension.

  2. If you want to make this the default, go to Settings → Search and choose udm14 as your default search engine.

All done. You may wish to enjoy a cool beverage without HFCS to celebrate.

Some helpful related extensions are available as well.

I signed up for Google Search Console, and, wow, this site is getting like 200% more search clicks since I posted this. The Google algorithm is really into this blog post for some reason.

Original version of this post:

This seems to work to remove “AI” stuff from the top of Google search results on Firefox. (Tested on desktop Firefox for Linux.)

  1. Go to the hamburger menu → Settings → Search and remove “Google Search.”

  2. Do a regular Google search for a word.

  3. Bookmark the search result page.

  4. Go to the hamburger menu → Bookmarks → Manage Bookmarks.

  5. (optional) Make a new folder for search and put the new bookmark in it.

  6. Edit the bookmark to include udm=14 as a URL parameter, like this: https://www.google.com/search?q=%s&udm=14

  7. Add a keyword or keywords (I use @gg).

Related

Revolutionary New Google Feature Hidden Under ‘More’ Tab Shows Links to Web Pages

Bye Bye, AI: How to turn off Google’s annoying AI overviews and just get search results | Tom’s Hardware Article that covers how to remove “AI” material on Google Chrome and mobile Firefox.

How to Turn Off Google AI Overview and Set “Web” as Default Another list of browsers and instructions.

How I Made Google’s “Web” View My Default Search

Dark Visitors - A List of Known AI Agents on the Internet is a good site for keeping track of “AI” crawlers if you want to block them in robots.txt. (This doesn’t work for blocking underground “AI” but will put the big companies on notice.)

Google Chrome ad features checklist (For Google Chrome users, prevent Google AI from classifying you in ways that are hard to figure out)

&udm=14 | the disenshittification Konami code is a site with a Google search form. Makes it easy to try this the first time before choosing to set as default.

Bonus links

The Ukraine war is driving rapid innovation in drone technology Of course, there are new legal and moral questions that arise from giving drones the power to kill. But the CEO of this company points out there is a cost to not developing the technology. And in any case, this push to innovate—and defeat the invading enemy—has pushed off those questions for now. (imho this is going to be the number one immediate issue for AI in Europe. The only credible alternative to returning to large-scale conscription in European countries that have phased it out is for some European alliance to reach global leadership in autonomous military AI. Which explains why they’re putting civilian AI and surveillance businesses on a tight leash—to free up qualified developers for defense jobs.)

How Google harms search advertisers in 20 slides They’re not raising prices, they’re coming up with better prices or more fair prices, where those new prices are higher than the previous ones. lol

React, Electron, and LLMs have a common purpose: the labour arbitrage theory of dev tool popularity Why do some software frameworks and libraries grow in adoption while others don’t?…It’s not about output or productivity.

Meta’s ‘set it and forget it’ AI ad tools are misfiring and blowing through cash Small businesses have seen their ad dollars get wiped out and wasted as a result, and some have said the bouts of overspending are driving them from Meta’s platforms. (considering where Meta ad money goes— child safety and mental health concerns are just the latest—this seems like a good thing. Also Meta could face further squeeze on surveillance ads model in EU)

Microsoft Deleted Its LLM Because It Didn’t Get a Safety Test, But Now It’s Everywhere 404 Media has not tested the model and we don’t know if it is easily producing harmful or “toxic” answers, or if Microsoft only took it down because it didn’t check either way. Since the model is open source, it is also possible other people could have downloaded it and created uncensored versions of the model that would produce controversial answers anyway, as we’ve reported people have done previously. (underground AI is less capable but more predictable than big company AI APIs. From the point of view of an API caller, the AI you were using gets randomly nerfed because the provider is acting on a moderation issue you weren’t aware of.)

Firefox Nightly: Today’s Forecast: Browser Improvements – These Weeks in Firefox: Issue 161

Highlights

  • Volunteer contributor tamas.beno12 has fixed a 5-digit (25-year-old) bug! The patch for the bug makes it easier to create transparent windows.
  • The newtab team is experimenting with a weather widget! It’s still early days, but you can turn it on in Nightly with a set of 2 prefs found in about:config:
    • Set the following to true:
      • browser.newtabpage.activity-stream.showWeather
      • browser.newtabpage.activity-stream.system.showWeather
    • If you notice any bugs with it, you can file them under Firefox :: New Tab Page
  • Some nice updates to Picture-in-Picture:
  • Bounce Tracking Protection has been enabled in Nightly (Bug 1846492)
    • What is bounce tracking / redirect tracking?
    • The feature detects bounce trackers based on redirect behaviour and periodically purges their cookies & site data to prevent tracking.
    • If you notice that you lose site data or get logged out of sites more than usual please file a bug under Core :: Privacy: Anti-Tracking so we can investigate
    • The feature is still in development so detected trackers are not yet counted as part of our regular ETP stats or on about:protections.
    • Advanced: If you want to see which bounce trackers get detected and purged you can enable the logging by going to about:logging and adding the following logger: BounceTrackingProtection:3
  • Niklas made the screenshots initial state (crosshairs) keyboard accessible
    • The arrow keys can be used to move the cursor around the content area. Enter will select the current hovered region and space will start the dragging state to draw a region.

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Itiel

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As already anticipated in this meeting, starting from Firefox 127, installing new single-signed add-ons is disallowed. The QA verification has been completed and this restriction is now enabled on all channels and riding the Firefox 127 release train (Bug 1886160).
WebExtension APIs
  • Starting from Firefox 127, the installType property returned by the management API (e.g. management.getSelf) will be set to ”admin” for extensions that are installed through Enterprise Policy Settings (Bug 1895341)
    • Thanks to mkaply for working on this API improvement for Enterprise Firefox add-ons!
  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions, starting from Firefox 127:
    • Host permissions requested by Manifest V3 extensions will be listed in the install dialog and granted as part of the add-on installation flow (Bug 1889402).
    • Extensions using the ”incognito”: “split” mode will be allowed to install successfully in Firefox (Bug 1876924)
      • Incognito split mode is still not supported in Firefox, and so the extensions using this mode will not be allowed access to private browsing tabs.
    • The new runtime.getContexts API method is now supported (Bug 1875480).
      • This new API method allows extensions to discover their existing Extensions contexts (but unlike runtime.getViews it returns a json representation of the metadata for the related extension contexts).

Developer Tools

DevTools
  • Pier Angelo Vendrame prevented new request data from being persisted in Private Browsing (#1892052)
  • Arai fixed exceptions that could happen when evaluating Services.prompt and Services.droppedLinkHandler in the Browser Console (#1893611)
  • Nicolas fixed an issue that was preventing users from seeing stack traces for WASM-issued error messages (#1888645)
  • Alexandre managed to tackle an issue that would prevent DevTools from being initialized when a page was using Atomics.wait, e.g. stackblitz.com (#1821250)
  • Nicolas added the new textInput event to the Event Listener Breakpoints in the Debugger (#1892459)
  • Hubert is making good progress migrating the Debugger to CodeMirror 6 (#1887649, #1889277, #1889283, #1894379, #1894659, #1889276)
  • Nicolas made sure that ::backdrop pseudo-element rules are visible in the Rules view for popover elements (#1893644), as well as @keyframes rules nested in other at-rules (#1894603)
  • Nicolas fixed a performance issue in the Inspector when displaying deeply nested rules (#1844446)
    • for example, a 15-level deep rule was taking almost 9 seconds to be displayed; now it takes only a few milliseconds
  • Julian removed code that was forcing the Performance tab to be always enabled in the Browser Toolbox, even if the user disabled it in a previous session (#1895434)
WebDriver BiDi
  • Thanks to Victoria Ajala for replacing the usage of the “isElementEnabled” selenium atom with a custom implementation which is more lightweight and maintainable (#1798464)
  • Sasha implemented the permissions.setPermission command which allows clients to set permissions such as geolocation, notifications, … (#1875065)
  • Sasha fixed a bug where wheel scroll actions would not use the provided modifiers (eg shift, ctrl, …) (#1885542)
  • Sasha improved the implementation of the browsingContext.locateNodes command to also accept Document objects as the root to locate nodes. Previously this was restricted to Elements only, but Puppeteer relies heavily on using Document for this command. (#1893922)
  • Henrik fixed a bug where the WebDriver classic GetElementText command would fail to capitalise text containing an underscore (#1888004)

Migration Improvements

New Tab Page

  • Newtab wallpaper experiment going out either this release (next week) or next release, depending on some telemetry bug fix uplifts.
    • To enable wallpapers on HNT, set the following to TRUE:
      • browser.newtabpage.activity-stream.newtabWallpapers.enabled
  • Newtab wallpapers are getting some updates soon: a bunch more wallpapers as options, and some tweaks to the customize menu (a nested menu) to better organize the wallpapers, so it’s easier to explore as we add more options.

Picture-in-Picture

  • Thanks to Joseph Webster for adding PiP captions support for more sites with our JWPlayer wrapper (bug)

Performance

Screenshots

Search and Navigation

  • Clipboard suggestions have been temporarily disabled in nightly as it was possible to freeze Firefox on Windows – we’re moving the feature to the asynchronous clipboard API – 1894614
  • Features for an update to the urlbar UX, codenamed scotchBonnet, have started landing: secondary Actions have landed, and a dedicated search button plus others are in progress. These will be enabled in nightly at some point, so keep an eye out. Meta bug tracking @ 1891857
  • Mandy fixed an issue with stripping a leading question mark when the urlbar is already in search mode @ 1837624
  • Marco fixed protocols being trimmed when copying URLs @ 1893871
    • In Nightly, when https stripping is enabled, the loaded URL will gain back the trimmed protocol when the user interacts with the urlbar input field text
  • Marco changed domain inline completion, so that when permanent private browsing is active, the domain will be picked based on the number of bookmarks to that domain. Bug 1893840
  • The new search configuration (aka search consolidation) is now rolling out in FF 126 release.
    • Ebay support in Poland has been added to application provided engines in the new search configuration and so will become available during FF 126 @ 1885391
  • For Places, Daisuke removed the ReplaceFaviconData() and ReplaceFaviconDataFromDataURL() APIs, replacing their use with a new SetFaviconForPage() API accepting a data URL for the favicon. Long term this is the API we want to use: Places should never fetch from the network, only store data.

Storybook/Reusable Components

  • Work has started on form components with an eye for the Sidebar Settings feature (and potentially the Experiments section of preferences). Initial components: moz-checkbox, moz-radio-group and moz-fieldset

The message-bar component has been fully removed from the codebase (replaced by moz-message-bar) Bug 1845151 – Remove all code associated with the message-bar component – Thanks Anna!

Firefox Developer Experience: Firefox DevTools Newsletter — 126

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 126 Nightly release cycle.


Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like Artem Manushenkov who added a setting that can be used to disable the split console (#1731635).

Firefox DevTools settings panel. Alongside many items, there is a new “Enable Split Console” checkbox in the Web Console section.

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Performance

As announced in previous newsletters, we’re focusing on performance for a few months to make our tools as fast as they can be.

A few years back, we got a report from a user telling us that modifying a property in a rule from the Inspector was very slow (#1644138). The stylesheet they were using was massive, with 185K lines of code and a total size of approximately 4 MB, and our machinery to replace the rule content was not handling this well. After rewriting some old Javascript code in Rust (#1882964), the function call that was taking more than 500ms on my machine now only takes about 10ms. Yes, that’s 50 times faster! This also shows in less extreme cases: our performance tests are reporting an almost 10% improvement to display Rules in the Inspector 🎉

Performance test duration going from ~750ms to ~700ms around April 8th.

We also came across an issue that showed a pretty bad mistake when handling rules using pseudo-elements (#1886947), and fixing it, alongside some minor tweak (#1886818), got us another ~10% improvement when displaying Rules in the Inspector.

Finally, we realized we could have some unnecessary computation when editing a rule (#1888079, #1888081), so we fixed that for an even smoother experience.

Custom State

Firefox 126 adds support for CustomStateSet (mostly done by an external contributor, Keith Cirkel):

The CustomStateSet interface of the Document Object Model stores a list of states for an autonomous custom element, and allows states to be added and removed from the set.

The interface can be used to expose the internal states of a custom element, allowing them to be used in CSS selectors by code that uses the element.

MDN

The MDN page has some nice examples on how this can be used to style custom elements based on a specific state. Rules using the :state() pseudo-class are displayed in the Inspector and its properties can be modified like any other rules.

Firefox DevTools Inspector panel. The markup view has a `<labeled-checkbox>` custom element selected. In the rules view, we can see a few rules using `:state(checked)`, which are used to style the element.

You can quickly see which states are in the set when logging a CustomStateSet instance in the console (#1862896).

Firefox DevTools Console with the following code being executed: `document.querySelector("labeled-checkbox")._internals.states`  The results shows an object whose header is`CustomStateSet [ "checked" ]`. The object is expanded, and we can see that it has a `<entries>` node, which contains one item, which is `"checked"`
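For readers who haven’t used the API yet, here is a hedged sketch of a custom element exposing a state this way, loosely modeled on the labeled-checkbox example from MDN; the element and state names are illustrative:

// Sketch: expose a "checked" state via ElementInternals' CustomStateSet.
class LabeledCheckbox extends HTMLElement {
  #internals = this.attachInternals();

  set checked(value: boolean) {
    // Adding/removing the state makes the `:state(checked)` CSS
    // pseudo-class match (or stop matching) this element.
    if (value) {
      this.#internals.states.add("checked");
    } else {
      this.#internals.states.delete("checked");
    }
  }

  get checked() {
    return this.#internals.states.has("checked");
  }
}

customElements.define("labeled-checkbox", LabeledCheckbox);
// CSS side: labeled-checkbox:state(checked) { color: green; }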


And more…

  • We’re currently working on migrating our CodeMirror usage to CodeMirror 6 (#1773246), which we hope will allow for performance improvement in the Debugger. This is a pretty big task and we’ll report progress in the next newsletters!
  • We added support for Wasm exception handling proposal in the Debugger (#1885589)
  • We’re now showing the color swatch when a CSS custom property is used in a color definition (#1718894)
  • In order to enable debugging Firefox on Android devices, we maintain an ADB extension. We finally released a new version of the DevTools ADB extension that is used by about:debugging; it now ships with notarized binaries and can be used on recent macOS versions (#1890843)

That’s all folks, see you in June for the 127 newsletter!

Mozilla Thunderbird: The New Thunderbird Website Has Hatched

Thunderbird.net has a new look, but the improvements go beyond that. We wanted a website where you could quickly find the information you need, from support to contribution, in clear and easy to understand text. While staying grateful to the many amazing contributors who have helped build and maintain our website over the past 20 years, we wanted to refresh our information along with our look. Finally, we wanted to partner with Freehive’s Ryan Gorley for their sleek, cohesive design vision and commitment to open source.

We wanted a website that’s ready for the next 20 years of Thunderbird, including the upcoming arrival of Thunderbird on mobile devices. But you don’t have to wait for that future to experience the new website now.

The New Thunderbird.net

The new, more organized framework starts with the refreshed Home page. All the great content you’ve relied on is still here, just easier to find! The expanded navigation menu makes it almost effortless to find the information and resources you need.

Resources provide a quick link to all the news and updates in the Thunderbird Blog and the unmatched community assistance in Mozilla Support, aka SUMO. Release notes are linked from the download and other options page. That page has also been simplified while still maintaining all the usual options. It’s now the main way to get links to download Beta and Daily, and in the future any other apps or versions we produce.

The About section introduces the values and the people behind the Thunderbird project, which includes our growing MZLA team. Our contact page connects you with the right community resources or team member, no matter your question or concern. And if you’d like to join us, or just see what positions are open, you’ll find a link to our career page here.

Whether it’s giving your time and skill or making a financial donation, it’s easy to discover all the ways to contribute to the project. Our new and improved Participate page shows how to get involved, from coding and testing to everyday advocacy. No matter your talents and experience, everyone can contribute!

If you want to download the latest stable release, or to donate and help bring Thunderbird everywhere, those options are still an easy click from the navigation menu.

Your Feedback

We’d love to have your thoughts and feedback on the new website. Is there a new and improved section you love? Is there something we missed? Let us know in the comments below. Want to see all the changes we made? Check the repository for the detailed commit log.

The post The New Thunderbird Website Has Hatched appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: Faster linking times on nightly on Linux using `rust-lld`

TL;DR: rustc will use rust-lld by default on x86_64-unknown-linux-gnu on nightly to significantly reduce linking times.

Some context

Linking time is often a big part of compilation time. When rustc needs to build a binary or a shared library, it will usually call the default linker installed on the system to do that (this can be changed on the command-line or by the target for which the code is compiled).

The linkers do an important job, with concerns about stability, backwards-compatibility and so on. For these and other reasons, on the most popular operating systems they usually are older programs, designed when computers only had a single core. So, they usually tend to be slow on a modern machine. For example, when building ripgrep 13 in debug mode on Linux, roughly half of the time is actually spent in the linker.

There are different linkers, however, and the usual advice to improve linking times is to use one of these newer and faster linkers, like LLVM's lld or Rui Ueyama's mold.
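
For example, one common way to opt into lld manually today is to link through clang and pass -fuse-ld=lld in a project's .cargo/config.toml. This is a sketch of that approach, assuming clang and lld are installed on the system:

[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=lld"]

Shipping rust-lld and using it by default removes the need for this kind of per-project setup.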

Some of Rust's wasm and aarch64 targets already use lld by default. When using rustup, rustc ships with a version of lld for this purpose. When CI builds LLVM to use in the compiler, it also builds the linker and packages it. It's referred to as rust-lld to avoid colliding with any lld already installed on the user's machine.

Since improvements to linking times are substantial, it would be a good default to use in the most popular targets. This has been discussed for a long time, for example in issues #39915 and #71515, and rustc already offers nightly flags to use rust-lld.

By now, we believe we've done all the internal testing that we could, on CI, crater, and our benchmarking infrastructure. We would now like to expand testing and gather real-world feedback and use-cases. Therefore, we will enable rust-lld to be the linker used by default on x86_64-unknown-linux-gnu for nightly builds.

Benefits

While this also enables the compiler to use more linker features in the future, the most immediate benefit is much improved linking times.

Here are more details from the ripgrep example mentioned above: linking time is reduced 7x, resulting in a 40% reduction in end-to-end compilation times.

Before/after comparison of a ripgrep debug build

Most binaries should see some improvement here, but it's especially significant for bigger binaries or builds involving debuginfo, which usually bottleneck in the linker.

Here's a link to the complete results from our benchmarks.

If testing goes well, we can then stabilize using this faster linker by default for x86_64-unknown-linux-gnu users, before maybe looking at other targets.

Possible drawbacks

From our prior testing, we don't really expect issues to happen in practice. It is a drop-in replacement for the vast majority of cases, but lld is not bug-for-bug compatible with GNU ld.

In any case, using rust-lld can be disabled if any problem occurs: use the -Z linker-features=-lld flag to revert to using the system's default linker.

Some crates that rely on these differences may need additional link args. For example, we saw fewer than 20 crates in the crater run fail to link because of a different default for encapsulation symbols: these may require -Clink-arg=-Wl,-z,nostart-stop-gc to match the legacy GNU ld behavior.

Some of the big gains in performance come from parallelism, which could be undesirable in resource-constrained environments.

Summary

rustc will use rust-lld on x86_64-unknown-linux-gnu nightlies, for much improved linking times, starting in tomorrow's rustup nightly (nightly-2024-05-18). Let us know if you encounter problems, by opening an issue on GitHub.

If that happens, you can revert to the default linker with the -Z linker-features=-lld flag. Either by adding it to the usual RUSTFLAGS environment variable, or to a project's .cargo/config.toml configuration file, like so:

[target.x86_64-unknown-linux-gnu]
rustflags = ["-Zlinker-features=-lld"]

Support.Mozilla.OrgKitsune Release Notes – May 15, 2024

See full platform release notes on GitHub

New

  • Group messaging: Staff group members can send messages to groups as well as individual users.
  • Staff group permissions: We are now using a user’s membership in the Staff group rather than the user’s is_staff attribute to determine elevated privileges, like being able to send messages to groups or seeing restricted KB articles.
  • In-product link on article page: You’ll now see an indicator on the KB article page for articles that are the target of in-product links. This is visible to users in the Staff group.

Screenshot of the in-product indicator in a KB article

Changed

  • Conversion from GA3 to GA4 data API for gathering Google Analytics data: We recently migrated SUMO’s Google Analytics (GA) from GA3 to GA4. This has temporarily impacted our access to historical data on the SUMO KB Dashboard. Data will now be pulled from GA4, which only has data since April 10, 2024. The number of “Visits” for the “Last 90 days” and “Last year” will only reflect the data gathered since this date. Stay tuned for additional dashboard updates, including the inclusion of GA3 data.

Screenshot of the Knowledge Base Dashboard in SUMO

Screenshot of the new SUMO inbox

  • Removed New Contributors link from the Contributor Tools: Discussions section of the top main menu (#1746)

Fixed


Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 126-127)

Hello and welcome to our newest newsletter. As the northern hemisphere warms and the southern hemisphere cools, we write to talk about what’s happened in the world of SpiderMonkey in the Firefox 126-127 timeline.

🚀 Performance

Though Speedometer 3 has shipped, we cannot allow ourselves to get lax about performance. It’s important that SpiderMonkey be fast so Firefox can be fast!

🔦 Contributor Spotlight

This newsletter, we’d like to spotlight Jonatan Klemets. In his own words,

A full-stack web developer by day and a low-level enthusiast by night who likes tinkering with compilers, emulators, and other low-level projects

Jonatan has been helping us for a few years now and has lately been the main force driving our work on the Import Attributes proposal forward. Pushing this proposal forward has required jumping into many different parts of Firefox; Jonatan has handled it really well, and we are very thankful for the effort he has put into the project.

⚡ Wasm

🕸️ Web Features Work

👷🏽‍♀️ Other Work

Mozilla Addons BlogManifest V3 Updates

Greetings add-on developers! We wanted to provide an update on some exciting engineering work planned for the next few Firefox releases in support of Manifest V3. The team continues to implement API changes that were previously defined in agreement with other browser vendors that participate in the WECG, ahead of Chrome’s MV2 deprecation. Another top area of focus has been around addressing some developer and end user friction related to MV3 host permissions.

The table below details some MV3 changes that are going to be available in the Firefox release channel soon.

| Version | Manifest V3 engineering updates | Nightly | Beta | Release |
| --- | --- | --- | --- | --- |
| 126 | Chrome extension porting API enhancements | 3/18 | 4/15 | 5/14 |
| 127 | Updating MV3 host permissions on both desktop and mobile | 4/15 | 5/13 | 6/11 |
| 128 | Implementing the UI necessary to control optional permissions and supporting host permissions on Android that landed in 127 | 5/13 | 6/10 | 7/9 |

The Chrome extension porting API work that will land beginning in 126 will help ensure a higher level of compatibility and reduce friction for add-on developers supporting multiple browsers.

Beginning with Firefox 127, users will be prompted to grant MV3 host permissions as part of the install flow (similar to MV2 extensions). We’re excited to deliver this work: based on feedback from Firefox users and extension developers, this has been a major hurdle for MV3 extensions in Firefox.

However, unlike the host permission granted at install time for MV2 extensions, MV3 host permissions can still be revoked by the user at any time from the about:addons page on Firefox Desktop. Given that, MV3 extensions should still leverage the permissions API to ensure that the permissions required are already granted.

Lastly, in Firefox for Android 128, the Add-ons Manager will include a new permissions UI as shown below — this new UI will allow users to do the same as above on Firefox for Android with regards to host permissions, while also granting or revoking other optional permissions on MV2 and MV3 extensions.


We also wanted to take this opportunity to address a couple common questions we’ve been seeing in the community, specifically around the webRequest API and MV2:

  1. The webRequest API is not on a deprecation path in Firefox
  2. Mozilla has no current plans to deprecate MV2 as mentioned in our previous MV3 update

For more information on adopting MV3, please see our migration guide. Another great resource is the FOSDEM presentation a couple Mozilla engineers delivered recently, Firefox, Android, and Cross-browser WebExtensions in 2024.

If you have questions or feedback on our Manifest V3 plans we would love to hear from you in the comments section below or if you prefer, drop us an email.

The post Manifest V3 Updates appeared first on Mozilla Add-ons Community Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter — 126

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 126 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette.

WebDriver BiDi

New: Support for the “contexts” argument for the “network.addIntercept” command

Since the introduction of the network.addIntercept command in Firefox 124, users could only apply network interceptions globally, affecting all open web pages across various tabs and windows. This necessitated the setup of specific filters to limit the impact to tabs requiring interception. However, this approach adversely affected performance, particularly when client code didn’t run locally, leading to increased data transmission over the network.

To address these issues and simplify the use of network interception for specific tabs, we’ve added the contexts argument in the network.addIntercept command. This enhancement facilitates the targeting of specific top-level browsing contexts, enabling the restriction of network request interception to individual tabs even with the same web page open in multiple tabs.
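
As a rough sketch of what this looks like on the wire (the exact payload shape is defined by the WebDriver BiDi specification, and the context id below is a placeholder), a client could restrict an intercept to a single tab with a command along these lines:

{
  "id": 1,
  "method": "network.addIntercept",
  "params": {
    "phases": ["beforeRequestSent"],
    "contexts": ["<top-level browsing context id of the tab>"]
  }
}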

Bug fixes

Firefox NightlyScreenshots++ – These Weeks in Firefox: Issue 160

Highlights

  • The screenshots component pref just got enabled and is riding the trains in 127! This is a new implementation of the screenshots feature with a number of usability, accessibility and performance improvements over the original.
  • Thanks to Joseph Webster for creating a brand new JWPlayer video wrapper (bug) and for adding more sites under this wrapper to expand Picture-in-Picture captions support (bug).
    • New supported sites include AOL, C-SPAN, CPAC, CNBC, Reuters, The Independent, Yahoo and more!
  • Irene landed the first part of refreshed text formatting controls for Reader Mode. Check them out by toggling reader.improved_text_menu.enabled (bug 1880658)
    • A panel in Firefox's Reader Mode is shown for controlling layout and text on the page. The panel lets users control the content width, line spacing, character spacing, word spacing, and text alignment of the text in reader mode.
  • New tab wallpapers have landed in Nightly and will be released as an experiment in en-US. If you’d like to enable wallpapers, set browser.newtabpage.activity-stream.newtabWallpapers.enabled to true.
    • Firefox's New Tab page with a beautiful image of the aurora borealis set as the background wallpaper

      Set a new look for new tabs!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Camille
  • gravyant
  • Itiel
  • Joseph Webster
  • Magnus Melin [:mkmelin]
  • Meera Murthy
  • Steve P

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Starting from Firefox 127, installing new single-signed add-ons is disallowed (while already installed single-signed add-ons are still allowed to run). This behavior is currently only enabled in Nightly (Bug 1886157) but it is expected to be extended to all channels later in the 127 cycle (Bug 1886160)
  • Fixed a styling issue hit by extensions options pages embedded in about:addons when the Dark mode is enabled (Bug 1888866)
WebExtensions APIs
  • As part of the ongoing work related to improving cross-browser compatibility for Manifest Version 3 extensions:
    • Customized keyboard shortcuts associated with the _execute_browser_action command for Manifest Version 2 extensions will be automatically associated with the _execute_action command when the same extension migrates to Manifest Version 3 (Bug 1797811). This way, the custom keyboard shortcut will keep working as expected from a user perspective.
    • DNR rule limits have been raised to match the limits enforced by other browsers (Bug 1803370)
    • The DNR getDynamicRules and getSessionRules API methods now accept an additional ruleIds filter as a parameter, improving compatibility with the DNR API in more recent Chrome versions (Bug 1820870)
  • Improved errors logged when a content script file does not exist (Bug 1891502)
    • the error is now expected to look like Unable to load script: moz-extension://UUID/path/to/script.js

Developer Tools

DevTools
  • Julian reverted a change from a few months ago, so DevTools screenshots are now saved in the same location as Firefox screenshots (#1845037)
  • Alex fixed a Debugger crash (#1891699)
  • Nicolas fixed a visual glitch in the Debugger (#1891681)
  • Alex fixed an issue where network requests from iframes sent just before document destruction were not displayed in the Netmonitor (#1887852)
  • Nicolas replaced DevTools’ JS-based CSS lexer with a Rust-based version, using the same cssparser crate as Stylo (#1887638, #1892895)
    • This brought a ~10% performance improvement when displaying rules in the inspector (#1888607 + #1890552)
  • Thanks to :willdurand, we finally released a new version of the DevTools ADB extension used by about:debugging. The extension is now shipping with notarized binaries and can be used on recent macOS versions. (#1821449)
WebDriver BiDi
  • Thanks to gravyant who implemented a new helper Assert.isInstance to check whether objects are instances of specific classes (#1870880)
  • Henrik updated mozrunner/mozprocess to use “psutil” and support the new application restart mechanism on macOS (#1884401)
  • Sasha added support for the a11y attributes locator for the browsingContext.locateNodes command (#1885577)
  • Sasha added support for the devicePixelRatio parameter for the browsingContext.setViewport command (#1857961)
  • Henrik improved the way we check if an element is disabled when using the WebDriver ElementClear command (#1863266)
  • Julian updated the vendored puppeteer version to v22.6.5, which enables new network interception features in Puppeteer using WebDriver BiDi (#1891762)

Migration Improvements

New Tab Page

  • Work continues on a weather widget for new tab (borrowing logic from URL bar). Stay tuned!

Privacy & Security

  • We’re working on a new anti-tracking feature: Bounce Tracking Protection. It works similarly to the existing Cookie Purging feature in Firefox, but instead of a tracker list it relies on heuristics to detect bounce trackers.
    • It’s based on the navigational-tracking-protections spec draft in the PrivacyCG
    • Bug 1877432 first enabled the feature in Nightly in “dry run mode” where we don’t purge tracker storage but only collect telemetry. We’re looking to fully enable it in Nightly soon once we think it’s stable enough.

Profile Management (new this week!)

  • We’re getting underway with improvements to multiple profiles support in Firefox!
  • Eng discussion on Matrix: #fx-profile-eng
  • Backend work in toolkit/profile behind a build flag (MOZ_SELECTABLE_PROFILES)
  • Frontend work in browser/components/profiles behind a pref (browser.profiles.enabled)
  • Metabug is here: 1882882
  • Bugs landed so far:
    • Mossop added telemetry to record the version of the profiles database on startup and the number of profiles in it (bug 1878339)
    • Niklas added the profiles browser component (bug 1883143)
    • Niklas added profiles menu items to the app menu (bug 1883155)
  • Coming soon: Docs, final UX, and good-first-bugs

Screenshots

Search and Navigation

Storybook/Reusable Components

Anne van KesterenUndue base URL influence

The URL parser has many quirks due to its origins in a time when conformance test suites were atypical and implementation requirements were hidden in the examples section. Some consider these quirks deeply problematic, but personally I don’t really mind that one can write a hundred slashes after a scheme instead of two and get identical results. Sure, it would be better if that were not the case, but in the end it is something that is normalized away and therefore does not impact the fundamental aspects of the URL ecosystem.

I was reminded the other day that there is one quirk however that does yield rather undesirable results. In particular for certain (non-conforming) inputs, the result will not be failure, but the exact URL returned will depend on the presence and type of base URL. This might be best explained with examples:

| Input | Base URL (serialized) | Output (serialized) |
| --- | --- | --- |
| https:test | (none) | https://test/ |
| https:test | http://example/ | https://test/ |
| https:test | https://example/ | https://example/test |
| hello:test | (none) | hello:test |
| hello:test | bye://example/ | hello:test |
| hello:test | hello://example/ | hello:test |

This quirk only impacts so-called special schemes, which include http and https. And only when they match between the input and base URL. As a user of URLs you could work around this quirk by first parsing without a base URL and only if that returns failure, parse a second time with a base URL. That does have the unfortunate side effect of being inconsistent with the web platform (for non-conforming input), but depending on your use case that might be okay.
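
For example, here is a minimal sketch of that workaround in Rust, assuming the url crate (an implementation of the URL Standard); the helper name is mine:

use url::Url;

// Parse `input` on its own first; fall back to base-relative parsing
// only if the bare parse fails. This avoids the quirk where a special
// scheme shared between input and base lets the base URL win.
fn parse_preferring_input(input: &str, base: &Url) -> Result<Url, url::ParseError> {
    Url::parse(input).or_else(|_| base.join(input))
}

With this helper, the input https:test against the base https://example/ yields https://test/ rather than https://example/test, matching the base-less row of the table above.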

I remember looking into whether this could be removed completely many years ago, but websites relied on it and end users trump theory.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: April 2024 Progress Report

Welcome to our monthly report on turning K-9 Mail into Thunderbird for Android! Last month you could read about how we found and fixed bugs after publishing a new stable release. This month we start with… telling you that we fixed even more bugs.

Fixing bugs

After the release of K-9 Mail 6.800 we dedicated some time to fixing bugs. We published the first bugfix release in March and continued that work in April.

K-9 Mail 6.802

The second bugfix release contained these changes:

  • Push: Notify user if permission to schedule exact alarms is missing
  • Renamed “Send client ID” setting to “Send client information”
  • IMAP: Added support for the \NonExistent LIST response attribute
  • IMAP: Issue EXPUNGE command after moving without MOVE extension
  • Updated translations; added Hebrew translation

I’m especially happy that we were able to add back the Hebrew translation. We removed it prior to the K-9 Mail 6.800 release due to the translation being less than 70% complete (it was at 49%). Since then, volunteers have translated the missing bits of the app, and in April the translation was almost complete.

Unfortunately, the same isn’t true for the Korean translation that was also removed. It was 69% complete, right below the threshold. Since then there has been no significant change. If you are a K-9 Mail user and a native Korean speaker, please consider helping out.

F-Droid metadata (again?)

In the previous progress report we described what change had led to the app description disappearing on F-Droid and how we intended to fix it. Unfortunately we found out that our approach to fixing the issue didn’t work due to the way F-Droid builds their app index. So we changed our approach once again and hope that the app description will be restored with the next app release.

Push & the permission to schedule alarms

K-9 Mail 6.802 notifies the user when Push is enabled in settings, but the permission to schedule exact alarms is missing. However, what we really want to do is ask the user for this permission before we allow them to enable Push.

This change was completed in April and will be included in the next bugfix release, K-9 Mail 6.803.

Material 3

As briefly mentioned in March’s progress report, we’ve started work on switching the app to Google’s latest version of Material Design – Material 3. In April we completed the technical conversion. The app is now using Material 3 components instead of the Material Design 2 ones.

The next step is to clean up the different screens in the app. This means adjusting spacings, text sizes, colors, and sometimes more extensive changes. 

We didn’t release any beta versions while the development version was still a mix of Material Design 2 and Material 3. Now that the first step is complete, we’ll resume publishing beta versions.

If you are a beta tester, please be aware that the app still looks quite rough in a couple of places. While the app should be fully functional, you might want to leave the beta program for a while if the look of the app is important to you.

Targeting Android 14

Part of the necessary app maintenance is to update the app to target the latest Android version. This is required for the app to use the latest security features and to cope with added restrictions the system puts in place. It’s also required by Google in order to be able to publish updates on Google Play.

The work to target Android 14 is now mostly complete. This involved some behind the scenes changes that users hopefully won’t notice at all. We’ll be testing these changes in a future beta version before including them in a K-9 Mail 6.8xx release.

Building two apps

If you’re reading this, it’s probably because you’re excited for Thunderbird for Android to be finally released. However, we’ve also heard numerous times that people love K-9 Mail and wished the app would stay around. That’s why we announced in December that we would do just that.

We’ve started work on this and are now able to build two apps from the same source code. Thunderbird for Android already includes the fancy new Thunderbird logo and a first version of a blue theme.

But as you can see in the screenshots above, we’re not quite done yet. We still have to change parts of the app where the app name is displayed to use a placeholder instead of a hard-coded string. Then there’s the About screen and a couple of other places that require app-specific behavior.

We’ll keep you posted.

Releases

In April 2024 we published the following stable release:

The post Thunderbird for Android / K-9 Mail: April 2024 Progress Report appeared first on The Thunderbird Blog.

Mozilla Addons BlogDeveloper Spotlight: Port Authority

Port Authority gives you intuitive control over global block settings, notifications, and allow-list customization.

A few years ago a developer known as ACK-J stumbled onto a tech article that revealed eBay was secretly port scanning their customers (i.e. scanning their users’ internet-facing devices to learn what apps and services are listening on the network). The article further claimed there was nothing anyone could do to prevent this privacy compromise. ACK-J took that as a challenge. “After going down many rabbit holes,” he says, “I found that this script, which was port scanning everyone, is, in my opinion, malware.”

We spoke with ACK-J to better understand the obscure privacy risks of port scanning and how his extension Port Authority offers unique protections.

Why does port scanning present a privacy risk?

ACK-J: There is a common misconception/ignorance around how far websites are able to peer into your private home network. While modern browsers limit this to an extent, it is still overly permissive in my opinion. The privacy implications arise when websites, such as google.com, have the ability to secretly interact with your router’s administrative interface and local services running on your computer, and to discover devices on your home network. This behavior should be blocked by the same-origin policy (SOP), a fundamental security mechanism built into every web browser since the mid-1990s; however, due to convenience, it appears to be disabled for these requests. This caught a lot of people by surprise, including myself, and is why I wanted to make this type of traffic “opt-in” on my devices.

Do you consider port scanning “malware”? 

ACK-J: I don’t necessarily consider port scanning malware; port scanning is commonplace and should be expected for any computer connected to the internet with a public IP address. On the other hand, devices on our home networks do not have public IP addresses and instead are protected from this scanning by a technology called network address translation (NAT). Due to the nature of how browsers and websites work, the website code needs to be rendered on the user’s device (behind the protections put in place by NAT). This means websites are in a privileged position to communicate with devices on your home network (e.g. IoT devices, routers, TVs, etc.). There are certainly legitimate use cases for port scanning even on internal networks, the most common being communicating with a program running on your PC such as Discord. I prefer to be able to explicitly allow this type of behavior instead of leaving it wide open by default.

Is there a way to summarize how your extension addresses the privacy leak of port scanning?

ACK-J: Port Authority acts in a similar manner to a bouncer at a bar: whenever your computer tries to make a request, Port Authority verifies that the request is not trying to port scan your private network. If the request passes the check, it is allowed in and everything functions as normal. If it fails, the request is dropped. This all happens in a matter of milliseconds, but if a request is blocked you will get a notification.

Should Port Authority users expect occasional disruptions using websites that port scan, like eBay?

ACK-J: Nope, I’ve been using it for years along with many friends, family, and 1,000 other daily users. I’ve never received a single report that a website would not allow you to login, check-out, or other expected functionality due to the extension blocking port scans. There are instances where you’d like your browser to communicate with an app on your PC such as Discord, in this case you’ll receive an alert and could add Discord to an allow-list or simply click the “Blocking” toggle to disable blocking temporarily.

Do you see Port Authority growing in terms of a feature set, or do you feel it’s relatively feature complete and your focus is on maintenance/refinement?

ACK-J: I like extensions that serve a specific purpose so I don’t see it growing in features but I’d never say never. I’ve added an allow-list to explicitly permit certain domains to interact with services on your private network. I haven’t enabled this feature on the public extension yet but will soon.

Apart from Port Authority, do you have any plans to develop other extensions?

ACK-J: I actually do! I just finished writing up an extension called MailFail that checks the website you are on for misconfigurations in their email server that would allow someone to spoof emails using their domain. This will be posted soon!


Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: Port Authority appeared first on Mozilla Add-ons Community Blog.

Firefox Developer ExperienceFirefox DevTools Newsletter — 125

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 125 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like Artem Manushenkov who updated the Debugger Watch Expressions panel input field placeholder (#1619201)

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues

Pop it up!

Firefox 125 adds support for the Popover API, which is now supported across all major browsers 🎉. As said on the related MDN page:

The Popover API provides developers with a standard, consistent, flexible mechanism for displaying popover content on top of other page content. Popover content can be controlled either declaratively using HTML attributes, or via JavaScript.

In HTML, popover elements can be declared with a popover attribute. The popover can then be toggled from a button element which specifies a popovertarget attribute referencing the id of the popover element.


Firefox DevTools Inspector markup view. We can see a button with a popovertarget attribute and next to it a "select element" button. A div element with a popover attribute is displayed as well.
Inspector displayed on https://mdn.github.io/dom-examples/popover-api/blur-background/

In the Inspector markup view, an icon is displayed next to the popovertarget attribute so you can quickly jump to the popover element.
Popover elements can be toggled from JavaScript with HTMLElement.showPopover, HTMLElement.hidePopover, and HTMLElement.togglePopover. beforetoggle and toggle events are fired when a popover element is toggled, and the Debugger provides those events in the Event Listeners Breakpoints panel.

Note that we don’t display ::backdrop pseudo-element rules yet, but we will soon (the target is Firefox 127, see #1893644).

Performance

As announced in the last newsletter, we’re focusing on performance for a few months to provide a fast and snappy experience to our beloved users. We’re happy to report that the Style Editor panel is now up to 20% faster to open (#1884072).

Performance test duration going from ~750ms to ~600ms around March 14th

We also improved the Debugger opening when a page contains a lot of Javascript sources (#1880809). In a specific case, we could spend around 9 whole seconds processing the different sources and populating the sources tree (see the 124 Firefox profile). In 125, it now only takes a bit more than 600 milliseconds, meaning it’s now 14 times faster (see the 125 Firefox profile).

Firefox Profiler Flame chart screenshot for the same function, on Firefox 124 and 125.

This also shows up on less extreme cases: our performance tests reported an average of 3% improvement on Debugger opening.

Debugger

There is now a button indicating if the opened file is an original file or a bundle, or if there was an issue when trying to retrieve the Source Map file (#1853899).

Firefox debugger with a tsx file opened. At the bottom of the file text, there is a button saying "original file". A popup menu is opened and has the following items: - Enable Source Maps - Show and open original location by default - Jump to the related original source - Open the Source Map file in a new tab

Clicking on the button opens a menu dedicated to Source Map, where you can:

  • enable or disable Source Map
  • indicate if the Debugger should open original files by default
  • select the related original/bundle source
  • open the .map file in a new Firefox tab

We also fixed a glitch around text selection and line highlighting (#1878698), as well as an issue which was preventing the Outline panel from working properly (#1879322). Finally, we added back the preference that allows disabling the paused debugger overlay (#1865439). If you want to do so, go to about:config, search for devtools.debugger.features.overlay and toggle it to false.

Miscellaneous

  • CSP error messages in the Console now provide the effective directive (#1848315)
  • Infinity wasn’t visible in the Console auto-completion menu (#1698260)
  • Clicking on a relative URL of an image in the Inspector now honors the document’s base URL (#1871391)
  • An issue that could cause the Network Monitor to crash is now fixed (#1884571)

Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂

The Rust Programming Language BlogRust participates in OSPP 2024

Similar to our previous announcements of the Rust Project's participation in Google Summer of Code (GSoC), we are now announcing our participation in Open Source Promotion Plan (OSPP) 2024.

OSPP is a program organized in large part by the Institute of Software, Chinese Academy of Sciences. Its goal is to encourage college students to participate in developing and maintaining open source software. The Rust Project is already registered and has a number of projects available for mentorship:

Eligibility is limited to students and there is a guide for potential participants. Student registration ends on the 3rd of June with the project application deadline a day later.

Unlike GSoC which allows students to propose their own projects, OSPP requires that students only apply for one of the registered projects. We do have an #ospp Zulip stream and potential contributors are encouraged to join and discuss details about the projects and connect with mentors.

After the project application window closes on June 4th, we will review and select participants, which will be announced on June 26th. From there, students will participate through to the end of September.

As with GSoC, this is our first year participating in this program. We are incredibly excited for this opportunity to further expand into new open source communities and we're hopeful for a productive and educational summer.

Support.Mozilla.OrgMake your support articles pop: Use the new Firefox Desktop Icon Gallery

Hello, SUMO community!

We’re thrilled to roll out a new tool designed specifically for our contributors: the Firefox Desktop Icon Gallery. This gallery is crafted for quick access and is a key part of our strategy to reduce cognitive load in our Knowledge Base content. By providing a range of inline icons that accurately depict interface elements of Firefox Desktop, this resource makes it easier for readers to follow along without overwhelming visual information.

We want your feedback! Join the conversation in our SUMO forum thread to ask questions or suggest new icons. Your feedback is crucial for improving this tool.

Thanks for helping us support the Firefox community. We can’t wait to see how you use these new icons to enrich our Knowledge Base!

Stay engaged and keep rocking the helpful web!


The Rust Programming Language BlogAnnouncing Rustup 1.27.1

The Rustup team is happy to announce the release of Rustup version 1.27.1. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rustup installed, getting Rustup 1.27.1 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

$ rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

$ rustup update

If you don't have it already, you can get Rustup from the appropriate page on our website.

What's new in Rustup 1.27.1

This new Rustup release involves some minor bug fixes.

The headlines for this release are:

  1. Prebuilt Rustup binaries should be working on older macOS versions again.
  2. rustup-init will no longer fail when fish is installed but ~/.config/fish/conf.d hasn't been created.
  3. Regressions regarding symlinked RUSTUP_HOME/(toolchains|downloads|tmp) have been addressed.

Full details are available in the changelog!

Rustup's documentation is also available in the Rustup Book.

Thanks

Thanks again to all the contributors who made Rustup 1.27.1 possible!

  • Anas (0x61nas)
  • cuiyourong (cuiyourong)
  • Dirkjan Ochtman (djc)
  • Eric Huss (ehuss)
  • eth3lbert (eth3lbert)
  • hev (heiher)
  • klensy (klensy)
  • Chih Wang (ongchi)
  • Adam (pie-flavor)
  • rami3l (rami3l)
  • Robert (rben01)
  • Robert Collins (rbtcollins)
  • Sun Bin (shandongbinzhou)
  • Samuel Moelius (smoelius)
  • vpochapuis (vpochapuis)
  • Renovate Bot (renovate)

The Rust Programming Language BlogAutomatic checking of cfgs at compile-time

The Cargo and Compiler team are delighted to announce that starting with Rust 1.80 (or nightly-2024-05-05) every reachable #[cfg] will be automatically checked to verify that it matches the expected config names and values.

This can help with verifying that the crate is correctly handling conditional compilation for different target platforms or features. It ensures that the cfg settings are consistent between what is intended and what is used, helping to catch potential bugs or errors early in the development process.

This addresses a common pitfall for new and advanced users.

This is another step in our commitment to providing user-focused tooling, and we are eager and excited to finally see it fixed, after more than two years since the original RFC 3013.

A look at the feature

Every time a Cargo feature is declared, that feature is transformed into a config that is passed to rustc (the Rust compiler). Together with the well-known cfgs, rustc uses it to verify whether any of the #[cfg], #![cfg_attr] and cfg! conditions use unexpected configs, and reports a warning with the unexpected_cfgs lint when they do.

Cargo.toml:

[package]
name = "foo"

[features]
lasers = []
zapping = []

src/lib.rs:

#[cfg(feature = "lasers")]  // This condition is expected
                            // as "lasers" is an expected value
                            // of the `feature` cfg
fn shoot_lasers() {}

#[cfg(feature = "monkeys")] // This condition is UNEXPECTED
                            // as "monkeys" is NOT an expected
                            // value of the `feature` cfg
fn write_shakespeare() {}

#[cfg(windosw)]             // This condition is UNEXPECTED
                            // it's supposed to be `windows`
fn win() {}

cargo check:

Screenshot of the cargo check output, showing unexpected_cfgs warnings for the unexpected conditions above

Expecting custom cfgs

UPDATE: This section was added with the release of nightly-2024-05-19.

From Cargo's point of view, a custom cfg is one that is neither defined by rustc nor by a Cargo feature. Think of tokio_unstable, has_foo, ... but not feature = "lasers", unix or debug_assertions.

Some crates might use custom cfgs, like loom, fuzzing or tokio_unstable, that they expect from the environment (RUSTFLAGS or other means) and which are always statically known at compile time. For those cases, Cargo provides, via the [lints] table, a way to statically declare those cfgs as expected.

Defining those custom cfgs as expected is done through the special check-cfg config under [lints.rust.unexpected_cfgs]:

Cargo.toml

[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(loom)', 'cfg(fuzzing)'] }

Custom cfgs in build scripts

On the other hand, some crates use custom cfgs that are enabled by some logic in the crate's build.rs. For those crates Cargo provides a new instruction: cargo::rustc-check-cfg (or cargo:rustc-check-cfg for older Cargo versions).

The syntax to use is described in the rustc book section checking configuration, but in a nutshell the basic syntax of --check-cfg is:

cfg(name, values("value1", "value2", ..., "valueN"))

Note that every custom cfg must always be expected, regardless of whether the cfg is active or not!

build.rs example

build.rs:

fn main() {
    println!("cargo::rustc-check-cfg=cfg(has_foo)");
    //        ^^^^^^^^^^^^^^^^^^^^^^ new with Cargo 1.80
    if has_foo() {
        println!("cargo::rustc-cfg=has_foo");
    }
}

Each cargo::rustc-cfg should have an accompanying unconditional cargo::rustc-check-cfg directive to avoid warnings like this: unexpected cfg condition name: has_foo.

Equivalence table

| cargo::rustc-cfg | cargo::rustc-check-cfg |
| --- | --- |
| foo | cfg(foo) or cfg(foo, values(none())) |
| foo="" | cfg(foo, values("")) |
| foo="bar" | cfg(foo, values("bar")) |
| foo="1" and foo="2" | cfg(foo, values("1", "2")) |
| foo="1" and bar="2" | cfg(foo, values("1")) and cfg(bar, values("2")) |
| foo and foo="bar" | cfg(foo, values(none(), "bar")) |

More details can be found in the rustc book.

Frequently asked questions

Can it be disabled?

For Cargo users, the feature is always on and cannot be disabled, but like any other lint it can be controlled: #![warn(unexpected_cfgs)].
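
For instance, to silence it crate-wide via the same [lints] table mechanism shown earlier:

[lints.rust]
unexpected_cfgs = "allow"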

Does the lint affect dependencies?

No, like most lints, unexpected_cfgs will only be reported for local packages thanks to cap-lints.

How does it interact with the RUSTFLAGS env?

You should be able to use the RUSTFLAGS environment variable as before. Currently, --cfg arguments are not checked; only their usage in code is.

This means that doing RUSTFLAGS="--cfg tokio_unstable" cargo check will not report any warnings, unless tokio_unstable is used within your local crates, in which case the crate author will need to make sure that custom cfg is expected with cargo::rustc-check-cfg in the build.rs of that crate.
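
For example, a crate meant to be built with RUSTFLAGS="--cfg tokio_unstable" could declare that cfg as expected with a one-line build script, a minimal sketch of which looks like this:

fn main() {
    // Declare `tokio_unstable` as an expected cfg so that
    // `#[cfg(tokio_unstable)]` in this crate no longer triggers
    // the unexpected_cfgs lint.
    println!("cargo::rustc-check-cfg=cfg(tokio_unstable)");
}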

How to expect custom cfgs without a build.rs?

UPDATE: Cargo with nightly-2024-05-19 now provides the [lints.rust.unexpected_cfgs.check-cfg] config to address the statically known custom cfgs.

There is currently no way to expect a custom cfg other than with cargo::rustc-check-cfg in a build.rs.

Crate authors that don't want to use a build.rs and cannot use [lints.rust.unexpected_cfgs.check-cfg] are encouraged to use Cargo features instead.

How does it interact with other build systems?

Non-Cargo based build systems are not affected by the lint by default. Build system authors that wish to have the same functionality should look at the rustc documentation for the --check-cfg flag for a detailed explanation of how to achieve the same functionality.

  1. The stabilized implementation and RFC 3013 diverge significantly; in particular, there is only one form for --check-cfg: cfg() (instead of values() and names(), which were incomplete and subtly incompatible with each other).

  2. cargo::rustc-check-cfg will start working in Rust 1.80 (or nightly-2024-05-05). From Rust 1.77 to Rust 1.79 (inclusive) it is silently ignored. In Rust 1.76 and below a warning is emitted when used without the unstable Cargo flag -Zcheck-cfg.

Don Martian easy experiment to support behavioral advertising

This is a follow-up to a previous post on how a majority of US residents surveyed are now using an ad blocker, and how the survey found that privacy concerns are now the number one reason to block ads.

Almost as long as Internet privacy tools have been a thing, so have articles from personalized ad proponents telling us not to use them, because personalized ads are good, actually. The policy debate over personalized (or surveillance, or cross-context behavioral, or tracking-based, or whatever you want to call it) advertising seems to keep repeating an endless argument: on the one hand, personalized advertising causes some risk or cost (I’m not going to summarize the risks or costs here; go read Bob Hoffman’s books or Microtargeting as Information Warfare for more info), but on the other hand we have to somehow balance that against the benefits of personalized advertising.

Benefits? Let’s see them. The claim that cross-context behavioral advertising is good for consumers should be straightforward to test. If ad personalization really helps match buyers and sellers in a market, then users of privacy tools and privacy settings must be buying worse products and services. Research should show that the more privacy options you pick, the less happy you are with your stuff, and the more personalized your ad experience is, the more satisfied a customer you are. This is different from asking whether or not people prefer to have ad personalization turned on. That has been pretty extensively covered, and the answer is that some people do, and some people don’t. This question isn’t about whether people like personalized ads or not; it’s about whether people who get more personalized ads are happier with how they spend their money.

This should be a fairly low-cost project because in general, the companies that do the most personalized advertising are in the best position to do the research to support it. Are users of privacy tools and settings more or less satisfied with the products and services they buy than people who leave the personalized ad options on?

  • Do privacy-protected users give lower ratings to the products they buy?

  • Do privacy-protected users return or stop using more of their purchases?

  • Are privacy-protected users more likely to buy a replacement, competing product after an unsuccessful first purchase in a category?

  • Are privacy-protected users more likely to agree with general statements about a decline in quality and trustworthiness in business in general?

The correlation between more privacy and less-satisfied consumers would be detectable from a variety of angles. Vendors of browsers with preferences that affect ad targeting should be able to show that people who turn on the privacy settings are somehow worse off than people who don’t. Anti-adblock companies do research on ad blocker users—so how are shopping experiences different for those users? Any product that connects to a server for updates or telemetry is providing data on how long the buyer chooses to keep using it. And—the biggest opportunity here—any company that has an Apple iOS app (and that’s a lot of companies) should be able to compare satisfaction metrics between customers with App Tracking Transparency (ATT) on or off.

Ad platforms, search engines, social network companies, and online retailers all have access to the needed info on ads, privacy settings, locations, and purchases. Best of all, they’re constantly running customer surveys and experiments of all kinds. It would be straightforward for any of these companies to run yet another user satisfaction survey, to prove what should be an obvious, measurable effect. I’m really looking for any kind of research here, whether it’s a credit card company running a SQL query on existing data to point out that customers with iOS app tracking turned off have more chargebacks, or a longer-term customer satisfaction study, anything.

looking at the data we do have

Update 16 May 2024: Balancing User Privacy and Personalization by Malika Korganbekova and Cole Zuber. This study simulated the effects of a privacy feature by truncating browsing history for some Wayfair shoppers, and found that people who were assigned to the personalized group and chose a product personalized to them were 10% less likely to return it than people in the non-personalized group.

The Welfare Effects of Ad Blocking by Lin et al. was different—members of the treatment group got an ad blocker affecting all sites, not just one retail site.

[P]articipants that were asked to install an ad-blocker become less likely to regret recent purchases, while participants that were asked to uninstall their ad-blocker report lower levels of satisfaction with their recent purchases.

The ad blockers used in that study, however, were multi-purpose ones such as uBlock Origin that block ads in general, not just personalization.

The effect of privacy settings on scams goes two ways: you can avoid being specifically targeted for a scam, but more likely you can also just get more scam ads by default if you feed in too little info to be targeted for the good ads.

The Internet as a whole is much more varied in seller honesty than the Wayfair platform is, which might help explain the difference in customer satisfaction seen between the Korganbekova and Zuber paper and the Lin et al. paper. Lin et al. showed that people were more satisfied as customers when receiving fewer ads in total, but they might have been even less satisfied if they received more of the lower-quality ads that you’re more likely to get if adtech firms don’t have enough data to target you for a bigger-budget campaign.

Another related paper is Behavioral advertising and consumer welfare: An empirical investigation.

The presence of low quality vendors, along with the recent increase in the use of ad blockers, makes it increasingly difficult for new, high quality vendors, to reach new clients. Consumers benefit from having access to new sellers that are able to meet their needs through behavioral ads, as long as they are good sellers.

but

targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products, compared to competing alternatives found in organic search results

If you look back on the history of advertising, there has never been an ad medium that required so much legal and technical complexity to try to get people to accept it. Why is Meta going to so much trouble to try to come up with a legal way to require people in the EU to accept personalized ads? If ad personalization is so good for consumers, won’t they pick it on their own? Anyway, I’m looking for research on how personalization and privacy choices affect customer satisfaction.

Related

free riding on future web ads?

Reputation, signaling, and targeted ads

B L O C K in the U S A

banning surveillance advertising

privacy economics sources

When can deceptive sellers outbid honest sellers for ad impressions?

Adrian GaudebertThe challenges of teaching a complex game

When I was 13, my mom bought me Civilization III from a retail shop, then went on to do some more shopping. I stayed in the car, with this elegant box in my hands, craving to play the game it contained. I opened the box, and there discovered something magical: the Civilization III Manual. Having nothing better to do, I started reading it…

The Civilization III manual <figcaption>That game manual was THICK.</figcaption>

More than 20 years later, I still remember how great reading that book felt. I was propelled into the game, learning about its systems and strategies, discovering screens of foggy maps and world wonders. It made me love the game before I had even played it! Since then I've played all Civilization games that came out — including Humankind, the unofficial 7th episode — and loved all of them. Would I have had the same connection to these games had I not read the manual? Impossible to tell. Would I have read that book had I not been trapped in a car with the game box on my lap? Definitely not! Even the developers of the game knew that nobody was reading those texts:

A quote from the Civilization III manual <figcaption>“The authors and developers of computer games know too well that most players never read the manual.”</figcaption>

Here's me now, 20-something years later, having made a game of my own and needing to teach it to potential players… Should I write a full-blown game manual, hoping that a little 13-year-old will read it in a parking lot?

Heck no! Ain't nobody got time for that!

Let's make a tutorial instead

Dawnmaker has been built almost like a board game, in the sense that it has complex rules that you have to learn before you can play. Physical board game players are used to that: someone has to go through the rules before they can explain them to the rest of their player group. But video games are a different beast, and we've long moved away from reading… well, almost anything at all, really, and certainly not rules. You can't put each player in a car in a parking lot with nothing else to do other than reading the rules of your game. If you were to present the video game player with a rule book, in today's world of abundance, they would just move on to the next game in their unending backlog.

Teaching a game is thus incredibly difficult: it has to have as little text as possible, it has to be fun and rewarding, and it has to hook the player so that, by the end of the teaching phase, they still want to play the actual game.

It's with all those things in mind that I started building Dawnmaker's tutorial. I set two main rules in place: first, use as little words as possible, and second, make the player learn while doing. The first iteration of the tutorial was very terse: you only had a small goal written at the top of the screen, and almost no explanations whatsoever about what you were to do, or why. It turns out, that didn't work too well. Players were lost, especially when it came to the most complex actions or features of the game. Past a certain point in the tutorial, almost all of the players stopped reading the objectives at the top of the screen. And finally, they were also lacking a sense of purpose.

So for all my good intents, I had to revise my approach and write more words. The second iteration, which is now live in the game and demo, has a lot of small tooltips that pop up around the screen as the interface shows itself. I've tried to load information as slowly as possible, giving the player only what they need at a given moment. I think I approximately quadrupled the number of words in the tutorial, but such is the reality of teaching a complex game.

The other big change I made was to give the player a better sense of progression in the tutorial. The objectives now stay visible in a box on the left-hand side of the screen. They have little animations and sounds that reward the player when they complete a task. Seeing that list grow shows how the player has progressed and is also rewarding by itself.

Teaching the game doesn't only happen in the tutorial though, but also through the various signs and feedback we put around the game. Here's an example: during the tutorial, new players did not understand what was happening with the new building choice that was presented. The solution was not to explain with words what those buildings were, but to show feedback. Now, whenever you gain a new building, you see that same building popping up in the center of the board, then moving towards the buildings roster. It's a double win: players understand that the building goes somewhere, they see where, and they are inclined to check that place and see what it is. I guess one piece of feedback is worth a thousand words?

This version of the tutorial is still far from perfect. But it is the first thing players interact with, and thus it is a piece of the game that really has to shine. We'll keep collecting feedback from new players, and use that to polish the tutorial until, like Eclairium, it shines bright.

BTW: unlike Eclairium, diamonds do not shine, they simply reflect light. Rihanna has been lying to us all.

Next event: Geektouch in Lyon

If you're in Lyon or close to it, come and meet us at the Geektouch / Japan Touch festival in Eurexpo on May 4th and 5th! We'll have a stand on the Indie Game Lab space (lot A87). You will of course get to play with the latest version of Dawnmaker. We hope to see you there!


This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head out to Dawnmaker's presentation page and fill the form. You'll receive regular stories about how we're making this game, the latest news of its development, as well as an exclusive access to Dawnmaker's alpha version!

Join our community!

Wil ClouserI made a new hack poster

I was feeling nostalgic a couple months ago and built a hack poster out of plywood. It’s mostly modeled after the original but I added the radio tower and changed the words. “This technology could fall into the right hands” still makes me smile when I see it out in the world.

Poster hanging on the wall Close-up of radio tower Close-up of lettering

Mozilla Addons Blog1000+ Firefox for Android extensions now available

The new open ecosystem of extensions on Firefox for Android launched in December with just over 400 extensions. Less than five months later we’ve surpassed 1,000 Firefox for Android extensions. That’s an impressive achievement by this developer community! It’s exciting to see so many developers embrace the opportunity to explore new creative possibilities for mobile browser customization.

If you’re a developer intrigued to learn more about building extensions on Firefox for Android, here’s a great place to get started. Or maybe you already have some feedback about missing API’s on Firefox for Android?

What are some of your favorite new Firefox for Android extensions? Drop some props in the comments below.

The post 1000+ Firefox for Android extensions now available appeared first on Mozilla Add-ons Community Blog.