Niko Matsakis: Easing tradeoffs with profiles

Rust helps you to build reliable programs. One of the ways it does that is by surfacing things to your attention that you really ought to care about. Think of the way we handle errors with Result: if some operation can fail, you can’t, ahem, fail to recognize that, because you have to account for the error case. And yet often the kinds of things you care about depend on the kind of application you are building. A classic example is memory allocation, which for many Rust apps is No Big Deal, but for others is something to be done carefully, and for still others is completely verboten. But this pattern crops up a lot. I’ve heard and like the framing of designing for “what do you have to pay attention to” – Rust currently aims for a balance that errs on the side of paying attention to more things, but tries to make them easy to manage. But this post is about a speculative idea of how we could do better than that by allowing programs to declare a profile.

Profiles declare what you want to pay attention to

The core idea is pretty simple. A profile would be declared, I think, in the Cargo.toml. Profiles would never change the semantics of your Rust code. You could always copy and paste code between Rust projects with different profiles and things would work the same. But it would adjust lint settings and errors. So if you copy code from a more lenient profile into your more stringent project, you might find that it gets warnings or errors it didn’t get before.

Primarily, this means lints

In effect, a profile would be a lot like a lint group. So if we have a profile for kernel development, this would turn on various lints that help to detect things that kernel developers really care about – unexpected memory allocation, potential panics – but other projects don’t. Much like Rust-for-linux’s existing klint project.

So why not just make it a lint group? Well, actually, maybe we should – but I thought Cargo.toml would be better because it would allow us to apply more stringent checks to what dependencies you use, which features they use, etc. For example, maybe dependencies could declare that some of their features are not well suited to certain profiles, and you would get a warning if your application winds up depending on them. I imagine you would select a profile when running cargo new.

Example: autoclone for Rc and Arc

Let’s give an example of how this might work. In Rust today, if you want to have many handles to the same value, you can use a reference counted type like Rc or Arc. But whenever you want to get a new handle to that value, you have to explicitly clone it:

let map: Rc<HashMap> = create_map();
let map2 = map.clone(); // 👈 Clone!

The idea of this clone is to call attention to the fact that custom code is executing here. This is not just a memcpy [1]. I’ve been grateful for this some of the time. For example, when optimizing a concurrent data structure, I really like knowing exactly when one of my reference counts is going to change. But a lot of the time, these calls to clone are just noise, and I wish I could just write let map2 = map and be done with it.

So what if we modify the compiler as follows. Today, when you move out from a variable, you effectively get an error if that is not the “last use” of the variable:

let a = v; // move out from `v` here...
...
read(&v); // 💥 ...so we get an error when we use `v`.

What if, instead, when you move out from a value and it is not the last use, we introduced an auto-clone operation? This may fail if the type is not auto-cloneable (e.g., a Vec), but for Rc, Arc, and other O(1) clone operations, it would be equivalent to x.clone(). We could designate which types are auto-cloneable via extra marker traits, for example. This means that let a = v above would be equivalent to let a = v.clone().
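
To make that concrete, here is a minimal sketch of what an opt-in marker trait might look like. The trait name AutoClone and the impls are my own assumptions about how the design could be spelled, not part of any accepted proposal:

use std::rc::Rc;
use std::sync::Arc;

// Hypothetical marker trait: types whose clone is O(1) and cheap enough
// that the compiler could insert it implicitly.
trait AutoClone: Clone {}

// Reference-counted handles would opt in...
impl<T> AutoClone for Rc<T> {}
impl<T> AutoClone for Arc<T> {}
// ...but Vec<T> would not, so moving out of a Vec that is used again
// would stay a hard error rather than becoming an implicit clone.

fn main() {
    let map: Rc<Vec<u32>> = Rc::new(vec![1, 2, 3]);
    // Today you must write the clone explicitly; under this sketch,
    // `let map2 = map;` followed by another use of `map` would be
    // desugared to the line below, plus an "implicit clone" lint.
    let map2 = map.clone();
    println!("{} handles alive", Rc::strong_count(&map2));
}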

Now, here comes the interesting part. When we introduce an auto-clone, we would also introduce a lint: implicit clone operation. In the higher-level profile, this lint would be allow-by-default, but in the profile for lower-level code, it would be deny-by-default, with an auto-fix to insert clone. Now when I’m editing my concurrent data structure, I still get to see the clone operations explicitly, but when I’m writing my application code, I don’t have to think about it.

Example: dynamic dispatch with async trait

Here’s another example. Last year we spent a while exploring the ways that we can enable dynamic dispatch for traits that use async functions. We landed on a design that seemed like it hit a sweet spot. Most users could just use traits with async functions like normal, but they might get some implicit allocations. Users who cared could use other allocation strategies by being more explicit about things. (You can read about the design here.) But, as I described in my blog post The Soul of Rust, this design had a crucial flaw: although it was still possible to avoid allocation, it was no longer easy. This seemed to push Rust over the line from its current position as a systems language that can claim to be a true C alternative into a “just another higher-level language that can be made low-level if you program with care”.

But profiles seem to offer another alternative. We could go with our original design, but whenever the compiler inserted an adapter that might cause boxing to occur, it would issue a lint warning. In the higher-level profile, the warning would be allow-by-default, but in the lower-level profile, it would be deny-by-default.
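
For illustration, here is roughly the adapter you write by hand today (it is also what the async-trait crate generates); the trait Fetch and the function use_dyn are made-up names. Under the design sketched above, the Box::pin would be inserted by the compiler and reported via the boxing lint:

use std::future::Future;
use std::pin::Pin;

// The async fn's future is boxed so it has a known size behind `dyn`.
trait Fetch {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + '_>>;
}

struct StaticFetcher;

impl Fetch for StaticFetcher {
    fn fetch(&self) -> Pin<Box<dyn Future<Output = String> + '_>> {
        Box::pin(async { "hello".to_string() }) // 👈 the hidden allocation
    }
}

fn use_dyn(f: &dyn Fetch) -> Pin<Box<dyn Future<Output = String> + '_>> {
    f.fetch()
}

fn main() {
    let fetcher = StaticFetcher;
    // Drive the future with any executor, e.g. futures::executor::block_on.
    let _future = use_dyn(&fetcher);
}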

Example: panic effects or other capabilities

If you really want to go crazy, we could use annotations to signal various kinds of effects. For example, one way to achieve panic safety would be to allow functions to be annotated with #[panics], signaling a function that might panic. Depending on the profile, this might require you to declare that the caller may panic (similar to how unsafe works now).
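
As a purely hypothetical sketch (there is no #[panics] attribute today), the surface syntax might look like the commented-out code below; the rest just shows the closest approximations that exist now:

// Hypothetical surface syntax, shown in comments because it is not
// valid Rust today:
//
//     #[panics]
//     fn div(a: u32, b: u32) -> u32 {
//         a / b // panics if b == 0
//     }
//
// In a lower-level profile, calling `div` without acknowledging the
// panic effect would trip a deny-by-default lint, much like calling an
// `unsafe fn` outside an `unsafe` block is an error today.
fn main() {
    // The closest approximations today are clippy lints such as
    // `clippy::panic` and `clippy::unwrap_used`.
    println!("{}", 10_u32 / 2);
}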

Depending on how far we want to go here, we would ultimately have to integrate these kinds of checks more deeply into the type system. For example, if you have a fn-pointer, or a dyn Trait call, we would have to introduce “may panic” effects into the type system to be able to track that information (but we could be conservative and just assume calls by pointer may panic, for example). But we could likely still use profiles to control how much you as the caller choose to care.

Changing the profile for a module or a function

Because profiles primarily address lints, we can also allow you to change the profile in a more narrow way. This could be done with lint groups (maybe each profile is a lint group), or perhaps with a #![profile] annotation.

Why I care: profiles could open up design space

So why am I writing about profiles? In short, I’m looking for opportunities to do the classic Rust thing of trying to have our cake and eat it too. I want Rust to be versatile, suitable for projects up and down the stack. I know that many projects contain hot spots or core bits of the code where the details matter quite a bit, and then large swaths of code where they don’t matter a jot. I’d like to have a Rust that feels closer to Swift that I can use most of the time, and then the ability to “dial up” the detail level for the code where I do care.

Conclusion: the core principles

I do want to emphasize that this idea is speculation. As far as I know, nobody else on the lang team is into this idea – most of them haven’t even heard about it!

I also am not hung up on the details. Maybe we can implement profiles with some well-named lint groups. Or maybe, as I proposed, it should go in Cargo.toml.

What I do care about are the core principles of what I am proposing:

  • Defining some small set of profiles for Rust applications that define the kinds of things you want to care about in that code.
    • I think these should be global and not user-defined. This will allow profiles to work more smoothly across dependencies. Plus we can always allow user-defined profiles or something later if we want.
  • Profiles never change what code will do when it runs, but they can make code get more warnings or errors.
    • You can always copy-and-paste code between applications without fear that it will behave differently (though it may not compile).
    • You can always understand what Rust code will do without knowing the profile or context it is running in.
  • Profiles let us do more implicit things to ease ergonomics without making Rust inapplicable for other use cases.
    • Looking at Aaron Turon’s classic post introducing the lang team’s Rust 2018 ergonomics initiative, profiles let users dial down the context dependence and applicability of any particular change.

  1. Back in the early days of Rust, we debated a lot about what ought to be the rule for when clone was required. I think the current rule of “memcpy is quiet, everything else is not” is pretty decent, but it’s not ideal in a few ways. For example, an O(1) clone operation like incrementing a refcount is not the same as an O(n) operation like cloning a vector, and yet they look the same. Moreover, memcpy’ing a giant array (or Future) can be a real performance footgun (not to mention blowing up your stack), and yet we let you do that quite quietly. This is a good example of where profiles could help, I believe. ↩︎

Mozilla Localization (L10N)Localizer Spotlight: Meet Reza (Persian locale)

Welcome to our second localizer spotlight, presenting this time Reza from our Persian community.

Q. What first drew you to want to volunteer with Mozilla’s localization program?

The growing community of Persian users highlighted the need for a browser created by the people for the people. Thus, I began assisting the community in translating Firefox into Persian. Subsequently, we expanded our efforts to include other products like Firefox for phones.

Q. What have been some of the most rewarding or impactful projects you’ve localized for Mozilla?

The entire endeavor with Mozilla was driven by volunteering and a strong motivation to provide safe and open-source tools to the community. Given the substantial Persian (Farsi) population of over 110 million people, ensuring their access to interactive and helpful tools became a significant priority. We also focused on addressing issues related to Mozilla extensions, particularly the text-reader (Readaloud), to assist individuals with visual disabilities.

We discovered that a substantial number of people with visual impairments were utilizing Mozilla’s text-reader because it was one of the few free and open tools that catered to their specific needs. One day, I received an email from a Persian user with visual impairment, in which she highlighted the widespread utility of such tools for her and her friends. This instance made me realize that we needed to broaden our perspective beyond ordinary users, especially concerning localization, and emphasize accessibility as a key aspect of our work.

Q. What are some of the biggest challenges you’ve faced in translating Mozilla projects? How did you overcome them?

Translating a product is often not sufficient, especially when dealing with Right-To-Left (RTL) languages. It’s imperative to consider usability, accessibility, and how people with diverse language backgrounds perceive the product. Therefore, addressing all the UI/UX challenges and ensuring the product is user-friendly for the end users proved to be quite challenging.

Q. What skills or background do you think helps most for becoming an effective Mozilla translator?

I’m a computer scientist with a passion for open-source software. Naturally, my technical knowledge was sufficient to embark on this journey. However, I found it crucial to put myself in the shoes of end users, understanding how they wish to perceive the product and how we can create a better experience for them.

Q. What advice would you give to someone new wanting to get involved in localizing for Mozilla?

Think about the broader impact that your work has on the community. Translating can be challenging and sometimes even tedious, but we must remember that these small pieces of work drive the community forward and present new opportunities for them.

Interested in featuring in these spotlights? Or know someone you think we should interview? Fill out this form, or reach out directly to delphine at mozilla dot com. Interested in contributing to localization? Head on over here for more details!


Niko Matsakis: Polonius revisited, part 2

In the previous Polonius post, we formulated the original borrow checker in a Polonius-like style. In this post, we are going to explore how we can extend that formulation to be flow-sensitive. In so doing, we will enable the original Polonius goals, but also overcome some of its shortcomings. I believe this formulation is also more amenable to efficient implementation. As I’ll cover at the end, though, I do find myself wondering if there’s still more room for improvement.

Running example

We will be working from the same Rust example as the original post, but focusing especially on the mutation in the false branch [1]:

let mut x = 22;
let mut y = 44;
let mut p: &'0 u32 = &x;
y += 1;
let mut q: &'1 u32 = &y; // Borrow `y` here (L1)
if something() {
    p = q;  // Store borrow into `p`
    x += 1;
} else {
    y += 1; // Mutate `y` on `false` branch
}
y += 1;
read_value(p); // May refer to `x` or `y`

There is no reason to have an error on the y += 1 in the false branch. There is a borrow of y, but on the false branch that borrow is only stored in q, and q will never be read again. So there cannot be undefined behavior (UB).

Existing borrow checker flags an error

The existing borrow checker, however, is not that smart. It sees read_value(p) at the end and, because that line could potentially read x or y, it flags the y += 1 as an error. When expressed this way, maybe you can have some sympathy for the poor borrow checker – it’s not an unreasonable conclusion! But it’s wrong.

The core issue of the existing borrow check stems from its use of a flow insensitive subset graph. This in turn is related to how it does the type check. In Polonius today, each variable has a single type and hence a single origin (e.g., q: &'1 u32). This causes us to conflate all the possible loans that the variable may refer to throughout execution. And yet as we have seen, this information is actually flow dependent.

The borrow checker today is based on a pretty standard style of type checker applied to the MIR. Essentially there is an environment that maps each variable to a type.

Env  = { X -> Type }
Type = scalar | & 'Y T | ...

Then we have type-checking inference rules that thread this same environment everywhere. Conceptually the structure of the rules is as follows:

construct Env from local variable declarations
Env |- each basic block type checks
--------------------------
the MIR type checks

Type-checking a place then uses this Env, bottoming out in an inference rule like:

Env[X] = T
-------------
Env |- X : T

Flow-sensitive type check

The key thing that makes the borrow checker flow insensitive is that we use the same environment at all points. What if instead we had one environment per program point:

EnvAt = { Point -> Env }

Whenever we type check a statement at program point A, we will use EnvAt[A] as its environment. When program point A flows into point B, then the environment at A must be a subenvironment of the environment at B, which we write as EnvAt[A] <: EnvAt[B].

The subenvironment relationship Env1 <: Env2 holds if

  • for each variable X in Env2:
    • X appears in Env1
    • Env1[X] <: Env2[X]

There are two interesting things here. The first is that the set of variables can change over time. The idea is that once a variable goes dead, you can drop it from the environment. The second is that the type of the variable can change according to the subtyping rules.

You can think of flow-sensitive typing as if, for each program variable like q, we have a separate copy per program point, so q@A for point A and q@B for point B. When we flow from one point to another, we assign from q@A to q@B. Like any assignment, this would require the type of q@A to be a subtype of the type of q@B.
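
Here is a minimal sketch, in ordinary Rust, of how the subenvironment check could be phrased. The Origin, Ty, and Env definitions are my own simplifications for illustration; the “constraints” it emits are just the outlives edges discussed below:

use std::collections::HashMap;

// An origin variable instantiated at a program point, e.g. ("'0", "BB1_3").
type Origin = (&'static str, &'static str);

// A drastically simplified type: either a scalar or a shared reference.
enum Ty {
    Scalar,
    Ref(Origin, Box<Ty>),
}

// Environment at one program point: variable -> type.
type Env = HashMap<&'static str, Ty>;

// Require `sub <: sup`, pushing the origin constraint "sub's origin
// outlives sup's origin" (an edge in the subset graph) into `out`.
fn subtype(sub: &Ty, sup: &Ty, out: &mut Vec<(Origin, Origin)>) {
    match (sub, sup) {
        (Ty::Scalar, Ty::Scalar) => {}
        (Ty::Ref(o1, t1), Ty::Ref(o2, t2)) => {
            out.push((*o1, *o2)); // 'o1 : 'o2
            subtype(t1, t2, out); // covariant here; &mut would need equality
        }
        _ => panic!("type mismatch"),
    }
}

// Env1 <: Env2: every variable live in Env2 must appear in Env1 with a subtype.
fn subenv(env1: &Env, env2: &Env, out: &mut Vec<(Origin, Origin)>) {
    for (var, ty2) in env2 {
        let ty1 = env1.get(var).expect("live variable missing from earlier env");
        subtype(ty1, ty2, out);
    }
}

fn main() {
    // Env at BB1_3 (both p and q live) vs. Env at BB2_0 (only q live).
    let env_bb1_3: Env = HashMap::from([
        ("p", Ty::Ref(("'0", "BB1_3"), Box::new(Ty::Scalar))),
        ("q", Ty::Ref(("'1", "BB1_3"), Box::new(Ty::Scalar))),
    ]);
    let env_bb2_0: Env = HashMap::from([
        ("q", Ty::Ref(("'1", "BB2_0"), Box::new(Ty::Scalar))),
    ]);

    let mut edges = Vec::new();
    subenv(&env_bb1_3, &env_bb2_0, &mut edges);
    for ((o1, p1), (o2, p2)) in edges {
        println!("{o1}_{p1} : {o2}_{p2}"); // prints '1_BB1_3 : '1_BB2_0
    }
}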

Flow-sensitive typing in our example

Let’s see how this idea of a flow-sensitive type check plays out for our example. First, recall the MIR for our example from the previous post:

flowchart TD
  Intro --> BB1
  Intro["let mut x: i32\nlet mut y: i32\nlet mut p: &'0 i32\nlet mut q: &'1 i32"]
  BB1["BB1:\np = &x\ny = y + 1;\nq = &y\nif something goto BB2 else BB3"]
  BB1 --> BB2
  BB1 --> BB3
  BB2["BB2\np = q;\nx = x + 1;\n"]
  BB3["BB3\ny = y + 1;"]
  BB2 --> BB4;
  BB3 --> BB4;
  BB4["BB4\ny = y + 1;\nread_value(p);\n"]

  classDef default text-align:left,fill-opacity:0;
  

One environment per program point

In the original, flow-insensitive type check, the first thing we did was to create origin variables ('0, '1) for each of the origins that appear in our types. You can see those variables in the chart above. So we effectively had an environment like

Env_flow_insensitive = {
    p: &'0 i32,
    q: &'1 i32,
}

But now we are going to have one environment per program point. There is one program point in between each MIR statement. So the point BB1_0 would be the entry to basic block BB1, and BB1_1 would be after the first statement. So we have Env_BB1_0, Env_BB1_1, etc. We are going to create distinct origin variables for each of them:

Env_BB1_0 = {
    p: &'0_BB1_0 i32,
    q: &'1_BB1_0 i32,
}

Env_BB1_1 = {
    p: &'0_BB1_1 i32,
    q: &'1_BB1_1 i32,
}

...

Type-checking the edge from BB1 to BB2

Let’s look at point BB1_3, which is the final line in BB1, which in MIR-speak is called the terminator. It is an if terminator (if something goto BB2 else BB3). To type-check it, we will take the environment on entry (Env_BB1_3) and require that it is a sub-environment of the environment on entry to the true branch (Env_BB2_0) and on entry to the false branch (Env_BB3_0).

Let’s start with the true branch. Here we have the environment Env_BB2_0:

Env_BB2_0 = {
    q: &'1_BB2_0 i32,
}

You should notice something curious here – why is there no entry for p? The reason is that the variable p is dead on entry to BB2, because its current value is about to be overridden. The type checker knows not to include dead variables in the environment.

This means that…

  • Env_BB1_3 <: Env_BB2_0 if the type of q at BB1_3 is a subtype of the type of q at BB2_0
  • …so &'1_BB1_3 i32 <: &'1_BB2_0 i32 must hold…
  • …so '1_BB1_3 : '1_BB2_0 must hold.

What we just found then is that, because of the edge from BB1 to BB2, the version of '1 on exit from BB1 flows into '1 on entry to BB2.

Type-checking the p = q assignment

Now let’s look at the assignment p = q. This occurs in statement BB2_0. The environment before the statement is the one we just saw:

Env_BB2_0 = {
    q: &'1_BB2_0 i32,
}

For an assignment, we take the type of the left-hand side (p) from the environment after, because that is what we are storing into. The environment after is Env_BB2_1:

Env_BB2_1 = {
    p: &'0_BB2_1 i32,
}

And so to type check the statement, we get that &'1_BB2_0 i32 <: &'0_BB2_1 i32, or '1_BB2_0 : '0_BB2_1.

In addition to this relation from the assignment, we also have to make the environment Env_BB2_0 be a subenvironment of the env after, Env_BB2_1. But since the sets of live variables are disjoint in this case, that doesn’t add anything to the picture.

Type-checking the edge from BB1 to BB3

As the final example, let’s look at the false edge from BB1 to BB3. On entry to BB3, the variable q is dead but p is not, so the environment looks like

Env_BB3_0 = {
    p: &'0_BB3_0 i32,
}

Following a similar process to before, we conclude that '0_BB1_3 : '0_BB3_0.

Building the flow-sensitive subset graph

We are now starting to see how we can build a flow-sensitive version of the flow graph. Instead of having one node in the graph per origin variable, we now have one node in the graph per origin variable per program point, and we create an edge N1 -> N2 between two nodes if the type check requires that N1 : N2, just as before. Basically the only difference is that we have a lot more nodes.

Putting together what we saw thus far, we can construct a subset graph for this program like the following. I’ve excluded nodes that correspond to dead variables – so for example there is no node '1_BB1_0, because '1 appears in the variable q, and q is dead at the start of the program.

flowchart TD
    subgraph "'0"
        N0_BB1_0["'0_BB1_0"]
        N0_BB1_1["'0_BB1_1"]
        N0_BB1_2["'0_BB1_2"]
        N0_BB1_3["'0_BB1_3"]
        N0_BB2_1["'0_BB2_1"]
        N0_BB3_0["'0_BB3_0"]
        N0_BB4_0["'0_BB4_0"]
        N0_BB4_1["'0_BB4_1"]
    end

    subgraph "'1"
        N1_BB1_2["'1_BB1_2"]
        N1_BB1_3["'1_BB1_3"]
        N1_BB2_0["'1_BB2_0"]
    end
    
    subgraph "Loans"
        L0["{L0} (&x)"]
        L1["{L1} (&y)"]
    end
    
    L0 --> N0_BB1_0
    L1 --> N1_BB1_2
    
    N0_BB1_0 --> N0_BB1_1 --> N0_BB1_2 --> N0_BB1_3
    N0_BB1_3 --> N0_BB3_0
    N0_BB3_0 --> N0_BB4_0 --> N0_BB4_1
    N0_BB2_1 --> N0_BB4_0

    N1_BB1_2 --> N1_BB1_3
    N1_BB1_3 --> N1_BB2_0
    
    N1_BB2_0 --> N0_BB2_1
  

Just as before, we can trace back from the node for a particular origin O to find all the loans contained within O. Only this time, the origin O also indicates a program point.

In particular, compare '0_BB3_0 (the data reachable from p on the false branch of the if) to '0_BB4_0 (the data reachable after the if finishes). We can see that in the first case, the origin can only reference L0, but afterwards, it could reference L1.
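
Here is a sketch of that trace-back: a reverse-reachability walk over a hand-built fragment of the subset graph above. The node names follow the figure (loans start with "L", origins with a tick); a real implementation would of course not store the graph as string pairs:

use std::collections::{HashMap, HashSet};

// Which loans can reach `target` in the subset graph? Walk the edges
// backwards from `target` and collect every loan node we hit.
fn loans_in(
    edges: &[(&'static str, &'static str)],
    target: &'static str,
) -> HashSet<&'static str> {
    // Reverse adjacency: node -> nodes with an edge *into* it.
    let mut preds: HashMap<&'static str, Vec<&'static str>> = HashMap::new();
    for &(from, to) in edges {
        preds.entry(to).or_default().push(from);
    }
    let mut found = HashSet::new();
    let mut seen = HashSet::new();
    let mut stack = vec![target];
    while let Some(node) = stack.pop() {
        if !seen.insert(node) {
            continue; // already visited
        }
        if node.starts_with('L') {
            found.insert(node); // reached a loan node
        }
        if let Some(ps) = preds.get(node) {
            stack.extend(ps.iter().copied());
        }
    }
    found
}

fn main() {
    // A fragment of the subset graph from the running example
    // (the per-statement chain inside BB1 is collapsed for brevity).
    let edges = [
        ("L0", "'0_BB1_0"),
        ("L1", "'1_BB1_2"),
        ("'0_BB1_0", "'0_BB1_3"),
        ("'0_BB1_3", "'0_BB3_0"),
        ("'0_BB3_0", "'0_BB4_0"),
        ("'1_BB1_2", "'1_BB1_3"),
        ("'1_BB1_3", "'1_BB2_0"),
        ("'1_BB2_0", "'0_BB2_1"),
        ("'0_BB2_1", "'0_BB4_0"),
    ];
    // On the false branch, '0 can only reference L0...
    println!("{:?}", loans_in(&edges, "'0_BB3_0")); // {"L0"}
    // ...but after the `if`, it may reference L0 or L1.
    println!("{:?}", loans_in(&edges, "'0_BB4_0")); // {"L0", "L1"} in some order
}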

Active loans

Just as described in the previous post, to complete the analysis we compute the active loans. Active loans are defined in almost exactly the same way, but with one twist. A loan L is active at a program point P if there is a path from the borrow that created L to P where, for each point along the path…

  • there is some live variable whose type at P may reference the loan; and,
  • the place expression that was borrowed by L (here, x) is not reassigned at P.

Note the phrase “whose type at P” in the first condition: we are now taking into account the fact that the type of the variable can change along the path. In particular, it may reference distinct origins.

Implementing using dataflow

Just as in the previous post, we can compute active loans using dataflow. In particular, we gen a loan when it is issued, and we kill a loan L at a point P if (a) there are no live variables whose origins contain L or (b) the path borrowed by L is assigned at P.
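
Here is a minimal gen/kill sketch over the block structure of the running example. The per-block gen and kill sets are written out by hand from the reasoning in this post rather than derived from types and liveness, so this only illustrates the dataflow shape:

use std::collections::{HashMap, HashSet};

type Loans = HashSet<&'static str>;

#[derive(Default)]
struct Facts {
    gens: Loans,  // loans issued by a borrow in this block
    kills: Loans, // loans killed here: no live origin can hold them,
                  // or the borrowed place is reassigned
}

// Forward gen/kill dataflow: out[b] = gens[b] ∪ (in[b] − kills[b]),
// where in[b] is the union of out[p] over the predecessors p of b.
fn active_loans(
    blocks: &[&'static str],
    preds: &HashMap<&'static str, Vec<&'static str>>,
    facts: &HashMap<&'static str, Facts>,
) -> HashMap<&'static str, Loans> {
    let mut out: HashMap<&'static str, Loans> =
        blocks.iter().map(|&b| (b, Loans::new())).collect();
    loop {
        let mut changed = false;
        for &b in blocks {
            let mut inb = Loans::new();
            for p in preds.get(b).map(|v| v.as_slice()).unwrap_or(&[]) {
                inb.extend(out[p].iter().copied());
            }
            let f = &facts[b];
            let mut new_out: Loans = inb.difference(&f.kills).copied().collect();
            new_out.extend(f.gens.iter().copied());
            if new_out != out[b] {
                out.insert(b, new_out);
                changed = true;
            }
        }
        if !changed {
            return out;
        }
    }
}

fn main() {
    // The running example: BB1 gens L0 and L1; on the false branch (BB3)
    // no live origin can hold L1 any more, so it is killed there.
    let blocks = ["BB1", "BB2", "BB3", "BB4"];
    let preds = HashMap::from([
        ("BB2", vec!["BB1"]),
        ("BB3", vec!["BB1"]),
        ("BB4", vec!["BB2", "BB3"]),
    ]);
    let facts = HashMap::from([
        ("BB1", Facts { gens: Loans::from(["L0", "L1"]), kills: Loans::new() }),
        ("BB2", Facts::default()),
        ("BB3", Facts { gens: Loans::new(), kills: Loans::from(["L1"]) }),
        ("BB4", Facts::default()),
    ]);
    let out = active_loans(&blocks, &preds, &facts);
    // Only L0 is active on exit from BB3, so `y = y + 1` there is fine.
    println!("BB3: {:?}", out["BB3"]); // {"L0"}
    println!("BB4: {:?}", out["BB4"]); // {"L0", "L1"} in some order
}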

Applying this to our running example

When we apply this to our running example, the unnecessary error on the false branch of the if goes away. Let’s walk through it.

Entry block

In BB1, we gen L0 and L1 at their two borrow sites, respectively. As a result, the active loans on exit from BB1 will be {L0, L1}:

flowchart TD
  Start["..."]
  BB1["BB1:
       p = &x // Gen: L0
       y = y + 1;
       q = &y // Gen: L1
       if something goto BB2 else BB3
  "]
  BB2["..."]
  BB3["..."]
  BB4["..."]
 
  Start --> BB1
  BB1 --> BB2
  BB1 --> BB3
  BB2 --> BB4
  BB3 --> BB4
 
  classDef default text-align:left,fill:#ffffff;
  classDef highlight text-align:left,fill:yellow;
  class BB3 highlight
  
The false branch of the if

On the false branch of the if (BB3), the only live reference is p, which will be used later on in BB4. In particular, q is dead.

In the flow insensitive version, when the borrow checker looked at the type of p, it was p: &'0 i32, and '0 had the value {L0, L1}, so the borrow checker concluded that both loans were active.

But in the flow sensitive version we are looking at now, the type of p on entry to BB3 is p: &'0_BB3_0 i32. And, consulting the subset graph shown earlier in this post, the value of '0_BB3_0 is just {L0}. So there is a kill for L1 on entry to the block. This means that the only active loan is L0, which borrows x. This in turn means that y = y + 1 is not an error.

flowchart TD
  Start["
    ...
  "]
  BB1["
      BB1:
      p = &x // Gen: L0
      ...
      q = &y // Gen: L1
      ...
  "]
  BB2["
      BB2:
      ...
  "]
  BB3["
      BB3:
      // Kill `L1` (no live references)
      // Active loans: {L0}
      y = y + 1;
  "]
  BB4["
      BB4:
      ...
      read_value(p); // later use of `p`
  "]
 
  Start --> BB1
  BB1 --> BB2
  BB1 --> BB3
  BB2 --> BB4
  BB3 --> BB4
 
  classDef default text-align:left,fill:#ffffff;
  classDef highlight text-align:left,fill:yellow;
  class BB3 highlight
  

The role of invariance: vec-push-ref

I didn’t highlight it before, but invariance plays a really interesting role in this analysis. Let’s see another example, a simplified version of vec-push-ref from polonius:

let v: Vec<&'v u32>;
let p: &'p mut Vec<&'vp u32>;
let x: u32;

/* P0 */ v = vec![];
/* P1 */ p = &mut v; // Loan L0
/* P2 */ x += 1; // <-- Expect NO error here.
/* P3 */ p.push(&x); // Loan L1
/* P4 */ x += 1; // <-- 💥 Expect an error here!
/* P5 */ drop(v);

What makes this interesting? We create a reference p at point P1 that points at v. We then insert a borrow of x, through p, into the vector v. After that point, the reference p is dead, but the loan L1 is still active – this is because it is also stored in v. This connection between p and v is what is key about this example.

The way that this connection is reflected in the type system is through variance. In particular, a type &mut T is invariant with respect to T. This means that when you assign one reference to another, the type that they reference must be exactly the same.
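
Extending the earlier subtyping sketch, here is one way the invariance rule can be phrased as constraint generation. The data structures are again simplified stand-ins; the point is just that relating types underneath a &mut emits edges in both directions:

// Origin variable at a program point, e.g. ("'v", "P1").
type Origin = (&'static str, &'static str);

enum Ty {
    Scalar,
    Ref(Origin, Box<Ty>),    // &'o T     (covariant in T)
    RefMut(Origin, Box<Ty>), // &'o mut T (invariant in T)
}

// Relate `sub <: sup`, emitting outlives edges. `invariant` records whether
// we are underneath a `&mut`; if so, edges go in both directions, which is
// the same as requiring the origins to be equal.
fn relate(sub: &Ty, sup: &Ty, invariant: bool, out: &mut Vec<(Origin, Origin)>) {
    match (sub, sup) {
        (Ty::Scalar, Ty::Scalar) => {}
        (Ty::Ref(o1, t1), Ty::Ref(o2, t2)) | (Ty::RefMut(o1, t1), Ty::RefMut(o2, t2)) => {
            out.push((*o1, *o2));
            if invariant {
                out.push((*o2, *o1));
            }
            let inner_invariant = invariant || matches!(sub, Ty::RefMut(..));
            relate(t1, t2, inner_invariant, out);
        }
        _ => panic!("type mismatch"),
    }
}

fn main() {
    // p's type at P1 vs at P3, where p: &'p mut Vec<&'vp u32>. The Vec is
    // modeled as just its element type for brevity.
    let p_at_p1 = Ty::RefMut(("'p", "P1"), Box::new(Ty::Ref(("'vp", "P1"), Box::new(Ty::Scalar))));
    let p_at_p3 = Ty::RefMut(("'p", "P3"), Box::new(Ty::Ref(("'vp", "P3"), Box::new(Ty::Scalar))));
    let mut edges = Vec::new();
    relate(&p_at_p1, &p_at_p3, false, &mut edges);
    for ((a, pa), (b, pb)) in edges {
        println!("{a}_{pa} : {b}_{pb}");
    }
    // Prints 'p_P1 : 'p_P3, plus *both* 'vp_P1 : 'vp_P3 and 'vp_P3 : 'vp_P1.
}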

In terms of the subset graph, invariance works out to creating bidirectional edges between origins. Take a look at the resulting subset graph to see what I mean. To keep things simple, I am going to exclude nodes for p: the interesting origins here are 'v (the data in the vector v) and 'vp (the data in the vector referenced by p – which is also v).

flowchart TD
    subgraph "Loans"
      L1["L1 (&x)"]
    end
    
    subgraph "'v"
      V_P0["'v_P0"]
      V_P1["'v_P1"]
      V_P2["'v_P2"]
      V_P3["'v_P3"]
      V_P4["'v_P4"]
      V_P5["'v_P5"]
    end

    subgraph "'vp"
      VP_P1["'vp_P1"]
      VP_P2["'vp_P2"]
      VP_P3["'vp_P3"]
    end

    V_P0 --> V_P1 --> V_P2 --> V_P3 --> V_P4 --> V_P5
    
    V_P1 <---> VP_P1
    VP_P1 <---> VP_P2 <---> VP_P3
        
    L1 --> VP_P3
  

The key parts here are the bidirectional arrows between 'v_P1 and 'vp_P1 and between 'vp_P1 and 'vp_P3. How did those come about?

  • The first edge resulted from p = &mut v. The type of v (at P1) is Vec<&'v_P1 u32>, and that type had to be equal to the referent of p (Vec<&'vp_P1 u32>). Since the types must be equal, that means 'v_P1: 'vp_P1 and vice versa, hence a bidirectional arrow.
  • The second edge resulted from the flow from P1 to P3. The variable p is live across that edge, so its type before (&'p_P1 mut Vec<&'vp_P1 u32>) must be a subtype of its type after (&'p_P3 mut Vec<&'vp_P3 u32>). Because &mut references are invariant with respect to their referent types, this implies that 'vp_P1 and 'vp_P3 must be equal.

Put it all together, and we see that L1 can reach 'v_P4 and 'v_P5, even though it only flowed into an earlier point in the graph. That’s cool! We will get the error we expect.

On the other hand, we can also see that there is some imprecision introduced through invariance. The loan L1 is introduced at point P3, and yet it appears to flow from 'vp_P3 backwards in time to 'vp_P2, 'vp_P1, over to 'v_P1, and downward from there. If we were only looking at the subset graph, then, we would conclude that both x += 1 statements in this program are illegal, but in fact only the second one causes a problem.

Active loans to the rescue (again)

The imprecision we see here is very similar to the imprecision we saw in the original polonius. Effectively, invariance is taking away some of our flow sensitivity. Interestingly, the active loans portion of the analysis makes up for this, in the same way that it did in the previous post. In vec-push-ref, L1 will only be generated at P3, so even though it can reach 'v_P2 via the subset graph, it is not considered active at P2. But once it is generated, it is not killed, even when p goes dead, because it can flow into 'v_P4. Therefore we get the one error we expect.

Conclusion

I’m going to stop this post here. I’ve described a version of polonius where we give variables distinct types at each program point and then relate those types together to create an improved subset graph. This graph increases the precision of the active loans analysis such that we don’t get as many false errors, but it is still imprecise in some ways.

I think this formulation is interesting for a few reasons. First, the most expensive part of it is going to be the subset graph, which has a LOT of nodes and edges. But that can be compressed significantly with some simple heuristics. Moreover, the core operation we perform on that graph is reachability, and that can be implemented quite efficiently as well (do a strongly connected components computation to reduce the graph to a tree, and then you can assign pre- and post-orderings and just compare indices). So I believe it could scale in practice.
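
For the parenthetical about pre- and post-orderings, here is the classic interval trick, assuming the contracted graph is a tree (node names are made up): after one DFS numbering pass, each reachability query is two integer comparisons:

use std::collections::HashMap;

// Assign pre/post DFS numbers to every node of a tree. `x` can reach `y`
// exactly when pre[x] <= pre[y] && post[y] <= post[x].
fn number(
    node: &'static str,
    children: &HashMap<&'static str, Vec<&'static str>>,
    clock: &mut usize,
    pre: &mut HashMap<&'static str, usize>,
    post: &mut HashMap<&'static str, usize>,
) {
    pre.insert(node, *clock);
    *clock += 1;
    for &child in children.get(node).map(|v| v.as_slice()).unwrap_or(&[]) {
        number(child, children, clock, pre, post);
    }
    post.insert(node, *clock);
    *clock += 1;
}

fn reachable(
    from: &str,
    to: &str,
    pre: &HashMap<&'static str, usize>,
    post: &HashMap<&'static str, usize>,
) -> bool {
    pre[from] <= pre[to] && post[to] <= post[from]
}

fn main() {
    // A made-up contracted subset graph shaped like a tree:
    // L0 -> 'a -> 'c, and L0 -> 'b.
    let children = HashMap::from([
        ("L0", vec!["'a", "'b"]),
        ("'a", vec!["'c"]),
    ]);
    let (mut clock, mut pre, mut post) = (0, HashMap::new(), HashMap::new());
    number("L0", &children, &mut clock, &mut pre, &mut post);
    assert!(reachable("L0", "'c", &pre, &post)); // O(1) per query
    assert!(!reachable("'b", "'c", &pre, &post));
    println!("reachability queries ok");
}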

I have worked through a few more classic examples, and I may come back to them in future posts; so far, this analysis seems to get the results I expect. However, I would also like to go back and compare it more deeply to the original polonius, as well as to some of the formulations that came out of academia. There is still something odd about leaning on the dataflow check. I hope to talk about some of that in follow-up posts (or perhaps on Zulip or elsewhere with some of you readers!).


  1. If this particular example feels artificial, that’s because it is. But similar patterns underlie more common errors, most notably Problem Case #3. ↩︎

Mozilla Thunderbird: Thunderbird Podcast #5: Remote Work Tips + Thunderbird Send

[Image: The Thunderbird logo embracing a microphone, above the text "Episode 5: Remote Work 101"]

The Thunderbird team is a remote-first, globally distributed group, so it made perfect sense to devote an episode to Remote Work! Join Heather, Chris, and Jason for some useful tips and tricks to make your daily remote work more enjoyable and more productive. We also include tips from ThunderCast listeners Pedro and Mike, who emailed us at podcast@thunderbird.net. (You can do the same if something’s on your mind.)

Plus: An inside look at the upcoming Thunderbird Send service, some fascinating origin stories, and geeky Raspberry Pi solutions for weather and BBQ.

Listen to this episode on PeerTube

Chapters:

  • (00:00) – Intro
  • (01:05) – Meet Heather
  • (02:10) – Heather’s Origin Story
  • (04:40) – Your notes can help everyone!
  • (06:38) – Meet Chris
  • (07:37) – Chris’s Origin Story
  • (11:30) – Geeking out
  • (20:45) – Thunderbird 115 Updates & Flatpak
  • (22:04) – Thunderbird Send explainer
  • (31:48) – Remote Work Tips & Tricks
  • (51:44) – Community Voice: Tip from Pedro
  • (55:53) – Community Voice: Tip from Mike
  • (58:32) – Outro


Mozilla Addons Blog: Test Firefox Android extensions and help developers prepare for an open mobile ecosystem in December

In August we encouraged developers to start preparing their desktop extensions for Firefox Android open availability on addons.mozilla.org (AMO). The project is progressing well and we’re on track to launch the open mobile ecosystem on AMO in December. We have more infrastructure development and testing to complete in the coming weeks, but as we move toward release we’ll keep you informed of the project’s status right here on this blog, add-ons forums, and social channels.

To help our developer community prepare for Firefox Android open extension availability on AMO — and to ensure Firefox Android users have an exciting selection of extensions to choose from — we’ve compiled a list of popular desktop extensions (with mobile API compatibility) we’re inviting the add-ons community to help test on Firefox Android. If you’re intrigued to try some new extensions on your Firefox Android phone and offer feedback, we’d love to hear your thoughts.

How to test Firefox Android extensions (Beta, Nightly)

Test extensions are only currently discoverable on AMO via 119 Beta and 120 Nightly versions of Firefox Android. If you’re not already on Beta or Nightly, please follow these links for installing Firefox Android Beta and Nightly.

Once you’re ready to roll with Firefox Android (Beta/Nightly) on your phone, just follow these simple test steps:

  1. Check out this spreadsheet of test extensions. They were compiled because they possess a combination of Android API compatibility and relative popularity on Firefox desktop.
  2. Find a test extension that interests you and navigate to addons.mozilla.org on your Firefox Android (Beta/Nightly) phone and search for the extension you want to test, then install it.
  3. Follow the testing guide on this feedback form and play around with the extension.
  4. Report your impressions of the experience on the feedback form.

Then feel free to repeat testing with as many other test extensions as you like. Have fun with it! The feedback you provide will be extremely helpful to developers hoping to optimize their desktop extensions for Android usage.

Are you a developer hoping to make your extension available on Firefox Android?

If you have a desktop extension you want to prepare for Android availability on AMO, a good place to start is checking your desktop extension’s APIs against those supported for Android. It is also important that developers migrate to non-persistent background pages. In order to mark your extension as compatible with Firefox Android, add the gecko_android key inside browser_specific_settings (more info) in your manifest.json file (this is also a requirement when submitting your extension using the AMO API, e.g. with the web-ext tool). During this period you are welcome to update your extension on AMO to address issues while running in Firefox Android, and to mark your extension as Android compatible so it is ready for discoverability on AMO in December.

Please note — once you’re ready to test the mobile version of your extension, create a collection on AMO and test it on Firefox for Android Nightly (you’ll need to make a one-time change to Nightly’s advanced settings; please see the “Enable general extension support setting in Nightly” section of this post for details). If you’d prefer to polish your extension before publishing it on AMO, you can also debug and run the extension with web-ext.

It’s been exciting to see so many developers embrace this moment to make their desktop extensions available for a new mobile audience. When AMO opens the general availability of Android extensions in December, Firefox Android users will be thrilled at all of the innovative ways they’ll be able to customize their mobile browsing experience.

If you’re a developer with technical questions about mobile extension migration, please visit our support forum for Firefox Android extensions. You can also book office hours support every Monday and Tuesday.


Firefox Developer Experience: Firefox WebDriver Newsletter — 118

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 118 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette and geckodriver.

General

Bug fixes

In Firefox 118, a bug was simultaneously fixed for WebDriver BiDi and Marionette:

WebDriver BiDi

Let’s have a look at our WebDriver BiDi updates for Firefox 118, with several improvements and additions to the browsingContext module.

New: “browsingContext.activate” command

Back in Firefox 117, we updated browsingContext.create to support a background argument and to open new contexts in the foreground by default. In this release we are adding the browsingContext.activate command which can move an existing tab to the foreground and focus its document. If the tab’s window was in the background, it will also be moved to the front.

Using browsingContext.activate to change the focused tab

New: “browsingContext.userPromptOpened” event

browsingContext.userPromptOpened is a new event which is emitted whenever a prompt of type “alert”, “confirm” or “prompt” is opened. The event’s payload contains the context where the dialog is displayed as well as the dialog’s type and message.

This event will also support "beforeunload" type dialogs in the future, but they are not handled at the moment.

New: “browsingContext.handleUserPrompt” command

User prompts created using window.alert, window.confirm or window.prompt can now be handled using the new browsingContext.handleUserPrompt command.

This command first takes a context argument, which corresponds to the tab displaying a prompt. Then you can provide an accept boolean to accept or decline dialogs of type "confirm" and "prompt", as well as a userText string to set the content for "prompt" types. “alert” type dialogs are simply dismissed regardless of the provided arguments. In case there is no prompt in that tab, a NoSuchAlertError error will be thrown.

At the moment, this command cannot handle "beforeunload" prompts, but this should be fixed in an upcoming release.

New: “type” field in the JSON payload

To easily differentiate JSON payloads coming from the remote end, all JSON payloads now include a "type" field, which can be "success" for a successful response to a command, "error" when an error was thrown while handling a command, or "event" for any event emitted.

Marionette (WebDriver classic)

Support for Web Authentication extension commands

Dana Keeler added support for all the Web Authentication extension commands, which allow users to authenticate themselves by Public Key Credentials.

Bug fixes

In Firefox 118, the following bug was fixed for Marionette while working on similar features for WebDriver BiDi:

  • on Android, we fixed a race condition which could lead to returning an empty user text for prompts

Firefox Developer Experience: Firefox DevTools Newsletter — 118

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 118 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

  • Krishna Ravishankar made it possible to copy WebSocket binary payload to the clipboard in the Network Monitor (#1578994)
  • Andrew de Rozario fixed an issue in the Network Monitor where form data query parameters were truncated (#1791983)
  • Sebastian Zartner improved the Inactive CSS feature so we will show warnings when
    • empty-cells is used on elements that don’t have a table-cell display value (#1583906)
    • column-related properties are used on non-multi-column containers (#1583912)
    • ignored properties are used on ::first-line pseudo-elements (#1842174)
  • Sebastian also revived the work from Takeshi Kurosawa to add links on ARIA attributes referring to other elements (#1546003)
[Image: In the Firefox DevTools markup view, an element has an aria-labelledby="references-button" attribute, and a context menu shows an entry with the text "Select Element #references-button".]
Here, the ul element has an aria-labelledby attribute linking to an element whose id is references-button. Right-clicking on the attribute will offer the possibility to select the references-button element in the inspector. Ctrl+click (Cmd+click on OSX) on the attribute will also select the referred element.

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.


Better Debugger Tooltip

We refreshed the Debugger tooltip (#1687440) to make it easier to understand what the hovered value actually is. This is especially useful for values for which we used to display a less than ideal result, like Date instances.

[Image: Comparison of the Firefox Debugger variable tooltip on Firefox 117 and Firefox 118. Both versions show the debugger paused on the line `let today = new Date();`, with a tooltip pointing to the `today` keyword. On 117, the tooltip only has a `<prototype>: Date.prototype` node. On 118, the tooltip has a header with the text "Date Tue Sep 26 2023…", and the prototype node is displayed underneath.]

Faster Inspector

In Firefox 117, we landed the initial support for CSS nesting. We then got a report that the inspector could be quite slow when trying to display deeply nested CSS rules (#1844446). We managed to tame that regression (#1845731) and we’re now actively monitoring performance in such situations (#1846947).

Another fix we did for this particular issue ended up making other use cases (e.g. expanding the whole element tree in the markup view) significantly faster (#1845730).

Stronger tools

This version comes with a wide variety of bug fixes:

  • No more hang when inspecting element on Astro.js powered pages in development mode (#1847440)
  • Show event listener popup on click, for some specific flavor of jQuery event listeners (#1792737)
  • Remove stylesheets from the StyleEditor when <style> / <link> node are removed from the DOM (#1771113)
  • Avoid doing too much computation that would lead to browser lock-ups in the Debugger on pages with many evaluated sources (#1841573)…
  • … which also revealed some issues in the Debugger tab context menus on evaluated sources (#1848011)
  • Ensure that the “Map scope” feature can be enabled after the Debugger paused (#1849987)
  • Inline previews for exceptions in the Debugger now work when the exception occurs on the first line of inline scripts (#1849144)
  • Resent requests are no longer blocked by Opaque Request Blocking (#1824658)

Thank you for reading this and using our tools, see you next month for a new round of exciting updates 🙂

Niko Matsakis: Empathy in open source: be gentle with each other

Over the last few weeks I had been preparing a talk on “Inclusive Mentoring: Mentoring Across Differences” with one of my good friends at Amazon. Unfortunately, that talk got canceled because I came down with COVID when we were supposed to be presenting. But the themes we covered in the talk have been rattling in my brain ever since, and suddenly I’m seeing them everywhere. One of the big ones was about empathy — what it is, what it isn’t, and how you can practice it. Now that I’m thinking about it, I see empathy so often in open source.

What empathy is

In her book Atlas of the Heart1, Brené Brown defines empathy as

an emotional skill set that allows us to understand what someone is experiencing and to reflect back that understanding.

Empathy is not about being nice or making the other person feel good or even feel better2. Being empathetic means understanding what the other person feels and then showing them that you understand.

Understanding what the other person feels doesn’t mean you have to feel the same way. It also doesn’t mean you have to agree with them, or feel that they are “justified” in those feelings. In fact, as I’ll explain in a second, strong feelings and emotion are by design limited in their viewpoints — they are always showing us something, and showing us something real, but they are never showing us the full picture.

Usually we feel multiple, seemingly contradictory things, which can leave everything feeling like a big muddle. The goal, from what I can see, is to be able to pull those multiple feelings apart, understand them, and then – from a balanced place – decide how we are going to react to them. Hopefully in real time. Pretty damn hard, in my experience, but something we can get better at.

People are not any one thing

Some time back, Aaron Turon introduced me to Internal Family Systems through the book Self Therapy3. It’s really had a big influence on how I think about things. The super short version of IFS is “Inside Out is real”. We are each composites of a number of independent parts which capture pieces of our personality. When we are feeling balanced and whole, we are switching between these parts all the time in reaction to what is going on around us.

But sometimes things go awry. Sometimes, one part will get very alarmed about what it perceives to be happening, and it will take complete control of you. This is called blending. While you are blended, the part is doing its best to help you in the ways that it knows: that might mean making you super anxious, so that you identify risks, or it might mean making you yell at people, so that they will go away and you don’t have to risk them letting you down. No matter which part you are blended with in the moment, though, you lose access to your whole self and your full range of capabilities. Even though the part will help you solve the immediate problem, it often does so in ways that create other problems down the line.

This concept of parts has really helped me to understand myself, but it has also helped me to understand what previously seemed like contradictory behavior in other people. The reason that people sometimes act in extreme ways, ways that seem so different from the person I know at other times, is because they’re blended: they’re not the person I know at that time, they’re just one part of that person. And probably a part that has helped them through some tough times in the past.

Empathy as “holding space”

I’ve often heard the term ‘emotional labor’ and, to be honest, I had a hard time connecting to it. But in Lama Rod Owens’s “Love and Rage”, he talks about emotional labor in terms of “the work we do to help people process their emotions” and, in particular, gives this list of examples:

This includes actively listening to others, asking how people are feeling, checking in with them, letting them vent in front of you, and not reacting to someone when they are being rude or disrespectful.

Now this list struck a chord with me. To me, the hardest part of empathy is holding space — letting someone have a reaction or a feeling without turning away. When people are reacting in an extreme way — whether it’s venting or being rude — it makes us uncomfortable, and often we’ll try to make them stop. This can take many forms. It could mean changing the topic, dismissing it (“get over it”, “I’m sure they didn’t mean it like that”), or trying to fix it (“what you need to do is…”, “let’s go kick their ass!”). For me, when people do that, it makes me feel unseen and kind of upset. Even if the other person is getting righteously angry on my behalf, I feel like suddenly the situation isn’t about me and how I want to think about things.

What does all this have to do with Github?

At this point you might be wondering “what do obscure therapeutic processes and buddhist philosophy have to do with Github issue threads?” Take another look at Lama Rod Owens’s list of examples of emotional labor, especially the last one:

not reacting to someone when they are being rude or disrespectful

To be frank, being an open-source maintainer means taking a lot of shit4. In his insightful, and widely discussed, talk “The Hard Parts of Open Source”, Evan Czaplicki identified many of the “failure modes” of open source comment threads. One very memorable pattern is the “Why don’t you just…” comment, where somebody chimes in with an obvious alternative, as if you hadn’t thought of it. There is also my personal favorite, what I’ll call the “double agent” comment, where someone seems to feel that your goal is actually to ruin the project you’ve put so much effort into, and so comes in hot and angry.

My goal is always to respond to comments as if the commenter had been constructive and polite, or was my best friend. I don’t always achieve my goal, especially in forums where I have to respond quickly5. But I honestly do try. One technique is to find the key points in their comment and rephrase them, to be sure you understand, and then give your take. When I do that, I usually learn things — even when I initially thought somebody was just a blowhard, there is often a strong point underlying their argument, and it may lead me to change course if I listen to it. If nothing else, it’s always good to know the counterarguments in depth.

Empathy as a maintainer

And this brings us to the role of empathy as an open-source maintainer. As I said, these days, I see it popping up everywhere. To start, the idea of responding to someone’s comment, even one that feels rude, by identifying the key points they are trying to make feels to me like empathy, even if those points are often highly technical6. Fundamentally, empathy is all about understanding the other person and letting them know you understand, and that is what I am trying to do here.

But empathy comes into play in a more meta way as well. Trying to think how somebody feels — and why they might be feeling that way — can really help me to step back from feeling angry or injured by the tone of a comment and instead to refocus on what they are trying to communicate to me. Aaron Turon wrote a truly insightful and honest series of posts about his perspective on this called Listening and Trust. In part 3 of that series, he identified some of the key contributors to comment threads that go off the rails, what he called “momentum, urgency, and fatigue”. It’s worth reading that post, or reading it again if you already have. It’s a masterpiece of looking past the immediate reactions to understand better what’s going on, both within others and yourself.

Empathy when we surprise people

When Apple is working on a new product, they keep it absolutely top secret until they are ready – and then they tell the world, hoping for a big splash. This works for them. In open source, though, it’s an anti-pattern. The last thing you want to do is to surprise people – that’s a great way to trigger those parts we were talking about.

The difference, I think, is that open source projects are community projects – everybody feels some degree of ownership. That’s a big part of what makes open source so great! But, at the same time, when somebody starts messing with your stuff, that’s sure to get you upset. Paul Ford wrote an article identifying this feeling, which he called “Why wasn’t I consulted?”.

I find the phrase “Why wasn’t I consulted?” a pretty useful reminder for how it feels, but to be honest I’ve never liked it. The problem is that to me it feels condescending. But I totally get the way that people feel. It doesn’t always mean I think they’re right, or even justified in that feeling. But I get it, and I respect it. Heck, I feel it too!7

My personal creed these days is to be as open and transparent as I can with what I am doing and why. It’s part of why I love having this blog, since it lets me post up early ideas while I am still thinking about them. This also means I can start to get input and feedback. I don’t always listen to that feedback. A lot of times, people hate the things I am talking about, and they’re not shy about saying so – I try to take that as a signal, but just one signal of many. If people are upset, I’m probably doing something wrong, but it may not be the idea, it may be the way I am talking about it, or some particular aspect of it.

Empathy when we design our project processes

As I prepared this blog post, I re-read Aaron’s Listening and Trust, and I was struck again by how many insights he had there. One of them was that by applying empathy, and looking at our processes from the lens of how it feels to be a participant – what concerns get triggered – we can make changes so that everyone feels more included and less worn down. The key part here is that we have to look not only as how things feel for ourselves, but also how they feel for the participants – and for those who are not yet participating! There’s a huge swath of people who do not join in on Rust discussions, and I think we’re really missing out. This kind of design isn’t easy, but it’s crucial.

Empathy as a contributor

I’ve focused a lot on the role of empathy as an open-source maintainer. But empathy absolutely comes into play as a contributor. There’s a lot said on how people behave differently when commenting on the internet versus in person, and how the tone of a text comment can so easily be misread.

The fact is, when you contribute to an open-source project, the maintainers are going to come up short. They’re going to overlook things. They may not respond promptly to your comment or PR – they’re likely going to hide their head in the sand because they’re overwhelmed.8 Or they may snap at you.

So what do you do when people let you down? I think the best is to speak for your feelings, but to do so in an empathetic way. If you are feeling hurt, don’t leave an angry comment. This doesn’t mean you have to silence your feelings – but just own them as your feelings. “Hey, I get that you are busy. Still, when I open a PR and nobody answers, it feels like this contribution is not wanted. If that’s true, just tell me, I can go elsewhere.”9

I bet some of you, when you read that last comment, were like “oh, heck no”. It’s scary to talk about how you feel. It takes a lot of courage. But it’s effective – and it can help the maintainer get unblended from whatever part they are in and think about things from your perspective. Maybe they will answer, “No, I really want this change, but I am just super busy right now, can you give me 3 months?” Or maybe they will say, “Actually, you’re right, I am not sure this is the right direction. I’m sorry that I didn’t say so before you put so much work into it.” Or maybe they won’t answer at all, because they’re hiding from the github issue thread – but when they come back and read it much later, they’ll reflect on how that made you feel, and try to be more prompt the next time. Either way, you know that you spoke up for yourself, but did so in a way that they can hear.

Empathy for ourselves and our own parts

This brings me to my final topic. No matter what role we play in an open-source project, or in life, the most important person to have empathy for is yourself. Ironically, this is often the hardest. We usually have very high expectations for ourselves, and we don’t cut ourselves much slack. As a maintainer, this might manifest as feeling you have to respond to every comment or task, and feeling bad when you don’t keep up. As a contributor, it might be feeling crappy when people point out bugs in your PR. No matter who we are, it might be kicking ourselves and feeling shame when we overreact in a comment.

In my view, shame is basically never good. Of course I make mistakes, and I regret them. But when I feel shame about them, I am actually focusing inward, focusing on my own mistakes instead of focusing on how I can make it up to the other person or resolve my predicament. It doesn’t actually do anyone any good.

I think there are different ways to experience shame. I know how I experience it. It feels like one of my parts is kicking the crap out of itself. And that really hurts. It hurts so bad that it tends to cause other parts to rise up to try and make it stop. That might be by getting angry at others — “it’s their fault we screwed up!” — or, more common for me, it might be by feeling depressed, withdrawing, and perhaps focusing on some technical project that can make me feel good about myself.

In their classic and highly recommended blog post, My FOSS Story, Andrew Gallant talked about how they deal with an overflowing inbox full of issues, feature requests, and comments:

The solution that I’ve adopted for this phenomenon is one that I’ve used extremely effectively in my personal life: establish boundaries. Courteously but firmly setting boundaries is one of those magical life hacks that pays dividends once you figure out how to do it. If you don’t know how to do it, then I’m not sure exactly how to learn how to do it unfortunately. But setting boundaries lets you focus on what’s important to you and not what’s important to others.

It can be really easy to overextend yourself in an open-source project. This could mean, as a maintainer, feeling you have to respond to every comment, fix every bug. Overextending yourself in turn is a great way to become blended with a part, and start acting out some of those older, defensive strategies you have for dealing with stress.

Also, I’ve got bad news. You are going to screw up in some way. It might be overextending yourself10. It might be responding poorly. Or pushing for an idea that turns out to be very deeply wrong. When you do that, you have a choice. You can feel shame, or you can extend compassion and empathy to yourself. It’s ok. Mistakes happen. They are how we learn.

Once you’ve gotten past the shame, and realized that making mistakes doesn’t make you bad, you can start to think about repair. OK, so you messed up. What can you do about it? Maybe nothing is needed. Or maybe you need to go and undo some of what you did. Or maybe you have to go and tell some people that what they are doing is not ok. Either way, compassion and empathy for yourself is how you will get there.

On the limits of my own experience

Before I go, I want to take a moment to acknowledge the limits of my own experience. I am a cis, white male, and I think in this post it shows. When I encounter antipathy, it tends to be targeted at individual things I have done or ideas I am espousing. At most, it might come about because of the role I am playing. I don’t encounter conscious or unconscious bias on the basis of my race, gender, sexual orientation, or any other such thing. This gives me a lot of luxury. For example, for the most part, I can take a rude comment and I can usually find an underlying technical point to focus on in my response. This is not true for all maintainers. In writing this post, I thought a lot about how the dynamics of open source seem almost perfectly designed11 to exclude people who are not from groups deemed “high status” by society.

Rust has a pretty uneven track record here. There are projects that do better. Improving our processes to take better account of how they feel for participants is definitely a necessary step, along with other things. One thing I am convinced of: the more people that get involved in Rust – and especially the more distinct backgrounds and experiences those people have – the better it becomes. Rust is always trying to achieve 6 (previously) impossible things before breakfast, and we need all the ideas we can get.12

Be gentle with each other

If I could have just one wish, it would be this bastardized quote from the great Bill and Ted:

Be gentle with each other

We’ve talked a lot about empathy and how it comes into play, but really, in my mind, it all boils down to being gentle when somebody slips up. Note that being gentle doesn’t mean you can’t also be real and authentic about how you felt. We talked earlier about I-messages – by speaking plainly about how somebody made you feel, you can deliver a message that is both gentle and yet incredibly powerful. To me, the key is not to make assumptions about what’s going on for other people. You can never know their motivations. You can make guesses, but they’re always based on incomplete information.

Does this mean I think we should all go running around saying “when you do X, I felt like you were trying to ruin the project?” Well, not really, although I think that would be an improvement. Even better though would be to stop and think, wait, why would they be trying to ruin the project? Instead of assuming what other people are doing, tell them how they are making you feel. Maybe say, “when you do X, I feel like you are saying my use case doesn’t matter”. Or, better yet, say “when you do X, I will no longer be able to do Y, which I find really valuable”. I predict this is much more likely to lead to a constructive discussion.

It’s important to remember that the choice of words can have a strong impact, too. For me, words like ruin or phrases like dumpster fire, shitshow, etc, can be quite triggering all on their own. I’m not always consistent on this. I’ve noticed that I sometimes use strong, colorful language because I think it’s funny. But I’ve also noticed that when other people do it, I can get pretty upset (“I know that code is not the best, but it’s worked for the last 3 years dang it.”).

I think you can boil all of this down to be precise and accurate when you communicate. It’s not accurate to say “you are trying to ruin the project”. You can’t know that. It is accurate to talk about what you feel and why you feel it. It’s also not accurate to say something is a dumpster fire, but it is accurate to call out shortcomings and concerns.

Anyway, I’m done giving advice. I’m no expert here, just one more person trying to learn and do the best I can. What I can say with confidence is that the things I’m talking about here have really helped me personally in approaching difficult situations in my life, and I hope that they’ll help some of you too!


  1. I bought this book when it first came out, read a bit of it, and then thought of it more as a reference — a great book for getting clear, distinguished definitions that help to elucidate the subtleties of human emotion. But when I revisited it to prepare for this talk, I was surprised to find it was much more “front-to-back” readable than I thought, and carried a lot of hidden wisdom. ↩︎

  2. Though I think people feeling good and better is always a consequence of having encountered someone else empathetic. ↩︎

  3. By none other than Jay Earley, inventor of the Earley parser! This guy is my hero. ↩︎

  4. And I say this as a cis white man, which means I don’t even have to deal with shit resulting from people’s conscious or unconscious bias. ↩︎

  5. This is one reason I don’t personally like fast moving threads and discussions, and I often limit the venues where I will participate. I need a bit of time to sit with things and process them. ↩︎

  6. It’s worth highlighting that the key points they are trying to make are not always technical. Re-reading Aaron Turon’s Listening and Trust posts for this series, I was reminded of glaebhoerl’s pivotal comment that articulated very well their frustration at the Rust maintainer’s sense of entitlement and superiority, and the reasons for it. As glaebhoerl identified so clearly, it wasn’t so much the technical decision that was the problem — though I think on balance it was the wrong call, it was a debatable point — as the manner of engagement. ↩︎

  7. Like when Disney canceled Owl House without even asking me. WHAT GIVES DISNEY. ↩︎

  8. For example, I’ve been ignoring messages in the Salsa Zulip for a bit, and feeling bad about how I just don’t have the time to focus on that project right now. I’m sorry y’all and I do still expect to come back to Salsa 2022 (which, alas, will clearly not ship in 2022 – ah well, I knew the risks when I put a year into the name). ↩︎

  9. This structure, “when you do X, I feel Y”, is called an I-message. It’s surprisingly hard to do it right. It’s easy to make something that sounds like an I-message, but isn’t. For example, “When you closed this PR without commenting, it showed me I am not welcome here” is very different from “When you closed this PR without commenting, it made me feel like I am not welcome here”. The first one is not an I-message. It’s telling someone else how they feel. The second one is telling someone else how they made you feel. There’s a very good chance those two statements would land quite differently. ↩︎

  10. Unless, perhaps, you are Andrew Gallant, who from what I can see is one supremely well balanced individual. :) ↩︎

  11. This of course is what people mean when they talk about systemic racism, or at least how I understand it: it’s not that open source or most other things were designed intentionally to reinforce bias, but the structures of our society are setup so that if you don’t actively work to counteract bias, you wind up playing into it. ↩︎

  12. I always think of Jessica Lord’s inspirational blog post Privilege, Community, and Open source, which sadly appears to be offline, but you can read it on the web-archive↩︎

This Week In RustThis Week in Rust 514

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is async_fn_traits, a crate with async function traits to enable using higher ranked trait bounds in async functions.

Thanks to kornel for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

402 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week with the only large change in performance being improvements brought on by @saethlin's work on cleaning up the FileEncoder used in various places like rustc_metadata and rustc_serialize.

Triage done by @rylev. Revision range: af78bae..27b4eb9

Summary:

(instructions:u)            | mean  | range          | count
Regressions ❌ (primary)    |  0.6% | [0.3%, 1.1%]   | 15
Regressions ❌ (secondary)  |  2.0% | [0.2%, 7.1%]   | 32
Improvements ✅ (primary)   | -0.7% | [-1.3%, -0.3%] | 70
Improvements ✅ (secondary) | -0.9% | [-3.5%, -0.2%] | 31
All ❌✅ (primary)          | -0.4% | [-1.3%, 1.1%]  | 85

2 Regressions, 3 Improvements, 4 Mixed; 0 of them in rollups

73 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-09-27 - 2023-10-25 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

The problem with Rust it appears,
that it leaves programmers in tears
if they have to go back
to languages that lack
in short they've got feature-arrears.

llogiq on /r/rust

Thanks to Frank Steffahn for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

IRL (podcast)We’re Back! IRL Season 7: People Over Profit

This season, IRL host Bridget Todd meets people who are balancing the upsides of artificial intelligence with the downsides that are coming into view worldwide. 

Stay tuned for the first of five biweekly episodes on October 10! IRL is an original podcast from the non-profit Mozilla.

The Mozilla BlogGoogle Meet and the rest of G Workspace are working better than ever on Firefox

Been thinking about making the switch to Firefox, but worried about how well Google Workspace products will work? Well, we’ve got some good news for you.

Google services have always worked great on Firefox, thanks to our collaboration with the G Workspace team, who want to make sure their products work on all major browsers. But the internet never stops evolving, so we’re always looking for ways to improve people’s experience with services they choose to use on Firefox. 

Here’s an example of a recent improvement you can expect to see when you use Google Meet on Firefox: 

Google Meet

Want to turn on fun video effects during your next call? Or do you need to blur your background when you hop on your next online work meeting? You can now do both in Google Meet on Firefox. If you use Firefox version 115 or higher, you can set your visual effects either before or during your Meet meeting. 

If you are in the waiting room prior to your call, you can select “Apply visual effects” (the button with sparkles), which will present you the usual Google Meet menu of filter options.

A screenshot of the Google Meet waiting room, where you can select visual effects. (Caption: If you use Firefox version 115 or higher, you can set your visual effects either before or during your Meet meeting.)

If you’re already in the call, you can select “More Options” (the button with the three dots) in the menu bar at the bottom of the page and then select “Apply visual effects.”

A screenshot of a Google Meet call, with a penguin selecting “Apply Visual Effects.” (Caption: During a Google Meet call, select “Apply visual effects.”)

It’s not just Google Meet. Here’s a list of other Google tools and services that work great on Firefox:

  • Workspace (Gmail, Calendar, Chat, Docs, Drive, Meet, Sheets, Slides)
  • Classroom
  • Forms
  • Keep
  • Maps
  • Photos
  • Translate
  • Voice
  • YouTube (videos and Music)

So if you’re considering a switch but have had concerns with compatibility, you can be confident that Google services work great with Firefox — and that we’re always working on meeting your ever-changing needs.

Get Firefox

Get the browser that protects what’s important

The post Google Meet and the rest of G Workspace are working better than ever on Firefox appeared first on The Mozilla Blog.

The Mozilla BlogReclaim the internet with Mozilla

Remember when the internet felt personal? Like a cool space for ourselves to discover really amazing things? The web allowed us to tell our stories in new ways, learn from others across the world and solve problems together. This extraordinary power is what sparked Mozilla’s commitment to a healthy internet, through activism and open source projects like Firefox. Now, just as we asked people to build with us 25 years ago, we invite everyone to join us in our new call to action: reclaiming the internet.  

Mozilla knows there are not-so-great things about being online these days. From misinformation and security risks to being tracked for our data and the great speed at which AI is overtaking our lives, it can all make us feel like we’re just scrolling along for the ride. But while profit has shaped the web for too long, we believe that the best days of the internet are still to come — and that people can and should reclaim their narrative, their time and the connections they make online. This is why we’re taking Mozilla’s quarter-century anniversary as an opportunity to both reflect on our accomplishments of the last 25 years and to set a course for the decades ahead.

To commemorate this moment, we are hosting a five-day event at the Alte Münze in Berlin this Oct. 12 to 16 that embodies all of what Mozilla has been, what it aims to be and what it envisions for the internet. 

Daytime visitors will get to explore the future through four immersive rooms that show what’s possible when internet users reclaim expression, inspiration, wonder and community. In the evening, we will host an array of programming including an awards ceremony celebrating the inaugural cohort for Rise 25, a recognition of 25 everyday people who are shaping the future of the internet, creating technologies to be more ethical, responsible and equitable, and content that is more inclusive and diverse. They’re the next generation of leaders who will help empower all of us to make the internet cooler, safer and more aligned with the future we all dream of.

You can reserve tickets for Mozilla’s Reclaim the Internet event here.

Of course, we want you to join us even if you’re not in Berlin. You’ll be able to check out the exhibits and events on our Instagram and TikTok, including a livestream of the Rise25 award ceremony on Mozilla’s YouTube page.

So, let’s come together and reclaim the internet. With your help, we can shape a future where individual expression, inspiration, wonder and community define our personal experiences online. The time is now to act, build and choose a better internet — and we need to do it together.

The post Reclaim the internet with Mozilla appeared first on The Mozilla Blog.

The Rust Programming Language BlogIncreasing the minimum supported Apple platform versions

As of Rust 1.74 (to be released on November 16th, 2023), the minimum version of Apple's platforms (iOS, macOS, and tvOS) that the Rust toolchain supports will be increased to newer baselines. These changes affect the Rust compiler itself (rustc), other host tooling, and, most importantly, the standard library and any binaries produced that use it. With these changes in place, any binaries produced will stop loading on older versions or exhibit other, unspecified, behavior.

The new minimum versions are now:

  • macOS: 10.12 Sierra (First released 2016)
  • iOS: 10 (First released 2016)
  • tvOS: 10 (First released 2016)

If your application does not already target or support macOS 10.7-10.11 or iOS 7-9, these changes most likely do not affect you.

Affected targets

The following contains each affected target, and the comprehensive effects on it:

  • x86_64-apple-darwin (Minimum OS raised)
  • aarch64-apple-ios (Minimum OS raised)
  • aarch64-apple-ios-sim (Minimum iOS and macOS version raised.)
  • x86_64-apple-ios (Minimum iOS and macOS version raised. This is also a simulator target.)
  • aarch64-apple-tvos (Minimum OS raised)
  • armv7-apple-ios (Target removed. The oldest iOS 10-compatible device uses ARMv7s.)
  • armv7s-apple-ios (Minimum OS raised)
  • i386-apple-ios (Minimum OS raised)
  • i686-apple-darwin (Minimum OS raised)
  • x86_64-apple-tvos (Minimum tvOS and macOS version raised. This is also a simulator target.)

From these changes, only one target has been removed entirely: armv7-apple-ios. It was a tier 3 target.

Note that Mac Catalyst and M1/M2 (aarch64) Mac targets are not affected, as their minimum OS version already has a higher baseline. Refer to the Platform Support Guide for more information.

Affected systems

These changes remove support for multiple older mobile devices (iDevices) and many more Mac systems. Thanks to @madsmtm for compiling the list.

As of this update, the following device models are no longer supported by the latest Rust toolchain:

iOS
  • iPhone 4S (Released in 2011)
  • iPad 2 (Released in 2011)
  • iPad, 3rd generation (Released in 2012)
  • iPad Mini, 1st generation (Released in 2012)
  • iPod Touch, 5th generation (Released in 2012)
macOS

A total of 27 Mac system models, released between 2007 and 2009, are no longer supported.

The affected systems are not comprehensively listed here, but external resources exist which contain lists of the exact models. They can be found from Apple and Yama-Mac, for example.

tvOS

The third generation AppleTV (released 2012-2013) is no longer supported.

Why are the requirements being changed?

Prior to now, Rust claimed support for very old Apple OS versions, but many never even received passive testing or support. This is a rough place for a toolchain to be, as it hinders opportunities for improvement in exchange for a support level that many people, perhaps everyone, will never utilize. For Apple's mobile platforms, many of the old versions are now even unable to receive new software due to App Store publishing restrictions.

Additionally, the past two years have clearly indicated that Apple, which has tight control over toolchains for these targets, is making it difficult-to-impossible to support them anymore. As of XCode 14, last year's toolchain release, building for many old OS versions became unsupported. XCode 15 continues this trend. After enough time, continuing to use an older toolchain can even lead to breaking build issues for others.

We want Rust to be a first-class option for developing software for and on Apple's platforms, but to continue this goal we have to set an easier and more realistic compatibility baseline. The new requirements were determined after surveying what Apple and third-party statistics are available to us and picking a middle ground that balances compatibility with Rust's needs and limitations.

Do I need to do anything?

If you or an application you develop are affected by this change, there are different options which may be helpful:

  • If possible, raise your minimum supported OS version(s). All OS versions discussed in this post have no support from the vendor. Not even security updates.
  • If you are running the Rust compiler or other previously-supported host tools, consider cross-compiling from a newer host instead. You may also no longer be able to depend on the Rust standard library.
  • If none of these options work, you may need to freeze the version of the Rust toolchain your project builds with. Alternatively, you may be able to maintain a custom toolchain that supports your requirements in any sub-component of it (such as libstd).

If your project does not directly support a specific OS version, but instead depends on a default version previously used by Rust, there are some steps you can take to help improve future compatibility. For example, a number of crates in the ecosystem have hardcoded Rust's previously supported OS baseline versions since they haven't changed for a long time:

  • If you use the cc crate to include other languages in your project, a future update will handle this transparently.
  • If you need a minimum OS version for anything else, crates should query the new rustc --print deployment-target option for the default (or user-set, when available) value on toolchains using Rust 1.71 or newer going forward. Hardcoded defaults should only be used for older toolchains where this is unavailable. A sketch of how a build script might query this follows below.
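
To illustrate that last point, here is a minimal build-script sketch that asks rustc for the configured deployment target rather than hardcoding one. It is only a sketch: it assumes the printed value has roughly the form deployment_target=10.12, which you should verify against the toolchains you actually support before relying on it.

// build.rs (sketch): query rustc for the Apple deployment target instead of
// hardcoding a version. The output format is an assumption; adjust the parsing
// for the toolchains you support.
use std::env;
use std::process::Command;

fn main() {
    // Cargo sets RUSTC for build scripts; fall back to plain `rustc` otherwise.
    let rustc = env::var("RUSTC").unwrap_or_else(|_| "rustc".to_string());
    let output = Command::new(rustc)
        .args(["--print", "deployment-target"])
        .output()
        .expect("failed to run rustc");
    let stdout = String::from_utf8_lossy(&output.stdout);
    // Take the part after '=' if present, otherwise the whole trimmed line.
    let target = stdout.trim().rsplit('=').next().unwrap_or("").to_string();
    println!("cargo:warning=building with Apple deployment target {target}");
}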

Niko MatsakisPolonius revisited, part 1

lqd has been doing awesome work driving progress on polonius. He’s authoring an update for Inside Rust, but the TL;DR is that, with his latest PR, we’ve reimplemented the traditional Rust borrow checker in a more polonius-like style. We are working to iron out the last few performance hiccups and thinking about replacing the existing borrow checker with this new re-implementation, which is effectively a no-op from a user’s perspective (including from a performance perspective). This blog post walks through that work, describing how the new analysis works at a high-level. I plan to write some follow-up posts diving into how we can extend this analysis to be more precise (while hopefully remaining efficient).

What is Polonius?

Polonius is one of those long-running projects that are finally starting to move again. From an end user’s perspective, the key goal is that we want to accept functions like so-called Problem Case #3, which was originally a goal of NLL but eventually cut from the deliverable. From my perspective, though, I’m most excited about Polonius as a stepping stone towards an analysis that can support internal references and self borrows.

Polonius began its life as an alternative formulation of the borrow checker rules defined in Datalog. The key idea is to switch the way we do the analysis. Whereas NLL thinks of 'r as a lifetime consisting of a set of program points, in polonius, we call 'r an origin containing a set of loans. In other words, rather than tracking the parts of the program where a reference will be used, we track the places that the reference may have come from. For deeper coverage of Polonius, I recommend my talk at Rust Belt Rust from (egads) 2019 (slides here).

Running example

In order to explain the analyses, I’m going to use this running example. One thing you’ll note is that the lifetimes/origins in the example are written as numbers, like '0 and '1. This is because, when we start the borrow check, we haven’t computed lifetimes/origins yet – that is the job of the borrow check! So, we first go and create synthetic inference variables (just like an algebraic variable) to use as placeholders throughout the computation. Once we’re all done, we’ll have actual values we could plug in for them – in the case of polonius, those values are sets of loans (each loan is a & expression, more or less, that appears somewhere in the program).

Here is our example. It contains two loans, L0 and L1, of x and y respectively. There are also four assignments:

let mut x = 22;
let mut y = 44;
let mut p: &'0 u32 = &x; // Loan L0, borrowing `x`
y += 1;                  // (A) Mutate `y` -- is this ok?
let mut q: &'1 u32 = &y; // Loan L1, borrowing `y`
if something() {
    p = q;               // `p` now points at `y`
    x += 1;              // (B) Mutate `x` -- is this ok?
} else {
    y += 1;              // (C) Mutate `y` -- is this ok?
}
y += 1;                  // (D) Mutate `y` -- is this ok?
read_value(p);           // use `p` again here

Today in Rust, we get two errors (C and D). If you were to run this example with MiniRust, though, you would find that only D can actually cause Undefined Behavior. At point C, we mutate y, but the only variable that references y is q, and it will never be used again. The borrow checker today reports an error because it’s overly conservative. Polonius, on the other hand, gets that case correct.

Location | Existing borrow checker | Polonius | MiniRust
A        | ✔️                       | ✔️        | OK
B        | ✔️                       | ✔️        | OK
C        | error                    | ✔️        | OK
D        | error                    | error     | Can cause UB, if true branch is taken

Reformulating the existing borrow check à la polonius

This blog post is going to describe the existing borrow checker, but reformulated in a polonius-like style. This will make it easier to see how polonius is different in the next post. The idea of doing this reformulation came about when implementing the borrow checker in a-mir-formality1. At first, we weren’t sure if it was equivalent, but lqd verified it experimentally by testing it against the rustc test suite, where it matches the behavior 100% (lqd is also going to test against crater).

The borrow check analysis is a combination of three things, which we will cover in turn:

flowchart TD
  ConstructMIR --> LiveVariable
  ConstructMIR --> OutlivesGraph
  LiveVariable --> LiveLoanDataflow
  OutlivesGraph --> LiveLoanDataflow
  ConstructMIR["Construct the MIR"]
  LiveVariable["Compute the live variables"]
  OutlivesGraph["Compute the outlives graph"]
  LiveLoanDataflow["Compute the active loans at a given point"]
  

Construct the MIR

The borrow checker these days operates on MIR2. MIR is basically a very simplified version of Rust where each statement is broken down into rudimentary statements. Our program is already so simple that the MIR basically looks the same as the original program, except for the fact that it’s structured into a control-flow graph. The MIR would look roughly like this (simplified):

flowchart TD
  Intro --> BB1
  Intro["let mut x: i32\nlet mut y: i32\nlet mut p: &'0 i32\nlet mut q: &'1 i32"]
  BB1["p = &x\ny = y + 1;\nq = &y\nif something goto BB2 else BB3"]
  BB1 --> BB2
  BB1 --> BB3
  BB2["p = q;\nx = x + 1;\n"]
  BB3["y = y + 1;"]
  BB2 --> BB4;
  BB3 --> BB4;
  BB4["y = y + 1;\nread_value(p);\n"]

  classDef default text-align:left,fill-opacity:0;
  

Note that MIR begins with the types for all the variables; control-flow constructs like if get transformed into graph nodes called basic blocks, where each basic block contains only simple, straightline statements.

Compute the live origins

The first step is to compute the set of live origins at each program point. This is precisely the same as it was described in the NLL RFC. This is very similar to the classic liveness computation that is taught in a typical compiler course, but with one key difference. We are not computing live variables but rather live origins – the idea is roughly that the live origins are equal to the origins that appear in the types of the live variables:

LiveOrigins(P) = { O | O appears in the type of some variable V live at P }

The actual computation is slightly more subtle: when variables go out of scope, we take into account the rules from RFC #1327 to figure out precisely which of their origins may be accessed by the Drop impl. But I’m going to skip over that in this post.
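
To make that concrete, here is a minimal sketch in plain Rust of the union described above: the live origins at a point are whatever origins appear in the types of the variables live at that point. The Variable and Origin types (and the example values) are invented for illustration; they are not rustc's real data structures.

use std::collections::{HashMap, HashSet};

// Hypothetical, simplified names for illustration only.
type Variable = &'static str;
type Origin = &'static str;

// LiveOrigins(P) = { O | O appears in the type of some variable V live at P }
fn live_origins(
    live_vars: &HashSet<Variable>,
    origins_in_type: &HashMap<Variable, Vec<Origin>>,
) -> HashSet<Origin> {
    live_vars
        .iter()
        .flat_map(|v| origins_in_type.get(v).into_iter().flatten())
        .copied()
        .collect()
}

fn main() {
    // Just before `p = q` in the running example, only `q` is live.
    let live = HashSet::from(["q"]);
    let mut types = HashMap::new();
    types.insert("p", vec!["'0"]);
    types.insert("q", vec!["'1"]);
    assert_eq!(live_origins(&live, &types), HashSet::from(["'1"]));
}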

Going back to our example, I’ve added comments which origins would be live at various points of interest:

let mut x = 22;
let mut y = 44;
let mut p: &'0 u32 = &x;
y += 1;
let mut q: &'1 u32 = &y;
// Here both `p` and `q` may be used later,
// and so the origins in their types (`'0` and `'1`)
// are live.
if something() {
    // Here, only the variable `q` is live.
    // `p` is dead because its current value is about
    // to be overwritten. As a result, the only live
    // origin is `'1`, since it appears in `q`'s type.
    p = q;
    x += 1;
} else {
    y += 1;
}
// Here, only the variable `p` is live
// (`q` is never used again),
// and so only the origin `'0` is live.
y += 1;
read_value(p);

Compute the subset graph

The next step in borrow checking is to run a type check across the MIR. MIR is effectively a very simplified form of Rust where statements are heavily desugared and there is a lot less type inference. There is, however, a lot of lifetime inference – basically when NLL starts every lifetime is an inference variable.

For example, consider the p = q assignment in our running example:

...
let mut p: &'0 u32 = &x;
y += 1;
let mut q: &'1 u32 = &y;
if something() {
    p = q; // <-- this assignment
    ...
} else {
    ...
}
...

To type check this, we take the type of q (&'1 u32) and require that it is a subtype of the type of p (&'0 u32):

&'1 u32 <: &'0 u32

As described in the NLL RFC, this subtyping relation holds if '1: '0. In NLL, we called this an outlives relation. But in polonius, because '0 and '1 are origins representing sets of loans, we call it a subset relation. In other words, '1: '0 could be written '1 ⊆ '0, and it means that whatever loans '1 may be referencing, '0 may reference too. Whatever final values we wind up with for '0 and '1 will have to reflect this constraint.

We can view these subset relations as a graph, where '1: '0 means there is an edge '1 --⊆--> '0. In the borrow checker today, this graph is flow insensitive, meaning that there is one graph for the entire function. As a result, we are going to get a graph like this:

flowchart LR
  L0 --"⊆"--> Tick0
  L1 --"⊆"--> Tick1
  Tick1 --"⊆"--> Tick0
  
  L0["{L0}"]
  L1["{L1}"]
  Tick0["'0"]
  Tick1["'1"]

  classDef default text-align:left,fill:#ffffff;
  

You can see that '0, the origin that appears in p, can be reached from both loan L0 and loan L1. That means that it could store a reference to either x or y, in short. In contrast, '1 (q) can only be reached from L1, and hence can only store a reference to y.
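
Another way to read this graph is as a reachability question: the loans that an origin may contain are exactly the loans that can reach it along the ⊆ edges. Below is a small, hypothetical sketch of that query; the string-based encoding of loans and origins is invented for illustration and bears no relation to how rustc actually represents them.

use std::collections::{HashMap, HashSet};

// A node is either a loan ("L0", "L1") or an origin ("'0", "'1").
// Simplified for illustration only.
type Node = &'static str;

// The loans an origin may contain are its transitive predecessors in the
// subset graph that happen to be loans.
fn loans_in_origin(edges: &[(Node, Node)], origin: Node) -> HashSet<Node> {
    // Index the predecessors of each node, then walk backwards from `origin`.
    let mut preds: HashMap<Node, Vec<Node>> = HashMap::new();
    for &(from, to) in edges {
        preds.entry(to).or_default().push(from);
    }
    let mut stack = vec![origin];
    let mut seen = HashSet::new();
    while let Some(n) = stack.pop() {
        if seen.insert(n) {
            stack.extend(preds.get(n).into_iter().flatten().copied());
        }
    }
    // Keep only the loan nodes (identified here purely by naming convention).
    seen.into_iter().filter(|n| n.starts_with('L')).collect()
}

fn main() {
    // The subset graph from the running example: {L0} ⊆ '0, {L1} ⊆ '1, '1 ⊆ '0.
    let edges = [("L0", "'0"), ("L1", "'1"), ("'1", "'0")];
    assert_eq!(loans_in_origin(&edges, "'0"), HashSet::from(["L0", "L1"]));
    assert_eq!(loans_in_origin(&edges, "'1"), HashSet::from(["L1"]));
}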

Active loans

There is one last piece to complete the borrow checker, which is computing the active loans. Active loans determine the errors that get reported. The idea is that, if there is an active loan of a place a.b.c, then accessing a.b.c may be an error, depending on the kind of loan/access.

Active loans build on the liveness analysis as well as the subset graph. The basic idea is that a loan is active at a point P if there is a path from the borrow that created the loan to P where, for each point along the path…

  • there is some live variable that may reference the loan
    • i.e., there is a live origin O at P where L ∈ O. L ∈ O means that there is a path in the subset graph from the loan L to the origin O.
  • the place expression that was borrowed (here, x) is not reassigned
    • this isn’t relevant to the current example, but the idea is that you can borrow the referent of a pointer, e.g., &mut *tmp. If you then later change tmp to point somewhere else, then the old loan of *tmp is no longer relevant, because it’s pointing to different data than the current value of *tmp.

Implementing using dataflow

In the compiler, we implement the above as a dataflow analysis. The value at any given point is the set of active loans. We gen a loan (add it to the value) when it is issued, and we kill a loan at a point P if either (1) the loan is not a member of the origins of any live variable, or (2) the path borrowed by the loan is overwritten.
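
As a rough sketch (again with invented types rather than rustc's real dataflow framework), the per-point transfer function is the classic gen/kill recipe: take the loans active on entry, subtract the killed ones, and add the freshly issued ones.

use std::collections::HashSet;

// Hypothetical, simplified type for illustration only.
type Loan = &'static str;

// One step of the transfer function described above.
fn transfer(
    active_on_entry: &HashSet<Loan>,
    gen_set: &HashSet<Loan>,  // loans issued at this point (e.g. by `&x`)
    kill_set: &HashSet<Loan>, // loans killed here: no live origin contains them,
                              // or the place they borrowed was overwritten
) -> HashSet<Loan> {
    active_on_entry
        .difference(kill_set)
        .copied()
        .chain(gen_set.iter().copied())
        .collect()
}

fn main() {
    // Entering the "true" branch of the running example: {L0, L1} flows in,
    // L0 is killed (no live origin contains it), and nothing new is issued.
    let entry = HashSet::from(["L0", "L1"]);
    let gen_set = HashSet::new();
    let kill_set = HashSet::from(["L0"]);
    assert_eq!(transfer(&entry, &gen_set, &kill_set), HashSet::from(["L1"]));
}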

Active loans on entry to the function

Let’s walk through our running example. To start, look at the first basic block:

flowchart TD
  Start["..."]
  BB1["// Active loans: {}
       p = &x // Gen: L0 -- loan issued
       // Active loans: {L0}
       y = y + 1;
       q = &y // Gen L1 -- loan issued
       // Active loans {L0, L1}
       if something goto BB2 else BB3
  "]
  BB2["..."]
  BB3["..."]
  BB4["..."]

  Start --> BB1
  BB1 --> BB2
  BB1 --> BB3
  BB2 --> BB4
  BB3 --> BB4

  classDef default text-align:left,fill:#ffffff;
  classDef highlight text-align:left,fill:yellow;
  class BB1 highlight
  

This block is the start of the function, so the set of active loans starts out as empty. But then we encounter two borrow statements (&x and &y), and each of them is the gen site for a loan (L0 and L1 respectively). By the end of the block, the active loan set is {L0, L1}.

Active loans on the “true” branch

The next interesting point is the “true” branch of the if:

flowchart TD
  Start["
    ...
    let mut q: &'1 i32;
    ...
  "]
  BB1["..."]
  BB2["
      // Kill L0 -- not part of any live origin
      // Active loans {L1}
      p = q;
      x = x + 1;
  "]
  BB3["..."]
  BB4["..."]
 
  Start --> BB1
  BB1 --> BB2
  BB1 --> BB3
  BB2 --> BB4
  BB3 --> BB4
 
  classDef default text-align:left,fill:#ffffff;
  classDef highlight text-align:left,fill:yellow;
  class BB2 highlight
  

The interesting thing here is that, on entering the block, there is a kill of L0. This is because the only live reference on entry to the block is q, as p is about to be overwritten. As the type of q is &'1 i32, this means that the live origins on entry to the block are {'1}. Looking at the subset graph we saw earlier…

flowchart LR
  L0 --"⊆"--> Tick0
  L1 --"⊆"--> Tick1
  Tick1 --"⊆"--> Tick0
  
  L0["{L0}"]
  L1["{L1}"]
  Tick0["'0"]
  Tick1["'1"]

  class L1 trace
  class Tick1 trace

  classDef default text-align:left,fill:#ffffff;
  classDef trace text-align:left,fill:yellow;
  

…we can trace the transitive predecessors of '1 to see that it contains only {L1} (I’ve highlighted those predecessors in yellow in the graph). This means that there is no live variable whose origins contain L0, so we add a kill for L0.

No error on true branch

Because the only active loan is L1, and L1 borrowed y, the x = x + 1 statement is accepted. This is a really interesting result! It illustrates how the idea of active loans restores some flow sensitivity to the borrow check.

Why is it so interesting? Well, consider this. At this point, the variable p is live. The variable p contains the origin '0, and if we look at the subset graph, '0 contains both L0 and L1. So, based purely on the subset graph, we would expect modifying x to be an error, since it is borrowed by L0. And yet it’s not!

This is because the active loan analysis noticed that, although in theory p may reference L0, it definitely doesn’t at this point.

Active loans on the false branch

In contrast, if we look at the “false” branch of the if:

flowchart TD
  Start["
    ...
    let mut p: &'0 i32;
    ...
  "]
  BB1["..."]
  BB2["..."]
  BB3["
      // Active loans {L0, L1}
      y = y + 1;
  "]
  BB4["..."]
 
  Start --> BB1
  BB1 --> BB2
  BB1 --> BB3
  BB2 --> BB4
  BB3 --> BB4
 
  classDef default text-align:left,fill:#ffffff;
  classDef highlight text-align:left,fill:yellow;
  class BB3 highlight
  
False error on the false branch

This path is also interesting: there is only one live variable, p. If you trace the code by hand, you can see that p could only refer to L0 (x) here. And yet the analysis concludes that we have two active loans: L0 and L1. This is because it is looking at the subset graph to determine what p may reference, and that graph is flow insensitive. So, since p may reference L1 at some point in the program, and we haven’t yet seen references to L1 go completely dead, we assume that p may reference L1 here. This leads to a false error being reported when the user does y = y + 1.

Active loans on the final block

Now let’s look at the final block:

flowchart TD
  Start["
    ...
    let mut p: &'0 i32;
    ...
  "]
  BB1["..."]
  BB2["..."]
  BB3["..."]
  BB4["
        // Active loans {L0, L1}
        y = y + 1;
        read_value(p);
  "]
 
  Start --> BB1
  BB1 --> BB2
  BB1 --> BB3
  BB2 --> BB4
  BB3 --> BB4
 
  classDef default text-align:left,fill:#ffffff;
  classDef highlight text-align:left,fill:yellow;
  class BB4 highlight
  

At this point, there is one live variable (p) and hence one live origin ('0); the subset graph tells us that p may reference both L0 and L1, so the set of active loans is {L0, L1}. This is correct: depending on which path we took, p may refer to either L0 or L1, and hence we flag a (correct) error when the user attempts to modify y.

Kills for reassignment

Our running example showed one reason that loans get killed when there are no more live references to them. This most commonly happens when you create a short-lived reference and then stop using it. But there is another way to get a kill, which happens from reassignment. Consider this example:

struct List {
    data: u32,
    next: Option<Box<List>>
}

fn print_all(mut p: &mut List) {
    loop {
        println!("{}", p.data);
        if let Some(n) = &mut p.next {
            p = n;
        } else {
            break;
        }
    }
}

I’m not going to walk through how this is borrow checked in detail here, but let me just point out what makes it interesting. In this loop, the code first borrows from p and then assigns that result to p. This means that, if you just look at the subset graph, on the next iteration around the loop, there would be an active loan of p. However, this code compiles – how does that work? The answer is that when we do p = n, we are mutating p, which means that, when we borrow from p on the next iteration, we are actually borrowing from a different node than the one we borrowed from in the previous iteration. So everything is fine. The reason the borrow checker is able to conclude this is that it kills the loan of p.next when it sees that p is assigned to. This is discussed in the NLL RFC in more detail.

Conclusion

That brings us to the end of part 1! In this post, we covered how you can describe the existing borrow check in a more polonius-like style. We also uncovered an interesting quirk in how the borrow checker is formulated. It uses a location insensitive alias analysis (the subset graph) but complements that with a dataflow propagation to track active loans. Together, this makes it more expressive. This wasn’t, however, the original plan with NLL. Originally, the subset graph was meant to be flow sensitive. Extending the subset graph to be flow sensitive is basically the heart of polonius. I’ve got some thoughts on how we might do that and I’ll be getting to that in later posts. I do want to say in passing though that doing all of this framing is also making me wonder – is it really necessary to combine a type check and the dataflow check? Can we frame the borrow checker (probably the more precise variants we’ll be getting to in future posts) in a more unified way? Not sure yet!


  1. You won’t find this code in the current version of a-mir-formality; it’s since been rewritten a few times and the current version hasn’t caught up yet. ↩︎

  2. The origin of the MIR is actually an interesting story. As documented in RFC #1211↩︎

Firefox Add-on ReviewsTop anti-tracking extensions

The truth of modern tracking is that it happens in so many different and complex ways it’s practically impossible to ensure absolute tracking protection. But that doesn’t mean we’re powerless against personal data harvesters attempting to trace our every online move. There are a bunch of Firefox browser extensions that can give you tremendous anti-tracking advantages… 

Privacy Badger

Sophisticated and effective anti-tracker that doesn’t require any setup whatsoever. Simply install Privacy Badger and right away it begins the work of finding the most hidden types of trackers on the web.

Privacy Badger actually gets better at tracker blocking the more you use it. As you naturally navigate around the web and encounter new types of hidden trackers, Privacy Badger will find and block them—unreliant on externally maintained block lists or other methods that may lag behind the latest trends in sneaky tracking. Privacy Badger also automatically removes tracking codes from outgoing links on Facebook and Google. 

Decentraleyes

Another strong privacy protector that works well right out of the box, Decentraleyes effectively halts web page tracking requests from reaching third party content delivery networks (i.e. ad tech). 

A common issue with other extensions that try to block tracking requests is they also sometimes break the page itself, which is obviously not a great outcome. Decentraleyes solves this unfortunate side effect by injecting inert local files into the request, which protects your privacy (by distributing generic data instead of your personal info) while ensuring web pages don’t break in the process. Decentraleyes is also designed to work well with other types of content blockers like ad blockers.

ClearURLs

Ever noticed those long tracking codes that often get tagged to the end of your search result links or URLs on product pages from shopping sites? All that added guck to the URL is designed to track how you interact with the link. ClearURLs automatically removes the tracking clutter from links—giving you cleaner links and more privacy. 

Other key features include…

  • Clean up multiple URLs at once
  • Block hyperlink auditing (i.e. “ping tracking”; a method websites use to track clicks)
  • Block ETag tracking (i.e. “entity tags”; a tracking alternative to cookies)
  • Prevent Google and Yandex from rewriting search results to add tracking elements
  • Block some common ad domains (optional)

Cookie AutoDelete

Take control of your cookie trail with Cookie AutoDelete. Set it so cookies are automatically deleted every time you close a tab, or create safelists for select sites where you want to preserve cookies.

After installation, you must enable “Auto-clean” for the extension to automatically wipe away cookies. This is so you first have an opportunity to create a custom safelist, should you choose, before accidentally clearing away cookies you might want to keep. 

There’s not much you have to do once you’ve got your safelist set, but clicking the extension’s toolbar button opens a pop-up menu with a few convenient options, like the ability to wipe away cookies from open tabs or clear cookies for just a particular domain.

Cookie AutoDelete’s pop-up menu gives you accessible cookie control wherever you go online.

Firefox Multi-Account Containers

Do you need to be simultaneously logged in to multiple accounts on the same platform, say for instance juggling various accounts on Google, Twitter, or Reddit? Multi-Account Containers can make your life a whole lot easier by helping you keep your many accounts “contained” in separate tabs so you can easily navigate between them without a need to constantly log in/out. 

By isolating your identities through containers, your browsing activity from one container isn’t correlated to another—making it far more difficult for these platforms to track and profile your holistic browsing behavior. 

Facebook Container

Does it come as a surprise that Facebook tries to track your online behavior beyond the confines of just Facebook? If so, I’m sorry to be the bearer of bad news. Facebook definitely tries to track you outside of Facebook. But with Facebook Container you can put a privacy barrier between the social media giant and your online life outside of it. 

Facebook primarily investigates your interests outside of Facebook through their various widgets you find embedded ubiquitously about the web (e.g. “Like” buttons or Facebook comments on articles, social share features, etc.) 

Social widgets like these give Facebook and other platforms a sneaky means of tracking your interests around the web.

The privacy trade-off we make for the convenience of not needing to sign in to Facebook each time we visit the site (because it recognizes your browser as yours) is that we give Facebook a potent way to track our moves around the web, since it can tell when you visit any web page embedded with its widgets.

Facebook Container basically allows you the best of both worlds—you can preserve the convenience of not needing to sign in/out of Facebook, while placing a “container” around your Facebook profile so the company can’t follow you around the web anymore.

CanvasBlocker

Stop websites from using JavaScript APIs to “fingerprint” you when you visit. CanvasBlocker blocks a particularly common way websites try to track your web moves.

Best suited for more technical users, CanvasBlocker lets you customize which APIs should be protected from fingerprinting — on some or all websites. The extension can even be configured to alter your API identity to further obfuscate your online identity.

Disconnect

A strong privacy tool that fares well against hidden trackers used by some of the biggest data collectors in the game, like Google, Facebook, and Twitter, Disconnect also provides the benefit of significantly speeding up page loads simply by virtue of blocking all the unwanted tracking traffic.

Once installed, you’ll find a Disconnect button in your browser toolbar. Click it when visiting any website to see the number of trackers blocked (and where they’re from). You can also opt to unblock anything you feel you might need in your browsing experience. 

We hope one of these anti-tracker extensions provides you with a strong new layer of security. Feel free to explore more powerful privacy extensions on addons.mozilla.org.

The Talospace ProjectProgress on the Firefox ppc64le JIT

A picture is worth a thousand Wasm opcodes. This is further along than we've gotten on earlier drafts. More soon.

The Rust Programming Language Blogcrates.io Policy Update RFC

Around the end of July the crates.io team opened an RFC to update the current crates.io usage policies. This policy update addresses operational concerns of the crates.io community service that have arisen since the last significant policy update in 2017, particularly related to name squatting and spam. The RFC has caused considerable discussion, and most of the suggested improvements have since been integrated into the proposal.

At the last team meeting the crates.io team decided to move the RFC forward and start the final comment period process.

We have been made aware by a couple of community members though that the RFC might not have been visible enough in the Rust community. We hope that this blog post changes that.

We invite you all to review the RFC and let us know if there are still any major concerns with these proposed policies.

Here is a quick TL;DR:

  • The current policies are quite vague on a couple of topics. The new policies are more explicit.
  • Reserving names is still allowed, but only to a certain degree and if you have a good reason for it.
  • The crates.io team will try to contact crate owners before taking any actions.

Finally, if you have any comments, please open threads on the RFC diff, instead of using the main comment box, to keep the discussion more structured. Thank you!

The Mozilla BlogDeciding for ourselves: 98% of people want a browser choice screen, Mozilla study finds

What if we got to easily choose our web browser, and didn’t have to rely on complex operating system settings to change the pre-installed default?

At Mozilla, our mission has always centered on empowering people to shape their own experiences online. But these days, big tech too often trumps individual choices, whether that’s through the algorithms that populate our feeds, the online reviews that influence our purchases or the barriers to changing pre-installed browsers on our devices — and keeping that choice. 

The reality is that much of our online experience is controlled by a small number of tech companies. Seeking a solution to the immense gatekeeping power concentrated in the hands of a few, lawmakers and regulatory bodies around the world are considering a range of interventions. One of these is “choice screens,” which prompt users to actively select their preferred default web browser.

Mozilla is interested in whether and how these remedies might work, not just as the maker of the privacy-focused Firefox, but as an organization with a mission to equip people to make their own choices. So, we conducted in-depth research with 12,000 people in Germany, Spain and Poland to understand how choice screens influence their decisions, preferences and satisfaction levels. Here’s what we learned:

1. Browser choice screens affect people’s decisions

Choice screens proved to be powerful influencers of user decisions. More than half of the people who were not presented with a choice screen said that they expected to change the default browser that had been selected for them, suggesting that the pre-installed browser may not serve the needs or preference of many users. On the other hand, 98% of the people who select a browser through a choice screen expect to stick with it. Even more encouraging: Choice screens led many users to opt for independent browsers that were not tied to the operating system or device manufacturer – a sign of fairer competition among browsers. 

2. The design, content and timing of the choice screen matter

Several factors affected people’s choices:

  • Providing more information about the browsers led to a slight increase in users selecting independent browsers while reducing the preference for the pre-installed ones. 
  • People preferred to see a wide variety of browsers. 
  • The positioning of browsers within the choice screen played a crucial role, with lower-positioned browsers being chosen less frequently, particularly on Android devices. For example, moving a browser from the first to fourth on the list significantly reduces the likelihood of a user selecting that browser. 
  • People preferred to see a choice screen during the process of setting up their device. Participants who see the choice screen after set-up, immediately after launching the pre-installed browser, are considerably more inclined to select this browser as their default choice. 

3. People want to choose their default browser

How users feel about choice screens was clear from the survey: An overwhelming majority (98%) of people preferred to see a choice screen. They want to see more information, a wide selection of browsers and the option to select their default browser while setting up their devices. 

4. Better user satisfaction

Perhaps most importantly, choice screens resulted in better user satisfaction across various metrics – including the ease of setting up the device, the amount of time it took to set up the device, and the range of settings they could customize. Notably, choice screens also boosted satisfaction with “the extent to which I felt in control” by 12%.  

Mozilla is currently engaging with regulators, companies, industry leaders, academics and consumer organizations to discuss the findings of this experiment. These insights provide a solid starting point for conversations to maximize the effectiveness of choice screens and address competition-related concerns. You can read the full report here.

The power to choose a web browser should be in the hands of users. When thoughtfully designed and implemented, browser choice screens have the potential to reshape the digital landscape and empower users to make informed choices in a web that’s teeming with possibilities. We’re excited to continue this journey towards a more open and user-centric internet, and we ask you to be a part of it.

The post Deciding for ourselves: 98% of people want a browser choice screen, Mozilla study finds appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 513

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is RustQuant, a crate for quantitative finance.

Thanks to avhz for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

342 pull requests were merged in the last week

Rust Compiler Performance Triage

A pretty quiet week, with relatively few statistically significant changes, though some good improvements to a number of benchmarks, particularly in cycle counts rather than instructions.

Triage done by @simulacrum. Revision range: 7e0261e7ea..af78bae

3 Regressions, 3 Improvements, 2 Mixed; 2 of them in rollups

56 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-09-20 - 2023-10-18 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

This is the first programming language I've learned that makes it so easy to make test cases! It's actually a pleasure to implement them.

0xMB on rust-users

Thanks to Moy2010 for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Cameron KaiserWebP chemspill patch on Github

A fix is in the TenFourFox tree for MFSA 2023-40, a/k/a CVE-2023-4863, which is a heap overflow in the WebP image decoder. Firefox 45 would not ordinarily be vulnerable to this but we have our own basic WebP decoder using Google's library, so we are technically exploitable as well. I was working on a fix of my own but the PM27 fix that roytam1 cherrypicked is cleaner, so I've added that patch and two more (a followup was needed) for correctness. Although this issue is currently being exploited in the wild, it would require a PowerPC-specific attack to be successful on a Power Mac. You do not need to clobber to update your build.

Cameron KaiserGoogle ending Basic HTML support for Gmail in 2024

Understandably they're saying little about it publicly, but word is getting around that Google's fast, super-compatible Basic HTML mode for Gmail will be removed in a few short months. "We’re writing to let you know that the Gmail Basic HTML view for desktop web and mobile web will be disabled starting early January 2024. The Gmail Basic HTML views are previous versions of Gmail that were replaced by their modern successors 10+ years ago and do not include full Gmail feature functionality."

There are also reports that you can't set Basic HTML mode now either. Most of you who want to use it probably already are, but if you're not, you can try this, this, this, this or even this to see if it gets around the front-end block.

Google can of course do whatever they want, and there are always maintenance costs to be had with keeping old stuff around — in this case, for users unlikely to be monetized in any meaningful fashion because you don't run all their crap. You are exactly the people Google wants to get rid of and doing so is by design. As such, it's effectively a giant "screw you," and will be a problem for those folks relying on this for a fast way to read Gmail with TenFourFox or any other limited system. (Hey, wanna buy a Pixel 8 to read Gmail?)

Speaking of "screw you," and with no small amount of irony given this is published on a Google platform, I certainly hope the antitrust case goes somewhere.

Niko MatsakisNew Layout, and now using Hugo!

Some time ago I wrote about how I wanted to improve how my blog works. I recently got a spate of emails about this – thanks to all of you! And a particular big thank you to Luna Razzaghipour, who went ahead and ported the blog over to use Hugo, cleaning up the layout a bit and preserving URLs. It’s much appreciated! If you notice something amiss (like a link that doesn’t work anymore), I’d be very grateful if you opened an issue on the babysteps github repo! Thanks!

Hugo seems fast so far, although I will say that figuring out how to use Hugo modules (so that I could preserve the atom feed…) was rather confusing! But it’s all working now (I think!). I’m still interested in playing around more with the layout, but overall I think it looks good, and I’m happy to have code coloring on the snippets. Hopefully it renders better on mobile too.

The Rust Programming Language BlogAnnouncing Rust 1.72.1

The Rust team has published a new point release of Rust, 1.72.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.72.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.72.1

1.72.1 resolves a few regressions introduced in 1.72.0:

Contributors to 1.72.1

Many people came together to create Rust 1.72.1. We couldn't have done it without all of you. Thanks!

The Talospace ProjectPartial ppc64le JIT available again for Firefox 115ESR

I've been rehabilitating the old ppc64le JIT against more current Firefoxes and there is now available a set of patches you can apply to the current 115ESR. This does not yet include support for Ion or Wasm; the first still has some regressions, and the second has multiple outright crashes. Still, even with just the Baseline Interpreter and Baseline Compiler it is several times faster on benchmarks than the interpreter-only 115. I've also included the relevant LTO-PGO and WebRTC patches so you can just apply the changesets numerically and build. The patches and the needed .mozconfigs for either building an optimized browser or a debug JS shell (should you want to poke around) are in this Github issue.

While this passes everything that is expected to pass, you may still experience issues using it, and you should not consider it supported. Always backup your profile first. But it's now an option for those of you who were using the previous set of patches against 91ESR.

Patrick ClokeCelery architecture breakdown

The Celery project, a Python library often used to run “background tasks” for synchronous web frameworks, describes itself as:

Celery is a simple, flexible, and reliable distributed system to process vast amounts of messages, while providing operations with the tools required to maintain such a system.

It’s a task queue with focus on real-time processing, while also supporting task scheduling.

The documentation goes into great detail about how to configure Celery with its plethora of options, but it does not focus much on the high level architecture or how messages pass between the components. Celery is extremely flexible (almost every component can be easily replaced!) but this can make it hard to understand. I attempt to break it down to the best of my understanding below. [1]

High Level Architecture

Celery has a few main components [2]:

  1. Your application code, including any Task objects you’ve defined. (Usually called the “client” in Celery’s documentation.)
  2. A broker or message transport.
  3. One or more Celery workers.
  4. A (results) backend.
Celery overview

A simplified view of Celery components.

In order to use Celery you need to:

  1. Instantiate a Celery application (which includes configuration, such as which broker and backend to use and how to connect to them) and define one or more Tasks.
  2. Run a broker.
  3. Run one or more Celery workers.
  4. (Maybe) run a backend.

If you’re unfamiliar with Celery, below is an example. It declares a simple add task using the @task decorator and requests that the task be executed in the background twice (add.delay(...)). [3] The results are then fetched (asyncresult_1.get()) and printed. Place this in a file named my_app.py:

from celery import Celery

app = Celery(
    "my_app",
    backend="rpc://",
    broker="amqp://guest@localhost//",
)


@app.task()
def add(a: int, b: int) -> int:
    return a + b


if __name__ == "__main__":
    # Request that the tasks run and capture their async results.
    asyncresult_1 = add.delay(1, 2)
    asyncresult_2 = add.delay(3, 4)

    result_1 = asyncresult_1.get()
    result_2 = asyncresult_2.get()
    # Should result in 3, 7.
    print(f"Results: {result_1}, {result_2}")

Usually you don’t care where (that is, on which worker) the task runs, or how it gets there, but sometimes you do! We can break down the components further to reveal more detail:

Celery components

The Celery components broken into sub-components.

Broker

The message broker is a piece of off-the-shelf software which takes task requests and queues them until a worker is ready to process them. Common options include RabbitMQ or Redis, although your cloud provider might have a custom one.

The broker may have some sub-components, including an exchange and one or more queues. (Note that Celery tends to use AMQP terminology and sometimes emulates features which do not exist on other brokers.)

Configuring your broker is beyond the scope of this article (and depends heavily on workload). The Celery routing documentation has more information on how and why you might route tasks to different queues.

Workers

Celery workers fetch queued tasks from the broker and then run the code defined in your task; they can optionally return a value via the results backend.

Celery workers have a “consumer” which fetches tasks from the broker: by default it requests many tasks at once, equivalent to “prefetch multiplier × concurrency”. (If your prefetch multiplier is 5 and your concurrency is 4, it attempts to fetch up to 20 queued tasks from the broker.) Once fetched, it places them into an in-memory buffer. The task pool then runs each task via its Strategy — for a normal Celery Task the task pool essentially executes tasks from the consumer’s buffer.
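
As a rough sketch of how that multiplication is controlled (reusing the app object from my_app.py above; worker_prefetch_multiplier and worker_concurrency are the standard Celery setting names, and the values here only mirror the example numbers):

# With these settings a worker reserves up to 5 * 4 = 20 queued tasks at once.
app.conf.worker_prefetch_multiplier = 5  # tasks reserved per pool slot
app.conf.worker_concurrency = 4          # number of pool processes/threads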

The worker also handles scheduling tasks to run in future (by queueing them in-memory), but we will not go deeper into that here.

Using the “prefork” pool, the consumer and task pool are separate processes, while the “gevent”/”eventlet” pool uses coroutines, and the “threads” pool uses threads. There’s also a “solo” pool which can be useful for testing (everything is run in the same process: a single task runs at a time and blocks the consumer from fetching more tasks.)
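
If you want to experiment with the pools mentioned above, the pool implementation can also be chosen via configuration (a small sketch; worker_pool is the standard setting name, and the strings map to the pools described here):

# "prefork" is the default; "solo" is handy for debugging since everything runs in one process.
app.conf.worker_pool = "solo"  # alternatives: "prefork", "threads", "gevent", "eventlet"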

Backend

The backend is another piece of off-the-shelf software which is used to store the results of your task. It provides a key-value store and is commonly Redis, although there are many options depending on how durable and large your results are. The results backend can be queried by using the AsyncResult object which is returned to your application code. [4]
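
If all your application code has is a task ID (for example, one saved to a database by another process), it can rebuild an AsyncResult and ask the backend for the stored value. A minimal sketch, reusing the app object from the example above:

# task_id was saved earlier, e.g. task_id = asyncresult_1.id in my_app.py.
result = app.AsyncResult(task_id)  # does not re-run the task; it only points at the stored result
if result.ready():                 # True once the worker has stored a result
    print(result.get(timeout=5))   # deserialize and return the task's return value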

Much like for brokers, how you configure results backends is beyond the scope of this article.

Dataflow

You might have observed that the components above span several different processes (client, broker, worker, worker pool, backend), which may also live on different computers. How does the task get passed between them? Usually this level of detail isn’t necessary to understand what it means to “run a task in the background”, but it can be useful for diagnosing performance or configuring brokers and backends.

The main thing to understand is that there’s lots of serialization happening across each process boundary:

Celery dataflow

A task message traversing from application code to the broker to a worker, and a result traversing from a worker to a backend to application code.

Request Serialization

When a client requests that a task be run, the information needs to be passed to the broker in a form it understands. The necessary data includes:

  • The task identifier (e.g. my_app.add).
  • Any arguments (e.g. (1, 2)) and keyword arguments.
  • A request ID.
  • Routing information.
  • …and a bunch of other metadata.

Exactly what is included is defined by the message protocol (of which Celery has two versions, although they’re fairly similar).
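
Several of those fields can be influenced directly when requesting the task. For example, apply_async accepts a queue option for routing and a task_id for the request ID (a sketch reusing the add task from the example above; the queue name is made up for illustration, and normally Celery generates the ID for you):

import uuid

add.apply_async(
    args=(1, 2),
    queue="maths",              # routing information: which queue the message is sent to
    task_id=str(uuid.uuid4()),  # the request ID; normally Celery generates one for you
)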

Most of the metadata gets placed in the headers, while the task arguments, which might be any Python class, need to be serialized into the body. Celery supports many serializers and uses JSON by default (pickle, YAML, msgpack, and custom schemes can also be used).

After serialization, Celery also supports compressing the message or signing the message for additional security.
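
All of these choices are ordinary configuration settings. A sketch of the task-side options (task_serializer, task_compression and accept_content are the standard setting names; JSON is already the default):

app.conf.task_serializer = "json"   # how task arguments are encoded into the message body
app.conf.task_compression = "gzip"  # optionally compress the serialized body
app.conf.accept_content = ["json"]  # content types workers are willing to deserialize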

An example AMQP message containing the details of a task request (from RabbitMQ’s management interface) is shown below:

Celery task wrapped in a RabbitMQ message

The example Celery task wrapped in a RabbitMQ message

When a worker fetches a task from the broker it deserializes it into a Request and executes it (as discussed above). In the case of a “prefork” worker pool the Request is serialized again using pickle when passed to the task pool [5].

The worker pool then unpickles the request, loads the task code, and executes it with the requested arguments. Finally your task code is running! Note that the task code itself is not contained in the serialized request; it is loaded separately by the worker.

Result Serialization

When a task returns a value it gets stored in the results backend with enough information for the original client to find it:

  • The result ID.
  • The result.
  • …and some other metadata.

Similarly to tasks, this information must be serialized before being placed in the results backend (and gets split between the headers and body). Celery provides configuration options to customize this serialization. [6]
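
For example, the result-side settings mirror the task-side ones shown earlier (again just a sketch using the standard setting names):

app.conf.result_serializer = "json"   # serializer for the stored result body
app.conf.result_compression = "gzip"  # optional compression of the result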

An example AMQP message containing the details of a result is shown below:

Celery result wrapped in a RabbitMQ message

The example Celery result wrapped in a RabbitMQ message

Once the result is fetched by the client, it can deserialize the true (Python) return value and provide it to the application code.

Final thoughts

Since the Celery protocol is a public, documented API, it allows you to create task requests externally to Celery! As long as you can interface with the Celery broker (and have some shared configuration) you can use a different application (or programming language) to publish and/or consume tasks. This is exactly what others have done:

Note that I haven’t used any of the above projects (and can’t vouch for them).
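
Even from Python you don’t strictly need the task’s source code to publish a request: send_task builds and sends a task message purely by name, which is essentially what those external implementations do. A small sketch, reusing the app object and the task name from the example above:

# Publish a request for "my_app.add" without importing the add function itself.
async_result = app.send_task("my_app.add", args=(1, 2))
print(async_result.get(timeout=5))  # the result still comes back via the results backend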

[1]Part of this started out as an explanation of how celery-batches works.
[2]Celery beat is another common component used to run scheduled or periodic tasks. Architecture-wise it takes the same place as your application code, i.e. it runs forever and requests that tasks be executed based on the time.
[3]There are a bunch of ways to do this; apply_async and delay are the most common, but the choice doesn’t impact the contents of this article.
[4]As a quick aside — AsyncResult does not refer to async/await in Python. AsyncResult.get() is synchronous. A previous article has some more information on this.
[5]This is not configurable. The Celery security guide recommends not using pickle for serializers (and it is well known that pickle can be a security risk), but it does not seem documented anywhere that pickle will be used with the prefork pool. If you are using JSON to initially serialize to the broker then your task should only be left with “simple” types (strings, integers, floats, null, lists, and dictionaries), so this should not be an issue.
[6]Tasks and results can be configured to have different serializers (or different compression settings) via the task_ vs. result_ configuration options.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: August 2023 Progress Report

a dark background with thunderbird and k-9 mail logos centered, with the text "Thunderbird for Android, August 2023 progress report"

A Quiet Yet Productive Month

August was a relatively calm month for the K-9 Mail team, with many taking well-deserved summer vacations and attending our first Mozilla All-Hands event. Despite the quieter pace, we managed to hit a significant milestone on our journey to Thunderbird for Android: the beta release of our new account setup interface.

Beta Release with New Account Setup: We Want Your Feedback!

We’re thrilled to announce that we rolled out a beta version featuring the new account setup UI. This has been a long-awaited feature, and even though the team was partially on vacation, we managed to get it out for user testing. The initial feedback has been encouraging, and we’re eager to hear your thoughts.

You can find the K9-Mail v6.710 beta version here:

If you’ve tried the beta, we’d love to get your feedback. What did you like? What could be improved? Your insights will help us refine the feature for its official release.

How to Provide Feedback

You can provide feedback through the following channels:

Community contributions

In August we merged the following pull requests by these awesome contributors:

Releases

In August 2023 we published the following beta versions:

If you want to help shape future versions of the app, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: August 2023 Progress Report appeared first on The Thunderbird Blog.

The Servo BlogThis month in Servo: upcoming events, new browser UI, and more!

Servo has had some exciting changes land in our nightly builds over the last month:

  • as of 2023-08-09, we now use rustls instead of OpenSSL (#30025)
  • as of 2023-08-21, our experimental WebGPU support was updated (#29795, #30359)
  • as of 2023-08-26, we can now build on ARM32 in addition to ARM64 (#30204)
  • as of 2023-09-01, CSS floats are now supported again (#30243 et al)
  • as of 2023-09-05, ‘white-space: nowrap’ is now supported again (#30259)
  • as of 2023-09-07, we have an improved crash error page (#30290)
  • as of 2023-09-15, our new browser UI is enabled by default (#30049)
WebGPU game of life, showing a 32x32 grid where the living cells are shaded with a rainbow texture

While our WebGPU support is still very much experimental (--pref dom.webgpu.enabled), it now passes over 5000 more tests in the Conformance Test Suite, after an upgrade from wgpu 0.6 (2020) to 0.16 (2023) and the addition of GPUSupportedFeatures. A few WebGPU demos now run too, notably those that don’t require changing the width or height on the fly, such as the Conway’s Game of Life built in Your first WebGPU app.

Both of these were contributed by Samson @sagudev, who has also done a lot of work on our DOM bindings, SpiderMonkey integration, and CI workflows, and we’re pleased to now have them join Servo as a reviewer too!

Wikipedia article for Servo, showing article text flowing around the floating infobox on the right

On the CSS front, floats and ‘white-space: nowrap’ were previously only supported in our legacy layout engine (--legacy-layout), but now they are supported again, and better than ever before! Floats in particular are one of the trickiest parts of CSS2, and our legacy version had many bugs that were essentially unfixable due to the legacy layout architecture.

Sometimes Servo crashes due to bugs or unimplemented features, and Rust helps us ensure that they almost always happen safely by panicking, but there’s still a lot we can do to improve the user experience while surfacing those panics, especially on platforms without stdio like Windows.

Our new crash error page shows the panic message and stack trace, instead of a confusing “unexpected scheme” error, and allows the user to reload the page. Note that not all crashes are handled gracefully yet — more work is needed to allow recovery from crashes in style and layout.

Servo’s new crash error page, showing a fake panic!() inserted at the start of Document::Write

Servo’s example browser — the nightly version of Servo you can download and run — now has a location bar! This new browser UI, or “minibrowser” mode, is now enabled by default, though you can disable it with --no-minibrowser if you run into any problems. See also #30049 for known issues with the minibrowser.

Servo’s new browser UI, showing a toolbar with a location field and go button

Upcoming events

September is also a big month for Servo as a project! We have joined Linux Foundation Europe, and we’re also attending several events in Bilbao, Spain, and Shanghai, China.

Servo will be at the LF Europe Member Summit in Bilbao, with a brief project update on 18 September at 10:45 local time (08:45 UTC), and the Open Source Summit Europe, with Manuel Rego speaking about Servo on 21 September at 11:55 local time (09:55 UTC).

At both events, we will also have a booth where you can play with Servo on a real device and ask us questions about the project, all day from 18 September through 21 September.

The following week, you can find us at the GOSIM workshop and conference in Shanghai, with Martin Robinson presenting one workshop and one talk:

The Mozilla BlogSeeing a Firefox IRL

Two images of red panda with a browser window overlay.

Did you know that the red panda is also known as a firefox? Sept. 16 is International Red Panda Day, so we thought it would be a good time to visit a Firefox, ahem red panda, in real life and talk to their caretakers at zoos across the U.S.

Red pandas are the first panda — discovered nearly 50 years before the giant panda. Unfortunately, they are also endangered with as few as 2,500 remaining in the wild. Founded in 2007, Red Panda Network (RPN) responds to threats to the species with community-based programs that empower local people to conserve red pandas and their habitat. You can learn about RPN’s work here.

Additionally, across the world, there are several zoos that participate in a breeding program to help grow the red panda population. Now is a great time to visit the red pandas at your nearest zoo.

Before you go, let us tell you more about the red pandas and the people who care for those adorable red cat-bears (another moniker they go by). We reached out to five zoos who participate in the special red panda program, and sent questions to learn from the zookeepers and the red pandas they care for.

Click on the links below to read more about the red pandas and their caretakers:

A zookeeper feeds a red panda perched on a tree. Potter Park Zoo’s zookeeper Carolyn with Wilson. (Photo credit: Heath Thurman)

Thank you so much to the zookeepers who spent time to share their knowledge with us. We’d love to learn more about other red pandas in the world, so if you happen to be at your local zoo, share your photos with us on Twitter, Instagram or TikTok. We can’t wait to see more firefoxes!

Get Firefox

Get the browser that protects what’s important

The post Seeing a Firefox IRL appeared first on The Mozilla Blog.

The Mozilla BlogFirefox IRL: Meet Seattle’s Carson, the Woodland Park Zoo’s feisty red panda

A red panda, with a browser window overlay on its face, crouches on a stump while a zookeeper feeds it. Megan Blandford, the animal keeper at the Woodland Park Zoo, feeds a red panda. (Photo credit: Woodland Park Zoo.)

Did you know that the red panda is also known as a firefox? Sept. 16 is International Red Panda Day, so we thought it would be a good time to visit a Firefox, ahem red panda, in real life and talk to their caretakers at zoos across the U.S. See our full series of interviews here.

Situated in the Pacific Northwest, the Woodland Park Zoo in Seattle offers a temperate forest that provides cooler temperatures where animals like the red panda can thrive. There, you’ll meet Carson, an 8-year-old red panda. 

Carson’s caretaker is Megan Blandford, the animal keeper at the Woodland Park Zoo. Megan describes Carson’s personality as feisty, and from what we’ve heard, you can either find him looking for food scraps or napping at the top of his tree. We love naps, too!

We caught up with Megan where she tells us more about Carson, shares her first foray in the zoo world (you’ll never guess, it involved building a piano for a killer whale) and how caring for a red panda provides job security. 

Tell us about the red pandas you care for:

“…currently has one 8-year-old Himalayan red panda named Carson that you can see daily in Woodland Park Zoo’s temperate forest [habitat]. He’s got a very feisty personality and never hesitates to let you know his strong opinions and feelings about things. He will move heaven and earth for a few bites of grapes and apples, and is sure to assist the keepers every afternoon when we aren’t giving him his daily allotment of bamboo fast enough. When he’s not cruising around on the ground for potentially missed food scraps, you can usually find him curled up in his favorite snooze spot towards the top of the tree in front of his exhibit.”

A red panda sticks its tongue out. Woodland Park Zoo’s 8-year-old Himalayan red panda, Carson. (Photo credit: Jeremy Dwyer-Lindgren/Woodland Park Zoo)

What’s the coolest part about working with red pandas?

“There are so many great things about working with red pandas that it’s hard to say what the best part is! I think personally my favorite part is when Carson sees me first thing in the morning and gets SO excited. He will run along the fence to greet me and is equally excited to come in and eat/train when I come back over a few minutes later. Fun fact! Picture what you would imagine a unicorn sounds like, and that’s one of the primary vocalizations the red pandas make. It’s like an extremely high-pitched rapid neighing. I also love being able to do keeper talks and talk to the public about them and how special they are, and let them know that there are actions that they can take to help protect these unique animals that are endangered in the wild.”  

How did you get your start as a zookeeper?

“I’ve been an animal keeper since 2011 and have been working with red pandas since 2020. When I was in college, I initially wanted to become a veterinarian but learned through my university that I HATED performing surgery and needed to change my goals. After working in a herpetology and parrot cognition lab on campus, I thought I might try my hand at zookeeping. I first dipped my toe into the zoo world as an intern at Six Flags Discovery Kingdom in California, where I produced ideas and then built enrichment items for an orca they had at the time (I ended up working with an engineering team to build a Simon-says killer whale piano for her.) It can be very competitive to break into the zoo world, so after I graduated and was unable to land another zoo internship or job, I worked as a freelance photographer specializing in human rights violations in Africa, specifically the Gambaga outcast witch colony and the black markets around Ghana. When I got back to the U.S. I was lucky enough to get a full-time internship in Toledo, Ohio working with elephants. From there I was hired on, and the rest is history. I eventually started working at Woodland Park Zoo; the red pandas are always the highlight of my day.”

What does your typical day look like?

“My typical day starts with life checking all the animals on my unit to make sure no one had any issues overnight (e.g. HVAC going out, branch falling on a fence, tragically uninformed mallards wandering into predator areas, etc), which for me includes the maned wolves, red pandas, southern pudu and jaguars. Then I’ll prep their diets in the morning and start their morning feedings. After everyone is happy and satiated, I’ll start cleaning, and red pandas make up a large part of that because red pandas defecate a lot! Like, an alarming amount if you aren’t expecting it. Their diets consist of lots of items like bamboo that are very low in nutritional value and take lots of energy to digest. As a result, they both sleep and poop a staggering amount. If you imagine the amount a typical golden retriever might produce in a day, each red panda produces about five times as much as that per day; it’s fantastic job security. I’ll also train all the animals throughout the day so that they can participate in their own husbandry and veterinary care; it allows us the flexibility to give them choice and control regarding their care. For example, we do voluntary blood draws on Carson every few months. The male red panda is trained to sit on a platform and is reinforced with his favorite snacks while our talented veterinary staff draw blood from his tail. He’s great at other behaviors as well, like allowing voluntary injections and standing on a scale to obtain his weight. After feeding, training and cleaning, I then start to feed out their diets for overnight and check on them one last time (I like to think of it as tucking them in) before leaving for the day.”

Thank you, Megan, for sharing your stories about being a zookeeper and Carson. One of these days we’ll have to get an audio recording of Carson doing his impression of a unicorn with his “extremely high-pitched rapid neighing.”  

Get Firefox

Get the browser that protects what’s important

The post Firefox IRL: Meet Seattle’s Carson, the Woodland Park Zoo’s feisty red panda appeared first on The Mozilla Blog.

The Mozilla BlogFirefox IRL: Meet Amaya, Basu, Takeo and Pili, Sacramento Zoo’s fantastic four red pandas

Did you know that the red panda is also known as a firefox? Sept. 16 is International Red Panda Day, so we thought it would be a good time to visit a Firefox, ahem red panda, in real life and talk to their caretakers at zoos across the U.S. See our full series of interviews here.

Sacramento Zoo has a full house with four red pandas: Amaya, Basu, Takeo and Pili. Just like people, each red panda has a unique personality. Their caretaker, Rachel Winkler, animal care supervisor at the Sacramento Zoo, shares her stories about this foursome.

Originally a math major, Rachel volunteered at the Sacramento Zoo when she discovered how much she liked caring for animals. From there, she went on to other zoos to learn more about becoming a zookeeper. Now, she’s back at Sacramento Zoo. She tells us about the resident red pandas’ favorite snacks, sleeping spots and more. 

Tell us about the red pandas you care for.

“…we have four red pandas: one breeding pair (Amaya and Basu) and one pair of roommates (Takeo and Pili). Amaya is 6 years old and is our breeding female. She is pretty shy, only being comfortable with her regular keepers. She loves grapes and bamboo! Basu is 9 years old and is our breeding male. We call him the “golden boy” because of his yellow tinted fur, plus he’s super sweet. He loves strangers and always comes over when someone new comes to the area. Takeo is 14 years old and is the sweetest old man. Despite not having many teeth left in his old age, he loves craisins (cranberry raisins). He likes to sleep a lot, but usually loses his sleeping spot to Pili, since it’s her favorite spot too. Pili is 11 years old and is a sassy lady.  She likes to get what she wants, even if that means bumping Takeo out of somewhere he’s already asleep. Being on the older side hasn’t slowed her down, and she’s still very spunky!”

A red panda is perched on a tree. Sacramento Zoo’s 14-year-old Takeo still loves to eat craisins despite not having many teeth left. (Photo credit: Sacramento Zoo)

What’s the coolest part about working with red pandas?

“The best part about working with red pandas is knowing I get to take care of an endangered species; there are less than 10,000 red pandas left in the wild. While it’s sad to know their population is struggling in the wild, it is exciting to be privileged to work with such a rare animal, and help raise awareness about conservation efforts to guests to help them in the wild. Here at the Sacramento Zoo, we support the Red Panda Network, who works hard to protect native panda habitat in Nepal as well as conduct research studies to better understand and save the pandas. I also love working with the red pandas because they are charismatic fluff balls who love their grapes.”

A red panda sits on a wooden table with a tennis ball. Sacramento Zoo’s 9-year-old Basu is called the “golden boy” because of his yellow tinted fur. (Photo credit: Sacramento Zoo)

How did you get your start as a zookeeper?

“I started volunteering at Sacramento Zoo when I was an undergrad at UC Davis.  I was actually a math major on track to be a teacher when I realized how much I liked the animal care field and helping with conservation. Through volunteering, I was able to get enough experience to get an internship at the Oakland Zoo over the summer of my junior year of college. After graduation, through my experience at the Oakland Zoo, I applied and was hired as a paid apprentice (full-time temporary learning position) at Oakland. While at Oakland, I got to work with a variety of taxa from tiny tenrecs all the way up to giraffes! After about two years at Oakland, I was able to get a full-time permanent position at the Santa Barbara Zoo, where I worked primarily with gorillas, giraffes and meerkats, as well as being cross-trained in all other mammal areas. After about four years in Santa Barbara, I took a primary ungulate keeper position at Sacramento and moved back. After three years, I was promoted to animal care supervisor, where I oversaw the ungulate department, and recently have moved to oversee the commissary and carnivore department (which includes the red pandas!).”

A woman smiles in a selfie photo. On top is a red panda in a wooden box on a tree. Sacramento Zoo’s Rachel with red panda Takeo. (Photo credit: Sacramento Zoo)

What does your typical day look like?

“Every morning animal care staff has a morning meeting where we go over important tasks for the day, the veterinary schedule for the day and share anything within sections that the whole department should know about. From there, we gather our diets for the day from the commissary and break into the section teams for the day and hash out section goals/needs. Once we have planned out the day, we go and do life checks on the animals to make sure everyone is okay to start our day. Mornings include the majority of cleaning of animal areas and habitats and feeding out a.m. diets. The rest of the day varies based on animals or sections you’re in, but some animals get midday feeds as well as p.m. feedings. After lunch usually lends free time for projects like creating enrichment, training or habitat upkeep.  At the end of the day, making sure everyone is fed, closed into appropriate areas/given access to off-habitat areas as needed, and we’re good to go until tomorrow when we do it all again.”

A red panda is perched on a tree. Sacramento Zoo’s 11-year-old Pili is a sassy lady who likes to get what she wants. (Photo credit: Sacramento Zoo)

Thank you, Rachel, for sharing your stories about this foursome. Sounds like there are more fun adventures in store for this group.

Get Firefox

Get the browser that protects what’s important

The post Firefox IRL: Meet Amaya, Basu, Takeo and Pili, Sacramento Zoo’s fantastic four red pandas  appeared first on The Mozilla Blog.

The Mozilla BlogFirefox IRL: Meet Linda and Saffron, Idaho Falls Zoo’s mother-daughter red panda duo

Did you know that the red panda is also known as a firefox? Sept. 16 is International Red Panda Day, so we thought it would be a good time to visit a Firefox, ahem red panda, in real life and talk to their caretakers at zoos across the U.S. See our full series of interviews here.

Forget “Gilmore Girls.” The best mother-and-daughter show to watch is right at Idaho Falls Zoo. We’re talking about red panda Linda and her little one, Saffron. 

In 2021, Saffron was born at Idaho Falls Zoo and since then, she’s been at her mother’s side. Katie Barry, general curator at the zoo, tells us that Saffron recently discovered how yummy grapes taste and is always looking for treats along with her mom. We chatted with Katie, who shared a fun fact about how red pandas are able to go head first down a tree.

Tell us about the red pandas you care for.

“Linda is very much a nosey panda, wanting to be right up in your business as we clean the enclosure. She is always looking for handouts and loves bamboo and grapes. Saffron is a little more cautious and will keep her distance until the food comes out. Linda always hogs the grapes, so growing up Saffron never knew how good they were until late last year when she had her first one. Now she will be right up with Linda waiting when keepers come in with treats.”

Two red pandas are perched on tree logs. Idaho Falls Zoo mother and daughter, Linda and Saffron. (Photo credit: Idaho Falls Zoo)

What’s the coolest part about working with red pandas?

“Red pandas are very curious, smart and playful animals. Being able to work with them is just amazing. They are fun to watch moving around the enclosure, over logs, ropes, ladders and tree branches. They also have a thumb-like bone adaptation to help grip the stalks of bamboo and a cool ankle bone structure that allows them to turn their foot so they can go head first down a tree.” 

How did you get your start as a zookeeper?

“Beginning my career in zookeeping, I first obtained a biology degree with an emphasis in zoology. From there, I took the time searching for internships and other available opportunities to get work experience in the zoo field. For me, I did a yearlong program at a facility in Washington that had exotic, big cats and learned animal husbandry basics. My peers and I had classes going over USDA regulations, nutrition and diet prep to proper cleaning and care for the animals. After completing the program, I was fortunate enough to get a job at a zoo in Mississippi as a zookeeper. Through my career, I have had the privilege to work with a variety of animals, from reptiles to primates and just about everything in between. Though I did enjoy the carnivores best; the red pandas are part of the small carnivore group.”

What does your typical day look like?

 “A typical day consists of a morning meeting where as a team we discuss what is going on, any meetings, trainings, tours, etc. that may be happening that we need to know about and schedule for. From there, our top priority is to check all the animals in our areas making sure they are all in good health, followed by a.m. feedings, any medications that need to be given out and moving on to clean. Exhibits are cleaned and checked before animals are put back out for the day and once out, holding areas are cleaned and checked over as well as giving out enrichment. The day is finished up with making diets for the night and next morning followed by p.m. feedings, shifting of any animals per protocols and end-of-the-day health checks. During the middle of the day, as time allows, we do various projects around our respective areas, fill out the day’s paperwork and work with the animals on specific behaviors to help promote better veterinary care and overall animal management. While the zoo is open, the animal care staff provides educational encounters with the public throughout the day.”

For visitors to the Idaho Falls Zoo, you can meet the zookeeper at the red panda exhibit on Saturdays to learn more about the mother-daughter duo. 

Thank you, Katie, for sharing your stories about Linda and Saffron. It sounds like just the perfect show to watch this summer!

Get Firefox

Get the browser that protects what’s important

The post Firefox IRL: Meet Linda and Saffron, Idaho Falls Zoo’s mother-daughter red panda duo appeared first on The Mozilla Blog.

The Mozilla BlogFirefox IRL: Meet Deagan, Maliha and Wilson, Michigan Potter Park Zoo’s red panda family

Did you know that the red panda is also known as a firefox? Sept. 16 is International Red Panda Day, so we thought it would be a good time to visit a Firefox, ahem red panda, in real life and talk to their caretakers at zoos across the U.S. See our full series of interviews here.

This past July marked the first birthday of Potter Park Zoo’s newest red panda, Wilson. His birth last year is part of the Association of Zoos and Aquariums (AZA)’s red panda Species Survival Plan Program. To maintain genetic diversity in the red panda population, the Michigan zoo’s female red panda Maliha was paired with Deagan Reid, and together they brought Wilson into this world.

The family’s caretaker is Carolyn Schulte, carnivore and primate keeper at Potter Park Zoo. Carolyn grew up loving animals and is close to celebrating 10 years at the zoo. Carolyn tells us the advantages of having a solo cub to help care for, as well as how she and others on the team are able to deepen their relationship with the cub through close contact. 

Tell us about the red pandas you care for.

“We currently house three red pandas: Deagan Reid (dad), Maliha (mom) and Wilson (son). Deagan Reid is a great panda but prefers to do his own thing. He is willing to work with keepers for his favorite treat (pear) but more often prefers napping in a high spot in his habitat. Maliha is a very keeper-oriented panda, even when her favorite treat (grapes) is not involved, although she will never turn one down. This affinity for her caretakers means we have been able to train some helpful behaviors, including awake voluntary ultrasound, which we have used to confirm each of her three pregnancies. Wilson is our newest addition, born last July, and he has quickly become a staff favorite. Red pandas can have litters of one to four, with two being the most common, but Wilson was born as a solo cub. This means he got our undivided attention while still in the nest box, and he quickly learned that keepers usually mean snacks. He is a playful, inquisitive and intelligent panda and is a joy to work with.”

A red panda is perched on a tree log. Potter Park Zoo’s newest addition, Wilson, will be celebrating his first birthday in July. (Photo credit: Heath Thurman)

What’s the coolest part about working with red pandas?

 “Unlike most of the other animals in my department, red pandas are not protected contact, but free contact, meaning we can share space. This makes caring for them and training them much simpler and means it is easier to develop close relationships. Due to this close relationship, we also get to be more involved when cubs are born since our female Maliha allows us to check in on the cubs’ progress almost daily without getting stressed. We can also start forming relationships with the cubs at a young age while they are still in the nest box, which makes caring for them and training behaviors much easier.”

How did you get your start as a zookeeper?

“I always loved animals and knew I would work with them someday, I just didn’t know what that looked like. It wasn’t until I went to MSU and found out they had a degree in zoology with a concentration in zoo and aquarium science that I realized zookeeping was a real career pathway worth pursuing. I interned at John Ball Zoo in Grand Rapids during college and worked there as a keeper aide the summer after graduating. I heard Potter Park Zoo had a paid internship available, so I applied and got it! After the internship I applied twice to open full-time keeper positions here before being hired on. I’m now almost to my 10-year work anniversary.”

A zookeeper feeds a red panda perched on a tree. Potter Park Zoo’s zookeeper Carolyn with Wilson. (Photo credit: Heath Thurman)

What does your typical day look like?

“A typical day is busy. It begins with checking on and feeding all the animals I’m responsible for that day. The zoo is divided up into three animal departments, and each department is divided among two or three keepers on a given day. Most of the animals I work with require protected contact (humans and animals never share space), so feeding usually happens in off-exhibit spaces, giving me a chance to clean and check exhibits before sending animals back out for the day. This is also a good time to place enrichment (anything that encourages an animal to exhibit natural behaviors) in the exhibit. The middle of the day involves cleaning off-exhibit spaces and doing any projects that need to get done. The end of the day involves another round of checking on animals and feeding before leaving for the night.”

A zookeeper looks toward two red pandas perched on a tree log. Potter Park Zoo’s zookeeper Carolyn with Maliha (mom) and Wilson (baby). (Photo credit: Heath Thurman)

For visitors to the Potter Park Zoo, there are daily keeper talks, where you can learn fascinating facts and insights about the red pandas. 

Thank you, Carolyn, for sharing your photos and stories about the red panda family. We wonder how Potter Park Zoo will celebrate Wilson’s first birthday. Is the traditional smash cake for the first birthday also a thing for red pandas? 

Get Firefox

Get the browser that protects what’s important

The post Firefox IRL: Meet Deagan, Maliha and Wilson, Michigan Potter Park Zoo’s red panda family   appeared first on The Mozilla Blog.

The Mozilla BlogFirefox IRL: Meet Rose and Ruby, Zoo Atlanta’s red panda sisters

Did you know that the red panda is also known as a firefox? Sept. 16 is International Red Panda Day, so we thought it would be a good time to visit a Firefox, ahem red panda, in real life and talk to their caretakers at zoos across the U.S. See our full series of interviews here.

Do you remember the story, “Little Women”? It’s a classic tale about sisters who experience the ups and downs of life. Well, here’s another sister story to follow right at Zoo Atlanta, where a “Little Women” tale of red panda sisters is unfolding this summer.

Recently, Zoo Atlanta welcomed red panda sisters, Rose and Ruby from Zoo Knoxville. Their caretaker Michelle Elliott, Senior Keeper in Mammals at the Zoo Atlanta, shares fun facts about their red fur and her inspiration for becoming a zookeeper. 

Tell us about the red pandas you care for:

“We recently welcomed sisters Rose and Ruby to Zoo Atlanta from Zoo Knoxville, and they are still living in a special area for newly-arrived animals while their habitat receives some upgrades to make it a better fit for two red pandas (we previously had just one). While there, they are taken care of by an animal care teammate who welcomes all of the Zoo’s new animals – she gets to care for a wide variety of species! She reports that they are doing great and are really beginning to show their separate personalities. We can’t wait to meet them!”

A red panda is perched on a tree, sniffing some leaves. Zoo Atlanta’s Rose enjoying some bamboo as a snack. (Photo credit: Zoo Atlanta)

What’s the coolest part about working with red pandas?

“I love that there’s always something more to learn about them! Did you know that red pandas have red fur to blend in with a moss that grows in the trees in their native habitat, or that they don’t actually have paw pads? Their fur on the underside of their paw is just super dense. They are really neat animals!”

How did you get your start as a zookeeper?

“My inspiration to become an animal care professional came when I participated in an overnight program at Zoo Atlanta at 9 years old. I fell in love with the tigers, and when our instructor told us they were endangered, I decided to work in the conservation field so I could help save the species.”

A zookeeper smiles at the camera. Zoo Atlanta’s Michelle has been busy with the new red pandas who will soon be making their public appearance this summer. (Photo credit: Zoo Atlanta)

What does your typical day look like?

“We arrive a couple hours before the Zoo opens to give the animals their morning meals, set up the public habitats for the day with enrichment, and do positive reinforcement training sessions with the animals. Once the Zoo is open and the animals are in their habitats, we clean their behind-the-scenes areas and prepare those with fresh hay beds and even more enrichment for the evening. There are often Keeper Talks, midday feedings, behind-the-scenes tours, or meetings in the afternoon, and at the end of the day we give the animals access to come inside to eat their dinners and start making a new mess for us to clean tomorrow!”

Thank you, Michelle, for sharing your stories. We can’t wait for Rose and Ruby to meet everyone who visits them in their new habitat.

Get Firefox

Get the browser that protects what’s important

The post Firefox IRL: Meet Rose and Ruby, Zoo Atlanta’s red panda sisters  appeared first on The Mozilla Blog.

Nick FitzgeraldMy WasmCon 2023 Talk

I recently gave a talk at WasmCon 2023 about the measures we take to ensure security and correctness in Wasmtime and Cranelift. Very similar content to this blog post.

Here is the abstract:

WebAssembly programs are sandboxed and isolated from one another and from the host, so they can’t read or write external regions of memory, transfer control to arbitrary code in the process, or freely access the network and filesystem. This makes it safe to run untrusted WebAssembly programs: they cannot escape the sandbox to steal private data from elsewhere on your laptop or run a botnet on your servers. But these security properties only hold true if the WebAssembly runtime’s implementation is correct. This talk will explore the ways we are ensuring correctness in the Wasmtime WebAssembly runtime and in its compiler, Cranelift.

The slides are available here and the recording is up on YouTube, although unfortunately they missed the first 30 seconds or so:

Mozilla ThunderbirdThunderbird Podcast #4: Will The Real Mozilla Please Stand Up?

The Thunderbird logo is shown embracing a microphone. Underneath is the text "Will the real mozilla please stand up?"

The Thunderbird team is back from Mozilla’s All-Hands event, and we’re overwhelmed in the most positive way. In addition to the happy and positive vibes we’re feeling from meeting our colleagues in person for the first time, we have a lot of thoughts and impressions to share. Ryan, Jason, and Alex talk about how Mozilla is building AI tools for the good of humanity, and how our perception of AI has changed dramatically. Plus, the problem with the “hey Mozilla, just build a browser” argument.

Today’s episode is light on actual Thunderbird talk, and more focused on Mozilla as an organization and the future of AI. We hope you enjoy it as much as we enjoyed discussing it!

Subscribe To The Podcast

Have a question or comment for us? Just email podcast@thunderbird.net.

Thunderbird Podcast Episode #4 Chapters:

  • (00:00) – Intro
  • (00:28) – The All-Hands Hangover
  • (03:10) – The Thunderbird Team IRL!
  • (09:42) – moz://a beyond the products
  • (13:36) – Tackling AI & web issues
  • (19:24) – The “just build a browser” argument
  • (21:46) – Alex changes his mind about AI
  • (23:38) – Building AI tools for good
  • (33:09) – Fighting for the soul of the internet
  • (40:11) – Thunderbird and AI?
  • (47:16) – Warm fuzzy feelings
  • (52:51) – Outro

The post Thunderbird Podcast #4: Will The Real Mozilla Please Stand Up? appeared first on The Thunderbird Blog.

Mozilla Security BlogVersion 2.9 of the Mozilla Root Store Policy

Online security is constantly evolving, and thus we are excited to announce the publication of MRSP version 2.9, demonstrating that we are committed to keeping up with the advancement of the web and to furthering our commitment to a secure and trustworthy internet.

With each update to the Mozilla Root Store Policy (MRSP), we aim to address emerging challenges and enhance the integrity and reliability of our root store. Version 2.9 introduces several noteworthy changes and refinements, and within this blog post we provide an overview of key updates to the MRSP and their implications for the broader online community.

Managing the Effective Lifetimes of Root CA Certificates

One of the most crucial changes in this version of the MRSP is to limit the time that a root certificate may be in our root store. Often, a root certificate will be issued with a validity period of 25 or more years, but that is too long when one considers the rapid advances in computer processing strength. To address this concern and to make the web PKI more agile, we are implementing a schedule to remove trust bits and/or the root certificates themselves from our root store after they have been in use for more than a specified number of years.

Under the new section 7.4 of the MRSP, root certificates that are enabled with the website’s trust bit will have that bit removed when CA key material is 15 years old. Similarly, root certificates with the email trust bit will have a “Distrust for S/MIME After Date” set at 18 years from the CA’s key material generation date. A transition schedule has been established here, which phases this in for CA root certificates created before April 14, 2014. The transition schedule is subject to change if underlying algorithms become more susceptible to cryptanalytic attack or if other circumstances arise that make the schedule obsolete.

Compliance with CA/Browser Forum’s Baseline Requirements for S/MIME Certificates

The CA/Browser Forum released Baseline Requirements for S/MIME certificates (S/MIME BRs), with an effective date of September 1, 2023. Therefore, as of September 1, 2023, certificates issued for digitally signing or encrypting email messages must conform to the latest version of the S/MIME BRs, as stated in section 2.3 of the MRSP. Period-of-time audits to confirm compliance with the S/MIME BRs will be required for audit periods ending after October 30, 2023. Transition guidance is provided at the following wiki page: https://wiki.mozilla.org/CA/Transition_SMIME_BRs.

Security Incident and Vulnerability Disclosure

To enable swift response and resolution of security concerns impacting CAs, guidance for reporting security incidents and serious vulnerabilities has been added to section 2.4 of the MRSP. Additional guidance is provided in the following wiki page: https://wiki.mozilla.org/CA/Vulnerability_Disclosure.

CCADB Compliance Self-Assessment

Previously, CAs were required to perform an annual self-assessment of compliance with Mozilla’s policies and the CA/Browser Forum’s Baseline Requirements for TLS, but the MRSP did not specifically require that the annual self-assessment be submitted. Beginning in January 2024, CA operators with root certificates enabled with the website’s trust bit must perform and submit the CCADB Compliance Self-Assessment annually (within 92 calendar days from the close of their audit period). This will provide transparency into each CA’s ongoing compliance with Mozilla policies and the CA/Browser Forum’s Baseline Requirements for TLS.

Elimination of SHA-1

With the release of Firefox 52 in 2017, Mozilla removed support for SHA-1 in TLS certificates. Version 2.9 of the MRSP takes further steps to eliminate the use of SHA-1, allowing it only for end entity certificates that are completely outside the scope of the MRSP, and for specific, limited circumstances involving duplication of an existing SHA-1 intermediate CA certificate. These efforts align with industry best practices to phase out the usage of SHA-1.

Conclusion

Several of these changes will require that CAs revise their practices, so we have sent CAs a CA Communication and Survey to alert them about these changes and to inquire about their ability to comply with the new requirements by the effective dates.

These updates to the MRSP underscore Mozilla’s unwavering commitment to provide our users with a secure and trustworthy experience. We encourage your participation in the Mozilla community and the CCADB community to contribute to these efforts to provide a secure online experience for our users.

The post Version 2.9 of the Mozilla Root Store Policy appeared first on Mozilla Security Blog.

This Week In RustThis Week in Rust 512

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is irsenv, a hierarchical environment variable manager.

Thanks to sysid for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

382 pull requests were merged in the last week

Rust Compiler Performance Triage

An interesting week. We saw a massive improvement to instruction-counts across over a hundred benchmarks, thanks to #110050, an improved encoding scheme for the dependency graphs that underlie incremental compilation. However, these instruction-count improvements did not translate into direct cycle time improvements. We also saw an improvement to our artifact sizes due to #115306. Beyond that, we had a scattering of small regressions to instruction-counts that were justified because they were associated with bug fixes.

Triage done by @pnkfelix. Revision range: 15e52b05..7e0261e7

3 Regressions, 2 Improvements, 5 Mixed; 2 of them in rollups. 84 artifact comparisons made in total.

Full report

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-09-13 - 2023-10-11 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

It's very much a positive feedback loop: good tooling makes good tooling easier to build, so more of it gets built and the cycle repeats.
cargo-semver-checks stands on the shoulders of giants like rustc and rustdoc and Trustfall. Remove any one of them (or even just rustc's high-quality diagnostics!) and cargo-semver-checks wouldn't have been a viable project at all.

Predrag Gruevski on /r/rust

Thanks to Vincent de Phily for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla BlogHow to easily switch from Chrome to Firefox

There’s never been a better time to switch from Chrome to Firefox, if we do say so ourselves. 

Some of the internet’s most popular ad blockers, such as uBlock Origin — tools that save our digital sanity from video ads that auto-play, banners that take up half the screen and pop-up windows with infuriatingly tiny “x” buttons — will become less effective on Google’s web browser thanks to a set of changes in Chrome’s extensions platform.

At Mozilla, we’re all about protecting your privacy and security – all while offering add-ons and features that enhance performance and functionality so you can experience the very best of the web. We know that content blockers are very important to Firefox users, so rest assured that despite changes to Chrome’s extensions platform, we’ll continue to ensure access to the best privacy tools available – including content-blocking extensions that not only stop creepy ads from following you around the internet, but also allow for a faster and more seamless browsing experience. In addition, Firefox has recently enabled Total Cookie Protection as default for all users, making Firefox the most private and secure major browser available across Windows, Mac and Linux.

Longtime Chrome user? We know change can be hard. But we’re here to help you make the move with any data you want to bring along, including your bookmarks, saved passwords and browsing history.  

Here’s how to easily switch from Chrome to Firefox as your desktop browser in five steps:

Step 1: Download and install Firefox from Mozilla’s download page

Step 2: If you have Chrome running, quit the app. But don’t delete it just yet.

Step 3: Open Firefox. The import tool should pop up. 

In case it doesn’t, click the menu button > Settings, then near “Import Browser Data,” click “Import Data.”

Step 4: Select what you want to import. If you have more than one type of data saved in Chrome, you can expand the “Import all available data” section to choose what information you’d like to import to Firefox:

  • Saved Passwords: Usernames and passwords you saved in Chrome. Here’s why you can trust Firefox with your passwords.
  • Bookmarks: Web pages that you bookmarked in Chrome. 
  • Browsing History: A list of web pages you’ve visited. If there’s an article you didn’t quite finish last week, bring over your browsing history so you can find it later.
  • Saved Payment Methods: When you’re ordering something online, web forms can be populated with credit card information you’ve saved (except your CVV number, as a precaution). On Firefox, you can also use a password as an extra layer of protection for your credit card data.

We may be a little biased, but we truly believe that Mozilla’s commitment to privacy helps make the internet better and safer for everyone. We wouldn’t be able to do it without the support of our community of Firefox users, so we’d love for you to join us.  

Related stories:

The post How to easily switch from Chrome to Firefox appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird Community Office Hours Schedule For September 2023

The thunderbird logo (a blue elemental bird curled up and protecting an inner icon shaped like a heart, formed from two hands clasping together)

Hello Thunderbird community! We’re bringing back monthly Office Hours, now with a Saturday option to make attendance more convenient. Please see the details below to learn how and when you can meet with us to share your feedback and ask questions.

Now that Thunderbird 115 Supernova has been released, we have a lot to discuss, plan, and do! And we’re rolling out monthly Office Hours sessions so that you can:

  • Share your Thunderbird experiences with us
  • Share your ideas for the future of Thunderbird
  • Discuss ways to get involved with Thunderbird
  • Ask us questions about Thunderbird and the Thunderbird Project
  • Meet new team members, and meet your fellow Thunderbird users

This month we’re hosting these sessions using Zoom (in the future we plan to stand up a dedicated Jitsi instance). You can easily participate using video, dialing in by phone, or asking questions in our community Matrix channel at #thunderbird:mozilla.org.

Session 1: Monday, Sept 11, 10h-11h UTC (time conversion, Zoom link)
Session 2: Monday, Sept 11, 17h-18h UTC (time conversion, Zoom link)
Session 3: Saturday, Sept 16, 17h-18h UTC (time conversion, Zoom link)

Location               Session 1        Session 2        Session 3
Los Angeles, USA       Mon 3am-4am      Mon 10am-12pm    Sat 10am-12pm
New York, USA          Mon 6am-7am      Mon 1pm-2pm      Sat 1pm-2pm
São Paulo, Brazil      Mon 07h-08h      Mon 14h-15h      Sat 14h-15h
Berlin, Germany        Mon 12h-13h      Mon 19h-20h      Sat 19h-20h
Tokyo, Japan           Mon 19h-20h      Tue 02h-03h      Sun 02h-03h
Canberra, Australia    Mon 8pm-9pm      Tue 3am-4am      Sun 3am-4am
Auckland, NZ           Mon 10pm-11pm    Tue 5am-6am      Sun 5am-6am

In the table above, click a session’s “time conversion” link to see the meeting time converted to your local time zone. If you encounter difficulty joining, please post in the Matrix back channel #thunderbird:mozilla.org or email us.

Hosts Wayne (Thunderbird Community Manager) and Jason (Marketing & Communications Manager), plus special guests from the Thunderbird team look forward to meeting you! 

If you are unable to attend we hope you will submit your ideas or ask for assistance.  

PLEASE NOTE: We’ll be recording this call for internal use and distribution only. In the future, we may explore publishing these on video platforms.


How To Join Community Office Hours By Phone

Meeting ID: 981 2417 3850
Password: 647025

Dial by your location
+1 646 518 9805 US (New York)
+1 669 219 2599 US (San Jose)
+1 647 558 0588 Canada
+33 1 7095 0103 France
+49 69 7104 9922 Germany
+44 330 088 5830 United Kingdom
Find your local number: https://mozilla.zoom.us/u/adPpRGsVms

The post Thunderbird Community Office Hours Schedule For September 2023 appeared first on The Thunderbird Blog.

Karl DubostMolly

Molly passed away at 60.

Portrait of Molly Holzschlag

A blog post of this nature is never easy. She was so much larger than life that she left a mark on each of us who discovered the Web early on.

My first discovery of Molly Holzschlag was through the WebTechniques magazine, published from 1996 to 2002. This was a real magazine about the Web. You would recognize early writers like Laura Lemay, Lynda Weinman, etc. She had a column there called Integrated Design, which she started writing in Web Techniques in September 1999.

Style sheets are one of the most effective innovations for designers; they make it easier to manage style elements via a single, linked sheet. When the time comes for a minor style update, such as a change in the color of article headers, the change can be implemented throughout the site in a few seconds.

Then she was part of the WaSP (Web Standards Project), together with Jeffrey Zeldman and others, an effort to bring interoperability between browsers and to educate Web developers on using Web Standards.

A capture of Molly's site in 2000.

I was hired at W3C in July 2000 to work on improving the quality of W3C specifications. And I had a long time interest in also improving the knowledge of W3C specs for the French community.

In May 2002, I attended the World Wide Web Conference in Hawaii and met Molly there for the first time. We had a long discussion (sitting on chairs near the pool) about evangelization efforts, the W3C, WaSP and the role that each could play. Together with Olivier Théreaux (working at W3C at that time), we also had a strong desire for a better relationship between the W3C and the Web developer community, and were discussing similar efforts.

We created the public-evangelist mailing-list at W3C. She introduced herself on the list.

Then I met her regularly throughout my professional career, at conferences and meetings, and we even worked together on the Opera Developer Relations Team in 2011.

Today on Mastodon and blogs, you will see a lot of messages about how important she has been in the lives of people from many continents and many places, people who cherish the Web profession as a craft.

Thanks for all the magical memories and your unconditional love for the Web.

Otsukare!

This Week In RustThis Week in Rust 511

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is str0m, a synchronous sans-IO WebRTC implementation.

Thanks to Hugo Tunius for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

357 pull requests were merged in the last week

Rust Compiler Performance Triage

A lot of spurious noise this week from a few benchmarks (bitmaps-3.1.0, libc, and even cargo-0.60.0 at a few points). Beyond that, we had a few small improvements associated with the trait system and with parallel-rustc.

Triage done by @pnkfelix. Revision range: cedbe5c7..15e52b05

4 Regressions, 7 Improvements, 8 Mixed; 2 of them in rollups. 66 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-09-06 - 2023-10-04 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Rust's standard library, and a lot of the popular crates, are like a museum. While it does change, as new exhibitions are added, it is mostly finished. Each painting has a detailed explanation in 7 different languages underneath. Descriptions below each exhibit are written beautifully, with detailed drawings, showing how everything works. It is so easy to navigate, one glance at the map is enough to find exactly what you are looking for. It is so convenient, you almost don't notice that you are learning something.

Internals of rustc are like a build site of a sprawling factory. You can see the scaffolds everywhere, as more production lines come online, and everything gets faster, better, bigger. Workers move around, knowing the place like the back of their hands. They can glance at the signs on the walls, and instantly tell you: where you are, what this place does and what pitfalls you should avoid. And you are a new hire who has just come in for his first day at the new job. You look at the sign, and after some thinking, you too are able to tell roughly in which building you are. The signs almost always tell you what you need, just in short, cryptic sentences. You can always tell what is going on, with some thinking, but it is not effortless. The signs on the walls are not bad, just not written for anyone to get right away.

FractalFir on their blog

Thanks to Alona Enraght-Moony for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Hacks.Mozilla.OrgFaster Vue.js Execution in Firefox

Speedometer 3 is a cross-industry effort to build a modern browser benchmark rooted in real-world user experiences. Its goal is to focus browser engineering effort towards making the Web smoother for actual users on actual pages. This is hard to do and most browser benchmarks don’t do it well, but we see it as a unique opportunity to improve responsiveness broadly across the Web.

This requires a deliberate analysis of the ecosystem — starting with real user experiences and identifying the essential technical elements underlying them. We built several new tests from scratch, and also updated some existing tests from Speedometer 2 to use more modern versions of widely-used JavaScript frameworks.

When the Vue.js test was updated from Vue 2 to Vue 3, we noticed some performance issues in Firefox. The root of the problem was Proxy object usage that was introduced in Vue 3.

Proxies are hard to optimize because they’re generic by design and can behave in all sorts of surprising ways (e.g., modifying trap functions after initialization, or wrapping a Proxy with another Proxy). They also weren’t used much on the performance-critical path when they were introduced, so we focused primarily on correctness in the original implementation.

Speedometer 3 developed evidence that some Proxies today are well-behaved, critical-path, and widely-used. So we optimized these to execute completely in the JIT — specializing for the shapes of the Proxies encountered at the call-site and avoiding redundant work. This makes reactivity in Vue.js significantly faster, and we also anticipate improvements on other workloads.

This change landed in Firefox 118, so it’s currently on Beta and will ride along to Release by the end of September.

Over the course of the year Firefox has improved by around 40% on the Vue.js benchmark from work like this. More importantly, and as we hoped, we’re observing real user metric improvements across all page loads in Firefox as we optimize Speedometer 3. We’ll share more details about this in a subsequent post.

The post Faster Vue.js Execution in Firefox appeared first on Mozilla Hacks - the Web developer blog.

Wladimir PalantA year after the disastrous breach, LastPass has not improved

In September last year, a breach at LastPass’ parent company GoTo (formerly LogMeIn) culminated in attackers siphoning out all data from their servers. The criticism from the security community has been massive. This was not so much because of the breach itself, such things happen, but because of the many obvious ways in which LastPass made matters worse: taking months to notify users, failing to provide useful mitigation instructions, downplaying the severity of the attack, ignoring technical issues which have been publicized years ago and made the attackers’ job much easier. The list goes on.

Now this has been almost a year ago. LastPass promised to improve, both as far as their communication goes and on the technical side of things. So let’s take a look at whether they managed to deliver.

TL;DR: They didn’t. So far I failed to find evidence of any improvements whatsoever.

Update (2023-09-26): It looks like at least the issues listed under “Secure settings” are finally going to be addressed.

A very battered ship with torn sails in a stormy sea, on its side the ship’s name: LastPass

The communication

The initial advisory

LastPass’ initial communication around the breach has been nothing short of a disaster. It happened more than three months after the users’ data was extracted from LastPass servers. Yet rather than taking responsibility and helping affected users, their PR statement was designed to downplay and to shift blame. For example, it talked a lot about LastPass’ secure default settings but failed to mention that LastPass never really enforced those. In fact, people who created their accounts a while ago and used very outdated (insecure) settings never saw as much as a warning.

The statement concluded with “There are no recommended actions that you need to take at this time.” I called this phrase “gross negligence” back when I initially wrote about it, and I still stand by this assessment.

The detailed advisory

It took LastPass another two months of strict radio silence to publish a more detailed advisory. That’s where we finally learned some more about the breach. We also learned that business customers using Federated Login are very much affected by the breach, the previous advisory explicitly denied that.

But even now, we learn that indirectly, in recommendation 9 out of 10 for LastPass’ business customers. It seems that LastPass considered generic stuff like advice on protecting against phishing attacks more important than mitigation of their breach. And then the recommendation didn’t actually say “You are in danger. Rotate K2 ASAP.” Instead, it said “If, based on your security posture or risk tolerance, you decide to rotate the K1 and K2 split knowledge components…” That’s the conclusion of a large pile of text essentially claiming that there is no risk.

At least the advisory for individual users got the priorities right. It was master password first, iterations count after that, and all the generic advice at the end.

Except: they still failed to admit the scope of the breach. The advice was:

Depending on the length and complexity of your master password and iteration count setting, you may want to reset your master password.

And this is just wrong. The breach already happened. Resetting the master password will help protect against future breaches, but it won’t help with the passwords already compromised. This advice should have really been:

Depending on the length and complexity of your master password and iteration count setting, you may want to reset all your passwords.

But this would amount to saying “we screwed up big time.” Which they definitely did. But they still wouldn’t admit it.

Improvements?

A blog post by the LastPass CEO Karin Toubba said:

I acknowledge our customers’ frustration with our inability to communicate more immediately, more clearly, and more comprehensively throughout this event. I accept the criticism and take full responsibility. We have learned a great deal and are committed to communicating more effectively going forward.

As I’ve outlined above, the detailed advisory published simultaneously with this blog post still left a lot to be desired. But this sounds like a commitment to improve. So maybe some better advice has been published in the six months which passed since then?

No, this doesn’t appear to be the case. Instead, the detailed advisory moved to the “Get Started – About LastPass” section of their support page. So it’s now considered generic advice for LastPass users. Any specific advice on mitigating the fallout of the breach, assuming that it isn’t too late already? There doesn’t seem to be any.

The LastPass blog has been publishing lots of articles again, often multiple per week. There doesn’t appear to be any useful information at all here however, only PR. To add insult to injury, LastPass published an article in July titled “How Zero Knowledge Keeps Passwords Safe.” It gives a generic overview of zero knowledge which largely doesn’t apply to LastPass. It concludes with:

For example, zero-knowledge means that no one has access to your master password for LastPass or the data stored in your LastPass vault, except you (not even LastPass).

This is bullshit. That’s not how LastPass has been designed, and I wrote about it five years ago. Other people did as well. LastPass didn’t care, otherwise this breach wouldn’t have been such a disaster.

Secure settings

The issue

LastPass likes to boast how their default settings are perfectly secure. But even assuming that this is true, what about the people who do not use their defaults? For example the people who created their LastPass account a long time ago, back when the defaults were different?

The iterations count is particularly important. Few people have heard about it, it being hidden under “Advanced Settings.” Yet when someone tries to decrypt your passwords, this value is an extremely important factor. A high value makes successful decryption much less likely.

As of 2023, the current default value is 600,000 iterations. Before the breach the default used to be 100,100 iterations, making decryption of passwords six times faster. And before 2018 it was 5,000 iterations. Before 2013 it was 500. And before 2012 the default was 1 iteration.
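
To make concrete what the iteration count buys you, here is a minimal Python sketch of LastPass-style key derivation (PBKDF2-HMAC-SHA256, with the account email as salt; treat those parameters as assumptions for illustration). The point is simply that every password guess against the stolen vaults costs one key derivation, so the configured count multiplies the attacker's work directly:

import hashlib, time

def derive_key(master_password, email, iterations):
    # PBKDF2-HMAC-SHA256; the account email serves as the salt in this
    # sketch (an assumption made for illustration purposes).
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(),
                               email.encode(), iterations)

for iterations in (1, 5000, 100100, 600000):
    start = time.perf_counter()
    derive_key("correct horse battery staple", "user@example.com", iterations)
    print(f"{iterations:>7} iterations: {time.perf_counter() - start:.4f}s per guess")

Whatever the absolute timings on a given machine, an account left at 1 iteration hands an attacker a 600,000-fold discount compared to the current default.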

What happened to all the accounts which were created with the old defaults? It appears that for most of these LastPass failed to fix the settings automatically. People didn’t even receive a warning. So when the breach happened, quite a few users reported having their account configured with 1 iteration, massively weakening the protection provided by the encryption.

It’s the same with the master password. In 2018 LastPass introduced much stricter master password rules, requiring at least 12 characters. While I don’t consider length-based criteria very effective to guide users towards secure passwords, LastPass failed to enforce even this rule for existing accounts. Quite a few people first learned about the new password complexity requirement when reading about the breach.

Improvements?

I originally asked LastPass about enforcing a secure iterations count setting for existing accounts in February 2018. LastPass kept stalling until I published my research without making certain that all users are secure. And they ignored this issue another four years until the breach happened.

And while the breach prompted LastPass to increase the default iterations count, they appear to be still ignoring existing accounts. I just logged into my test account and checked the settings:

Screenshot of LastPass settings. “Password Iterations” setting is set to 5000.

There is no warning whatsoever. Only if I try to change this setting, a message pops up:

For your security, your master password iteration value must meet the LastPass minimum requirement: 600000

But people who are unaware of this setting will not be warned. And while LastPass definitely could update this setting automatically when people log in, they still choose not to do it for some reason.

It’s the same with the master password. The password of my test account is weak because this account has been created years ago. If I try to change it, I will be forced to choose a password that is at least 12 characters long. But as long as I just keep using the same password, LastPass won’t force me to change it – even though it definitely could.

There isn’t even a definitive warning when I log in. There is only this notification in the menu:

Screenshot of the LastPass menu. Security Dashboard has a red dot on its icon.

Only after clicking “Security Dashboard” will a warning message show up:

Screenshot of a LastPass message titled “Master password alert.” The message text says: “Master password strength: Weak (50%). For your protection, change your master password immediately.” Below it a red button titled “Change password.”

If this is such a critical issue that I need to change my master password immediately, why won’t LastPass just tell me to do it when I log in?

This alert message apparently pre-dates the breach, so there don’t seem to be any improvements in this area either.

Update (2023-09-26): Last week LastPass sent out an email to all users:

New master password requirements. LastPass is changing master password requirements for all users: all master passwords must meet a 12-character minimum. If your master password is less than 12-characters, you will be required to update it.

According to this email, LastPass will start enforcing stronger master passwords at some unspecified point in future. Currently, this requirement is still not being enforced, and the email does not list a timeline for this change.

More importantly, when I logged into my LastPass account after receiving this email, the iterations count finally got automatically updated to 600,000. The email does not mention any changes in this area, so it’s unclear whether this change is being applied to all LastPass accounts this time.

Brian Krebs is quoting LastPass CEO with the statement: “We have been able to determine that a small percentage of customers have items in their vaults that are corrupt and when we previously utilized automated scripts designed to re-encrypt vaults when the master password or iteration count is changed, they did not complete.” Quite frankly, I find this explanation rather doubtful.

First of all, reactions to my articles indicate that the percentage of old LastPass accounts which weren’t updated is far from small. There are lots of users finding an outdated iterations count configured in their accounts, yet only two reported their accounts being automatically updated so far.

Second: my test account in particular is unlikely to contain “corrupted items” which previously prevented the update. Back in 2018 I changed the iterations count to 100,000 and back to 5,000 manually. This worked correctly, so no corruption was present at that point. The account was virtually unused after that except for occasional logins, no data changes.

Unencrypted data

The issue

LastPass PR likes to use “secure vault” as a description of LastPass data storage. This implies that all data is secured (encrypted) and cannot be retrieved without the knowledge of the master password. But that’s not the case with LastPass.

LastPass encrypts passwords, user names and a few other pieces of data. Everything else is unencrypted, in particular website addresses and metadata. That’s a very bad idea, as security researchers kept pointing out again and again. In November 2015 (page 67). In January 2017. In July 2018. And there are probably more.

LastPass kept ignoring this issue. So when last year their data leaked, the attackers gained not only encrypted passwords but also plenty of plaintext data. Which LastPass was forced to admit but once again downplayed by claiming website addresses not to be sensitive data. And users were rightfully outraged.

Improvements?

Today I logged into my LastPass account and then opened https://lastpass.com/getaccts.php. This gives you the XML representation of your LastPass data as it is stored on the server. And I fail to see any improvements compared to this data one year ago. I gave LastPass the benefit of the doubt and created a new account record. Still:

<account url="68747470733a2f2f746573742e6465" last_modified="1693940903" >

The data in the url field is merely hex-encoded which can be easily translated back into https://test.de. And the last_modified field is a Unix timestamp, no encryption here either.
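
To make it obvious how little protection this “encoding” provides, here is all it takes to read that record back, in plain Python, with no key material involved:

from datetime import datetime, timezone

url = bytes.fromhex("68747470733a2f2f746573742e6465").decode()  # 'https://test.de'
modified = datetime.fromtimestamp(1693940903, tz=timezone.utc)  # plain Unix timestamp
print(url, modified.isoformat())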

Conclusions

A year after the breach, LastPass still hasn’t provided their customers with useful instructions on mitigating the breach, nor has it admitted the full scope of the breach. They also failed to address any of the long-standing issues that security researchers have been warning about for years. At the time of writing, owners of older LastPass accounts still have to become active on their own in order to fix outdated security settings. And much of LastPass data isn’t being encrypted.

I honestly cannot explain LastPass’ refusal to fix the security settings of existing accounts. Back when I was nagging them about it, they straight out lied to me. Do they not have any senior engineers on staff who could implement this change? Do they really not care as long as they can blame the users for not updating their settings? Beats me.

As to not encrypting all the data, I am starting to suspect that LastPass actually wants visibility into your data. Do they need to know which websites you have accounts on in order to guide some business decisions? Or are they making additional income by selling this data? I don’t know, but LastPass persistently ignoring this issue makes me suspicious.

Either way, it seems that LastPass considers the matter of their breach closed. They published their advisory in March this year, and that’s it. Supposedly, they improved the security of their infrastructure, which nobody can verify of course. There is nothing else coming, no more communication and no technical improvements. Now they will only be publishing more lies about “zero knowledge architecture.”

The Talospace ProjectFirefox 117 on POWER

Now that the Talos II is upgraded and tuned up, it's back to development work, starting with (after a TenFourFox patch dump) Firefox 117. Maybe it's just me, but it seems subjectively zippier than 116, even accounting for the cruft that builds up during long browser sessions, and there are some notable developer-facing improvements. As usual, for the long-playing bug 1775202, either put --disable-webrtc in your .mozconfig if you don't need WebRTC, or tweak third_party/libwebrtc/moz.build with the patch from Fx116. The browser otherwise builds and works with a tweaked PGO-LTO patch and the .mozconfigs from Firefox 105.

Cameron KaiserAugust patch set for TenFourFox

The next patch set has landed, bringing the TenFourFox security base up to 115ESR. This includes the usual new certificate roots and updates to pins, HSTS and TLDs, as well as applicable security updates such as a full pull-up to the browser's SCTP support (not that this is frequently used in TenFourFox but rather to make future patches a little more tractable). On the bug fix side there is an update to the ATSUI font blocklist (thanks Chris T) and a wallpaper for a JavaScript-related crash on apple.com (thanks roytam1). Finally, basic adblock has been made stricter and is now also targeting invasive fingerprinting scripts. This adds a bit more overhead to checking the origin but that all runs at native C++ speed, and ensures we're less likely to get bogged down running JavaScript that we'd really rather not.

As this is a base pullup, building this time around will require a full clobber, so be sure to clear out everything before you begin.

For our next set, I'm thinking of an update to Reader Mode, since I firmly believe that's one of the most useful modes to run TenFourFox in on limited Power Mac hardware. That's why we made it sticky and provided a way to automatically open it by site (under Preferences, TenFourFox) — on resource-limited systems a resource-light view of a resource-heavy page is pretty much the way to go. And isn't everything resource-heavy to a Power Mac?

Mozilla Localization (L10N)Localizer Spotlight: Victor Ibragimov (Tajik locale)

Hello World!

My name is Victor Ibragimov, and I am from Dushanbe, Tajikistan (one of the five Central Asian countries).

On September 3, 2023, I celebrate my third year as a member of the Mozilla community, starting from September 3, 2020!

Q. What first drew you to want to volunteer with Mozilla’s localization program?

I have been volunteering as a professional translator and coordinator of English to Tajik translations for over 20 years. Throughout my career, I have worked on numerous software localization projects, including Debian OS, Ubuntu OS, Fedora OS, openSuse OS, SailfishOS, KDE, Gnome Desktops, and many other fantastic software and platforms.

Around three years ago, I discovered that all these computer operating systems and desktops used Firefox web browser by default. However, I noticed that Firefox did not have Tajik language support. Determined to address this gap, I reached out to the maintainers of these projects. They informed me that Firefox is a separate project and advised me to contact the Mozilla team directly to initiate the localization of Tajik language.

With my extensive experience in translation and coordination, I was determined to contribute to the completion of a high-quality Tajik translation. This commitment was driven by my desire to enhance the usability of Mozilla products for Tajik-speaking users and to foster inclusion in the global tech community.

Q. What have been some of the most rewarding or impactful projects you’ve localized for Mozilla?

Some of the most rewarding and impactful projects I have localized for Mozilla include the translation of Firefox web browser into Tajik language.

Additionally, I have worked on localizing Mozilla’s mobile projects, such as Firefox for Android and Focus for Android. These projects have allowed Tajik-speaking users to have a seamless browsing experience on their mobile devices and maintain their privacy with the Focus app. This has had a positive impact on the accessibility of technology for Tajik-speaking individuals and has empowered them to fully utilize Mozilla’s mobile products.

Overall, these localization projects have been rewarding and impactful as they have contributed to breaking down language barriers, fostering inclusion, and empowering Tajik-speaking users to access and utilize Mozilla’s products effectively across various platforms.

Q. What advice would you give to someone new wanting to get involved in localizing for Mozilla?

1. Start by familiarizing yourself with the Mozilla community and the localization process. Visit the Mozilla website and explore the resources and documentation available for translators. Join relevant forums or mailing lists to connect with other translators and learn from their experiences.
2. Choose a project or software that you are passionate about and that aligns with your language expertise. It could be Firefox, Thunderbird, or any other Mozilla project. By working on something you are interested in, you will stay motivated and enjoy the process of localization.
3. Take advantage of the available tools and resources. Mozilla provides various tools and platforms to facilitate the localization process, such as Pontoon and Transvision. Familiarize yourself with these tools and use them to contribute effectively.
4. Collaborate and communicate with other translators. Localization is a collaborative effort, so it’s important to engage with other translators, ask questions, and seek feedback. Participate in discussions and share your knowledge and experiences with the community.
5. Be proactive and take initiative. Look for opportunities to contribute beyond just translating strings. Offer to review translations, suggest improvements, or help with testing and bug reporting. This will not only enhance your skills but also make you a valuable member of the localization team.
6. Stay updated with the latest developments in your language and the software you are localizing. Attend conferences, workshops, or webinars related to localization or technology to stay informed about new trends and best practices.
7. Seek feedback and continuously improve your translations. Localization is an ongoing process, and there is always room for improvement. Actively seek feedback from users, fellow translators, and project maintainers to refine your translations and ensure they are accurate, clear, and culturally appropriate. Embrace feedback as an opportunity for growth and strive to deliver high-quality localized content.
8. Stay connected with the Mozilla community and stay up to date with changes and updates. Join relevant mailing lists or forums to stay informed about new projects, updates, and announcements. Regularly check the Mozilla website and other official channels for any news or changes that may impact your localization work. By staying connected, you can actively contribute to the community and ensure your translations are up to date with the latest developments.
9. Be patient and persistent. Localization can be challenging at times, especially when dealing with technical terms or complex strings. Don’t get discouraged if you face difficulties initially. Keep practicing, learning, and improving your skills.
10. Lastly, enjoy the process and have fun! Localizing for Mozilla is not just about contributing to a global project, but also about preserving and promoting your language globally. Embrace the opportunity to make a positive impact and connect with your language community.

Remember, whether you are a newcomer or an experienced translator, your contribution to localizing Mozilla projects can have a significant impact. So, take the leap and start making a difference in your language community and beyond.

Q. How has your volunteering impacted users in your language community?

As a Mozilla volunteer, my contributions to Tajik translation have had a significant impact on Mozilla users in the Tajik language community. By ensuring that Firefox web browser is fully localized and accessible in Tajik, I have helped to make it easier for Tajik internet users to navigate and use the browser in their native language. This has not only improved their overall browsing experience but also promoted the importance of using Tajik as a language of technology and digital communication.

Furthermore, by incorporating the new Tajik language reforms into translations, I have played a role in making the Tajik language clearer and more beautiful. This has not only enhanced the user experience for Tajik-speaking Mozilla users but has also contributed to the development and preservation of the language itself.

In addition, my involvement in creating new Internet terminology for the Tajik language has been instrumental in bridging the linguistic gap between technology and the Tajik-speaking community. This has allowed for the development of e-government, e-commerce, and e-education platforms in Tajikistan, as well as empowering Tajik internet users to fully utilize the potential of the internet in their daily lives.

Moreover, the opportunity to create multilingual dictionaries with Tajik language has further enriched the linguistic resources available to Tajik speakers. This has not only facilitated effective communication but has also fostered a sense of pride and ownership over their language.

Interested in featuring in these spotlights? Or know someone you think we should interview? Fill out this form, or reach out directly to delphine at mozilla dot com. Interested in contributing to localization? Head on over here for more details!

Useful Links

Karl DubostThe lucky day of me falling hard professionally

Let me tell you a story…

Pawn of lucky cat (manekineko) in the air with my reflection in the window

In my first professional year (~1995/1996) as a ~~web developer~~, well, webmaster at that time, I was working on the BNP website. Yes, the BNP French bank website, except at that time it was only a couple of hundred static web pages (maybe around 300).

The client asked us to fix the footer on all these html files. The job was assigned to me.

I opened the local website FTP folder with all the files in it. I started to look at the HTML and noticed a simple search and replace would not do it.

Let's use Regex for parsing/fixing HTML. Haha.

Very proud of my regex, I executed it on all files. Sure I had done a good job! I dropped the folder in the FTP application and all files on the live site were replaced.

Oooops!

I had made a mistake in my regex: it replaced every character in the HTML with that character plus a space. The site was now displaying the raw ASCII characters.

< H T M L >
< H E A D >
< T I T L E > B N P < / T I T L E >

and so on. The roughly 300 files. I had no backup. Cold sweat.

We were around 6 or 7 people working in this Web agency as webmasters. I asked everyone

  1. to stop everything they were doing.
  2. to not visit the BNP website.
  3. to save locally every cached file from their browser history and/or their local backup if they had one, and send them to me.

Within a couple of hours, we were able to recreate the site, with a bit of manual work too.

That was one of my most formative mistakes at the beginning of the Web.

  1. Work on a backup.
  2. Do not synchronize before checking locally.
  3. Install a local web server on your computer.
  4. Browser caching is cool!
  5. Teams are awesome. You are not alone. You can reach out for help.
  6. Create a better team process for working on sites.
  7. Do not use Regex for parsing HTML

Mistakes will happen. Learn from them.

Otsukare!

Firefox Developer ExperienceFirefox WebDriver Newsletter — 117

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 117 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

No new contributions to mention for release 117, but we already have a few planned for release 118. It’s really easy to get started!

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette and geckodriver.

WebDriver BiDi

Firefox 117 comes with a lot of new WebDriver BiDi features. We made several improvements to existing commands and events, but this release also contains several new commands and events.

New: “browser.close” command

Clients using only WebDriver BiDi can now cleanly close the browser and terminate all WebDriver sessions by using the browser.close command. It will return before closing the WebSocket connection, so it can be used as any other command. browser.close takes no argument and has no return value.

New: “browsingContext.setViewport” command

The browsingContext.setViewport command allows users to change the characteristics of the viewport of a specific browsing context. The command supports a viewport parameter with width and height properties, which will set the dimensions of the viewport. It can be useful to emulate narrow viewports or test responsive design.

Viewport resized to 480x640 using browsingContext.setViewport

To restore the viewport afterwards, simply call the command without passing the viewport parameter and the default dimensions will be re-applied.
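
If you want to try this without a client library, the sketch below sends the command over a raw WebDriver BiDi WebSocket from Python, using the third-party websockets package. The endpoint URL and the browsing context id are placeholders you would obtain from your own session, and the minimal send helper ignores interleaved events, so treat it as an illustration rather than a reference client:

import asyncio, itertools, json
import websockets  # pip install websockets

async def main():
    ids = itertools.count(1)
    # Placeholder endpoint: use the WebSocket URL of your own BiDi session.
    async with websockets.connect("ws://localhost:9222/session") as ws:
        async def send(method, params):
            await ws.send(json.dumps({"id": next(ids), "method": method,
                                      "params": params}))
            return json.loads(await ws.recv())  # naive: assumes the next message is the reply

        context = "<browsing context id>"
        # Emulate a narrow viewport...
        await send("browsingContext.setViewport",
                   {"context": context,
                    "viewport": {"width": 480, "height": 640}})
        # ...then restore the default dimensions by omitting the viewport parameter.
        await send("browsingContext.setViewport", {"context": context})

asyncio.run(main())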

New: “browsingContext.fragmentNavigated” event

browsingContext.fragmentNavigated is a new event which will be emitted for a same-document navigation, for instance when navigating to an anchor of the current document. The payload of this event is a browsingContext.NavigationInfo, similar to the existing browsingContext events load and domContentLoaded.

Support for “background” argument in browsingContext.create

A new argument background is now supported for browsingContext.create. When background is set to true, the new context (tab or window) will be opened in the background. This argument is optional and defaults to false.

More importantly, this means browsingContext.create now opens contexts in the foreground by default, whereas they were always in the background before. This should make it easier to handle new tabs and windows, until we support browsingContext.activate to change the selected context dynamically (which is coming soon).

Support for “clip” argument in browsingContext.captureScreenshot

The clip argument for browsingContext.captureScreenshot allows restricting the screenshot to a specific area or to a specific element. clip can either be a browsingContext.BoxClipRectangle or a browsingContext.ElementClipRectangle. The BoxClipRectangle expects viewport coordinates (x, y) and dimensions (width, height). ElementClipRectangle expects at least an element property, and you can also provide a scrollIntoView boolean to make sure the element is visible before capturing the screenshot. This can be especially useful for component visual regression testing.

Screenshot made using the clip argument of captureScreenshot

Support for navigation id in several commands and events

Starting with Firefox 117, all events and commands which relate to a navigation should now provide a non-null navigation id. This id is a UUID which identifies a specific page navigation and will help to identify events and commands linked to a single navigation. The navigation property is now available in the response of the browsingContext.navigate command, as well as in the payload of the following events: browsingContext.domContentLoaded, browsingContext.load, browsingContext.fragmentNavigated, network.beforeRequestSent, network.responseStarted, network.responseCompleted. Note that for network events, the navigation property will only be set for the initial request of a navigation, not for subsequent requests triggered by the page load.

Bug fixes

Firefox 117 also comes with a few bug fixes, including:

Marionette (WebDriver classic)

There are no updates for Marionette and WebDriver Classic in Firefox 117.

Mike HommeyHacking the ELF format for Firefox, 12 years later ; doing better with less

(I haven't posted a lot in the past couple years, except for git-cinnabar announcements. This is going to be a long one, hold tight)

This is quite the cryptic title, isn't it? What is this all about? ELF (Executable and Linkable Format) is a file format used for binary files (e.g. executables, shared libraries, object files, and even core dumps) on some Unix systems (Linux, Solaris, BSD, etc.). A little over 12 years ago, I wrote a blog post about improving libxul startup I/O by hacking the ELF format. For context, libxul is the shared library, shipped with Firefox, that contains most of its code.

Let me spare you the read. Back then I was looking at I/O patterns during Firefox startup on Linux, and sought ways to reduce disk seeks that were related to loading libxul. One particular pattern was caused by relocations, and the way we alleviated it was through elfhack.

Relocations are necessary in order for executables to work when they are loaded in memory at a location that is not always the same (because of e.g. ASLR). Applying them requires reading the section containing the relocations, and adjusting the pieces of code or data that are described by the relocations. When the relocation section is very large (and that was the case on libxul back then, and more so now), that means going back and forth (via disk seeks) between the relocation section and the pieces to adjust.

Elfhack to the rescue

Shortly after the aforementioned blog post, the elfhack tool was born and made its way into the Firefox code base.

The main idea behind elfhack was to reduce the size of the relocation section. How? By storing it in a more compact form. But how? By taking the executable apart, rewriting its relocation section, injecting code to apply those relocations, moving sections around, and adjusting the ELF program header, section header, section table, and string table accordingly. I will spare you the gory details (especially the part about splitting segments or the hack to use .bss section as a temporary Global Offset Table). Elfhack itself is essentially a minimalist linker that works on already linked executables. That has caused us a number of issues over the years (and much more). In fact, it's known not to work on binaries created with lld (the linker from the LLVM project) because the way lld lays things out does not provide space for the tricks we pull (although it seems to be working with the latest version of lld. But who knows what will happen with next version).

Hindsight is 20/20, and if I were to redo it, I'd take a different route. Wait, I'm actually kind of doing that! But let me fill you in with what happened in the past 12 years, first.

Android packed relocations

In 2014, Chrome started using a similar-ish approach for Android on ARM with an even more compact format, compared to the crude packing elfhack was doing. Instead of injecting initialization code in the executable, it would use a custom dynamic loader/linker to handle the packed relocations (that loader/linker was forked from the one in the Android NDK, which solved similar problems to what our own custom linker had, but that's another story).

That approach eventually made its way into Android itself, in 2015, with support from the dynamic loader in bionic (the Android libc), and later support for emitting those packed relocations was added to lld in October 2017. Interestingly, the packer added to lld created smaller packed relocations than the packer in Android (for the same format).

The road to standardization

Shortly after bionic got its native packed relocation support, a conversation started on the gnu-gabi mailing list related to the general problem of relocations representing a large portion of Position Independent Executables. What we had observed on a shared library had started to creep into programs as well, because PIE binaries became prominent around that time, with some compilers and linkers starting to default to them for hardening reasons. Both Chrome's and Firefox's prior art were mentioned. This was April 2017.

A few months went by, and a simpler format was put forward, with great results, which led to, a few days later, a formal proposal for RELR relocations in the Generic System V Application Binary Interface.

More widespread availability

Shortly after the proposal, Android got experimental support for it, and a few months later, in July 2018, lld gained experimental support as well.

The Linux kernel got support for it too, for KASLR relocations, but for arm64 only (I suppose this was for Android kernels. It still is the only architecture it has support for to this day).

GNU binutils gained support for the proposal (via a -z pack-relative-relocs flag) at the end of 2021, and glibc eventually caught up in 2022, and this shipped respectively in binutils 2.38 and glibc 2.36. These versions should now have reached most latest releases of major Linux distros.

Lld thereafter got support for the same flag as binutils's, with the same side effect of adding a version dependency on GLIBC_ABI_DT_RELR, to avoid crashes when running executables with packed relocations against an older glibc.

What about Firefox?

Elfhack was updated to use the format from the proposal at the very end of 2021 (or rather, close enough to that format). More recently (as in, two months ago), support for the -z pack-relative-relocs flag was added, so that when building Firefox against a recent enough glibc and with a recent enough linker, it will use that instead of elfhack automatically. This means in some cases, Firefox packages in Linux distros will be using those relocations (for instance, that's the case since Firefox 116 in Debian unstable).

Which (finally) brings us to the next step, and the meat of this post.

Retiring Elfhack

It's actually still too early for that. The Firefox binaries Mozilla provides need to run on a broad variety of systems, including many that don't support those new packed relocations. That includes Android systems older than Red Velvet Cake (11), and not necessarily very old desktop systems.

Android Pie (9) shipped with experimental, but incompatible, support for the same packed relocation format, but using different constants. Hacking the PT_DYNAMIC segment (the segment containing metadata for dynamic linking) for compatibility with all Android versions >= 9 would technically be possible, but again, Mozilla needs to support even older versions of Android.

There comes the idea behind what I've now called relrhack: injecting code that can apply the packed relocations created by the linker if the system dynamic loader hasn't.

To some extent, that sounds similar to what elfhack does, doesn't it? But elfhack packs the relocations itself. And because its input is a fully linked binary, it has to do complex things that we know don't always work reliably.

In the past few years, an idea was floating in the back of my mind to change elfhack to start off of a relocatable binary (also known as a partially linked binary). It would then rewrite the sections it needs to, and invoke the linker to link that to its initialization code and produce the final binary. That would theoretically avoid all the kinds of problems we've hit, and work more reliably with lld.

The idea I've toyed with more recently, though, is even simpler: Use the -z pack-relative-relocs linker support, and add the initialization code on the linker command line so that it does everything in one go. We're at this sweet spot in time where we can actually start doing this.

Testing the idea

My first attempts were with a small executable, and linking with lld's older --pack-dyn-relocs=relr flag, which does the same as -z pack-relative-relocs but skips adding the GLIBC_ABI_DT_RELR version dependency. That allowed us to avoid post-processing the binary in this first experimentation step.

I quickly got something working on a Debian Bullseye system (using an older glibc that doesn't support the packed relocations). Here's how it goes:

// Compile with: clang -fuse-ld=lld -Wl,--pack-dyn-relocs=relr,--entry=my_start,-z,norelro -o relr-test relr-test.c
#include <stdio.h>

char *helloworld[] = {"Hello, world"};

int main(void) {
  printf("%s\n", helloworld[0]);
  return 0;
}

This is a minimal Hello world program that contains a relative relocation: the helloworld variable is an array of pointers, and those pointers need to be relocated. Optimizations would get rid of the array, so we specifically don't enable them. We also disable "Relocation Read-Only", a protection that makes relocated sections read-only once the dynamic loader is done applying relocations. That would prevent us from applying the missing relocations on our own. We're just testing, we'll deal with that later.

Compiling just this without --entry=my_start (because we haven't defined that yet), and running it, yields a segmentation fault. We don't even reach main, because there actually is an initialization function that runs before it, and its location, stored in the .init_array section, is behind a relative relocation, which --pack-dyn-relocs=relr packed. This is exactly why -z pack-relative-relocs adds a dependency on a symbol version that doesn't exist in older glibcs. With that flag, the error becomes:

/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_ABI_DT_RELR' not found

which is more user-friendly than a plain crash.

At this point, what do we want? Well, we want to apply the relocations ourselves, as early as possible. The first thing that will run in an executable is its "entry point", which defaults to _start (provided by the C runtime, aka CRT). As hinted in the code snippet above, we can set our own with --entry.

static void real_init();
extern void _start();

void my_start() {
  real_init();
  _start();
}

Here's our own entry point. It will start by calling the "real" initialization function we forward declare here. Let's add the following temporarily to see if that actually works.

void real_init() {
  printf("Early Hello world\n");
}

Running the program now yields:

$ ./relr-test
Early Hello world
Segmentation fault

There we go, we've executed code before anything relies on the relative relocations being applied. By the way, adding function calls like this printf, that early, was an interesting challenge with elfhack. This is pleasantly much simpler.

Applying the relocations for real

Let's replace that real_init function with some boilerplate for the upcoming real real_init:

#include <link.h>

#ifndef DT_RELRSZ
#define DT_RELRSZ 35
#endif
#ifndef DT_RELR
#define DT_RELR 36
#endif

extern ElfW(Dyn) _DYNAMIC[];
extern ElfW(Ehdr) __executable_start;

The defines are there because older systems don't have them in link.h. _DYNAMIC is a symbol that gives access to the PT_DYNAMIC segment at runtime, and the __executable_start symbol gives access to the base address of the program, which non-relocated addresses in the binary are relative to.

Now we're ready for the real work:

void real_init() {
  // Find the relocations section.
  ElfW(Addr) relr;
  ElfW(Word) size = 0;
  for (ElfW(Dyn) *dyn = _DYNAMIC; dyn->d_tag != DT_NULL; dyn++) {
    if (dyn->d_tag == DT_RELR) {
      relr = dyn->d_un.d_ptr;
    }
    if (dyn->d_tag == DT_RELRSZ) {
      size = dyn->d_un.d_val;
    }
  }
  uintptr_t elf_header = (uintptr_t)&__executable_start;

  // Apply the relocations.
  ElfW(Addr) *ptr, *start, *end;
  start = (ElfW(Addr) *)(elf_header + relr);
  end = (ElfW(Addr) *)(elf_header + relr + size);
  for (ElfW(Addr) *entry = start; entry < end; entry++) {
    if ((*entry & 1) == 0) {
      ptr = (ElfW(Addr) *)(elf_header + *entry);
      *ptr += elf_header;
    } else {
      size_t remaining = 8 * sizeof(ElfW(Addr)) - 1;
      ElfW(Addr) bits = *entry;
      do {
        bits >>= 1;
        remaining--;
        ptr++;
        if (bits & 1) {
          *ptr += elf_header;
        }
      } while (bits);
      ptr += remaining;
    }
  }
}

It's all kind of boring here. We scan the PT_DYNAMIC segment to get the location and size of the packed relocations section, and then read and apply them.
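
As a quick illustration of the encoding this loop decodes, here is what a couple of RELR entries might look like (the offsets are made up):

0x30e0  // even entry: add the base address to the word at base + 0x30e0
0x0007  // odd entry: bit 0 is just the marker; bits 1..63 form a bitmap over
        // the 63 words that follow. Here bits 1 and 2 are set, so the words
        // at base + 0x30e8 and base + 0x30f0 also get the base address added.
        // A subsequent bitmap entry would cover the 63 words after those.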

And does it work?

$ ./relr-test
Hello, world

It does! Mission accomplished? If only...

The devil is in the details

Let's try running this same binary on a system with a more recent glibc:

$ ./relr-test 
./relr-test: error while loading shared libraries: ./relr-test: DT_RELR without GLIBC_ABI_DT_RELR dependency

Oh come on! Yes, glibc insists that when the PT_DYNAMIC segment contains these types of relocations, the binary must have that symbol version dependency. That same symbol version dependency we need to avoid in order to work on older systems. I have no idea why the glibc developers went out of their way to prevent that. Someone even asked when this was all at the patch stage, with no answer.

We'll figure out a workaround later. Let's use -Wl,-z,pack-relative-relocs for now and see how it goes.

$ ./relr-test 
Segmentation fault

Oops. Well, that actually didn't happen when I was first testing, but for the purposes of this post, I didn't want to touch this topic before it was strictly necessary. Because we're now running on a system that does support the packed relocations, when our initialization code is reached, relocations are already applied, and we're applying them again. That overcompensates every relocated address, and leads to accesses to unmapped memory.

But how can we know whether relocations were applied? Well, conveniently, the address of a function, from within that function, doesn't need a relative relocation to be known. That's one half. The other half requires "something" that uses a relative relocation to know that same address. We insert this before real_init, but after its forward declaration:

void (*__real_init)() = real_init;

Because it's a global variable that points to the address of the function, it requires a relocation. And because the function is static and in the same compilation unit, it needs a relative relocation, not one that would require symbol resolution.

Now we can add this at the beginning of real_init:

  // Don't apply relocations when the dynamic loader has applied them already.
  if (__real_init == real_init) {
    return;
  }

And we're done. This works:

$ ./relr-test 
Hello, world

Unfortunately, we're back to square one on an older system:

$ ./relr-test 
./relr-test: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_ABI_DT_RELR' not found (required by ./relr-test)

Hacking the ELF format, again

And here we go again, having to post-process a binary. So what do we need this time around? Well, starting from a binary linked with --pack-dyn-relocs=relr, we need to avoid the "DT_RELR without GLIBC_ABI_DT_RELR" check. If we change the PT_DYNAMIC segment such that it doesn't contain DT_RELR-related tags, the error will be avoided. Sadly, that means we'll always apply relocations ourselves, but so be it.

How do we do that? Open the file, find the PT_DYNAMIC segment, scan it, overwrite a few tags with a different value, and done. Damn, that's much less work than everything elfhack was doing. I will spare you the code required to do that. Heck, that can trivially be done in a hex editor. Hey, you know what? That would actually be less stuff to write here than ELF parsing code, and would still allow you to follow at home.
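
(If you'd rather script the change than follow along in a hex editor, a rough sketch of such a program might look like the following. This is not the actual relrhack code: error handling is omitted, it assumes a 64-bit ELF file, and it sets the same high bit we're about to set by hand below.)

#include <elf.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#ifndef DT_RELRSZ
#define DT_RELRSZ 35
#endif
#ifndef DT_RELR
#define DT_RELR 36
#endif
#ifndef DT_RELRENT
#define DT_RELRENT 37
#endif

int main(int argc, char *argv[]) {
  (void)argc;
  int fd = open(argv[1], O_RDWR);
  struct stat st;
  fstat(fd, &st);
  // Map the whole file read-write so the edits go straight to disk.
  char *base = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
  Elf64_Ehdr *ehdr = (Elf64_Ehdr *)base;
  Elf64_Phdr *phdr = (Elf64_Phdr *)(base + ehdr->e_phoff);
  for (int i = 0; i < ehdr->e_phnum; i++) {
    if (phdr[i].p_type != PT_DYNAMIC)
      continue;
    // Scan the dynamic entries and hide the DT_RELR* tags by setting an
    // extra high bit, so the dynamic loader no longer recognizes them.
    for (Elf64_Dyn *dyn = (Elf64_Dyn *)(base + phdr[i].p_offset);
         dyn->d_tag != DT_NULL; dyn++) {
      if (dyn->d_tag == DT_RELR || dyn->d_tag == DT_RELRSZ ||
          dyn->d_tag == DT_RELRENT)
        dyn->d_tag |= 0x80000000;
    }
  }
  munmap(base, st.st_size);
  close(fd);
  return 0;
}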

Let's start from that binary we built earlier with --pack-dyn-relocs=relr.

$ objcopy --dump-section .dynamic=dyn relr-test

We now have a dyn file with the contents of the PT_DYNAMIC segment.

In that segment, each block of 16 bytes (assuming a 64-bit system) stores an 8-byte tag and an 8-byte value. We want to change the DT_RELR, DT_RELRSZ and DT_RELRENT tags. Their hex values are, respectively, 0x24, 0x23 and 0x25.
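
For reference, each of these 16-byte entries corresponds to the Elf64_Dyn structure declared in elf.h, along these lines:

typedef struct {
  Elf64_Sxword d_tag;   // the 8-byte tag, e.g. DT_RELR (0x24)
  union {
    Elf64_Xword d_val;  // the 8-byte value, for tags like DT_RELRSZ...
    Elf64_Addr d_ptr;   // ...or the 8-byte address, for tags like DT_RELR
  } d_un;
} Elf64_Dyn;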

$ xxd dyn | grep 2[345]00
00000060: 2400 0000 0000 0000 6804 0000 0000 0000  $.......h.......
00000070: 2300 0000 0000 0000 1000 0000 0000 0000  #...............
00000080: 2500 0000 0000 0000 0800 0000 0000 0000  %...............

(we got a bit lucky here, not matching anywhere other than in the tags)

Let's set an extra arbitrary high-ish bit.

$ xxd dyn | sed -n '/: 2[345]00/s/ 0000/ 0080/p'
00000060: 2400 0080 0000 0000 6804 0000 0000 0000  $.......h.......
00000070: 2300 0080 0000 0000 1000 0000 0000 0000  #...............
00000080: 2500 0080 0000 0000 0800 0000 0000 0000  %...............

This went well, let's do it for real.

$ xxd dyn | sed '/: 2[345]00/s/ 0000/ 0080/' | xxd -r > dyn.new
$ objcopy --update-section .dynamic=dyn.new relr-test

Let me tell you I'm glad we're in 2023, because these objcopy options we just used didn't exist 12+ years ago.

So, how did it go?

$ ./relr-test 
Segmentation fault

Uh oh. Well duh, we didn't change the code that applies the relocations, so it can't find the packed relocation section.

Let's edit the loop to use this:

    if (dyn->d_tag == (DT_RELR | 0x80000000)) {
      relr = dyn->d_un.d_ptr;
    }
    if (dyn->d_tag == (DT_RELRSZ | 0x80000000)) {
      size = dyn->d_un.d_val;
    }

And start over:

$ clang -fuse-ld=lld -Wl,--pack-dyn-relocs=relr,--entry=my_start,-z,norelro -o relr-test relr-test.c
$ objcopy --dump-section .dynamic=dyn relr-test
$ xxd dyn | sed '/: 2[345]00/s/ 0000/ 0080/' | xxd -r > dyn.new
$ objcopy --update-section .dynamic=dyn.new relr-test
$ ./relr-test
Hello, world

Copy over to the newer system, and try:

$ ./relr-test
Hello, world

Flawless victory. We now have a binary that works on both old and new systems, using packed relocations created by the linker, and barely post-processing the binary (and we don't need that if (__real_init == real_init) anymore).

Generalizing a little

Okay, so while we're here, we'd rather use -z pack-relative-relocs because it works across more linkers, so we need to get rid of that GLIBC_ABI_DT_RELR symbol version dependency it adds, in order for the output to be more or less equivalent to what --pack-dyn-relocs=relr would produce.

$ clang -fuse-ld=lld -Wl,-z,pack-relative-relocs,--entry=my_start,-z,norelro -o relr-test relr-test.c

You know what, we might as well learn new things. Objcopy is nice, but as I was starting to write this section, I figured it was going to be annoying to do in the same style as above.

Have you heard of GNU poke? I saw a presentation about it at FOSDEM 2023, and hadn't had the occasion to try it; I guess this is the day to do that. We'll be using GNU poke 3.2 (the latest version as of writing).

Of course, that version doesn't contain the necessary bits. But this is Free Software, right? After a few patches, we're all set.

$ git clone https://git.savannah.gnu.org/git/poke/poke-elf
$ POKE_LOAD_PATH=poke-elf poke relr-test
(poke) load elf
(poke) var elf = Elf64_File @ 0#B

Let's get the section containing the symbol version information. It starts with a Verneed header.

(poke) var section = elf.get_sections_by_type(ELF_SHT_GNU_VERNEED)[0]
(poke) var verneed = Elf_Verneed @ section.sh_offset
(poke) verneed
Elf_Verneed {vn_version=1UH,vn_cnt=2UH,vn_file=110U,vn_aux=16U,vn_next=0U}

vn_file identifies the library file expected to contain those vn_cnt versions. Let's check this is about the libc. The section's sh_link will tell us which entry of the section header (shdr) corresponds to the string table that vn_file points into.
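
For reference, the Verneed header and the Vernaux entries it points to (which we'll look at next) correspond to these declarations from C's elf.h:

typedef struct {
  Elf64_Half vn_version;  // version of this structure (1)
  Elf64_Half vn_cnt;      // number of associated Vernaux entries
  Elf64_Word vn_file;     // string table offset of the file name
  Elf64_Word vn_aux;      // byte offset to the first Vernaux entry
  Elf64_Word vn_next;     // byte offset to the next Verneed entry, or 0
} Elf64_Verneed;

typedef struct {
  Elf64_Word vna_hash;    // hash of the version name
  Elf64_Half vna_flags;
  Elf64_Half vna_other;   // version index, as used in the symbol table
  Elf64_Word vna_name;    // string table offset of the version name
  Elf64_Word vna_next;    // byte offset to the next Vernaux entry, or 0
} Elf64_Vernaux;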

(poke) var strtab = elf.shdr[section.sh_link].sh_offset
(poke) string @ strtab + verneed.vn_file#B
"libc.so.6"

Bingo. Now let's scan the two (per vn_cnt) Vernaux entries that the Verneed header points to via vn_aux. The first one:

(poke) var off = section.sh_offset + verneed.vn_aux#B
(poke) var aux = Elf_Vernaux @ off
(poke) aux
Elf_Vernaux {vna_hash=157882997U,vna_flags=0UH,vna_other=2UH,vna_name=120U,vna_next=16U}
(poke) string @ strtab + aux.vna_name#B
"GLIBC_2.2.5"

And the second one, that vna_next points to.

(poke) var off = off + aux.vna_next#B
(poke) var aux2 = Elf_Vernaux @ off
(poke) aux2
Elf_Vernaux {vna_hash=16584258U,vna_flags=0UH,vna_other=3UH,vna_name=132U,vna_next=0U}
(poke) string @ strtab + aux2.vna_name#B
"GLIBC_ABI_DT_RELR"

This is it. This is the symbol version we want to get rid of. We could go on by adjusting vna_next in the first entry, and reducing vn_cnt in the header, but thinking ahead to automating this for binaries that may contain more than two symbol versions from more than one dependency, it's just simpler to pretend this version is a repeat of the previous one. So we copy all its fields, except vna_next.

(poke) aux2.vna_hash = aux.vna_hash 
(poke) aux2.vna_flags = aux.vna_flags 
(poke) aux2.vna_other = aux.vna_other
(poke) aux2.vna_name = aux.vna_name

We could stop here and go back to the objcopy/xxd way of adjusting the PT_DYNAMIC segment, but while we're in poke, it can't hurt to try to do the adjustment with it.

(poke) var dyn = elf.get_sections_by_type(ELF_SHT_DYNAMIC)[0]
(poke) var dyn = Elf64_Dyn[dyn.sh_size / dyn.sh_entsize] @ dyn.sh_offset
(poke) for (d in dyn) if (d.d_tag in [ELF_DT_RELR,ELF_DT_RELRSZ,ELF_DT_RELRENT]) d.d_tag |= 0x80000000L
<stdin>:1:20: error: invalid operand in expression
<stdin>:1:20: error: expected uint<32>, got Elf64_Sxword

Gah, that seemed straightforward. It turns out the in operator is not lenient about integer types. Let's just use the plain values.

(poke) for (d in dyn) if (d.d_tag in [0x23L,0x24L,0x25L]) d.d_tag |= 0x80000000L
unhandled constraint violation exception
failed expression
  elf_config.check_enum ("dynamic-tag-types", elf_mach, d_tag)
in field Elf64_Dyn.d_tag

This time, this is because poke is actually validating the tag values, which is both a blessing and a curse. It can avoid shooting yourself in the foot (after all, we're setting a non-existing value), but also hinder getting things done (because before I actually got here, many of the d_tag values in the binary straight out of the linker weren't even supported).

Let's make poke's validator know about the values we're about to set:

(poke) for (n in [0x23L,0x24L,0x25L]) elf_config.add_enum :class "dynamic-tag-types" :entries [Elf_Config_UInt { value = 0x80000000L | n }]
(poke) for (d in dyn) if (d.d_tag in [0x23L,0x24L,0x25L]) d.d_tag |= 0x80000000L
(poke) .exit
$ ./relr-test
Hello, world

And it works on the newer system too!

Repeating for a shared library

Let's set up a new testcase, using a shared library:

  • Take our previous testcase, and rename the main function to relr_test.
  • Compile it with clang -fuse-ld=lld -Wl,--pack-dyn-relocs=relr,--entry=my_start,-z,norelro -fPIC -shared -o librelr-test.so
  • Create a new file with the following content and compile it:
// Compile with: clang -o relr-test -L. -lrelr-test -Wl,-rpath,'$ORIGIN'
extern int relr_test(void);

int main(void) {
  return relr_test();
}
  • Apply the same GNU poke commands as before, on the librelr-test.so file.

So now, it should work, right?

$ ./relr-test
Segmentation fault

Oops. What's going on?

$ gdb -q -ex run -ex backtrace -ex detach -ex quit ./relr-test
Reading symbols from ./relr-test...
(No debugging symbols found in ./relr-test)
Starting program: /relr-test 
BFD: /librelr-test.so: unknown type [0x13] section `.relr.dyn'
warning: `/librelr-test.so': Shared library architecture unknown is not compatible with target architecture i386:x86-64.

Program received signal SIGSEGV, Segmentation fault.
0x00000000000016c0 in ?? ()
#0  0x00000000000016c0 in ?? ()
#1  0x00007ffff7fe1fe2 in call_init (l=<optimized out>, argc=argc@entry=1, argv=argv@entry=0x7fffffffdfc8, 
    env=env@entry=0x7fffffffdfd8) at dl-init.c:72
#2  0x00007ffff7fe20e9 in call_init (env=0x7fffffffdfd8, argv=0x7fffffffdfc8, argc=1, l=<optimized out>) at dl-init.c:30
#3  _dl_init (main_map=0x7ffff7ffe180, argc=1, argv=0x7fffffffdfc8, env=0x7fffffffdfd8) at dl-init.c:119
#4  0x00007ffff7fd30ca in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
#5  0x0000000000000001 in ?? ()
#6  0x00007fffffffe236 in ?? ()
#7  0x0000000000000000 in ?? ()
Detaching from program: /relr-test, process 3104868
[Inferior 1 (process 3104868) detached]

Side note: it looks like we'll also need to change some section types if we want to keep tools like gdb happy.

So, this is crashing when doing what looks like a jump/call to an address that is not relocated (seeing how low it is). Let's pull the libc6 source and see what's around dl-init.c:72:

addrs = (ElfW(Addr) *) (init_array->d_un.d_ptr + l->l_addr);
for (j = 0; j < jm; ++j)
  ((init_t) addrs[j]) (argc, argv, env);

This is when it goes through .init_array and calls each of the functions in the table. So, .init_array is not relocated, which means our initialization code hasn't run. But why? Well, that's because the ELF entry point is not used for shared libraries. So, we need to execute our code some other way. What runs on shared library loading? Well, functions from the .init_array table... but they need to be relocated; we've got ourselves a chicken-and-egg problem. Does something else run before that? It turns out that yes, right before that dl-init.c:72 code, there is this:

if (l->l_info[DT_INIT] != NULL)
  DL_CALL_DT_INIT(l, l->l_addr + l->l_info[DT_INIT]->d_un.d_ptr, argc, argv, env);

And the good news here is that it doesn't require DT_INIT to be relocated: that l_addr is the base address the loader used for the library, so it's relocating the address itself. Thank goodness.

So, how do we get a function in DT_INIT? Well... we already have one:

$ readelf -d librelr-test.so | grep '(INIT)'
 0x000000000000000c (INIT)               0x18a8
$ readelf -sW librelr-test.so | grep 18a8
     7: 00000000000018a8     0 FUNC    GLOBAL DEFAULT   13 _init
    20: 00000000000018a8     0 FUNC    GLOBAL DEFAULT   13 _init

So we want to wrap it similarly to what we did for _start, adding the following to the code of the library:

extern void _init();

void my_init() {
  real_init();
  _init();
}

And we replace --entry=my_start with --init=my_init when relinking librelr-test.so (while not forgetting all the GNU poke dance), and it finally works:

$ ./relr-test
Hello, world

(and obviously, it also works on the newer system)

But does this work for Firefox?

We now have a manual procedure that gets us mostly what we want, that works with two tiny testcases. But does it scale to Firefox? Before implementing the whole thing, let's test a little more. First, let's build two .o files based on our code so far, without the relr_test function. One with the my_init wrapper, the other with the my_start wrapper. We'll call the former relr-test-lib.o and the latter relr-test-bin.o (Compile with clang -c -fPIC -O2).

Then, let's add the following to the .mozconfig we use to build Firefox:

export MOZ_PROGRAM_LDFLAGS="-Wl,-z,pack-relative-relocs,--entry=my_start,-z,norelro /path/to/relr-test-bin.o"
mk_add_options 'export EXTRA_DSO_LDOPTS="-Wl,-z,pack-relative-relocs,--init=my_init,-z,norelro /path/to/relr-test-lib.o"'

This leverages some arcane Firefox build system knowledge to have something minimally intrusive to use the flags we need and to inject our code. However, because of how the Firefox build system works, it also means some Rust build scripts will be compiled with these flags (unfortunately). In turn, this means those build scripts won't run on a system without packed relocation support in glibc, so we need to build Firefox on the newer system.

And because we're on the newer system, running this freshly built Firefox will just work, because the init code is skipped and relocations applied by the dynamic loader. Things will only get spicy when we start applying our hack to make our initialization code handle the relocations itself. Because Firefox is bigger than our previous testcases, scanning through to find the right versioned symbol to remove is going to be cumbersome, so we'll just skip that part. In fact, we can just use our first approach with objcopy, because it's smaller. After a successful build, let's first do that for libxul.so, which is the largest binary in Firefox.

$ objcopy --dump-section .dynamic=dyn obj-x86_64-pc-linux-gnu/dist/bin/libxul.so
$ xxd dyn | sed '/: 2[345]00/s/ 0000/ 0080/' | xxd -r > dyn.new
$ objcopy --update-section .dynamic=dyn.new obj-x86_64-pc-linux-gnu/dist/bin/libxul.so
$ ./mach run
 0:00.15 /path/to/obj-x86_64-pc-linux-gnu/dist/bin/firefox -no-remote -profile /path/to/obj-x86_64-pc-linux-gnu/tmp/profile-default
$ echo $?
245

Aaaand... it doesn't start. Let's try again in a debugger.

$ ./mach run --debug
<snip>
(gdb) run
<snip>
Thread 1 "firefox" received signal SIGSEGV, Segmentation fault.
real_init () at /tmp/relr-test.c:55
55          if ((*entry & 1) == 0) {

It's crashing while applying the relocations?! But why?

(gdb) print entry
$1 = (Elf64_Addr *) 0x303c8

That's way too small to be a valid address. What's going on? Let's start looking where this value is and where it comes from.

(gdb) print &entry
Address requested for identifier "entry" which is in register $rax

So where does the value of the rax register come from?

(gdb) set pagination off
(gdb) disassemble/m
<snip>
41          if (dyn->d_tag == (DT_RELR | 0x80000000)) {
42            relr = dyn->d_un.d_ptr;
   0x00007ffff2289f47 <+71>:    mov    (%rcx),%rax
<snip>
52        start = (ElfW(Addr) *)(elf_header + relr);
   0x00007ffff2289f54 <+84>:    add    0x681185(%rip),%rax        # 0x7ffff290b0e0
<snip>

So rax starts with the value from DT_RELR, and the value stored at the address 0x7ffff290b0e0 is added to it. What's at that address?

(gdb) print *(void**)0x7ffff290b0e0
$1 = (void *) 0x0

Well, no surprise here. Wanna bet it's another chicken and egg problem?

(gdb) info files
<snip>
        0x00007ffff28eaed8 - 0x00007ffff290b0e8 is .got in /path/to/obj-x86_64-pc-linux-gnu/dist/bin/libxul.so
<snip>

It's in the Global Offset Table, that's typically something that will have been relocated. It smells like there's a packed relocation for this, which would confirm our new chicken and egg problem. First, we find the non-relocated virtual address of the .got section in libxul.so.

$ readelf -SW obj-x86_64-pc-linux-gnu/dist/bin/libxul.so | grep '.got '
  [28] .got              PROGBITS        000000000ab7aed8 ab78ed8 020210 00  WA  0   0  8

So that 0x000000000ab7aed8 is loaded at 0x00007ffff28eaed8. Then we check if there's a relocation for the non-relocated virtual address of 0x7ffff290b0e0.
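
(For the record, the arithmetic the printf in the following command performs is 0x7ffff290b0e0 - 0x7ffff28eaed8 + 0xab7aed8 = 0xab9b0e0, which is the unrelocated address we're grepping for.)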

$ readelf -r obj-x86_64-pc-linux-gnu/dist/bin/libxul.so | grep -e Relocation -e $(printf %x $((0x7ffff290b0e0 - 0x00007ffff28eaed8 + 0x000000000ab7aed8)))
Relocation section '.rela.dyn' at offset 0x28028 contains 1404 entries:
Relocation section '.relr.dyn' at offset 0x303c8 contains 13406 entries:
000000000ab9b0e0
Relocation section '.rela.plt' at offset 0x4a6b8 contains 2635 entries:

And there is, and it is a RELR one, one of those that we're supposed to apply ourselves... we're kind of doomed aren't we? But how come this wasn't a problem with librelr-test.so? Let's find out in the corresponding code there:

$ objdump -d librelr-test.so
<snip>
    11e1:       48 8b 05 30 21 00 00    mov    0x2130(%rip),%rax        # 3318 <__executable_start@Base>
<snip>
$ readelf -SW librelr-test.so
<snip>
  [20] .got              PROGBITS        0000000000003308 002308 000040 08  WA  0   0  8
<snip>
$ readelf -r librelr-test.so | grep -e Relocation -e 3318
Relocation section '.rela.dyn' at offset 0x450 contains 7 entries:
000000003318  000300000006 R_X86_64_GLOB_DAT 0000000000000000 __executable_start + 0
Relocation section '.rela.plt' at offset 0x4f8 contains 1 entry:
Relocation section '.relr.dyn' at offset 0x510 contains 3 entries:

We had a relocation through symbol resolution, which the dynamic loader applies before calling our initialization code. That's what saved us, but all things considered, that is not exactly great either.

How do we avoid this? Well, let's take a step back, and consider why the GOT is being used. Our code is just using the address of __executable_start, and the compiler doesn't know where it is (the symbol is extern). Since it doesn't know where it is, and whether it will be in the same binary, and because we are building Position Independent Code, it uses the GOT, and a relocation will put the right address in the GOT. At link time, when the linker knows the symbol is in the same binary, it ends up using a relative relocation, which causes our problem.

So, how do we avoid using the GOT? By making the compiler aware that the symbol is eventually going to be in the same binary, which we can do by marking it with hidden visibility.

Replacing

extern ElfW(Ehdr) __executable_start;

with

extern __attribute__((visibility("hidden"))) ElfW(Ehdr) __executable_start;

will do that for us. And after rebuilding, and re-hacking, our Firefox works, yay!

Let's try other binaries

Let's now try with the main Firefox binary.

$ objcopy --dump-section .dynamic=dyn obj-x86_64-pc-linux-gnu/dist/bin/firefox
$ xxd dyn | sed '/: 2[345]00/s/ 0000/ 0080/' | xxd -r > dyn.new
$ objcopy --update-section .dynamic=dyn.new obj-x86_64-pc-linux-gnu/dist/bin/firefox
$ ./mach run
 0:00.15 /path/to/obj-x86_64-pc-linux-gnu/dist/bin/firefox -no-remote -profile /path/to/obj-x86_64-pc-linux-gnu/tmp/profile-default
$ echo $?
245

We crashed again. Come on! What is it this time?

$ ./mach run --debug
<snip>
(gdb) run
<snip>
Program received signal SIGSEGV, Segmentation fault.
0x0000000000032370 in ?? ()
(gdb) bt
#0  0x0000000000032370 in ?? ()
#1  0x00005555555977be in phc_init (aMallocTable=0x7fffffffdb38, aBridge=0x555555626778 <greplacemallocbridge>)
    at /path/to/memory/replace/phc/PHC.cpp:1700
#2  0x00005555555817c5 in init () at /path/to/memory/build/mozjemalloc.cpp:5213
#3  0x000055555558196c in Allocator<replacemallocbase>::malloc (arg1=72704) at /path/to/memory/build/malloc_decls.h:51
#4  malloc (arg1=72704) at /path/to/memory/build/malloc_decls.h:51
#5  0x00007ffff7ca57ba in (anonymous namespace)::pool::pool (this=0x7ffff7e162c0 <(anonymous namespace)::emergency_pool>)
    at ../../../../src/libstdc++-v3/libsupc++/eh_alloc.cc:123
#6  __static_initialization_and_destruction_0 (__priority=65535, __initialize_p=1)
    at ../../../../src/libstdc++-v3/libsupc++/eh_alloc.cc:262
#7  _GLOBAL__sub_I_eh_alloc.cc(void) () at ../../../../src/libstdc++-v3/libsupc++/eh_alloc.cc:338
#8  0x00007ffff7fcfabe in call_init (env=0x7fffffffdd00, argv=0x7fffffffdcd8, argc=4, l=<optimized out>) at ./elf/dl-init.c:70
#9  call_init (l=<optimized out>, argc=4, argv=0x7fffffffdcd8, env=0x7fffffffdd00) at ./elf/dl-init.c:26
#10 0x00007ffff7fcfba4 in _dl_init (main_map=0x7ffff7ffe2e0, argc=4, argv=0x7fffffffdcd8, env=0x7fffffffdd00) at ./elf/dl-init.c:117
#11 0x00007ffff7fe5a60 in _dl_start_user () from /lib64/ld-linux-x86-64.so.2
#12 0x0000000000000004 in ?? ()
#13 0x00007fffffffdfae in ?? ()
#14 0x00007fffffffdfe2 in ?? ()
#15 0x00007fffffffdfed in ?? ()
#16 0x00007fffffffdff6 in ?? ()
#17 0x0000000000000000 in ?? ()
(gdb) info symbol 0x00007ffff7ca57ba
_GLOBAL__sub_I_eh_alloc.cc + 58 in section .text of /lib/x86_64-linux-gnu/libstdc++.so.6

Oh boy! So here, what's going on is that the libstdc++ initializer is called before Firefox's, and that initializer calls malloc, which is provided by the Firefox binary, but because Firefox's initializer hasn't run yet, the code in its allocator that depends on relative relocations fails...

Let's... just work around this by disabling the feature of the Firefox allocator that requires those relocations:

ac_add_options --disable-replace-malloc

Rebuild, re-hack, and... Victory is mine!

Getting this in production

So far, we've looked at how we can achieve the same as elfhack with a simpler and more reliable strategy, that will allow us to consistently use lld across platforms and build types. Now that the approach has been validated, we can proceed with writing the actual code and hooking it in the Firefox build system. Our strategy here will be for our new tool to act as the linker. It will take all the arguments the compiler passes it, and will itself call the real linker with all the required extra arguments, including the object file containing the code to apply the relocations.
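
To make the strategy more concrete, here is a minimal sketch of such a wrapper. The linker name ld.lld and the object file name relr_init.o are placeholders, and this is not the actual relrhack code:

#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[]) {
  // Forward everything the compiler passed us, then append our own flags
  // and the object file containing the relocation-applying code.
  const char *extra[] = {"-z", "pack-relative-relocs", "relr_init.o"};
  size_t n_extra = sizeof(extra) / sizeof(extra[0]);
  char **args = calloc(argc + n_extra + 1, sizeof(char *));
  args[0] = (char *)"ld.lld";  // placeholder for the real linker
  for (int i = 1; i < argc; i++)
    args[i] = argv[i];
  for (size_t i = 0; i < n_extra; i++)
    args[argc + i] = (char *)extra[i];
  args[argc + n_extra] = NULL;
  execvp(args[0], args);  // only returns if exec failed
  return 1;
}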

Of course, I also encountered some more grievances. For example, GNU ld doesn't define the __executable_start symbol when linking shared libraries, contrary to lld. Thankfully, it defines __ehdr_start, with the same meaning (and so does lld). There are also some details I left out for the _init function, which normally takes 3 arguments, and that the actual solution will have to deal with. It will also have to deal with "Relocation Read-Only" (relro), but for that, we can just reuse the code from elfhack.
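
For illustration, a version of the wrapper that forwards those three arguments might look like this (hypothetical; the actual code also deals with relro and the other details mentioned above):

// real_init is the relocation-applying function shown earlier.
static void real_init(void);
extern void _init(int argc, char **argv, char **env);

void my_init(int argc, char **argv, char **env) {
  real_init();
  _init(argc, argv, env);
}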

The code already exists, and is up for review (this post was written in large part to give reviewers some extra background). The code handles desktop Linux for now (Android support will come later; it will require a couple of adjustments), and is limited to shared libraries (until the allocator is changed to avoid using relative relocations). It's also significantly smaller than elfhack.

$ loc build/unix/elfhack/elf*
--------------------------------------------------------------------------------
 Language             Files        Lines        Blank      Comment         Code
--------------------------------------------------------------------------------
 C++                      2         2393          230          302         1861
 C/C++ Header             1          701          120           17          564
--------------------------------------------------------------------------------
 Total                    3         3094          350          319         2425
--------------------------------------------------------------------------------
$ loc build/unix/elfhack/relr* 
--------------------------------------------------------------------------------
 Language             Files        Lines        Blank      Comment         Code
--------------------------------------------------------------------------------
 C++                      1          443           32           62          349
 C/C++ Header             1           25            5            3           17
--------------------------------------------------------------------------------
 Total                    2          468           37           65          366
--------------------------------------------------------------------------------

(this excludes the code to apply relocations, which is shared between both)

This is the beginning of the end for elfhack. Once "relrhack" is enabled in its place, elfhack will be left around for downstream Firefox builds on systems with older linkers that don't support the necessary flags, and will eventually be removed when support for those systems is dropped, in a few years. Further down the line, we'll be able to retire both tools, as support for RELR relocations becomes ubiquitous.

As anticipated, this was a long post. Thank you for sticking with it to the end.

The Rust Programming Language BlogElecting New Project Directors

Today we are launching the process to elect new Project Directors to the Rust Foundation Board of Directors. As we begin the process, we wanted to spend some time explaining the goals and procedures we will follow. We will summarize everything here, but if you would like, you can read the official process documentation.

We ask all project members to begin working with their Leadership Council representative to nominate potential Project Directors. See the Candidate Gathering section for more details. Nominations are due by September 15, 2023.

What are Project Directors?

The Rust Foundation Board of Directors has five seats reserved for Project Directors. These Project Directors serve as representatives of the Rust project itself on the Board. Like all Directors, the Project Directors are elected by the entity they represent, which in the case of the Rust Project means they are elected by the Rust Leadership Council. Project Directors serve for a term of two years and will have staggered terms. This year we will appoint two new directors and next year we will appoint three new directors.

The current project directors are Jane Losare-Lusby, Josh Stone, Mark Rousskov, Ryan Levick and Tyler Mandry. This year, Jane Losare-Lusby and Josh Stone will be rotating out of their roles as Project Directors, so the current elections are to fill their seats. We are grateful for the work Jane and Josh have put in during their terms as Project Directors!

We want to make sure the Project Directors can effectively represent the project as a whole, so we are soliciting input from the whole project. The elections process will go through two phases: Candidate Gathering and Election. Read on for more detail about how these work.

Candidate Gathering

The first phase is beginning right now. In this phase, we are inviting the members of all of the top level Rust teams and their subteams to nominate people who will make good project directors. The goal is to bubble these up to the Council through each of the top-level teams. You should be hearing from your Council Representative soon with more details, but if not, feel free to reach out to them directly.

Each team is encouraged to suggest candidates. Since we are electing two new directors, it would be ideal for teams to nominate at least two candidates. Nominees can be anyone in the project and do not have to be a member of the team who nominates them.

The candidate gathering process will be open until September 15, at which point each team's Council Representative will share their team's nominations and reasoning with the whole Leadership Council. At this point, the Council will confirm with each of the nominees that they are willing to accept the nomination and fill the role of Project Director. Then the Council will publish the set of candidates.

This then starts a ten day period where members of the Rust Project are invited to share feedback on the nominees with the Council. This feedback can include reasons why a nominee would make a good project director, or concerns the Council should be aware of.

The Council will announce the set of nominees by September 19 and the ten day feedback period will last until September 29. Once this time has passed, we will move on to the election phase.

Election

The Council will meet during the week of October 1 to complete the election process. In this meeting, we will discuss each candidate, and once we have done this, the facilitator will propose a set of two of them to be the new Project Directors. The facilitator puts this to a vote, and if the Council unanimously agrees with the proposed pair of candidates, then the process is completed. Otherwise, we will give another opportunity for council members to express their objections and we will continue with another proposal. This process repeats until we find two nominees who the Council can unanimously consent to. The Council will then confirm these nominees through an official vote.

Once this is done, we will announce the new Project Directors. In addition, we will contact each of the nominees, including those who were not elected, to tell them a little bit more about what we saw as their strengths and opportunities for growth to help them serve better in similar roles in the future.

Timeline

This process will continue through all of September and into October. Below are the key dates:

  • Candidate nominations due: September 15
  • Candidates published: ~~September 19~~ September 22
  • Feedback period: ~~September 19 - 29~~ September 22 - October 2
  • Election meeting: Week of October 1

After the election meeting happens, the Rust Leadership Council will announce the results and the new Project Directors will assume their responsibilities.

Edit: we have adjusted the candidate publication date due to delays in getting all the nominees ready.

Acknowledgements

A number of people have been involved in designing and launching this election process and we wish to extend a heartfelt thanks to all of them! We'd especially like to thank the members of the Project Director Election Proposal Committee: Jane Losare-Lusby, Eric Holk, and Ryan Levick. Additionally, many members of the Rust Community have provided feedback and thoughtful discussions that led to significant improvements to the process. We are grateful for all of your contributions.

Firefox Developer ExperienceFirefox DevTools Newsletter — 117

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 117 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

  • Gregory Pappas removed the now unused devtools.markup.mutationBreakpoints.enabled pref (#1574540). This preference was used to control DOM Mutation Breakpoints, but they are now always enabled.

A DOM Mutation Breakpoint will pause the code when the DOM node on which you have set the breakpoint is modified. The documentation contains further information on how to use them and how they can help you.

  • Vinny Diehl improved the measuring tool by making it possible to resize the selected area with the keyboard arrow keys (#1262782)
Firefox is displaying a website, and DevTools are open on the right hand side. In DevTools, the Settings panel is displayed, and under the "Available Toolbox Buttons" section, the "Measure a portion of a page" item is checked. In the top DevTools toolbar, there's a ruler icon that is visually active. On the webpage itself, there are a few big block buttons with various labels. There's a highlight overlay on top of one of the buttons, with an anchor point at each corner of the rectangle, and a small tooltip next to it showing the width and the height in pixels.

Once you draw the measuring rectangle, you can now move it around with the arrow keys to perfectly align to what you want to measure. You can also change the width and height of the overlay by holding the Ctrl key (Cmd on OSX). Holding Shift will make it move/resize faster. You can read more about the measuring tool in the dedicated documentation page

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.


CSS Nesting

In 117, Stylo — Firefox's CSS engine — adds support for CSS Nesting, a feature long wanted by web developers (some variations of nesting were introduced a long time ago in pre-processed CSS languages like Sass).

So what you may have written like this:

button.action {
  color: tomato;
}

button.action:hover {
  color: gold;
}

@media (width < 500px) {
  button.action {
    font-size: 16px;
  }
}

could now be written like this instead:

button.action {
  color: tomato;

  &:hover {
    color: gold;
  }

  @media (width < 500px) {
    & {
      font-size: 16px;
    }
  }  
}

I encourage you to read the specification for nesting which isn’t too long and is very well written: https://www.w3.org/TR/css-nesting-1/

CSS nesting has a pretty big impact on DevTools, specifically, but not only, in the Rules view. In order to make it easier to identify rules, we are showing all the ancestor rules, and adding indentation, opening and closing brackets so it’s similar to the CSS you wrote (#1838803).

Screenshot of the Firefox DevTools Inspector Rules view showing three rules: button.action { &:hover { color: gold; } }, button.action { @media (width < 500px) { & { font-size: 16px; } } }, and button.action { color: tomato; }

Compatibility Tooltip

Web compatibility inspection has been enhanced with our new CSS compatibility tooltip in the Developer Tools Inspector (#1840775). An icon is now displayed next to properties that could lead to web compatibility issues. When hovered, the tooltip indicates which browsers are not supported and displays a link to the MDN page for the property so you can learn more about it.

We’re only checking property names at the moment, but we plan to also show this tooltip for unsupported property values (#1636301)

More Inspector goodies

Did you know that highlight pseudo-elements (for example: ::selection) can only be styled by a limited set of properties that do not affect layout (see specification)? The properties that don’t have any effect will now be marked as such in the Rules view (#1842157)

Screenshot of the Firefox DevTools Rules view displaying an h2::selection rule with background: pink and font-size: 2em. An info icon is displayed after the font-size value, with a tooltip pointing to it that reads: font-size is not supported on highlight pseudo-elements

We also fixed a strange issue where the last item in a custom property fallback list was hidden when used on the font-family property (#1842314)

Network Proxy

You can now see HTTP Proxy information in the Network Headers panel (#1707192)

Screenshot of the Netmonitor Headers panel displayed for a network response, with the proxy address, status and version displayed.

Do Not Clear

Finally, console.clear() won’t clear the console output anymore when the “Enable persistent logs” setting is enabled (#1267856), which matches Chrome DevTools behaviour.


Thank you for reading this and using our tools, see you in a month for a new round of exciting updates 🙂

Wladimir PalantChrome Sync privacy is still very bad

Five years ago I wrote an article about the shortcomings of Chrome Sync (as well as a minor issue with Firefox Sync). Now Chrome Sync has seen many improvements since then. So time seems right for me to revisit it and to see whether it respects your privacy now.

Spoiler: No, it doesn’t. It improved, but that’s an improvement from outright horrible to merely very bad. The good news: today you can use Chrome Sync in a way that preserves your privacy. Google however isn’t interested in helping you figure out how to do it.

The default flow

Chrome Sync isn’t some obscure feature of Google Chrome. In fact, as of Chrome 116 setting up sync is part of the suggested setup when you first install the browser:

Screenshot of Chrome’s welcome screen with the text “Sign in and turn on sync to get your bookmarks, passwords and more on all devices. Your Chrome, Everywhere” and the highlighted button saying “Continue.”

Clicking “Continue” will ask you to log into your Google account after which you are suggested to turn on sync:

A prompt titled “Turn on sync.” The text below says: “You can always choose what to sync in settings. Google may personalize Search and other services based on your history.” The prompt has the buttons Settings, Cancel and (highlighted) Yes, I’m in.

Did you click the suggested “Yes, I’m in” button here? Then you’ve already lost. You just allowed Chrome to upload your data to Google servers, without any encryption. Your passwords, browsing history, bookmarks, open tabs? They are no longer yours only, you allowed Google to access them. Didn’t you notice the “Google may personalize Search and other services based on your history” text in the prompt?

In case you have any doubts, this setting (which is off by default) gets turned on when you click “Yes, I’m in”:

Screenshot of Chrome’s setting titled “Make searches and browsing better” with the explanation text “Sends URLs of pages you visit to Google.” The setting is turned on.

Yes, Google is definitely watching over your shoulder now.

The privacy-preserving flow

Now there is a way for you to use Chrome Sync and keep your privacy. In the prompt above, you should have clicked “Settings.” Which would have given you this page:

A message saying “Setup in progress” along with buttons “Cancel” and “Confirm.” Below it Chrome settings, featuring “Sync” and “Other services” sections.

Do you see what you need to do here before confirming? Anyone? Right, “Make searches and browsing better” option has already been turned on and needs to be switched off. But that isn’t the main issue.

“Encryption options” is what you need to look into. Don’t trust the claim that Chrome is encrypting your data, expand this section.

The selected option says “Encrypt synced passwords with your Google Account.” The other option is “Encrypt synced data with your own sync passphrase. This doesn't include payment methods and addresses from Google Pay.”

That default option sounds sorta nice, right? What it means however is: “Whatever encryption there might be, we get to see your data whenever we want it. But you trust us not to peek, right?” The correct answer is “No” by the way, as Google is certain to monetize your browsing history at the very least. And even if you trust Google to do no evil, do you also trust your government? Because often enough Google will hand over your data to local authorities.

The right way to use Chrome Sync is to set up a passphrase here. This will make sure that most of your data is safely encrypted (payment data being a notable exception), so that neither Google nor anyone else with access to Google servers can read it.

What does Google do with your data?

Deep in Chrome’s privacy policy is a section called How Chrome handles your synced information. That’s where you get some hints towards how your data is being used. In particular:

If you don’t use your Chrome data to personalize your Google experience outside of Chrome, Google will only use your Chrome data after it’s anonymized and aggregated with data from other users.

So Google will use the data for personalization. But even if you opt out of this personalization, they will still use your “anonymized and aggregated” data. As seen before, promises to anonymize and aggregate data cannot necessarily be trusted. Even if Google is serious about this, proper anonymization is difficult to achieve.

So how do you make sure that Google doesn’t use your data at all?

If you would like to use Google’s cloud to store and sync your Chrome data but you don’t want Google to access the data, you can encrypt your synced Chrome data with your own sync passphrase.

Yes, sync passphrase it is. This phrase is the closest thing I could find towards endorsing sync passphrases, hidden in a document that almost nobody reads.

This makes perfect sense of course. Google has no interest in helping you protect your data. They rather want you to share your data with them, so that Google can profit off it.

It could have been worse

Yes, it could have been worse. In fact, it was worse.

Chrome Sync used to enable immediately when you signed into Chrome, without any further action from you. It also used to upload your data unencrypted before you had a chance to change the settings. Besides, the sync passphrase would only result in passwords being encrypted and none of the other data. And there used to be a warning scaring people away from setting a sync passphrase because it wouldn’t allow Google to display your passwords online. And the encryption was horribly misimplemented.

If you look at it this way, there have been considerable improvements to Chrome Sync over the past five years. But it still isn’t resembling a service meant to respect users’ privacy. That’s by design of course: Google really doesn’t want you to use effective protection for your data. That data is their profits.

Comparison to Firefox Sync

I suspect that people skimming my previous article on the topic took away from it something like “both Chrome Sync and Firefox Sync have issues, but Chrome fixed theirs.” Nothing could be further from the truth.

While Chrome did improve, they are still nowhere close to where Firefox Sync started off. Thing is: Firefox Sync was built with privacy in mind. It was encrypting all data from the very start, by default. Mozilla’s goal was never monetizing this data.

Google on the other hand built a sync service that allowed them to collect all of users’ data, with a tiny encryption shim on top of it. Outside pressure seems to have forced them to make Chrome Sync encryption actually usable. But they really don’t want you to use this, and their user interface design makes that very clear.

Given that, the Firefox Sync issue I pointed out is comparably minor. It isn’t great that five years weren’t enough to address it. This isn’t a reason to discourage people from using Firefox Sync however.

The Rust Programming Language BlogChange in Guidance on Committing Lockfiles

For years, the Cargo team has encouraged Rust developers to commit their Cargo.lock file for packages with binaries but not libraries. We now recommend people do what is best for their project. To help people make a decision, we do include some considerations and suggest committing Cargo.lock as a starting point in their decision making. To align with that starting point, cargo new will no longer ignore Cargo.lock for libraries as of nightly-2023-08-24. Regardless of what decision projects make, we encourage regular testing against their latest dependencies.

Background

The old guidelines ensured libraries tested their latest dependencies, which helped us keep quality high within Rust's package ecosystem by ensuring issues, especially backwards compatibility issues, were quickly found and addressed. While this extra testing was not exhaustive, we believe it helped foster a culture of quality in this nascent ecosystem.

This hasn't been without its downsides though. This has removed an important piece of history from code bases, making bisecting to find the root cause of a bug harder for maintainers. For contributors, especially newer ones, this is another potential source of confusion and frustration from an unreliable CI whenever a dependency is yanked or a new release contains a bug.

Why the change

A lot has changed for Rust since the guideline was written. Rust has shifted from being a language for early adopters to being more mainstream, and we need to be mindful of the on-boarding experience of these new-to-Rust developers. Also, with this wider adoption, it isn't always practical to assume everyone is using the latest Rust release, and the community has been working through how to manage support for minimum-supported Rust versions (MSRV). Part of this is maintaining an instance of your dependency tree that can build with your MSRV. A lockfile is an appropriate way to pin versions for your project so you can validate your MSRV, but we found people were instead putting upper bounds on their version requirements due to the strength of our prior guideline, despite that likely being a worse solution.

The wider software development ecosystem has also changed a lot in the intervening time. CI has become easier to set up and maintain. We also have products like Dependabot and Renovate. This has opened up options besides having version control ignore Cargo.lock to test newer dependencies. Developers could have a scheduled job that first runs cargo update. They could also have bots regularly update their Cargo.lock in PRs, ensuring they pass CI before being merged.

Since there isn't a universal answer to these situations, we felt it was best to leave the choice to developers and give them the information they need to make a decision. For feedback on this policy change, see rust-lang/cargo#8728. You can also reach out to the Cargo team more generally on Zulip.

The Rust Programming Language BlogAnnouncing Rust 1.72.0

The Rust team is happy to announce a new version of Rust, 1.72.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.72.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.72.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.72.0 stable

Rust reports potentially useful cfg-disabled items in errors

You can conditionally enable Rust code using cfg, such as to provide certain functions only with certain crate features, or only on particular platforms. Previously, items disabled in this way would be effectively invisible to the compiler. Now, though, the compiler will remember the name and cfg conditions of those items, so it can report (for example) if a function you tried to call is unavailable because you need to enable a crate feature.

   Compiling my-project v0.1.0 (/tmp/my-project)
error[E0432]: unresolved import `rustix::io_uring`
   --> src/main.rs:1:5
    |
1   | use rustix::io_uring;
    |     ^^^^^^^^^^^^^^^^ no `io_uring` in the root
    |
note: found an item that was configured out
   --> /home/username/.cargo/registry/src/index.crates.io-6f17d22bba15001f/rustix-0.38.8/src/lib.rs:213:9
    |
213 | pub mod io_uring;
    |         ^^^^^^^^
    = note: the item is gated behind the `io_uring` feature

For more information about this error, try `rustc --explain E0432`.
error: could not compile `my-project` (bin "my-project") due to previous error

Const evaluation time is now unlimited

To prevent user-provided const evaluation from getting into a compile-time infinite loop or otherwise taking unbounded time at compile time, Rust previously limited the maximum number of statements run as part of any given constant evaluation. However, especially creative Rust code could hit these limits and produce a compiler error. Worse, whether code hit the limit could vary wildly based on libraries invoked by the user; if a library you invoked split a statement into two within one of its functions, your code could then fail to compile.

Now, you can do an unlimited amount of const evaluation at compile time. To avoid having long compilations without feedback, the compiler will always emit a message after your compile-time code has been running for a while, and repeat that message after a period that doubles each time. By default, the compiler will also emit a deny-by-default lint (const_eval_long_running) after a large number of steps to catch infinite loops, but you can allow(const_eval_long_running) to permit especially long const evaluation.

Uplifted lints from Clippy

Several lints from Clippy have been pulled into rustc:

  • clippy::undropped_manually_drops to undropped_manually_drops (deny)

    • ManuallyDrop does not drop its inner value, so calling std::mem::drop on it does nothing. Instead, the lint will suggest ManuallyDrop::into_inner first, or you may use the unsafe ManuallyDrop::drop to run the destructor in-place. This lint is denied by default.
  • clippy::invalid_utf8_in_unchecked to invalid_from_utf8_unchecked (deny) and invalid_from_utf8 (warn)

    • The first checks for calls to std::str::from_utf8_unchecked and std::str::from_utf8_unchecked_mut with an invalid UTF-8 literal, which violates their safety pre-conditions, resulting in undefined behavior. This lint is denied by default.
    • The second checks for calls to std::str::from_utf8 and std::str::from_utf8_mut with an invalid UTF-8 literal, which will always return an error. This lint is a warning by default.
  • clippy::cmp_nan to invalid_nan_comparisons (warn)

    • This checks for comparisons with f32::NAN or f64::NAN as one of the operands. NaN does not compare meaningfully to anything – not even itself – so those comparisons are always false. This lint is a warning by default, and will suggest calling the is_nan() method instead.
  • clippy::cast_ref_to_mut to invalid_reference_casting (allow)

    • This checks for casts of &T to &mut T without using interior mutability, which is immediate undefined behavior, even if the reference is unused. This lint is currently allowed by default due to potential false positives, but it is planned to be denied by default in 1.73 after implementation improvements.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Future Windows compatibility

In a future release we're planning to increase the minimum supported Windows version to 10. The accepted proposal in compiler MCP 651 is that Rust 1.75 will be the last to officially support Windows 7, 8, and 8.1. When Rust 1.76 is released in February 2024, only Windows 10 and later will be supported as tier-1 targets. This change will apply both as a host compiler and as a compilation target.

Contributors to 1.72.0

Many people came together to create Rust 1.72.0. We couldn't have done it without all of you. Thanks!

Mozilla Thunderbird – Thunderbird for Android / K-9 Mail: July 2023 Progress Report

a dark background with thunderbird and k-9 mail logos centered, with the text "Thunderbird for Android, July progress report"

The day I write this, it’s very hot outside. Too hot to think of a good introduction to this blog post that also includes a link to the previous month’s progress report… Well, I guess this will have to do. I’m off to get some ice cream 🍨😎

Please enjoy this brief report of our development activities in July 2023.

Improved account setup

Since Wolf joined in February of this year, he has spent a considerable amount of time on many of the individual pieces that make up the new and improved account setup user interface. July was the month when things started coming together. For the first time we were able to test the whole flow and not just individual parts.

Things were looking good. But a few small issues kept us busy and prevented us from releasing a beta version containing the new account setup.

Material 3 experiments

We’ve done some experiments to get a better idea of how much work it will be to switch the app to Material 3, the latest version of Google’s open-source design system. We’re now cautiously optimistic. And so the current plan is to switch to Material 3 before renaming the app from K-9 Mail to Thunderbird.

Community contributions

In July we merged the following pull requests by external contributors:

Security audit report

After a few busy days surrounding the Thunderbird Supernova release, we finally managed to publish the report of the security audit organized by OSTIF and performed by 7ASecurity. We’re happy to report that no high-risk vulnerabilities were found. The security audit did uncover a handful of low-to-medium risk vulnerabilities.

To learn more about this, read our blog post K-9 Mail Collaborates With OSTIF, 7ASecurity On Security Audit.

Thank you to everyone involved in making this happen!

The post Thunderbird for Android / K-9 Mail: July 2023 Progress Report appeared first on The Thunderbird Blog.

Martin Thompson – Fraud, Abuse, Fingerprinting, Privacy, and Openness

Fraud and abuse online are pretty serious problems. How sites manage fraud is something of a mystery to most people. Indeed, as this post will show, that’s deliberate.

This post provides an outline of how fraud management operates. It looks at the basic techniques that are used and the challenges involved. In doing so, it explores the tension between fraud management and privacy.

Hopefully this post helps you understand why fingerprinting is bad for privacy; why you should nevertheless be happy that your bank is fingerprinting you; and, why efforts to replace fingerprinting are unlikely to change anything.

Fraud and abuse are a consequence of the way the Web works. Recognizing that these are a part of the cost of a Web that values privacy, openness, and equity is hard, but I can’t see a better option.

What sorts of fraud and abuse?

This post concentrates on the conduct of fraud or abuse using online services. Web-based services mostly, but mobile apps and similar services have similar concerns.

The sorts of fraud and abuse of most interest are those that operate at scale. One-off theft needs different treatment. Click fraud in advertising is a good example. Click fraud is where a site seeks to convince advertisers that ads have been shown to people in order to get more money. Click fraud is a constant companion to the advertising industry, and one that is unlikely to ever go away. Managing click fraud is an important part of participating in advertising, and something that affects everyone that uses online services.

Outside of advertising, fraud management techniques[1] are also used to manage the risk of fake accounts that are created for fraud or abuse purposes. Online stores and banks also use fraud management as part of an overall strategy for managing the risk of payment fraud or theft.

This is a very high-level overview, so most of this document applies equally to lots of different fraud and abuse scenarios. Obviously, each situation will be different, but I’m glossing over the details.

Understanding online fraud and abuse

Let’s say that you have a site that makes some information or service available. This site will attract clients, which we can split into two basic groups: clients that the site wants to serve, and clients that the site does not want to serve.

The attacker in this model seeks to access the service for some reason. In order to do so, the attacker attempts to convince sites that they are a real client.

For click fraud, a site might seek to convince its advertising partners that ads were shown to real people. The goal is to convince the advertiser to pay the fraudulent site more money. Sophisticated click fraud can also involve faking clicks or ad conversions in an effort to falsely convince the advertiser that the ads on the fraudulent site are more useful because they appear to be responsible for sales.

An adversary rarely gains much by performing a single instance of fraud. They will often seek to automate fraud, accessing the service as many times as possible. Fraud at scale can be very damaging, but it also means that it is easier to detect.

Automation allows fraud to be conducted at scale, but it also creates telltales: signals that allow an attack to be recognized.

Detection

Detection is the first stage for anyone looking to defeat fraud or abuse. To do that, site operators will look for anomalies of any sort. Maybe the attack will appear as an increase in incoming requests or a repetitive pattern of accesses.

Repetition might be a key to detecting fraud. An attacker might try to have their attacks blend in with real humans that are also accessing the system. An attacker’s ability to mimic human behaviour is usually limited, as they often hope to execute many fraudulent transactions. Attackers have to balance the risk that they are detected against the desire to complete multiple actions before they are detected.

Detecting fraud and abuse relies on a range of techniques. Anti-fraud people generally keep details of their methods secret, but we know that they use both automated and manual techniques.

  • Automated systems generally use machine learning that is trained on the details of past attacks. This scales really well and allows for repeat attacks to be detected quickly and efficiently.

  • Human experts can be better at recognizing new forms of attack. Attacks that are detected by automated systems can be confirmed by humans before deploying interventions.

Of course, attackers are also constantly trying to adapt their techniques to evade detection. Detecting an attack can take time.

Identification/classification

It is not enough to know that fraud is occurring. Once recognized, the pattern of fraudulent behaviour needs to be classified, so that future attacks can be recognized.

As noted, most fraud is automated in some way. Even where humans are involved, operating at any significant scale means they will be working to a script. Whether executed by machines or humans, the script will be designed to evade existing defenses. This means that attacks need to be carefully scripted, which can produce patterns. If a pattern can be found, attempts at fraud can be distinguished from genuine attempts by people to visit the site.

Patterns in abuse manifest in one of two ways:

  1. Common software. If attackers only use a specific piece of hardware or software, then any common characteristics might be revealed by fingerprinting. Even if the attacker varies some characteristics (like the User-Agent header or similar obvious things), other characteristics might stay the same, which can be used to recognize the attack. This is why browser fingerprinting is a valuable tool for managing fraud.

  2. Common practices. Software or scripted interaction can produce fixed patterns of behaviour that can be used to recognize an attempted attack. Clues might exist in the timing of actions or the consistency of interaction patterns. For instance, automated fraud might not exhibit the sorts of variance in mouse movements that a diverse set of people could.

The script that is followed by an attacker might try to vary some of these things. However, unless the attack script is able to simulate the sorts of diversity that real people do – which is unlikely – any resulting common patterns can be used to identify likely attempts at fraud.

Once a pattern is established, future attempts can be recognized. Also, if enough information has been recorded from past interactions, previously undetected fraud might now be identifiable.

Learned patterns can sometimes be used on multiple sites. If an attack is detected and thwarted on one site, similar attacks on other sites might be easier to identify. Fraud and abuse detection services that operate across many sites can therefore be very effective at detecting and mitigating attacks on multiple sites.

Fingerprinting and privacy

Browser makers generally regard browser fingerprinting as an attack on user privacy. The fingerprint of a browser is consistent across sites in ways that are hard to control. Browsers can have unique or nearly-unique fingerprints, which means that people can be effectively identified and tracked using the fingerprint of their browser, against their wishes or expectations.

Fingerprinting used this way undermines controls that browsers use to maintain contextual integrity. Circumventing these controls is unfortunately widespread. Services exist that offer “cookie-less tracking” capabilities, which can include linking cross-site activity using browser fingerprinting or “primary identifiers”[4].

Fingerprinting options in browsers continue to evolve in two directions:

  • New browser features, especially those with personalization or hardware interactions, can expand the ways in which browsers might become more identifiable through fingerprinting.

  • Browser privacy engineers are constantly reducing the ways in which browsers can be fingerprinted.

Though these efforts often pull in different directions, the general trend is toward reduced effectiveness of fingerprinting. Browsers are gradually becoming more homogenous in their observable behaviour despite the introduction of new capabilities. New features that might be used for fingerprinting tend not to be accessible without active user intervention, making them far less reliable as a means of identification. Existing rich sources of fingerprinting information – like plugin or font enumeration – will eventually be far more limited.

Reductions in the effectiveness of fingerprinting are unlikely to ever result in every browser looking identical. More homogenous browser fingerprints make the set of people who share a fingerprint larger; in turn, this only reduces the odds that a site can successfully reidentify someone using a fingerprint.

Reduced effectiveness of fingerprinting might limit the ability of sites to distinguish between real and abusive activity. This places stronger reliance on other signals, like behavioural cues. It might also mean that additional checks are needed to discriminate between suspicious and wanted activity, though this comes with its own hazards.

Even when fingerprinting is less useful, fingerprints can still help in managing fraud. Though many users might share the same fingerprint, additional scrutiny can be reserved for those browsers that share a fingerprint with the attacker.

Mitigation strategies

Once a particular instance of fraud is detected and a pattern has been established, it becomes possible to mitigate the effects of the attack. This can involve some difficult choices.

With the difficulty in detecting fraud, sites often tolerate extensive fraud before they are able to start implementing mitigation. Classification takes time and can be error prone. Furthermore, sites don’t want to annoy their customers by falsely accusing them of fraud.

Stringing attackers along

Tolerance of apparent abuse can have other positive effects. A change in how a site reacts to attempted abuse might tip an attacker off that their method is no longer viable. To that end, a site might allow abuse to continue, without any obvious reaction[5].

A site that reacts to fraud in obvious ways will also reveal when fraud has escaped detection. This can be worse, as it allows an attacker to learn when their attack was successful. Tolerating fraud attempts deprives the attacker of immediate feedback.

Delaying the obvious effects of mitigation allows abuse detection to remain effective for longer. Similarly, providing feedback about abuse in the aggregate might prevent an attacker from learning when specific tactics were successful. Attackers that receive less feedback or late feedback cannot adapt as quickly and so are able to evade detection for a smaller proportion of the overall time.

Addressing past abuse

A delayed response depends on being able to somehow negate or mitigate the effect of fraud from the past. This is also helpful where instances of fraud or abuse previously escaped detection.

For something like click fraud, the effect of fraud is often payment, which is not immediate. The cost of fraud can be effectively managed if it can be detected before payment comes due. The advertiser can refuse to pay for fraudulent ad placements and disqualify any conversions that are attributed to them. The same applies to credit card fraud, where settlement of payments can be delayed to allow time for fraudulent patterns to be detected.

It is not always possible to retroactively mitigate fraud or delay its effect. Sites can instead require additional checks or delays. These might not deprive an attacker of feedback on whether their evasive methods were successful, but changes in response could thwart or slow attacks.

Security by obscurity

As someone who works in other areas of security, this overall approach to managing fraud seems very … brittle.

Kerckhoffs’s principle – which guides the design of most security systems – says that you should design systems that depend only on protecting the key, not on keeping the details of how the system is built secret. A system design that is public knowledge can be analysed and improved upon by many. Keeping the details of the system secret, known as security by obscurity, is considered bad form and usually indicative of a weak system design.

Here, security assurances rely very much on security by obscurity. Detecting fraud depends on spotting patterns, then building ways of recognizing those patterns. An attacker that can avoid detection might be able to conduct fraud with impunity. That is, the system of defense relies on techniques so fragile that knowledge of their details would render them ineffectual.

Is there hope for new tools?

There are some technologies that offer some hope of helping manage fraud and abuse risk. However, my expectation is that these will only support existing methods.

Any improvements these might provide are unlikely to result in changes in behaviour. Anything that helps attackers avoid detection will be exploited to the maximum extent possible; anything that helps defenders detect fraud or abuse will just be used to supplement existing information sources.

Privacy Pass

Privacy Pass offers a way for sites to exchange information about the trustworthiness of their visitors. If one site decides that someone is trustworthy, it can give the browser an anonymous token. Other sites can be told that someone is trustworthy by passing them this token.

Ostensibly, Privacy Pass tokens cannot carry information: only the presence (or absence) of a token carries any information. A browser might be told that the token means “trustworthy”, but it could mean anything[6]. That means that the token issuer needs to be trusted.

How a site determines whether to provide a token also has consequences. Take Apple’s Private Access Tokens, which are supposed to mean that the browser is trustworthy, but they really carry a cryptographically-backed assertion that the holder has an Apple device. For sites looking to find a lucrative advertising audience, this provides a strong indicator that a visitor is rich enough to be able to afford Apple hardware. That is bankable information.

This is an example of how the method used to decide whether to provide a token can leak. In order to protect this information, a decent proportion of tokens need to use alternative methods.

We also need to ensure that sites do not become overly reliant on tokens. Otherwise, people who are unable to produce a token could find themselves unable to access services. People routinely fail to convince computers of their status as a human for many reasons[7]. Clients might be able to withhold some proportion of tokens so that sites might learn not to become dependent on them.

If these shortcomings are addressed somehow, it is possible that Privacy Pass could help sites detect or identify fraud or abuse. However, implementing the safeguards necessary to protect privacy and equitable access is not easy. It might not even be worth it.

Questionable options

Google have proposed an extension to Privacy Pass that carries secret information. The goal here is to allow sites to rely on an assessment of trust that is made by another site, but not reveal the decision to the client. All clients would be expected to retrieve a token and proffer one in order to access the service. Suspicious clients would be given a token that secretly identifies them as such.

This would avoid revealing to clients that they have been identified as potentially fraudulent, but it comes with two problems:

  1. Any determination would only be based on information available to the site that provides the token. The marking would be less reliable as a result, being based only on the client identity or browser fingerprint[8]. Consequently, any such marking would not be directly usable and it would need to be combined with other indicators, like how the client behaves.

  2. Clients that might be secretly classified as dishonest have far less incentive to carry a token that might label them as such.

The secret bit also carries information, which – again – could mean anything. Anything like this would need safeguards against privacy abuse by token providers.

Google have also proposed Web Environment Integrity, which seeks to suppress diversity of client software. Eric Rescorla has a good explanation of how this sort of approach is problematic. Without proper safeguards, the same concerns apply to Apple’s Private Access Tokens.

The key insight for me is that all of these technologies risk placing restrictions on how people access the Web. Some more than others. But openness is worth protecting, even if it does make some things harder. Fraud and abuse management are in some ways a product of that openness, but so is user empowerment, equity of access, and privacy.

Summary

It seems unlikely that anything is going to change. Those who want to commit fraud will continue to try to evade detection and those who are trying to stop them will try increasingly invasive methods, including fingerprinting.

Fraud and abuse are something that many sites contend with. There are no easy or assured methods for managing fraud or abuse risk. Defenders look for patterns, both in client characteristics and their behaviour. Fingerprinting browsers this way can have poor privacy consequences. Concealing how attacks are classified is the only way to ensure that attackers do not adapt their methods to avoid protections. New methods for classification might help, but they create new challenges that will need to be managed.

Fraud is here to stay. Fingerprinting too. I wish that I had a better story to tell, but this is one of the prices we pay for an open Web.


  1. I’m not comfortable using the more widely used “anti-fraud” term here. It sounds too definite, as if to imply that fraud can be prevented perfectly. Fraud and abuse can be managed, but not so absolutely. ↩︎

  2. This story has been widely misreported, see (Schneier, The Register, and Slashdot). These articles cite a recent study from UC Irvine, which cites a study from 2014 that applies to a largely defunct CAPTCHA method. CAPTCHA fans might hold out some hope, though maybe the rest of us would be happy to never see another inane test. ↩︎

  3. There is a whole industry around the scalping of limited run sneakers, to the point that there are specialist cloud services that boast extra low latency access to the sites for major sneaker vendors. ↩︎

  4. Think email addresses or phone numbers. These sites like to pretend that these practices are privacy respecting, but collecting primary identifiers often involves deceptive practices. For example, making access to a service conditional on providing a phone number. ↩︎

  5. It is widely believed that, during the Second World War, the British chose not to act on intelligence gained from their breaking of Enigma codes. No doubt the Admiralty did exercise discretion in how it used the information it gained, but the famous case of the bombing of Coventry in November 1940 was not one of these instances. ↩︎

  6. It could be bad if tokens had something to say about the colour of a person’s skin or their gender identity. There are more bad uses than good ones for these tokens. ↩︎

  7. Finally, a good reason to cite the study mentioned previously. ↩︎

  8. A fingerprint could be re-evaluated on the other site without using a token, so that isn’t much help. ↩︎

Mozilla Privacy Blog – Mozilla applauds CFPB for taking on the Data Broker Ecosystem

Earlier this week, the Consumer Financial Protection Bureau (CFPB) announced that it will develop rules to prevent “misuse and abuse” of people’s sensitive information by placing restrictions on data sharing by data brokers. This is a much-needed step to advance privacy, give people more control over their data, and shed light on a notoriously murky industry.

In Congressional testimony and advocacy, Mozilla has raised concerns about the opaque state of the data broker industry; it’s nearly impossible to fully understand the extent of data selling and sharing today. For this reason, our methodology for assigning *Privacy Not Included (*PNI) warning labels to the products and brands we research considers whether a company’s privacy policy indicates they can buy or sell personal information with data brokers. If we determine that they do, they earn a privacy ‘ding’. Many of the harms people experience online are the result of unchecked data collection – by data brokers and beyond. For example, sensitive health data collected in apps is typically unprotected, which can have serious consequences. Similarly, geolocation data for purchase poses a privacy and safety risk for all Americans, but especially so for the most marginalized members of society and those who face the threat of violence such as those fleeing domestic abuse.

At Mozilla, we’ve worked to push the industry in a better direction. We build privacy protections into the browser to prevent data collection and offer tools that make it harder for data brokers to create a detailed profile of consumers’ online activity. We work to improve the advertising ecosystem, where data brokers sell information to target consumers, and we help people navigate deceptive design practices that trick people into handing over their data in the first place.

That said, we can only do so much in our products and by holding companies to account. It’s undeniable that consumer data powers today’s internet. As CFPB Director Rohit Chopra noted at this week’s White House roundtable on data brokers, AI only increases the reliance on vast troves of data. The CFPB’s efforts are vital and we applaud the move – but it’s only a piece of the bigger picture.

To truly rein in the data broker industry, consumers also require a comprehensive legal framework. Federal privacy legislation, like last year’s American Data Privacy and Protection Act (ADPPA), is critical to ensuring that people have agency over their online data and can truly benefit from technologies that improve their lives, without conceding to the exploitation of their personal data.

We’re pleased to see the White House and CFPB tackle some of these hugely problematic practices, and are eager to delve into the CFPB’s proposed rules when they’re released. We’re hopeful these efforts can be a big step forward towards protecting sensitive consumer data, and the privacy of our most marginalized groups in society.

The post Mozilla applauds CFPB for taking on the Data Broker Ecosystem appeared first on Open Policy & Advocacy.

The Talospace Project – Firefox 116 on POWER

Firefox 116 is out with user interface improvements (notably a sidebar switcher), faster HTTP/2 uploads, and some initial UI rework for changes to how recently closed tabs are handled. On the developer side, the Audio Output Devices API lets you redirect browser audio output to a permitted device without having to change it globally, plus directional attributes for certain HTML form elements for those of you using a right-to-left language system.

This release needs new patches. First, for the long-playing bug 1775202, either put --disable-webrtc in your .mozconfig if you don't need WebRTC, or tweak third_party/libwebrtc/moz.build with this updated patch. The browser otherwise builds and works with an updated PGO-LTO patch and the .mozconfigs from Firefox 105.

Mozilla Privacy Blog – Mozilla Supports Updates to the Health Breach Notification Rule

[Read our full submission here.]

Privacy is in our DNA at Mozilla, from our privacy-enhancing products to our support for laws and regulations that enshrine privacy for all. In line with our foundational principle that individual privacy and security on the web should never be treated as optional, we have supported a range of US action on privacy, including bipartisan Federal privacy legislative proposals and the Federal Trade Commission’s (FTC’s) Commercial Surveillance and Data Security ANPR.

This week, we submitted a comment supporting the FTC’s Notice of Proposed Rulemaking for the Health Breach Notification Rule (HBNR). The purpose of the HBNR is to protect non-HIPAA health-related data, such as data from running apps and diet-tracking websites. It does so by requiring certain entities that share health-related information without consent, or experience a data breach, to notify individuals, the FTC, and sometimes the media of the breach of privacy.

The rule already applied to many health apps and websites, as demonstrated by a set of settlements from earlier this year, but the new proposed rule even more clearly delineates the responsibilities of companies running health-related apps or websites.

Mozilla has deep insight into the privacy practices of health-related apps, because our *Privacy Not Included research team recently did deep dives on the privacy policies and practices of mental health and reproductive health apps. They found dismal privacy practices for some of the most sensitive apps they studied. *PNI’s research demonstrates the dire need for this update to the HBNR, and allowed us to suggest two main ways in which the FTC can further strengthen its proposed rule:

  • The FTC should explicitly define consent (or “authorization”) in the context of the HBNR. We know that many companies will use deceptive designs to trick people into giving consent, for example, and the FTC should clearly state that deceptive consent flows do not count as consent.
  • We have been early supporters of browser-based privacy signals such as the Global Privacy Control, with proper enforcement; the HBNR should allow users to indicate their lack of consent using these signals. Browser based privacy signals are already recognized in a number of laws and regulations, and make privacy more consumer-friendly.

You can read our full comment here.

The post Mozilla Supports Updates to the Health Breach Notification Rule appeared first on Open Policy & Advocacy.

Mozilla Addons Blog – Prepare your Firefox desktop extension for the upcoming Android release

In the coming months Mozilla will launch support for an open ecosystem of extensions on Firefox for Android on addons.mozilla.org (AMO). We’ll announce a definite launch date in early September, but it’s safe to expect a roll-out before the year’s end. Here’s everything developers need to know to get their Firefox desktop extensions ready for Android usage and discoverability on AMO…

Firefox will become the only major Android browser to support an open extension ecosystem

For the past few years Firefox for Android officially supported a small subset of extensions while we focused our efforts on strengthening core Firefox for Android functionality and understanding the unique needs of mobile browser users. Today, Mozilla has built the infrastructure necessary to support an open extension ecosystem on Firefox for Android. We anticipate considerable user demand for more extensions on Firefox for Android, so why not start optimizing your desktop extension for mobile use right away?

“There is so much creative potential to unlock within the mobile browser space. Mozilla wants to provide developers with the best support we can so they’re equipped and empowered to build modern mobile WebExtensions.” — Giorgio Natili, Firefox Director of Engineering

To support our ecosystem of extension developers, we will create additional guides and resources, and host community events, to support your transition to a managed multi-process environment like Android.

Transition background scripts to non-persistent event pages

We recently introduced support for multi-process in Firefox for Android Nightly. This means extensions are no longer hosted in the same process as Firefox’s user interface. This is a key consideration since Android is prone to shutting down resource-intensive processes, such as extensions. To mitigate the risk of unexpected extension termination, we’ve introduced event pages: non-persistent background scripts that are more resilient to process termination. We therefore strongly encourage developers to transition from persistent background pages to non-persistent event pages to improve their extension’s stability. In summary, this means:

  • Update your manifest.json background key and add “persistent”: false.
  • Ensure listeners are registered synchronously at the top-level.
  • Record global state in the storage API, for example storage.session.
  • Change timers to alarms.
  • Switch from using extension.getBackgroundPage for calling a function from the background page, to extension messaging or runtime.getBackgroundPage.

Once you’re ready to test the mobile version of your extension, create a collection on AMO and test it on Firefox for Android Nightly (note you’ll need to make a one-time change to Nightly’s advanced settings; please see the “Enable general extension support setting in Nightly” section of this post for details). If you’d prefer to polish your extension before publishing it on AMO, you can also debug and run the extension with web-ext.

This is an exciting time for developers seeking to expand the reach of their desktop extensions into the mobile Android space. For community support and input, you’re welcome to join the conversation on Firefox Add-ons Discourse.

The post Prepare your Firefox desktop extension for the upcoming Android release appeared first on Mozilla Add-ons Community Blog.

Firefox Nightly – Unboxing More DevTools Powers, and Reusable Delights – These Weeks in Firefox: Issue 144

Highlights

  • Several Developer Tools updates landed thanks to the DevTools team and fellow contributors. Be sure to check out the DevTools section for more details.
    • Nicolas enabled the shape highlighter in devtools for offset-path property (bug)

Screenshot of the shape highlighter in devtools displaying a circular motion path, as a result of the `offset-path` property

    • Logan Rosen fixed a color contrast issue in the inspector image preview (bug)

A before and after comparison of the devtools inspector image preview. In the before image, the image dimensions label has poor colour contrast and is difficult to read. In the after image, its colour contrast is improved and is easier to read.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Ganna
  • Gregory Pappas [:gregp]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • As per the deprecation plan described in Bug 1827910 comment 1 – in Firefox 118 – the browser_style manifest.json option is not supported anymore for Manifest Version 3 extensions (Bug 1830711)
  • In Firefox 117, we introduced new UI controls for the Quarantined Domains feature in the extensions panel and the extension action context menu (Bug 1838234)
WebExtension APIs
  • In Firefox 117, Manifest Version 2 extensions with a granted activeTab permission will be able to use tabs.executeScript to inject content scripts into dynamically created iframes that are same origin with the top level context (Bug 1841483)
Addon Manager & about:addons
  • Applied a few toolkit-level changes (along with a few tweaks at the browser level) to the AddonManager internals in order to support the mozAddonManager-based install flow on GeckoView (Bug 1822640, Bug 1845745, Bug 1845749, Bug 1845820)

Developer Tools

DevTools
  • Contributors
    • Gregory removed the unused devtools.storage.test.forceLegacyActors preference (bug)
  • Nicolas added preview support for HighlightRegistry objects (used in Custom Highlight API) in Console/Debugger (bug)

A before and after comparison of preview support for `HighlightRegistry` objects in the devtools console. In the before image, no entry details are displayed in the console for a `HighlightRegistry` object. In the after image, entry details such as keys and values can be seen.

  • Hubert improved Debugger preview tooltip (bug)

A before and after comparison of the devtools debugger preview tooltip. In the before image, the tooltip is not very informative and contains minimal `prototype` details. In the after image, there is an additional property `Date` displayed in the tooltip.

  • Hubert migrated the whole Debugger codebase away from JSX (bug)
  • Hubert fixed an issue in Netmonitor where resend request was blocked by Opaque Request Blocking (bug)
WebDriver BiDi
  • Sasha implemented the browsingContext.activate command which will force a given browsing context to become visible by moving its tab and window to the foreground (bug)
  • Sasha added the background argument to browsingContext.create which allows users to decide if new tabs and windows should be in the background (bug)
  • Sasha also fixed a bug on Android to make sure the correct tab was selected when using background: true (bug)
  • Henrik added a type field to events and responses coming from WebDriver BiDi so that clients can easily process them (bug)
  • Julian updated our vendored Puppeteer to version 20.9.0 with many new tests passing for the BiDi implementation: 385 passing tests compared to only 125 before the update (bug)

ESMification status

Lint, Docs and Workflow

Migration Improvements

  • An experiment is underway on the release channel that allows people to migrate some Chrome extensions into Firefox! We’re running this experiment with a small population for about a month to make sure that extensions migration is behaving properly out in the wild before we consider rolling this out more widely.
  • mconley
  • gregp got rid of some migration code that only works for unsupported versions of Windows

Picture-in-Picture

Search and Navigation

Storybook/Reusable Components

Firefox Nightly – A View to a Better, Faster Web – These Weeks in Firefox: Issue 143

Highlights

  • Mozilla published its standards position on the Web Environment Integrity API proposal draft put forward by the Google Chrome team.
  • A new version of Firefox View is in Nightly behind the browser.tabs.firefox-view-next pref; it is still a Work-in-Progress but it’s undergoing QA testing now
    • The new version includes sections to show your recent browsing, your currently open and recently closed tabs, tabs from other devices, and browsing history.
  • The Necko team has landed some HTTP/2 upload speed improvements, and we’re seeing results with significant improvements in the 50th percentile and higher on Firefox 115 (the results are from Beta)! Details here.
  • We’re now apparently beating Chrome on the SunSpider JavaScript benchmark!
  • Credit card autofill support has now been enabled in more regions in Nightly
  • Nicolas tweaked the appearance of nested rules in the inspector to better match the authored text (bug)

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Thanks to Tim Giles for fixing the regression tracked by Bug 1840953 – Link in add-on recommendation heading has incorrect text
    • NOTE: the fix has been applied on the moz-support-link custom element side, which will not be overriding the text content of the element with its default “Learn more” string when the element has a data-l10n-name attribute set.
  • Thanks to a fix contributed by Gregor Sunta, about:addons cards will now show a rating URL when there is a “review URL” associated with an add-on, even if the add-on doesn’t have any ratings yet – Bug 1841619
WebExtensions Framework
  • As part of work to introduce explicit extension process crash handling: we applied changes to the background page state tracking and transitions to make sure the extension process will be respawned automatically for extensions using an event page as their background script when one of their persisted listeners has been triggered after the extension process crash – Bug 1762225.
  • As part of fixing a regression related to the encrypted IndexedDB storage enabled in Firefox 115 for web pages running in private browsing mode: we re-introduced an explicit exception raised when an extension page running in private browsing mode tries to open an IndexedDB database so that the extensions can then fallback to browser.storage.local as they were doing before Firefox 115 – Bug 1841806.
WebExtension APIs
  • Fixed storage.onChanged events wrongly emitted to extension content scripts on changes to the storage.session data – Bug 1842009.

Developer Tools

DevTools
  • Contributors
    • Vinny Dieh added keyboard support for resizing the area drawn with the measuring tool and updated the documentation (bug)
    • Gregory Pappas removed the preference we had to enable DOM Breakpoints, since they’ve been enabled by default for a while (bug)
    • Felix Lefebvre added a second argument to the $ and $$ console helpers, which is the container element in which the query will happen (bug)
  • Nicolas made the console ignore console.clear() calls when the “Enable persistent logs” setting is enabled (bug)
  • Hubert added HTTP proxy information in the Netmonitor headers panel (bug)
  • Hubert fixed an issue where preview popups were not displayed for module scripts (bug), which got us a nice 5 to 10% improvement for debugger opening (alert)
  • Alex is still refactoring the Debugger codebase to make it more stable and easier to work with (bug, bug, bug, bug)
  • Nicolas fixed issues in the rule view when using CSS variables (bug, bug)
  • Nicolas fixed erroneous “overridden property” information in the rules view and the computed panel for rules that were using @layer and !important (bug, bug)
  • Nicolas added inactive CSS indication on highlight pseudo-elements (e.g. ::selection) for unsupported properties (bug)
WebDriver BiDi
  • Henrik implemented the browsingContext.setViewport command, which allows changing the dimensions of the viewport (bug)
  • Julian added the serialization of headers and cookies to match the latest specification changes, which makes it easier to support non-UTF8 values (bug)
  • Julian updated various events and commands to return a consistent “navigation id” if they are related to a specific navigation. This way, users can initiate a navigation and easily find the corresponding browsing context and network events. (bug, bug, bug and bug)
  • Sasha implemented the browsingContext.fragmentNavigated event, which monitors same-document navigation such as hash changes (bug)
  • Sasha added support for the clip argument to browsingContext.captureScreenshot, which can be used to restrict the screenshot to a specific area or to a given Element (bug)

ESMification status

  • ESMified status:
    • browser: 85%
    • toolkit: 99%
    • Total:  95.13% (up from 94.64%)
  • #esmification on Matrix

Information Management

  • Persistence of recently closed tabs in the new Firefox View has landed in Nightly (behind the browser.sessionstore.persist_closed_tabs_between_sessions pref)

Lint, Docs and Workflow

  • Newtab’s ESLint configuration has been simplified & improved.
    • Previously some files were incorrectly identified as being in the wrong environments.
    • The configuration was structured for being a separate repository (which it originally was), now it uses the m-c configuration in the normal way.

Migration Improvements

Search and Navigation

  • Standard8 did an optimization to catch when the response is invalid JSON for search suggestion URLs – Bug 1016808 – Use XMLHttpRequest’s responseType = “json” for search suggestions
  • Marc fixed a bug where entering characters with accent marks on macOS doesn’t remove the autofill selection. Bug 1512013
  • Daisuke fixed a bug where the cursor jumps to the beginning of the address bar after tab swap.  Bug 1512013
  • Daisuke fixed a bug where, after copying some text, middle-clicking a bookmark to open it in a new tab resulted in the copied text being pasted into the address bar instead of the bookmark simply opening in a new tab. Bug 1838743
  • Marco fixed a massive performance regression in the Nightly address bar, where a noticeable delay occurred after entering a character. Bug 1842381
  • Gijs helped fix a regression where the search shortcuts table in about:settings was missing headers. Bug 1842547
  • Marco refactored some promises and reduced flicker in the address bar. Bug 1843074, Bug 1843100
  • Daisuke has been doing some work with Pocket suggestions by appending UTM parameters to Pocket collection urls. Bug 1843186
  • Dale fixed a bug where rich suggestions disappeared when there was a duplicated heuristic suggestion, because we were deduplicating the rich suggestion against heuristic results. We now avoid deduplicating rich suggestions against non-rich suggestions. Bug 1843386

Mozilla Thunderbird – Make Thunderbird Yours: How To Get The Thunderbird 115 “Supernova” Look

Thunderbird 115 screenshot, showing a vertical layout with folder pane listing account folders and tags, as well as panels for message list, email messages, and a "Today" area for upcoming calendar events.

Thunderbird 115 “Supernova” ships with brand new layout options to give you a more beautiful and more productive email experience. But those new options aren’t on by default (for now), out of respect to those who have grown comfortable with Thunderbird’s Classic View throughout the years. Fortunately, getting that shiny new “Supernova” look is accomplished in just a few seconds. In this short guide, we’ll show you how to do it!

Step 1: Turn On Vertical View

First, click on the App Menu (≡) and choose “View”, followed by “Layout,” and then select “Vertical View.” This will rearrange the Folder pane, Message List pane, and Message pane to be displayed side-by-side.

Step 2: Turn On Cards View

Next, let’s switch on “Cards” view. This new way of displaying your message list is simpler and more compact, to help reduce cognitive burden when you view the list. And it’s easy to activate.

Look immediately to the right of the “Quick Filter” area for the “Message List Display Options” icon. Click it, and select “Cards View” as seen in the GIF above.

Cards View is still in active development. More features such as a message preview line and sender avatars will be added in the future.

💡 TIP #1: Are you having trouble finding that menu? The area of Thunderbird where you activate Cards View is called the Message List Header. Some people choose to hide this section to reclaim a bit of vertical space. It’s easy to get it back: Just return to the App menu (≡), then select View ➡ Layout, and make sure that “Message List Header” is checked ✓

💡 TIP #2: Cards View is also available when using Classic or Wide layouts.

Step 3: Turn On “Tags” Folder Mode

If Tags are an important part of your workflow, now it’s easier than ever to access them.

In the Folder Pane Options menu (⋯) next to the “New Message” button, click “Folder Modes” and then choose “Tags.” That’s it! But if you want to continue customizing Thunderbird 115, you can also use this menu to hide Local Folders.

💡 Tip: Want to move your Tags up or down in the Folder Pane? Click on the 3 vertical dots menu (⋮) next to Tags, and simply choose “Move Up” or “Move Down” as seen in the above GIF.

Step 4: Customize The Message Header

And finally, we arrive at the Message Header Settings. That’s the section at the top of your email showing all the information such as sender’s name, contact photo (which pulls from the Address Book), subject, any associated tags, and more. Configuring this to fit your preferences is easy. Just click the “More” button with the downward facing arrow, then select “Customize” and make it yours!


We hope this helps you enjoy an even better Thunderbird experience. Thanks for being part of the Thunderbird family, and make sure to check back later for more customization guides and usage tips.

The post Make Thunderbird Yours: How To Get The Thunderbird 115 “Supernova” Look appeared first on The Thunderbird Blog.

Hacks.Mozilla.Org – Autogenerating Rust-JS bindings with UniFFI

I work on the Firefox sync team at Mozilla. Four years ago, we wrote a blog post describing our strategy to ship cross-platform Rust components for syncing and storage on all our platforms. The vision was to consolidate the separate implementations of features like history, logins, and syncing that existed on Firefox Desktop, Android, and iOS.

We would replace those implementations with a core written in Rust and a set of hand-written foreign language wrappers for each platform: JavaScript for Desktop, Kotlin for Android, and Swift for iOS.

Since then, we’ve learned some lessons and had to modify our strategy. It turns out that creating hand-written wrappers in multiple languages is a huge time-sink. The wrappers required a significant amount of time to write, but more importantly, they were responsible for many serious bugs.

These bugs were easy to miss, hard to debug, and often led to crashes. One of the largest benefits of Rust is memory safety, but these hand-written wrappers were negating much of that benefit.

To solve this problem, we developed UniFFI: a Rust library for auto-generating foreign language bindings. UniFFI allowed us to create wrappers quickly and safely, but there was one issue: UniFFI supported Kotlin and Swift, but not JavaScript, which powers the Firefox Desktop front-end. UniFFI helped us ship shared components for Firefox Android and iOS, but Desktop remained out of reach.

This changed with Firefox 105 when we added support for generating JavaScript bindings via UniFFI which enabled us to continue pushing forward on our single component vision. This project validated some core concepts that have been in UniFFI from the start but also required us to extend UniFFI in several ways. This blog post will walk through some of the issues that arose along the way and how we handled them.

Prior Art

This project has already been tried at least once before at Mozilla. The team was able to get some of the functionality supported, but some parts remained out of reach. One of the first things we realized was that the general approach the previous attempts took would probably not support the UniFFI features we were using in our components.

Does this mean the previous work was a failure? Absolutely not. The team left behind a wonderful trove of design documents, discussions, and code that we made sure to study and steal from. In particular, there was an ADR that discussed different approaches which we studied, as well as a working C++/WebIDL code that we repurposed for our project.

Calling the FFI functions

UniFFI bindings live on top of an FFI layer using the C ABI that we call “the scaffolding.” Then the user API is defined on top of the scaffolding layer, in the foreign language. This allows the user API to support features not directly expressible in C and also allows the generated API to feel idiomatic and natural. However, JavaScript complicates this picture because it doesn’t have support for calling C functions. Privileged code in Firefox can use the Mozilla js-ctypes library, but its use is deprecated.
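
To make the layering concrete, here is a heavily simplified, hypothetical sketch of the split; the function and symbol names are illustrative, not the code UniFFI actually generates:

// The component exposes an ordinary, idiomatic Rust function...
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

// ...and the scaffolding layer wraps it in a C-ABI symbol that any language
// able to call C functions can reach. The real scaffolding also handles
// argument serialization, error propagation, and panic safety.
#[no_mangle]
pub extern "C" fn uniffi_example_fn_add(a: u32, b: u32) -> u32 {
    add(a, b)
}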

The previous project solved this problem by using C++ to call into the scaffolding functions, then leveraged the Firefox WebIDL code generation tools to create the JavaScript API. That code generation tool is quite nice and allowed us to define the user API using a combination of WebIDL and C++ glue code. However, it was limited and did not support all UniFFI features.

Our team decided to use the same WebIDL code generation tool, but to generate just the scaffolding layer instead of the entire user API. Then we used JavaScript to define the user API on top of that, just like for other languages. We were fairly confident that the code generation tool would no longer be a limiting factor, since the scaffolding layer is designed to be minimalistic and expressible in C.

Async functions

The threading model for UniFFI interfaces is not very flexible: all function and method calls are blocking. It’s the caller’s responsibility to ensure that calls don’t block the wrong thread. Typically this means executing UniFFI calls in a thread pool.

The threading model for Firefox frontend JavaScript code is equally inflexible: you must never block the main thread. The main JavaScript thread is responsible for all UI updates and blocking it means an unresponsive browser. Furthermore, the only way to start another thread in JavaScript is using Web Workers, but those are not currently used by the frontend code.

To resolve the unstoppable force vs. immovable object situation we found ourselves in, we simply reversed the UniFFI model and made all calls asynchronous. This means that all functions return a promise rather than their return value directly.

The “all functions are async” model seems reasonable, at least for the first few projects we intend to use with UniFFI. However, not all functions really need to be async – some are quick enough that they aren’t blocking. Eventually, we plan to add a way for users to customize which functions are blocking and which are async. This will probably happen alongside some general work for async UniFFI, since we’ve found that async execution is an issue for many components using UniFFI.

How has it been working?

Since landing UniFFI support in Firefox 105, we’ve slowly started adding some UniFFI’ed Rust components to Firefox. In Firefox 108 we added the Rust remote tabs syncing engine, making it the first component shared by Firefox on all three of our platforms. The new tabs engine uses UniFFI to generate JS bindings on Desktop, Kotlin bindings on Android, and Swift bindings on iOS.

We’ve also been continuing to advance our shared component strategy on Mobile. Firefox iOS has historically lagged behind Android in terms of shared component adoption, but the Firefox iOS 116 release will use our shared sync manager component. This means that both mobile browsers will be using all of the shared components we’ve written so far.

We also use UniFFI to generate bindings for Glean, a Mozilla telemetry library, which was a bit of an unusual case. Glean doesn’t generate JS bindings; it only generates the scaffolding API, which ends up in the GeckoView library that powers Firefox Android. Firefox Android can then consume Glean via the generated Kotlin bindings which link to the scaffolding in GeckoView.

If you’re interested in this project or UniFFI in general, please join us in #uniffi on the Mozilla Matrix chat.

The post Autogenerating Rust-JS bindings with UniFFI appeared first on Mozilla Hacks - the Web developer blog.

Spidermonkey Development Blog – SpiderMonkey Newsletter (Firefox 116-117)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 116 and 117 Nightly release cycles.

🚀 Performance

We’re working on improving performance for popular web frameworks such as React. We continue to make good progress, as you can see on this Speedometer 2 graph:

  • We added a fast path for JSON.stringify.
  • We’ve added a fast path for allocating from the nursery in C++ code.
  • We added an optimization for Object.keys to take advantage of cached for-in iterators if available.
  • We’ve extended the compilation hints mechanism to also cover Warp compilations. This means we spend less time in Baseline JIT code.
  • We added a trampoline to optimize polymorphic calls.
  • We’ve disabled Spectre mitigations in Fission content processes (Nightly-only for now).
  • We also disabled the use of mprotect for JIT code because this added significant performance overhead even though bypasses have been commoditized and this didn’t significantly impact attackers.
  • We fixed a performance cliff with Warp-compiled generators.
  • We changed some GC pointers in IC stubs to be weak pointers to reclaim more memory and to discard dead stubs.
  • A contributor rewrote some of our date computations to be much faster by reducing the number of branches and floating point operations.

👷🏽‍♀️ New features

We shipped some new JS features! 🎉

We also implemented features that are still disabled by default:

We want to give a big shout-out 📣 to André Bargull (anba) who volunteered to implement many of these features. Especially Temporal is a very large feature: André landed more than a hundred patches for it!

⚡ Wasm GC

High-level programming languages currently need to bring their own GC if they want to run on WebAssembly. This can result in memory leaks because it cannot collect cycles that form with the browser. The Wasm GC proposal adds struct and array types to Wasm so these languages can use the browser’s GC instead.

  • We added support for ‘final’ types.
  • We optimized allocation of struct and array objects more.
  • We also implemented casting for the remaining Wasm types.

📚 Miscellaneous

  • The final changes landed to remove the last uses of the JSContext type for helper threads. This is a large architectural improvement that unblocks exciting future improvements.
  • We tracked down and worked around a likely Samsung CPU bug.
  • We removed some code for older Windows versions because Firefox 116 will only support Windows 10+.

The Rust Programming Language Blog – 2022 Annual Rust Survey Results

Hello, Rustaceans!

For the 6th year in a row, the Rust Project conducted a survey on the Rust programming language, with participation from project maintainers, contributors, and those generally interested in the future of Rust. This edition of the annual State of Rust Survey opened for submissions on December 5 and ran until December 22, 2022.

First, we'd like to thank you for your patience on these long delayed results. We hope to identify a more expedient and sustainable process going forward so that the results come out more quickly and have even more actionable insights for the community.

The goal of this survey is always to give our wider community a chance to express their opinions about the language we all love and help shape its future. We’re grateful to those of you who took the time to share your voice on the state of Rust last year.

Before diving into a few highlights, we would like to thank everyone who was involved in creating the State of Rust survey with special acknowledgment to the translators whose work allowed us to offer the survey in English, Simplified Chinese, Traditional Chinese, French, German, Japanese, Korean, Portuguese, Russian, Spanish, and Ukrainian.

Participation

In 2022, we had 9,433 total survey completions and an increased survey completion rate of 82% vs. 76% in 2021. While the goal is always total survey completion for all participants, the survey requires time, energy, and focus – we consider this figure quite high and were pleased by the increase.

We also saw a significant increase in the number of people viewing but not participating in the survey (from 16,457 views in 2021 to 25,581 – a view increase of over 55%). While this is likely due to a number of different factors, we feel this information speaks to the rising interest in Rust and the growing general audience following its evolution.

In 2022, the survey had 11,482 responses, which is a slight decrease of 6.4% from 2021, however, the number of respondents that answered all survey questions has increased year over year. We were interested to see this slight decrease in responses, as this year’s survey was much shorter than in previous years – clearly, survey length is not the only factor driving participation.

Community

We were pleased to offer the survey in 11 languages – more than ever before, with the addition of a Ukrainian translation in 2022. 77% of respondents took this year’s survey in English, 5% in Chinese (simplified), 4% in German and French, 2% in Japanese, Spanish, and Russian, and 1% in Chinese (traditional), Korean, Portuguese, and Ukrainian. This is our lowest percentage of respondents taking the survey in English to date, which is an exciting indication of the growing global nature of our community!

The vast majority of our respondents reported being most comfortable communicating on technical topics in English (93%), followed by Chinese (7%).

Rust user respondents were asked which country they live in. The top 13 countries represented were as follows: United States (25%), Germany (12%), China (7%), United Kingdom (6%), France (5%), Canada (4%), Russia (4%), Japan (3%), Netherlands (3%), Sweden (2%), Australia (2%), Poland (2%), India (2%). Nearly 72.5% of respondents elected to answer this question.

While we see global access to Rust education as a critical goal for our community, we are proud to say that Rust was used all over the world in 2022!

Rust Usage

More people are using Rust than ever before! Over 90% of survey respondents identified as Rust users, and of those using Rust, 47% do so on a daily basis – an increase of 4% from the previous year.

30% of Rust user respondents can write simple programs in Rust, 27% can write production-ready code, and 42% consider themselves productive using Rust.

Of the former Rust users who completed the survey, 30% cited difficulty as the primary reason for giving up while nearly 47% cited factors outside of their control.

Graph: Why did you stop using Rust?

Similarly, 26% of those who did not identify as Rust users cited the perception of difficulty as the primary reason for not having used it (with 62% reporting that they simply haven’t had the chance to prioritize learning Rust yet).

Graph: Why don't you use Rust?

Rust Usage at Work

The growing maturation of Rust can be seen through the increased number of different organizations utilizing the language in 2022. In fact, 29.7% of respondents stated that they use Rust for the majority of their coding work at their workplace, which is a 51.8% increase compared to the previous year.

Graph: Are you using Rust at work?

There are numerous reasons why we are seeing increased use of Rust in professional environments. Top reasons cited for the use of Rust include the perceived ability to write "bug-free software" (86%), Rust's performance characteristics (84%), and Rust's security and safety guarantees (69%). We were also pleased to find that 76% of respondents continue to use Rust simply because they found it fun and enjoyable. (Respondents could select more than one option here, so the numbers don't add up to 100%.)

Graph: Why do you use Rust at work?

Of those respondents that used Rust at work, 72% reported that it helped their team achieve its goals (a 4% increase from the previous year) and 75% have plans to continue using it on their teams in the future.

But like any language being applied in the workplace, Rust’s learning curve is an important consideration; 39% of respondents using Rust in a professional capacity reported the process as “challenging” and 9% of respondents said that adopting Rust at work has “slowed down their team”. However, 60% of productive users felt Rust was worth the cost of adoption overall.

Graph: Reasons for using Rust at work

It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!

Supporting the Future of Rust

A key goal of the State of Rust survey is to shed light on challenges, concerns, and priorities Rustaceans are currently sitting with.

Of those respondents who shared their main worries for the future of Rust, 26% have concerns that the developers and maintainers behind Rust are not properly supported – a decrease of more than 30% from the previous year’s findings. One area of focus in the future may be to see how the Project in conjunction with the Rust Foundation can continue to push that number towards 0%.

While 38% have concerns about Rust “becoming too complex”, only a small number of respondents were concerned about documentation, corporate oversight, or speed of evolution. 34% of respondents are not worried about the future of Rust at all.

This year’s survey reflects a 21% decrease in fears about Rust’s usage in the industry since the last survey. Faith in Rust’s staying power and general utility is clearly growing as more people find Rust and become lasting members of the community. As always, we are grateful for your honest feedback and dedication to improving this language for everyone.

Graph: Worries about the future of Rust

Another Round of Thanks

To quote an anonymous survey respondent, “Thanks for all your hard work making Rust awesome!” – Rust wouldn’t exist or continue to evolve for the better without the many Project members and the wider Rust community. Thank you to those who took the time to share their thoughts on the State of Rust in 2022!

Adrian Gaudebert: Dawnmaker's endless conundrum of infinite replayability

Over the last few months we've had the opportunity to show Dawnmaker to a lot of people, and notably a few publishers. We had the good fortune of receiving very valuable feedback on the game, which allowed us to identify two important problems with it, or at least, with its demo. The first problem is that our artistic direction isn't compelling enough, but that will not be today's topic — though we are, of course, working on it.

The problem we're going to discuss today is that of replayability. Some of our players, and most of the publishers we talked to, have expressed that they do not feel inclined to restart a game after they lose. Once you've understood the patterns of the game, starting a new game feels like doing the same thing again, and it is boring. That feeling was especially pronounced for players losing in the 2nd or 3rd region of the demo: you have to restart at level one, a level that you have already mastered and don't feel like going through again.

This is a pretty big problem for a game that wants to have a high replay value, which is what we're aiming for. So today, I'm going to tell you about the key thing we're currently adding to the game as a first step to solve this issue.

The Challenge Mode

The challenge mode is the "roguelike" way of playing Dawnmaker, and is intended to be the main play mode for players who are familiar with the game. It is composed of two main parts. First, each time you start a new run, you get a (semi)random roster of buildings. "Semi" because we want you to still be able to win and have some fun, so we make sure you won't get a roster of buildings that simply cannot win. But, apart from a few checks, that roster will be random and you will have to deal with what you've been given. Sometimes you'll have obvious synergies, other times it will be harder to figure out a way to victory. We're fine with that, as that means each run will be different.

The second element is rewards! Each time you finish a city and secure a region, you'll get two main rewards. The first one is a booster of buildings. You will receive three new buildings that get added to your roster and thus become available in your market starting with your next city. The game will feature a world map, allowing you to choose which region to go to next, knowing what buildings you will receive — somewhat like the map in Slay the Spire.

The second reward you'll receive is a card to add to your starting deck. You'll get to choose a card that was in your deck at the end of the last city, and put it in your starting deck, while removing another card from the deck. We expect this will make the early game much easier. Generally the first level becomes less interesting as you progress: by giving you stronger starting cards, you should be able to upgrade faster. We also hope this will lead to interesting and unexpected situations.

We are doing three things with this challenge mode: adding more randomness, which increases the diversity of situations; adding more choices, which is what players look for in strategy games; and increasing the power of your starting deck, which reduces the time it takes to get to the interesting parts in the later regions.

Our first internal playtests indicate that it's working as intended: the game is more enjoyable for us (players who have already played the game a lot), more challenging, and has a lot more depth. We're aiming to have that feature ready in a few short weeks, and when it's ready we'll roll it out in the demo and let you all have a go at it!

Lastly, on the topic of replayability, we have another feature in the works to increase it: random terrain! How about having some lakes, mountains or ruins on your map when you start your new city? We're preparing that, but it will be a topic for another newsletter.

(I hope y'all enjoy my attempts at doing cliffhangers in a newsletter!)


This piece was initially sent out to the readers of our newsletter! Wanna join in on the fun? Head over to Dawnmaker's presentation page and fill out the form. You'll receive monthly stories about how we're making this game, and the latest news of its development.

Join our community!

The Rust Programming Language Blog: Announcing Rust 1.71.1

The Rust team has published a new point release of Rust, 1.71.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.71.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.71.1 stable

Rust 1.71.1 fixes Cargo not respecting the umask when extracting dependencies, which could allow a local attacker to edit the cache of extracted source code belonging to another local user, potentially executing code as another user. This security vulnerability is tracked as CVE-2023-38497, and you can read more about it in the advisory we published earlier today. We recommend that all users update their toolchain as soon as possible.

Rust 1.71.1 also addresses several regressions introduced in Rust 1.71.0, including bash completion being broken for users of Rustup, and the suspicious_double_ref_op lint being emitted when calling borrow() even though it shouldn't be.

You can find more detailed information on the specific regressions, and other minor fixes, in the release notes.

Contributors to 1.71.1

Many people came together to create Rust 1.71.1. We couldn't have done it without all of you. Thanks!

The Rust Programming Language Blog: Security advisory for Cargo (CVE-2023-38497)

This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG was notified that Cargo did not respect the umask when extracting crate archives on UNIX-like systems. If the user downloaded a crate containing files writeable by any local user, another local user could exploit this to change the source code compiled and executed by the current user.

This vulnerability has been assigned CVE-2023-38497.

Overview

In UNIX-like systems, each file has three sets of permissions: for the user owning the file, for the group owning the file, and for all other local users. The "umask" is configured on most systems to limit those permissions during file creation, removing dangerous ones. For example, the default umask on macOS and most Linux distributions only allows the user owning a file to write to it, preventing the group owning it or other local users from doing the same.

When a dependency is downloaded by Cargo, its source code has to be extracted on disk so that the Rust compiler can read it as part of the build. To improve performance, this extraction only happens the first time a dependency is used, caching the pre-extracted files for future invocations.

Unfortunately, it was discovered that Cargo did not respect the umask during extraction, and propagated the permissions stored in the crate archive as-is. If an archive contained files writeable by any user on the system (and the system configuration didn't prevent writes through other security measures), another local user on the system could replace or tweak the source code of a dependency, potentially achieving code execution the next time the project is compiled.
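
As a rough illustration of what "respecting the umask" means (this is not Cargo's actual extraction code, and the modes below are only example values), the permission bits stored in an archive entry are masked with the process umask before the file is created on disk:

const archiveMode = 0o666;                  // world-writable mode stored in the crate archive
const umask = 0o022;                        // typical default umask on Linux and macOS
const effectiveMode = archiveMode & ~umask; // 0o644: group and other users lose write access
console.log(effectiveMode.toString(8));     // prints "644"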

Affected Versions

All Rust versions before 1.71.1 on UNIX-like systems (like macOS and Linux) are affected. Note that additional system-dependent security measures configured on the local system might prevent the vulnerability from being exploited.

Users on Windows and other non-UNIX-like systems are not affected.

Mitigations

We recommend that all users update to Rust 1.71.1, which will be released later today, as it fixes the vulnerability by respecting the umask when extracting crate archives. If you build your own toolchain, patches for 1.71.0 source tarballs are available here.

To prevent existing cached extractions from being exploitable, the Cargo binary included in Rust 1.71.1 or later will purge the caches it tries to access if they were generated by older Cargo versions.

If you cannot update to Rust 1.71.1, we recommend configuring your system to prevent other local users from accessing the Cargo directory, usually located in ~/.cargo:

chmod go= ~/.cargo

Acknowledgments

We want to thank Addison Crump for responsibly disclosing this to us according to the Rust security policy.

We also want to thank the members of the Rust project who helped us disclose the vulnerability: Weihang Lo for developing the fix; Eric Huss for reviewing the fix; Pietro Albini for writing this advisory; Pietro Albini, Manish Goregaokar and Josh Stone for coordinating this disclosure; Josh Triplett, Arlo Siemen, Scott Schafer, and Jacob Finkelman for advising during the disclosure.

Mozilla Thunderbird: Thunderbird for Android / K-9 Mail: June 2023 Progress Report


Apparently our July has been so busy that we didn’t find the time to write up the progress report for June. But a late report is better than no report 😄

If you need a refresher on what happened the previous month, read the May 2023 Progress Report.

Improved account setup

The roadmap item we’re currently working on is Improve Account Setup. Most of our time went into working on this. However, for June there’s no exciting news to share. We mostly worked on the internal plumbing; that is important to get right, but not necessarily great material for a blog post. Hopefully there will be new screenshots to share in July’s progress report.

App maintenance

Having an app with a large user base means we can’t spend all of our time working on new features. Fixing bugs is a large and important part of the job. Here’s a writeup of just three of the bugs we fixed in June.

Folder appears to be empty

A user reported that some of their folders appear to be empty in K-9 Mail. Using the provided debug log (❤) we were able to track this down to a message containing an invalid email address, specifically one whose local part (the text before the @ symbol) exceeds the limit of 64 characters.

The error was thrown by a newly added email address parser that is stricter than what we used before. At first it was a bit surprising that this would lead to messages in a folder not being shown. We deliberately kept this new implementation out of the code responsible for parsing emails after download and the code for displaying messages.

However, it turned out the new email address parser was used when getting the contact name belonging to an email address. This lookup is performed when loading the message list of a folder from the local database. When an error occurs during this step, an empty message list is shown to the user.

To fix this bug and limit the impact of similar problems in the future, we made the following changes:

  • Ignore most errors when parsing email addresses from messages that the user has received. The world is full of email addresses that violate the specification but work mostly fine in practice. However, we still want to be strict when it comes to the email addresses we accept, e.g. when setting up a new account.
  • Ignore errors with the email address when trying to fetch the system contact belonging to that email address. This may lead to the app not being able to fetch a contact name for a spec-violating email address, but it will no longer prevent the entire message list from loading (see the sketch after this list).
  • We added a message with an email address whose local part exceeds the length limit to our test account. That way we are likely to catch bugs related to such email addresses before they make it into a beta release.
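
The pattern behind the first two changes is to parse leniently when merely displaying data the user has already received, and strictly only when accepting new input. Here is a minimal sketch of that idea, in JavaScript purely for illustration (K-9 Mail itself is written in Kotlin and Java, and parseStrictly below stands in for whatever parser the app really uses):

// Hypothetical strict parser; rejects addresses that violate the spec.
function parseStrictly(address) {
  const localPart = address.split("@")[0] ?? "";
  if (localPart.length > 64) {
    throw new Error("local part exceeds 64 characters");
  }
  return { address };
}

// For display purposes, never let one bad address break the whole list.
function parseForDisplay(address) {
  try {
    return parseStrictly(address);
  } catch {
    return { address, malformed: true }; // keep the raw text, flag it, move on
  }
}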

We’re very grateful to our beta testers for finding and reporting bugs like this one. That way we can fix them before they make it into a stable release.

Adding an email address to an existing contact

With the introduction of the message details screen we added a button to add a message sender or recipient to the contacts database using an installed contact app. If the email address can already be found in the contacts database, this button is hidden and tapping the participant name, email address, or contact picture opens the contacts app.

Message details screen

Previously the app didn’t make that distinction and tapping an email address or participant name would open the contacts app using the “show or create” action. Apparently this reliably allowed adding an email address to an existing contact. However, the “insert” action used by the details screen only allows adding the email address to an existing contact with some contacts apps, but not others 😞

We changed the action to start the contacts app from “insert” to “insert or edit”, and this seems to reliably offer the option to add the email address to an existing contact.

Reply behavior depends on message size

A user reported that the behavior when replying to a message retrieved via a mailing list was different depending on whether the message had been downloaded completely or only partially.

K-9 Mail supports downloading only parts of a message when the email exceeds a configured size. In that case, only selected parts of the message header are downloaded as well. Unfortunately, we forgot to include the List-Post header field that is used to determine the email address to which to reply.

The fix was simply adding List-Post to the list of header fields to fetch from the IMAP server.

Community contributions

In June we merged the following pull requests by external contributors:

Releases

In June 2023 we published the following beta versions:

If you want to help shape future versions of the app, become a beta tester and provide feedback on new features while they are still in development.


Firefox Developer Experience: Firefox WebDriver Newsletter — 116

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 116 release cycle.

Contributions

With Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette and geckodriver.


WebDriver BiDi

By enhancing the implementation of the WebDriver BiDi protocol we can offer more features to our users.

New command: session.end

With support for the session.end command, users can now terminate the automation session. This was previously only possible for sessions using both WebDriver Classic and WebDriver BiDi; it is now also possible for WebDriver BiDi-only sessions.
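
As a rough sketch of what this looks like on the wire (the WebSocket URL below is only a placeholder for the one your own session uses), ending a session amounts to sending a single JSON command:

// Placeholder URL: use the WebSocket URL reported when your session was created.
const ws = new WebSocket("ws://localhost:9222/session");

ws.addEventListener("open", () => {
  // WebDriver BiDi commands are JSON objects with an id, a method and params.
  ws.send(JSON.stringify({ id: 1, method: "session.end", params: {} }));
});

ws.addEventListener("message", (event) => {
  console.log("response:", event.data); // the result for command id 1
});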

Capability matching for session.new

Capability matching is a feature already supported in WebDriver Classic. It allows you to define expectations about the target browser, such as browser name, platform name, … But it is also used to configure the session to some extent, for instance to specify whether insecure certificates should be accepted.

When using the session.new command, users should provide a capabilities parameter, which can contain the alwaysMatch and the firstMatch properties. To learn more about those properties and capability matching in general, the WebDriver Capabilities page on MDN is a good reference.

Note that WebDriver BiDi sessions do not support all the capabilities from WebDriver classic, because some of them are irrelevant for WebDriver BiDi. The following capabilities are not supported for a WebDriver BiDi only session: pageLoadStrategy, timeouts, strictFileInteractability, unhandledPromptBehavior, webSocketUrl, moz:webdriverClick, moz:debuggerAddress, moz:firefoxOptions.

On top of this, the session.new result will also contain a capabilities property with the matched capabilities.
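
As a rough sketch of the shape of those parameters (the values below are only examples), a session.new command combining alwaysMatch and firstMatch could look like this:

// Example payload only; see the WebDriver Capabilities page on MDN for details.
const newSessionCommand = {
  id: 1,
  method: "session.new",
  params: {
    capabilities: {
      alwaysMatch: {
        acceptInsecureCerts: true,  // configures the session
      },
      firstMatch: [
        { browserName: "firefox" }, // expectation about the target browser
      ],
    },
  },
};

// Sent as JSON over the BiDi WebSocket connection, e.g.:
// ws.send(JSON.stringify(newSessionCommand));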

Bug fixes

The release 116 also comes with a few bug fixes, including:

Marionette (WebDriver classic)

Removing the moz:useNonSpecCompliantPointerOrigin capability

A deprecation warning for this capability has been shown since geckodriver 0.33.0. Support for it has now been removed in Firefox 116. Users who still need this feature can use the Firefox 115 ESR release for as long as it is supported.

Bug fixes

A couple of bugs have been fixed for Marionette and WebDriver Classic:

Firefox Developer Experience: Firefox DevTools Newsletter — 116

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 116 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla:

DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.


Networking

The Network Monitor now displays if a request was resolved with DoH (DNS over HTTPS) in the Headers panel (#1810195)

Netmonitor headers panel, displayed on a request resolved with DNS over HTTPS. There's a "DNS Resolution" label with the "DNS over HTTPS" value next to it

DoH improves privacy by hiding domain name lookups from someone lurking on public Wi-Fi, your ISP, or anyone else on your local network. DoH, when enabled, ensures that your ISP cannot collect and sell personal information related to your browsing

https://support.mozilla.org/en-US/kb/firefox-dns-over-https

We also don’t show the proxy-authentication HTTP header anymore, as it’s not actually sent to the server and could be confusing for users (#1816115).

In order to align with Chrome and Safari, the title of page entries in HAR export is not the value of document.title anymore, but the URL of the entry (#1828896).

Inspector fixes and improvements

There’s a new badge to win in the Inspector! Well, you’re not really winning anything aside from the joy of using CSS container queries. Now all elements with a container-type of inline-size or size will be decorated with a “container” badge (#1789193).

The value of the container-type property is displayed when you hover the badge.

This is a good opportunity to remind you that you can also get information about a query container by hovering the @container declaration in the rule view

The popup will reveal which element is used as the container for this query, alongside its dimensions. Clicking on the crosshair icon will select the container element in the markup view.

Finally, we fixed an issue so that CSS variables set on the shadow DOM host element are now displayed in the rule view (#1836755).


Happy debugging, see you next month for an exciting update!

Firefox Developer Experience: Firefox DevTools Custom Object Formatters

Firefox 116 introduces an exciting new feature called custom formatters, designed to enhance the debugging experience for web developers. With custom formatters, websites can now define how to display specific JavaScript objects and functions within different parts of the DevTools. Custom Formatters have been part of Chrome DevTools for a while and we’re happy to add support for them in Firefox. The implementation of this feature was funded by Clojurists Together. So a big thank you to them!

This is especially useful for JavaScript frameworks that define individual objects or create wrappers for native variables, as well as web applications that deal with complex object structures. That also covers frameworks that compile to JavaScript like ClojureScript.

Custom formatting enhances the debugging process by providing developers with a more intuitive and informative representation of their objects. When working with complex object structures, the default display of objects in the DevTools can often be overwhelming or cryptic. With custom formatters, developers can tailor the presentation of objects to match their specific needs. They can highlight important properties by applying formatting styles, filter out irrelevant information, or even display nested structures in a more readable manner. This enables developers to quickly identify and understand the state of their objects, reducing the time and effort required for debugging and ultimately leading to more efficient development workflows.

Enabling custom formatting

To enable custom formatting, switch to the Settings panel and check the option called “Enable custom formatters” under “Advanced settings”. The setting takes effect the next time you open the DevTools.

Settings panel showing the option for custom formatters being enabled

Controlling variable display

Once custom formatters are enabled, websites can customize how objects are displayed in the Web Console and the Debugger.

This is achieved by defining custom formatters using the global array called devtoolsFormatters. Each entry in this array represents a specific formatter for a particular type of variable. If there’s no formatter defined for a variable, it is displayed using its default formatting.

Each formatter must at least contain a header function. This function must either return a JsonML array or null. If null is returned, the default format is used to display the object.

In addition to the header function, a formatter can contain a body. Whether a formatter has a body is indicated by the hasBody function. If it returns true, the object can be expanded to show more details. The actual body is then returned by the body function. Like the header function it can either return a JsonML object or null.

All three functions take the object as their first parameter and an optional configuration object as their second parameter, which allows passing additional information.
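
Put together, the overall shape of a formatter entry looks roughly like this (a bare skeleton; the concrete examples below show it filled in):

window.devtoolsFormatters = [
  {
    // Required: return a JsonML array for a custom header, or null to fall
    // back to the default formatting for this object.
    header: (obj, config) => null,

    // Optional: return true if the object can be expanded to show a body.
    hasBody: (obj, config) => false,

    // Optional: return the JsonML array for the expanded body, or null.
    body: (obj, config) => null,
  },
];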

For a more detailed understanding of how custom formatters are created and the various options available, refer to the Firefox DevTools User Docs.

Examples

Let’s take a look at a simple example to illustrate the concept of custom formatters:

Output of simple custom formatter within the Web Console.

And the code that achieves that output can be written as:

window.devtoolsFormatters = [
  {
    header: variable => {
      if (variable.hasOwnProperty('foo')) {
        return [
          'span', {
            'style': `
              font-family: "Comic Sans MS", fantasy;
              font-size: 3rem;
              color: green;
            `
          },
          'foo'
        ];
      }
      return null;
    }
  }
];

// …

console.log({foo: "bar"});

In the example above, a custom formatter is defined for a variable. The header property of the formatter is a function that determines how the variable is displayed. In this case, if the variable has a property named foo, it will be rendered as a <span> element with a specific style.

Complete example

Custom formatters can also be very complex and include nested structures. Here is an example showing how a Date object could be displayed. The code for that could look like this:

window.devtoolsFormatters = [
  {
    header: (obj) => {
      if (obj instanceof Date) {
        return [
          "div",
          { style: "font-weight: bold;" },
          `Date: ${obj.toLocaleDateString()} ${obj.toLocaleTimeString()}`,
        ];
      }
      return null;
    },
    hasBody: (obj) => obj instanceof Date,
    body: (obj) => {
      if (obj instanceof Date) {
        return [
          "div",
          {},
          ["div", {}, `Year: ${obj.getFullYear()}`],
          ["div", {}, `Month: ${obj.getMonth() + 1}`],
          ["div", {}, `Day: ${obj.getDate()}`],
          ["div", {}, `Hour: ${obj.getHours()}`],
          ["div", {}, `Minutes: ${obj.getMinutes()}`],
          ["div", {}, `Seconds: ${obj.getSeconds()}`],
        ];
      }
      return null;
    },
  },
];

// …

console.log(new Date());

Feel free to try it out by copying the code.

With this custom formatter, a Date object logged to the console will look like this:

Date object logged via console.log(new Date()) using a custom formatter

Restrictions

There are some restrictions on which HTML elements can be used and which CSS styles can be applied to them.

These are the allowed HTML elements:

<span>, <div>, <ol>, <ul>, <li>, <table>, <tr>, <td>

The CSS properties that can be applied to the elements are:

  • align*
  • background* (background-image only allows data: URLs)
  • border*
  • box*
  • clear
  • color
  • cursor
  • display
  • float
  • font*
  • justify*
  • line*
  • margin*
  • padding*
  • position (only the values static and relative are accepted)
  • text*
  • transition*
  • outline*
  • vertical-align
  • white-space
  • word*
  • writing*
  • width, min-width, max-width
  • height, min-height, max-height

Debugging custom formatters

If a custom formatter contains an error, an error is logged to the console explaining what’s wrong.

Error message logged to the Web Console for an invalid custom formatter

If possible, it also contains a source link pointing to the exact location of the error within the code. Some errors, though, like a missing body function, can’t provide a source link. In those cases, the error message should still provide enough information to let you know in which custom formatter the error occurred and what the error was.

In addition to that, you can also add breakpoints to your custom formatters within the Debugger. This allows you to step through their code like any other JavaScript code.

Note that the same APIs are supported within Chromium-based browsers, which means that these formatters function cross-browser, despite the lack of a specification.

Existing formatters

There is a bunch of existing formatters covering different needs:

Conclusion

With the introduction of custom formatters in Firefox DevTools, web developers now have more control over how objects are displayed during debugging sessions. So, when dealing with complex objects that may benefit from a more streamlined or prioritized display of information, custom formatters may come in handy. Feel free to give this feature a try and let us know what you think about it or whether you experience any issues using it!

The feature was implemented by me in cooperation with Nicolas Chevobbe. As Firefox and the DevTools are open source, they actively encourage contributions from volunteers in the development community.

Mozilla Thunderbird: An Update On Thunderbird Sync

Hello Thunderbird family! First and foremost, we want to express our deepest appreciation for your patience. The road to Thunderbird 115 “Supernova” has been a long one, and we’re confident you’ll love the final result. If you’re already using Thunderbird 115, you may have noticed a feature that is conspicuously absent: Thunderbird Sync.

When we started creating our roadmap for Supernova, our feature targets were ambitious. As it turns out, a little too ambitious. We did our very best to finish up Thunderbird Sync for the initial release of version 115, but some technical blockers prevented us from moving forward fast enough to deliver it to you. Besides, this is a feature that absolutely must be secure and reliable. And it needs months of user testing to ensure that stability.

We do have the basic user interface for Thunderbird Sync designed already. However, what slowed us down is the need to hire a Site Reliability Engineer (SRE) to help us spin up our own back-end infrastructure that is independent of Firefox Sync. While our Sync functionality does use Firefox Sync code, they will end up being completely different products with different use-cases.

When Will Thunderbird Sync Be Finished?

We don’t have a solid release date, but our objective is to have Thunderbird Sync finished in time for the next ESR release, or shortly after we switch to a monthly release schedule (we’re aiming to complete that transition to monthly by early 2024).

Once we have a server and a proper back-end infrastructure, we’ll enable it on beta for you all to test.

What Data Will Thunderbird Sync Support?

We plan to support syncing of your email account definitions, credentials, signatures, saved searches, tags, tasks, filters, and most major preferences across multiple installations of Thunderbird on PC. (Yes, this is cross-compatible with Windows, macOS, and Linux.) You’ll also be able to sync your Thunderbird accounts with the forthcoming Thunderbird for Android.

Thank you again for being patient with us as we continue to build the best possible software for managing your email and personal communications. In the future, we’re hopeful that a switch to monthly releases will allow us to put new features in your hands faster than what’s previously been possible.


Mozilla Localization (L10N): L10n Report: July 2023 Edition

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Deep dive: Firefox release schedule

If you’re new to Mozilla products, the Firefox release schedule can be overwhelming. While whattrainisitnow.com is a useful resource to understand what’s shipping and when, let’s focus on the information that is relevant for localization:

  • Nightly should be your main focus as a localizer. New strings are exposed once or twice a week, and the build is updated frequently (twice a day), so you can localize and test quickly before your translations reach a larger audience with Beta and Release.
  • For Beta, we will automatically include updated translations up to the last week of the cycle (a cycle is usually 4 weeks long, but can occasionally be longer to accommodate public holidays and other exceptions). The last week of the Beta cycle is “release candidate week” (RC week), and only urgent code fixes are accepted to avoid introducing new issues. The deadline you see in Pontoon is placed on the Sunday before RC week: that represents the last day to update translations and make sure they will be included in the actual RC build, which will then become Release a week later.
  • We normally don’t update translations in Firefox release, but it can be done manually in case of significant issues.

There is also another version of Firefox, called Extended Support Release (ESR): this version is targeted at users that don’t like frequent changes and updates, and it’s supported for about 9 months. The old and new ESR versions will overlap for a few weeks to guarantee a smooth transition, especially for enterprises with many installations.

Translations are not updated automatically for ESR after we ship the first build, but we normally update them 2-3 times during the ESR lifetime, to improve completion levels and include localization improvements. For example, 115.2 (the third build for ESR 115) will include a first localization update compared to the initial 115 release.

We try to hide all this complexity when it comes to localizing Firefox: you will only find one project in Pontoon, and that includes strings for all supported versions. Recently, we dropped support for the previous ESR version (Firefox ESR 102, which will stop receiving updates in September), which means all the strings used only in that version have been removed from Pontoon (about 1400 strings, including hundreds of legacy DTD strings).

New content

The amount of new content has been relatively small over the last months:

  • A new version of about:firefoxview is in the works. We will soon reach out through Pontoon notifications with more details and testing instructions.
  • There is a new feature to limit the execution of extensions on sites identified by Mozilla (called restricted sites). The goal is to protect users from known malicious actors, while still giving them the choice to manually allow extensions they trust.

What’s new or coming up in mobile

Things have been very quiet out in mobile land, and there is not much to report in this edition.

As the v116 l10n cycle comes to an end, string freeze for v117 is upon us and strings will be exposed within the next few days. There should be no additional strings landing for Firefox for iOS at this point.

On Android, we will be giving users the option to add a custom search engine URL.

Stay tuned for more updates in the next edition!

What’s new or coming up in web projects

Mozilla.org

The site will go through some changes throughout the rest of the year and into the next. The changes considered low-hanging fruit will be made first; this includes the Home page. The new Home page will ensure all locales have the same look and feel.

Also, by the end of the month, some of the Relay Website content will be migrated to the mozilla.org site. For the communities that have been localizing the Relay Website project, the migration includes new content as well as existing localized content from Firefox Relay Website. Initially this content will only be available in the development environment, but strings will be visible in Pontoon once the migration is complete. If your locale is not enabled for the Relay project in Pontoon, you will see a lot of new content as a result. We will make an announcement after the migration has completed, please take some time to review the pages and ensure any minor glitches are identified and fixed.

Firefox Relay Website

The Relay Premium feature will be made available to more EU markets in a few weeks. These new markets include Bulgaria, Croatia, the Czech Republic, Denmark, Estonia, Greece, Hungary, Lithuania, Latvia, Poland, Portugal, Romania, Slovakia, Slovenia, and more.

This launch requires the localization of the Firefox Relay Website and Firefox Accounts in order to provide a good user experience. If Relay is not enabled by the community in Pontoon, the product will be offered in English. If Firefox Accounts is opted in by the community in Pontoon but completion is under 70%, the payment portion of the user flow will fall back to English, though in the corresponding currency for the locale. It’s never too late to enable the products if your locale has not done so yet. If both are enabled but the projects are falling behind, please give them higher priority and make time to catch up. Thank you!

Newly published localizer facing documentation

We recently completed a comprehensive update of our Pontoon documentation for localizers. This new documentation should accurately reflect the Pontoon environment as you see it today, with handy details on things like how to make the most of tools in your translation workspace, how to use search filters to find the strings you need efficiently, and everything you need to know on how to translate in Pontoon. Check it out!

If you spot any mistakes, have ideas to make the documentation better, or would otherwise like to contribute to our localizer documentation, visit our GitHub repository and check out the README for information on how to contribute.

Friends of the Lion

Image by Elio Qoshi

  • Victor is a passionate localizer who has been spearheading the Mozilla mission in Tajikistan for quite a few years now. He is involved on many fronts, including digital literacy training in Tajikistan to introduce Firefox in the Tajik language and promote it as the number one browser in the country. He also collaborates with marketplaces to connect them with local farmers without intermediaries. He uses local opportunities to promote safer internet browsing and has showcased the potential of Firefox in Tajik in that context. Victor is also heavily involved in a US embassy-funded project aimed at enhancing internet access and safety for leaders and their communities in Tajikistan, with a focus on independent media, countering violent extremism, women’s economic empowerment, environmental awareness, and more. The project also emphasizes the importance of diversity, equity, inclusion, and accessibility in its implementation. Thank you, Victor, for helping the internet stay safe, open and accessible to all!

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.