David Humphrey: HTTP Testing with Hurl in node.js

The JavaScript ecosystem has been benefiting lately from pieces of its dev tooling being (re)written in Rust.  Projects like swc, Parcel 2 and parcel-css, deno, dprint and others have brought us tremendous performance improvements with tasks like bundling, formatting, etc.  Recently, my favourite Rust-based HTTP testing tool gained the ability to be run in node/npm projects, and I wanted to show you how it works.

Hurl is a command-line tool for running HTTP requests defined in simple text files (*.hurl).  I learned about it by chance on Twitter over a year ago, and have been using and teaching it to my programming students ever since.  The name comes from the fact that it builds on top of curl's HTTP code. The real benefit to Hurl is that it lets you write simple, declarative tests that read just like the HTTP requests and responses they model.  Oh, and it runs them ridiculously fast!

Here's an example test file that makes sure http://example.net/404.html returns a 404:

GET http://example.net/404.html

HTTP/1.0 404

You can get much fancier by setting headers, cookies, auth, etc. on the request and asserting things about the response, including using JSONPath, XPath, regexes, and lots of other conveniences.  You can also capture data from the headers or body, and use these variables in subsequent chained requests. The docs are fantastic (including this tutorial), and go through all the various ways you can write your tests.

Here's a slightly more complex test, which uses a few of the techniques I've just mentioned:

# 1. Get the GitHub user info for @Orange-OpenSource
GET https://api.github.com/users/Orange-OpenSource

# 2. We expect to get back an HTTP/2 200 response. Also, assert
# various things about the Headers and JSON body. Finally
# capture the value of the `blog` property from the body into
# a variable, so we can use that in the next step.
HTTP/2 200
[Asserts]
header "access-control-allow-origin" == "*"
jsonpath "$.login" == "Orange-OpenSource"
jsonpath "$.public_repos" >= 286
jsonpath "$.folowers" isInteger
jsonpath "$.node_id" matches /^[A-Za-z0-9=]+$/
[Captures]
blog_url: jsonpath "$.blog" 

# 3. Get the blog URL we received earlier, GET it, and make
# sure it's an HTML page
GET {{blog_url}}

HTTP/2 200
[Asserts]
header "Content-Type" startsWith "text/html"

I've been using Hurl to write tests for node.js HTTP APIs, especially integration tests, and it's been a joy to use.  I still write unit tests in JavaScript-based testing frameworks, but one immediate benefit of adding Hurl is its speed, which helps shake out race conditions.  Many of my students are still learning asynchronous programming, and often forget to await Promise-based calls.  With JavaScript-based test runners, I've found that the test runs take long enough that the promises usually resolve in time (despite not being await'ed), and you often don't realize you have a bug.  However, when I have the students use Hurl, the tests run so fast that any async code path that is missing await becomes obvious: the tests pass in JS but start failing in Hurl.
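
To make that failure mode concrete, here is a minimal sketch of the kind of bug I mean. The route, the fragment data, and the in-memory db object are all hypothetical stand-ins for whatever Promise-based storage call a student might forget to await:

const express = require("express");
const app = express();
app.use(express.json());

// Hypothetical Promise-based store; stands in for any async database call.
const db = {
  items: new Map(),
  async save(doc) {
    // Simulate a small amount of write latency.
    await new Promise((resolve) => setTimeout(resolve, 50));
    this.items.set(doc.id, doc);
  },
};

app.post("/fragments", async (req, res) => {
  const fragment = { id: Date.now().toString(), data: req.body };
  db.save(fragment); // BUG: missing `await`, so the write may still be in flight...
  res.status(201).json(fragment); // ...when the client receives this response
});

app.get("/fragments/:id", (req, res) => {
  const fragment = db.items.get(req.params.id);
  if (!fragment) return res.status(404).json({ error: "not found" });
  res.json(fragment);
});

app.listen(8080);

A JavaScript test runner usually burns enough time between the POST and the follow-up assertion for that stray promise to settle, so the bug hides.  A chained Hurl request that immediately GETs the new resource often arrives before the write completes, and the resulting 404 makes the missing await impossible to miss.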

I also found that Hurl is pretty easy to learn or teach.  My AWS Cloud students picked it up really quickly last term, and I think most node.js devs would have no trouble becoming productive with it in a short time.  Here's what one of my students wrote about getting started with Hurl in his blog:

"The learning curve is pretty simple (I managed to learn the basics in a couple of hours), less setup todo since it's just a plain file, the syntax is English friendly, besides the jsonPath that could take some times to adapt."

As I've been writing tests and teaching with Hurl over the past year, I've been pretty active filing issues. The devs are really friendly and open to suggestions, and the tool has gotten better and better with each new release.  Recently, I filed an issue to add support for running hurl via npm, and it was shipped a little over a week later!

Installing and Using Hurl with npm

Let me show you how to use Hurl in a node.js project.  Say you have a directory of *.hurl files, maybe inside ./test/integration.  First, install Hurl via npm:

$ npm install --save-dev @orangeopensource/hurl

This will download the appropriate Hurl binary for your OS/platform from the associated release, and create node_modules/.bin/hurl, which you can call in your scripts within package.json.  For example:

"scripts": {
  "test:integration": "hurl --test --glob \"test/integration/**/*.hurl\""
}

Here I'm using the --test (i.e., run in test mode) and --glob (specify a pattern for input files) options, but there are many more that you can use.  NOTE: I'm not showing how to start a server before running these tests, since that's outside the scope of what Hurl does.  In my case, I typically run my integration tests against Docker containers, but you could do it lots of ways (e.g., use npm-run-all to start your server before running the tests).
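
For completeness, here's one hedged way that wiring could look. Everything in this snippet is hypothetical: the script names, the docker compose setup, and the assumption that npm-run-all (which provides the run-s command) is installed as a dev dependency. JSON doesn't allow comments, so treat it purely as a sketch:

"scripts": {
  "services:up": "docker compose up -d",
  "services:down": "docker compose down",
  "test:integration": "hurl --test --glob \"test/integration/**/*.hurl\"",
  "test": "run-s services:up test:integration services:down"
}

One caveat with this sketch: run-s stops at the first failing script, so a failing Hurl test would leave the containers running. npm-run-all's --continue-on-error flag (or a separate cleanup step in CI) is one way to make sure services:down always runs.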

In terms of Hurl's output, running the two tests I discussed above looks like this:

npm test

> hurl-npm-example@1.0.0 test
> hurl --test --glob *.hurl

expr=test1.hurl
test2.hurl: RUNNING [1/2]
error: Assert Failure
  --> test2.hurl:14:0
   |
14 | jsonpath "$.folowers" isInteger
   |   actual:   none
   |   expected: integer
   |

test2.hurl: FAILURE
test1.hurl: RUNNING [2/2]
error: Assert Http Version
  --> test1.hurl:3:6
   |
 3 | HTTP/1.0 404
   |      ^^^ actual value is <1.1>
   |

test1.hurl: FAILURE
--------------------------------------------------------------------------------
Executed:  2
Succeeded: 0 (0.0%)
Failed:    2 (100.0%)
Duration:  174ms

As you can see, both tests are failing.  The error message format is more Rust-like than most JS devs will be used to, but it's quite friendly.  In test2.hurl, I've got a typo in $.folowers, and in test1.hurl, the response is returning HTTP/1.1 vs. HTTP/1.0.
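
The fixes are tiny; these are the two corrected lines implied by the error messages above (the rest of each file stays the same):

# test1.hurl: assert the HTTP version the server actually speaks
HTTP/1.1 404

# test2.hurl: fix the JSONPath typo
jsonpath "$.followers" isInteger

With those two quick fixes, the tests are now passing: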

$ npm test

> hurl-npm-example@1.0.0 test
> hurl --test --glob *.hurl

expr=test1.hurl
test2.hurl: RUNNING [1/2]
test2.hurl: SUCCESS
test1.hurl: RUNNING [2/2]
test1.hurl: SUCCESS
--------------------------------------------------------------------------------
Executed:  2
Succeeded: 2 (100.0%)
Failed:    0 (0.0%)
Duration:  169ms

Part of what's great about Hurl is that it isn't limited to a single language or runtime.  Despite the title of my post, Hurl isn't really a JS testing tool per se.  However, being able to "npm install" it and use it as part of your local or CI testing adds something new to your testing toolkit.  I still love, use, and teach tools like Jest, Playwright, and others, but I'm excited that JS devs now have an easy way to add Hurl to the mix.

Hopefully this will inspire you to try including Hurl in your node.js HTTP project testing.  I promise you that you'll write less test code and spend less time waiting to find out if everything works!

Will Kahn-Greene: Socorro/Tecken Overview: 2022, presentation

Socorro and Tecken make up the services part of our crash reporting system at Mozilla. We ran a small Data Sprint day to onboard a new ops person and a new engineer. I took my existing Socorro presentation and Tecken presentation [1], combined them, reduced them, and then fixed a bunch of issues. This is that presentation.

[1] I never blogged the Tecken 2020 presentation.


Mozilla Security Blog: Revocation Reason Codes for TLS Server Certificates

In our continued efforts to improve the security of the web PKI, we are taking a multi-pronged approach to tackling some long-existing problems with revocation of TLS server certificates. In addition to our ongoing CRLite work, we added new requirements to version 2.8 of Mozilla’s Root Store Policy that will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. We also added a new requirement that CA operators provide their full CRL URLs in the CCADB. This will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.

Previous Policy Updates

Significant improvements have already been made in the web PKI, including the following changes to Mozilla’s Root Store Policy and the CA/Browser Forum Baseline Requirements (BRs), which reduced risks associated with exposure of the private keys of TLS certificates by reducing the amount of time that the exposure can exist.

  • TLS server certificates issued on or after 1 September 2020 MUST NOT have a Validity Period greater than 398 days.
  • For TLS server certificates issued on or after October 1, 2021, each dNSName or IPAddress in the certificate MUST have been validated within the prior 398 days.

Under those provisions, the maximum validity period and maximum re-use of domain validation for TLS certificates roughly corresponds to the typical period of time for owning a domain name; i.e. one year. This reduces the risk of potential exposure of the private key of each TLS certificate that is revoked, replaced, or no longer needed by the original certificate subscriber.

New Requirements

In version 2.8 of Mozilla’s Root Store Policy we added requirements stating that:

  1. Specific RFC 5280 Revocation Reason Codes must be used under certain circumstances; and
  2. CA operators must provide their full CRL URLs in the Common CA Database (CCADB).

These new requirements will provide a complete accounting of all revoked TLS server certificates. This will enable Firefox to pre-load more complete certificate revocation data, eliminating the need for it to query CAs for revocation information when establishing TLS connections.

The new requirements about revocation reason codes account for the situations that can happen at any time during the certificate’s validity period, and address the following problems:

  • There were no policies specifying which revocation reason codes should be used and under which circumstances.
  • Some CAs were not using revocation reason codes at all for TLS server certificates.
  • Some CAs were using the same revocation reason code for every revocation.
  • There were no policies specifying the information that CAs should provide to their certificate subscribers about revocation reason codes.

Revocation Reason Codes

Section 6.1.1 of version 2.8 of Mozilla’s Root Store Policy states that when a TLS server certificate is revoked for one of the following reasons the corresponding entry in the CRL must include the revocation reason code:

  • keyCompromise (RFC 5280 Reason Code #1)
    • The certificate subscriber must choose the “keyCompromise” revocation reason code when they have reason to believe that the private key of their certificate has been compromised, e.g., an unauthorized person has had access to the private key of their certificate.
  • affiliationChanged (RFC 5280 Reason Code #3)
    • The certificate subscriber should choose the “affiliationChanged” revocation reason code when their organization’s name or other organizational information in the certificate has changed.
  • superseded (RFC 5280 Reason Code #4)
    • The certificate subscriber should choose the “superseded” revocation reason code when they request a new certificate to replace their existing certificate.
  • cessationOfOperation (RFC 5280 Reason Code #5)
    • The certificate subscriber should choose the “cessationOfOperation” revocation reason code when they no longer own all of the domain names in the certificate or when they will no longer be using the certificate because they are discontinuing their website.
  • privilegeWithdrawn (RFC 5280 Reason Code #9)
    • The CA will specify the “privilegeWithdrawn” revocation reason code when they obtain evidence that the certificate was misused or the certificate subscriber has violated one or more material obligations under the subscriber agreement or terms of use.

RFC 5280 Reason Codes that are not listed above shall not be specified in the CRL for TLS server certificates, for reasons explained in the wiki page.

Conclusion

These new requirements are important steps towards improving the security of the web PKI, and are part of our effort to resolve long-existing problems with revocation of TLS server certificates. The requirements about revocation reason codes will enable Firefox to depend on revocation reason codes being used consistently, so they can be relied on when verifying the validity of certificates during TLS connections. The requirement that CA operators provide their full CRL URLs in the CCADB will enable Firefox to pre-load more complete certificate revocation data, eliminating dependency on the infrastructure of CAs during the certificate verification part of establishing TLS connections. The combination of these two new sets of requirements will further enable Firefox to enforce revocation checking of TLS server certificates, which makes TLS connections even more secure.


Karl Dubost: How to make « cidre » of Normandy

While I'm living in Japan now, I'm originally from the Normandy region in France. That's me as a child, helping my grandfather in an orchard.

Child filling bags with apples.

This is a region traditionally known for cows and dairy products (milk, butter, cheeses like Camembert, Pont-L'evêque, Livarot, Neufchâtel, etc.), and also a region known for the production of cidre (French only). The origin is not entirely clear, but people from Normandy probably started making « cidre » in the 12th century. Some competing stories have developed about the origin. But people had already been growing apples for a long time and were probably fermenting them. And a craft emerged.

The Web is also rich with its individual crafts, developed along the way. Some techniques have been lost, some are thriving. A long series of trials and errors has been essential in perfecting the art of making websites.

Fast forward a couple of centuries, and here is an image of my great-grandfather, René. He is collecting the apples in a big bag from his field to prepare « cidre ».

Man filling a bag of apples in an orchard.

The names of the apples form a long, poetic list:

Blanc Mollet, Girard, Cimetière de Blangy, Argile nouvelle, Fréquin rouge, Gros matois rouge, Bon Henri, Médaille d'or, Petit amer, Binet rouge, Bouquet, Joly rouge, Longue, Bedan, Bouteille, Moulin à vent, Grise Dieppois, Gilet rouge, Doux Veret (petit), Rouge Bruyère, Reine des Pommes, Doux-Evêque précoce, Marin Onfroy, etc.

Each of them has its own qualities: sweetness, acidity, taste, … Once we have a good mix, we need to wash the apples carefully and then put them in the grinder.

My grandfather, Jean, working at the grinder; in the background you can see the wooden press.

Man near the grinder with apples in a big barrel.

Grinder engraving.

Once the apples have been ground, we need to leave them with the juice exposed to the air for around 8 to 12 hours in a deep container covered by a cloth. The oxidation work will start. The must will get a better color and will be sweeter. The yeast will develop more rapidly. Containers must be as clean as possible.

Then the work with the press begins.

Press engraving.

The must is laid down in layers 15 to 20 centimeters high, separated by layers of straw that drain the juice.

Press detail engraving.

Once the juice has been drawn, it is put in big barrels where the fermentation process starts. After a while, the juice is put into bottles. My grandfather used to go into the cellar and turn the bottles according to the phases of the moon. He had 3 types of « cidre » in his cellar: very sweet, rough, and something very rough that was basically impossible to drink. The colors were on the bottles: red, grey and blue, a simple spot of paint.

These techniques are getting lost with the new generations and industrialization. I wish I had spent more time with him to gain a better understanding of the craft.

Different types of apples.

Now, I probably have a better understanding of the Web than of the process of making « cidre ». That's probably why today is my first day working for Apple on the WebKit project, continuing my journey of making the Web awesome for everyone: Web Compatibility, standards and interoperability.

Engravings from Le cidre by Labounoux and Touchard.

Comments

If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.

Otsukare!

Hacks.Mozilla.Org: Improved Process Isolation in Firefox 100

Introduction

Firefox uses a multi-process model for additional security and stability while browsing: Web Content (such as HTML/CSS and JavaScript) is rendered in separate processes that are isolated from the rest of the operating system and managed by a privileged parent process. This way, the amount of control gained by an attacker that exploits a bug in a content process is limited.

Ever since we deployed this model, we have been working on improving the isolation of the content processes to further limit the attack surface. This is a challenging task since content processes need access to some operating system APIs to properly function: for example, they still need to be able to talk to the parent process. 

In this article, we would like to dive a bit further into the latest major milestone we have reached: Win32k Lockdown, which greatly reduces the capabilities of the content process when running on Windows. Together with two major earlier efforts (Fission and RLBox) that shipped before, this completes a sequence of large leaps forward that will significantly improve Firefox’s security.

Although Win32k Lockdown is a Windows-specific technique, it became possible because of a significant re-architecting of the Firefox security boundaries that Mozilla has been working on for around four years, which allowed similar security advances to be made on other operating systems.

The Goal: Win32k Lockdown

Firefox runs the processes that render web content with quite a few restrictions on what they are allowed to do when running on Windows. Unfortunately, by default they still have access to the entire Windows API, which opens up a large attack surface: the Windows API consists of many parts, for example, a core part dealing with threads, processes, and memory management, but also networking and socket libraries, printing and multimedia APIs, and so on.

Of particular interest for us is the win32k.sys API, which includes many graphical and widget related system calls that have a history of being exploitable. Going back further in Windows’ origins, this situation is likely the result of Microsoft moving many operations that were originally running in user mode into the kernel in order to improve performance around the Windows 95 and NT4 timeframe.

Having likely never been originally designed to run in this sensitive context, these APIs have been a traditional target for hackers to break out of application sandboxes and into the kernel.

In Windows 8, Microsoft introduced a new mitigation named PROCESS_MITIGATION_SYSTEM_CALL_DISABLE_POLICY that an application can use to disable access to win32k.sys system calls. That is a long name to keep repeating, so we’ll refer to it hereafter by our internal designation: “Win32k Lockdown“.

The Work Required

Flipping the Win32k Lockdown flag on the Web Content processes – the processes most vulnerable to potentially hostile web pages and JavaScript – means that those processes can no longer perform any graphical, window management, input processing, etc. operations themselves.

To accomplish these tasks, such operations must be remoted to a process that has the necessary permissions, typically the process that has access to the GPU and handles compositing and drawing (hereafter called the GPU Process), or the privileged parent process. 

Drawing web pages: WebRender

For painting the web pages’ contents, Firefox historically used various methods for interacting with the Windows APIs, ranging from using modern Direct3D based textures, to falling back to GDI surfaces, and eventually dropping into pure software mode.

These different options would have taken quite some work to remote, as most of the graphics API is off limits in Win32k Lockdown. The good news is that as of Firefox 92, our rendering stack has switched to WebRender, which moves all the actual drawing from the content processes to WebRender in the GPU Process.

Because with WebRender the content process no longer has a need to directly interact with the platform drawing APIs, this avoids any Win32k Lockdown related problems. WebRender itself has been designed partially to be more similar to game engines, and thus, be less susceptible to driver bugs.

For the remaining drivers that are just too broken to be of any use, it still has a fully software-based mode, which means we have no further fallbacks to consider.

Webpages drawing: Canvas 2D and WebGL 3D

The Canvas API provides web pages with the ability to draw 2D graphics. In the original Firefox implementation, these JavaScript APIs were executed in the Web Content processes and the calls to the Windows drawing APIs were made directly from the same processes.

In a Win32k Lockdown scenario, this is no longer possible, so all drawing commands are remoted by recording and playing them back in the GPU process over IPC.

Although the initial implementation had good performance, there were nevertheless reports from some sites that experienced performance regressions (the web sites that became faster generally didn’t complain!). A particular pain point is applications that call getImageData() repeatedly: having the Canvas remoted means that GPU textures must now be obtained from another process and sent over IPC.

We compensated for this in the scenario where getImageData is called at the start of a frame, by detecting this and preparing the right surfaces proactively to make the copying from the GPU faster.

Besides the Canvas API to draw 2D graphics, the web platform also exposes an API to do 3D drawing, called WebGL. WebGL is a state-heavy API, so properly and efficiently synchronizing child and parent (as well as parent and driver) takes great care.

WebGL originally handled all validation in Content, but with access to the GPU and the associated attack surface removed from there, we needed to craft a robust validating API between child and parent as well to get the full security benefit.

(Non-)Native Theming for Forms

HTML web pages have the ability to display form controls. While the overwhelming majority of websites provide a custom look and styling for those form controls, not all of them do, and if they do not you get an input GUI widget that is styled like (and originally was!) a native element of the operating system.

Historically, these were drawn by calling the appropriate OS widget APIs from within the content process, but those are not available under Win32k Lockdown.

This cannot easily be fixed by remoting the calls, as the widgets themselves come in an infinite number of sizes, shapes, and styles, can be interacted with, and need to be responsive to user input and dispatch messages. We settled on having Firefox draw the form controls itself, in a cross-platform style.

While changing the look of form controls has web compatibility implications, and some people prefer the more native look – on the few pages that don’t apply their own styles to controls – Firefox’s approach is consistent with that taken by other browsers, probably because of very similar considerations.

Scrollbars were a particular pain point: we didn’t want to draw the main scrollbar of the content window in a different manner than the rest of the UX, since nested scrollbars would show up with different styles, which would look awkward. But, unlike the rather rare non-styled form widgets, the main scrollbar is visible on most web pages, and because it conceptually belongs to the browser UX we really wanted it to look native.

We therefore decided to draw all scrollbars to match the system theme, although it’s a bit of an open question how things should look if even the vendor of the operating system can’t seem to decide what the “native” look is.

Final Hurdles

Line Breaking

With the above changes, we thought we had all the usual suspects that would access graphics and widget APIs in win32k.sys wrapped up, so we started running the full Firefox test suite with win32k syscalls disabled. This caused at least one unexpected failure: Firefox was crashing when trying to find line breaks for some languages with complex scripts.

While Firefox is able to correctly determine word endings in multibyte character streams for most languages by itself, the support for Thai, Lao, Tibetan and Khmer is known to be imperfect, and in these cases, Firefox can ask the operating system to handle the line breaking for it. But at least on Windows, the functions to do so are covered by the Win32k Lockdown switch. Oops!

There are efforts underway to incorporate ICU4X and base all i18n related functionality on that, meaning that Firefox will be able to handle all scripts perfectly without involving the OS, but this is a major effort and it was not clear if it would end up delaying the rollout of win32k lockdown.

We did some experimentation with trying to forward the line breaking over IPC. Initially, this had bad performance, but when we added caching performance was satisfactory or sometimes even improved, since OS calls could be avoided in many cases now.

DLL Loading & Third Party Interactions

Another complexity of disabling win32k.sys access is that so much Windows functionality assumes it is available by default, and specific effort must be taken to ensure the relevant DLLs do not get loaded on startup. Firefox itself, for example, won’t load the user32 DLL containing some win32k APIs, but injected third-party DLLs sometimes do. This causes problems because COM initialization in particular uses win32k calls to get the Window Station and Desktop if the DLL is present. Those calls will fail with Win32k Lockdown enabled, silently breaking COM and features that depend on it, such as our accessibility support.

On Windows 10 Fall Creators Update and later we have a fix that blocks these calls and forces a fallback, which keeps everything working nicely. We measured that not loading the DLLs causes about a 15% performance gain when opening new tabs, adding a nice performance bonus on top of the security benefit.

Remaining Work

As hinted in the previous section, Win32k Lockdown will initially roll out on Windows 10 Fall Creators Update and later. On Windows 8, and unpatched Windows 10 (which unfortunately seems to be in use!), we are still testing a fix for the case where third party DLLs interfere, so support for those will come in a future release.

For Canvas 2D support, we’re still looking into improving the performance of applications that regressed when the processes were switched around. Simultaneously, there is experimentation underway to see if hardware acceleration for Canvas 2D can be implemented through WebGL, which would increase code sharing between the 2D and 3D implementations and take advantage of modern video drivers being better optimized for the 3D case.

Conclusion

Retrofitting a significant change in the separation of responsibilities in a large application like Firefox presents a large, multi-year engineering challenge, but it is absolutely required in order to advance browser security and to continue keeping our users safe. We’re pleased to have made it through and present you with the result in Firefox 100.

Other Platforms

If you’re a Mac user, you might wonder if there’s anything similar to Win32k Lockdown that can be done for macOS. You’d be right, and I have good news for you: we already quietly shipped the changes that block access to the WindowServer in Firefox 95, improving security and speeding process startup by about 30-70%. This too became possible because of the Remote WebGL and Non-Native Theming work described above.

For Linux users, we removed the connection from content processes to the X11 Server, which stops attackers from exploiting the unsecured X11 protocol. Although Linux distributions have been moving towards the more secure Wayland protocol as the default, we still see a lot of users that are using X11 or XWayland configurations, so this is definitely a nice-to-have, which shipped in Firefox 99.

We’re Hiring

If you found the technical background story above fascinating, I’d like to point out that our OS Integration & Hardening team is going to be hiring soon. We’re especially looking for experienced C++ programmers with some interest in Rust and in-depth knowledge of Windows programming.

If you fit this description and are interested in taking the next leap in Firefox security together with us, we’d encourage you to keep an eye on our careers page.

Thanks to Bob Owen, Chris Martin, and Stephen Pohl for their technical input to this article, and for all the heavy lifting they did together with Kelsey Gilbert and Jed Davis to make these security improvements ship.


Firefox Add-on Reviews: Extension starter pack

You’ve probably heard about “ad blockers,” “tab managers,” “anti-trackers” or any number of browser customization tools commonly known as extensions. And maybe you’re intrigued to try one, but you’ve never installed an extension before and the whole notion just seems a bit obscure. 

Let’s demystify extensions. 

An extension is simply an app that runs on a browser like Firefox. From serious productivity and privacy enhancing features to really fun stuff like changing the way the web looks and feels, extensions give you the power to completely personalize your browsing experience. 

Addons.mozilla.org (AMO) is a discovery site that hosts thousands of independently developed Firefox extensions. It’s a vast and eclectic ecosystem of features, so we’ve hand-picked a small collection of great extensions to get you started…

I’ve always wanted an ad blocker!

uBlock Origin

Works beautifully “right out of the box.” Just add it to Firefox and uBlock Origin will automatically start blocking all types of advertising — display ads, banners, video pre-rolls, pop-ups — you name it. 

Of course, if you prefer deeper content blocking customization, uBlock Origin allows for fine control as well, like the ability to import your own custom block filters or access a data display that shows you how much of a web page was blocked by the extension. More than just an ad blocker, uBlock Origin also effectively thwarts some websites that may be infected with malware. 

For more insights about this excellent ad blocker, please see uBlock Origin — everything you need to know about the ad blocker, or to explore additional ad blocker options, you might want to check out What’s the best ad blocker for you?

I’m concerned about my digital privacy and being tracked around the web

Privacy Badger

The beauty of Privacy Badger is that once you install it on Firefox, it not only immediately begins blocking some of the sneakiest trackers on the web, it actually gets “smarter” the longer you use it. 

No complicated set-up required. Once installed, Privacy Badger automatically hunts down and blocks the most hidden types of trackers on the web. As you naturally navigate the internet, Privacy Badger will build on its knowledge base of invisible trackers and become increasingly effective at obscuring your online trail. 

Facebook Container

Mozilla’s very own Facebook Container was built to give Firefox users a no-fuss way to stop Facebook from tracking your moves outside of Facebook. 

As its title suggests, the extension “contains” your identity to just Facebook, without requiring you to  sign in/out from the website each time you use it (typically, the trade-off for the ease of always remaining signed in is that it gives Facebook a means of tracking you around the web). So the extension offers you the best of both worlds — maintain the convenience of auto sign-in and build privacy protection for yourself so Facebook can’t pry into the rest of your web activity. 

Social widgets like these give Facebook and other platforms a sneaky means of tracking your interests around the web.

I need an easier way to translate languages

Simple Translate

Do you do a lot of language translations on the web? If so, it’s a hassle always copying text and navigating away from the page you’re on just to translate a word or phrase. Simple Translate solves this problem by giving you the power to perform translations right there on the page. 

Just highlight the text you want translated and right-click to get instant translations in a handy pop-up display, so you never have to leave the page again. 

My grammar in speling is bad!

LanguageTool

Anywhere you write on the web, LanguageTool will be there to lend a guiding editorial hand. It helps fix typos, grammar problems, and even recognizes common word mix-ups like there/their/they’re. 

Available in 25 languages, LanguageTool automatically works on any web-based publishing platform like Gmail, web docs, social media sites, etc. The clever extension will even spot words you’re possibly overusing and suggest alternatives to spruce up your prose. 

YouTube your way

Enhancer for YouTube

Despite offering dozens of creative customization features, Enhancer for YouTube is easy to grasp and gives you a variety of ways to radically alter YouTube functionality. 

Once the extension is installed you’ll find an additional set of controls just beneath YouTube’s video player (you can even select the extension features you want to appear in the control bar). 

Key features include… 

  • Customize video player size
  • Change YouTube’s look with a dark theme
  • Volume boost
  • Ad blocking (with ability to allow ads from channels you choose to support)
  • One-click screenshots
  • Change playback speed
  • High-def default video quality

I’m drowning in browser tabs! Send help! 

OneTab

You’ve got an overwhelming number of open web pages. You can’t close them. You need them. But you can’t organize them all right now either. You’re too busy. What to do?! 

If you have OneTab on Firefox you just click the toolbar button and suddenly all those open tabs become a clean list of text links listed on a single page. Ahhh serenity.

Not only will you create browser breathing room for yourself, but with all those previously open tabs now closed and converted to text links, you’ve also just freed up a bunch of CPU and memory, which should improve browser speed and performance. 

If you’ve never installed a browser extension before, we hope you found something here that piqued your interest to try. To continue exploring ways to personalize Firefox through the power of extensions, please see our collection of 100+ Recommended Extensions.

Mozilla Thunderbird: 7 Great New Features Coming To Thunderbird 102

Welcome back to the Thunderbird blog! We’re really energized about our major 2022 release and cannot wait to put it in your hands. Thunderbird 102 includes several major new features for our global community of users, and we’re confident you’ll love them. So grab your favorite beverage, and let’s highlight seven features from Thunderbird 102 we’re most excited about.

Before we jump in, it’s worth mentioning that we’ve been rapidly expanding our team in order to power up your productivity and improve your favorite email client. From major milestones like a completely modernized UI/UX in next year’s Thunderbird 114 (codenamed “Supernova”) to smaller touches like new iconography, elegant new address book functionality, and an Import/Export wizard, all of it happens for you and because of you. Thunderbird not only survives but thrives thanks to your generous donations. Every amount, large or small, makes a difference. Please consider donating what you can, and know that we sincerely appreciate your support!

OK! Here's an overview of the new features in Thunderbird 102. Stay tuned to our blog for in-depth updates and deeper dives leading up to the late June release.

#1: The New Address Book In Thunderbird 102

We’ve teased a new address book in the past, and it’s finally coming in Thunderbird 102. Not only does the refreshed design make it easier to navigate and interact with your contacts, but it also boasts new features to help you better understand who you’re communicating with.

Address Book gets a new look and enhanced functionality in Thunderbird 102

The new Address Book has compatibility with the vCard specs, the de facto standard for saving contacts. If your app (like Google Contacts) or device (iPhone, Android) can export existing contacts into vCard format, Thunderbird can import them. And as you can see from the above screenshot, each contact card acts as a launchpad for messaging, email, or event creation involving that contact.

We’re also adding several more fields to each contact entry, and they’re displayed in a much better, clearer way than before.

Your contacts are getting a serious upgrade in Thunderbird 102! There’s so much more to share on this front, so please watch this blog for a standalone deep-dive on the new Address Book in the near future.

#2: The Spaces Toolbar

One of the underlying themes of Thunderbird 102 is making the software easier to use, with smarter visual cues that can enhance your productivity. The new Spaces Toolbar is an easy, convenient way to move between all the different activities in the application, such as managing your email, working with contacts via that awesome new address book, using the calendar and tasks functionality, chat, and even add-ons!

The Spaces Toolbar, on the left-hand side of Thunderbird

If you want to save screen real estate, the Spaces Toolbar can be dismissed, and you can instead navigate the different activities Thunderbird offers with the new pinned Spaces tab. (Pictured to the left of the tabs at the top)

The pinned Spaces tab, showing the different activities to the left of the tabs

#3: Link Preview Cards

Want to share a link with your friends or your colleagues, but do it with a bit more elegance? Our new Link Preview Cards do exactly that. When you paste a link into the compose window, we’ll ask you (via a tooltip you can turn off) if you’d like to display a rich preview of the link. It’s a great way for your recipient to see at a glance what they’re clicking out to, and a nice way for your emails to have a bit more polish if desired!

Embedded Link Previews in Thunderbird 102

#4: Account Setup Hub In Thunderbird 102

In past releases, we have improved first-time account setup. When setting up an email account, autodiscovery of calendars and address books works really well! But managing accounts and setting up additional accounts beyond your initial setup has lagged behind. We are updating that experience in Thunderbird 102.

Want to use Thunderbird without an email account? We know you exist, and we’re making this much easier for you! After installing the software, from now on you’ll be taken to the below account hub instead of being forced to set up a new mail account. You’re free to configure Thunderbird in the order you choose, and only the elements you choose.

New Account Setup Hub in Thunderbird 102

#5: Import/Export Wizard

And that’s a perfect segue into the brand new Import and Export tool. Moving accounts and data in and out of Thunderbird should be a breeze! Until now, you’ve had to use add-ons for this, but we’re excited to share that this is now core functionality with Thunderbird 102.

A step-by-step wizard will provide a guided experience for importing all that data that’s important to you. Moving from Outlook, SeaMonkey, or another Thunderbird installation will be easier than ever.

A screenshot from the new Import/Export wizard

We’ve also taken extra precautions to ensure that no data is accidentally duplicated in your profile after an import. To that end, none of the actions you choose are executed until the very last step in the process. As with the new Address Book, watch for a deeper dive into the new Import/Export tool in a future blog post.

#6: Matrix Chat Support

We obviously love open source, which is one of the reasons why we’ve added support for the popular, decentralized chat protocol Matrix into Thunderbird 102. Those of you enjoying the Beta version know it’s been an option since version 91, but it will finally be usable out-of-the-box in this new release. We’re going to continuously develop updates to the Matrix experience, and we welcome your feedback.

#7: Message Header Redesign

Another UX/Visual update can be seen in the redesign of the all-important message header. The refreshed design better highlights important info, making it more responsive and easier for you to navigate.

Redesigned message header in Thunderbird 102

All of these improvements are gradual but confident steps toward the major release of Thunderbird 114 “Supernova” in 2023, which is set to deliver a completely modernized overhaul to the Thunderbird interface.

Thunderbird 102 Availability?

We think you’re going to love this release and can’t wait for you to try it!

Interested in experiencing Thunderbird 102 early? It should be available in our Beta channel by the end of May 2022. We encourage you to try it! We’ve entered “feature freeze” for version 102, and are focusing on polishing it up now. That means your Beta experience should be quite stable.

For everyone who’s enjoying the Stable version, you can expect it by the end of June 2022.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. A donation will allow us to hire developers, pay for infrastructure, expand our userbase, and continue to improve.

Click here to make a donation


This Week In Rust: This Week in Rust 442

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is enum_dispatch, a proc-macro-attribute to replace dynamic dispatch with enum dispatch to gain performance.

Thanks to David Mason for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

377 pull requests were merged in the last week

Rust Compiler Performance Triage

A good week: Several performance improvements, many around macro expansion. Only one regression of note, and that PR author opened an issue to follow up on it.

Triage done by @pnkfelix. Revision range: 468492c2..c51871c4

Summary:

         Regressions 😿   Regressions 😿   Improvements 🎉   Improvements 🎉   All 😿 🎉
         (primary)        (secondary)      (primary)         (secondary)       (primary)
count    11               37               117               65                128
mean     0.7%             0.7%             -1.2%             -1.6%             -1.1%
max      1.5%             1.9%             -6.5%             -5.2%             -6.5%

2 Regressions, 4 Improvements, 1 Mixed; 1 of them in rollups

59 artifact comparisons made in total

See the full report for more.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2022-05-11 - 2022-06-08 🦀

Virtual
North America
Europe
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Element

NXLog

Quickwit

Timescale

Enso

Estuary

Stockly

Kraken

Tempus Ex

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

At Cloudflare we have big Rust projects/teams and onboard new developers regularly.

There is a learning curve. Rust is rigid and unforgiving, and noobs need assistance when the compiler says “no” (although error messages and Clippy do a good job for common mistakes).

However, the big upside is that noobs can contribute safely to Rust projects. Rust limits severity of the damage an inexperienced programmer can cause. Once they manage to get the code to compile, it already has lots of correctness guarantees. “Bad” Rust code may just clone more than strictly necessary, or write 10 lines of code for something that has a helper method in the stdlib, but it won’t corrupt memory or blindly run the happy path without checking for errors. Rust prefers to be locally explicit, so it’s also easy to review.

Kornel Lesiński on lobste.rs

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Karl Dubost: Mozilla, Bye!

This year, 2022, May the 4th was my last day at Mozilla.

Trunk from an oak tree and its canopy.

Alors, l'arbre et son rêveur, ensemble, s'ordonnent, grandissent. Jamais l'arbre, dans le monde du songe, ne s'établit comme un être achevé. — Poétique de l'espace, Gaston Bachelard

in English

Then, together, the tree and its dreamer, take their places, grow tall. Never, in the dream world, does a tree appear as a completed being. — The poetics of space, Gaston Bachelard

I started on July 2, 2013 on a 6-month contract at Mozilla to work on Web Compatibility issues for Firefox OS. I was living in Montréal, Canada at the time.

Lawrence Mandel (now at Shopify) trusted and hired me. His first words on our Web Compatibility work at Mozilla were aligned with my ideas and stance for the open Web.

We are here to make the Web more open, not only for making the Web usable on Firefox products. — Lawrence Mandel

After these 6 months, I moved to Japan, and I'm still living there; I'm currently in Tokyo. Over the span of 8 years and 10 months, I focused my energy on this mission within Web Compatibility:

A person should be able to use the Web with any device and any browser.

I was not alone. The success of a project never relies on a single individual, but on a full team of people dedicated to making this mission a reality. At the very beginning, we were three coming from Opera Software, and we all had experience with Web compatibility issues: Mike Taylor, Hallvord R.M. Steen and me. Then Adam Stevenson joined. None of the initial team is still at Mozilla. I miss working with Eric Tsai too. Some people (open contributors) have also participated in the project, like Abdul Rauf, Alexa Roman, Kate Manning, Guillaume Demesy, Reinhart Previano.

webcompat.com was set up deliberately without Mozilla branding to invite the participation of all browser implementers (Apple, Google, Microsoft, Opera, etc.) in solving issues resulting from website mistakes or interoperability issues. Mozilla put the main effort into it, and in return webcompat.com helped Mozilla and the Firefox Core team fix a lot of issues.

The current Web Compatibility team is composed of Dennis Schubert (Germany), James Graham (UK), Ksenia Berezina (Canada), Oana Arbuzov (Romania), Raul Bucata (Romania) and Thomas Wisniewski (Canada). This team was distributed across three continents (two since I left), working around the clock to help solve Web compatibility issues. All the work was done in public, shared with others, written down and tracked in the open. This encouraged autonomy and responsibility from everyone on the team. Apart from the loss of one resource, my departure doesn't put the team's work in peril. Even after I became the team manager 18 months ago, I was not a gatekeeper.

There is the Webcompat team… and then there is the amazing group of Core Engineers who have the open Web deep in their hearts. Many left Mozilla, but some of them are still there, and they were instrumental in solving interoperability issues.

Emilio Cobos Álvarez, Daniel Holbert, Jonathan Kew, Masayuki Nakano, Makoto Kato, Brian Birtles, Boris Zbarsky, Hiroyuki Hikezoe, Botond Ballo, Olli Pettay, Henri Sivonen, Anne van Kesteren, Ting-Yu Lin, Cameron McCormack. These lists are dangerous; I always forget people.

I could talk about all the things that have been solved around text input, CSS flexbox, JavaScript features, DOM and SVG, … but this is getting long.

And finally, the diagnostic ability of the Webcompat team would be nothing without the dedication of the DevTools and performance teams. They helped us work better, and they develop amazing tools that are useful for the webcompat team and for web developers. They always care about what we do. Nicolas Chevobbe, Julien Wajsberg, Daisuke Akatsuka, Jan Odvarko (Honza), and many others …

But as Bachelard said above:

Never, in the dream world, does a tree appear as a completed being.

The new chapter starts on May 16, 2022. More information on that later, apart from the lunar eclipse.

Comments

If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.

Otsukare!

The Rust Programming Language Blog: Security advisory: malicious crate rustdecimal

This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG and the crates.io team were notified on 2022-05-02 of the existence of the malicious crate rustdecimal, which contained malware. The crate name was intentionally similar to the name of the popular rust_decimal crate, hoping that potential victims would misspell its name (an attack called "typosquatting").

To protect the security of the ecosystem, the crates.io team permanently removed the crate from the registry as soon as it was made aware of the malware. An analysis of all the crates on crates.io was also performed, and no other crate with similar code patterns was found.

Keep in mind that the rust_decimal crate was not compromised, and it is still safe to use.

Analysis of the crate

The crate had less than 500 downloads since its first release on 2022-03-25, and no crates on the crates.io registry depended on it.

The crate contained identical source code and functionality to the legitimate rust_decimal crate, except for the Decimal::new function.

When the function was called, it checked whether the GITLAB_CI environment variable was set, and if so it downloaded a binary payload into /tmp/git-updater.bin and executed it. The binary payload supported both Linux and macOS, but not Windows.

An analysis of the binary payload was not possible, as the download URL didn't work anymore when the analysis was performed.

Recommendations

If your project or organization is running GitLab CI, we strongly recommend checking whether your project or one of its dependencies depended on the rustdecimal crate, starting from 2022-03-25. If you notice a dependency on that crate, you should consider your CI environment to be compromised.
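
As a concrete starting point, here is one hedged way to do that check for a single Cargo project with a committed Cargo.lock. The commands below are generic cargo/grep usage, not something prescribed by the advisory:

# Check the lock file directly (no match means the crate was never resolved):
$ grep -n 'name = "rustdecimal"' Cargo.lock

# Or ask Cargo which of your dependencies (if any) pull it in; an error saying
# the package is not in the dependency graph is the outcome you want:
$ cargo tree --invert rustdecimal

Remember to repeat the check on every branch and CI configuration that was built since 2022-03-25, not just the current state of the default branch.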

In general, we recommend regularly auditing your dependencies, and only depending on crates whose author you trust. If you notice any suspicious behavior in a crate's source code please follow the Rust security policy and report it to the Rust Security Response WG.

Acknowledgements

We want to thank GitHub user @safinaskar for identifying the malicious crate in this GitHub issue.

The Talospace Project: Firefox 100 on POWER

You know, it's not been a great weekend. Between striking out on some classic hardware projects, leaving printed circuit board corpses in my living room like some alternative universe fusion of William Gibson and Jeffrey Dahmer, one of the yard sprinkler valves has decided it will constantly water the hedge (a couple hundred bucks to fix) and I managed to re-injure my calf muscle sprinting to try to get a phone call (it went pop, I yelped and they hung up anyway). But Firefox is now at version 100, so there's that. Besides the pretty version window when you start it up, it has captions on picture-in-picture and various performance improvements.

Fx100 also broke our PGO-LTO gcc patches again; mach now needs to be patched to ignore how gcc captures profiling information or it will refuse to start a PGO build. This is rolled into the ensemble PGO-LTO patch, which works with the same .mozconfigs from Firefox 95.

Between everything that's been going on and other projects I've wanted to pay attention to I don't think we're going to make the Fx102 ESR window for the POWER9 JIT. I'll still offer patches for 102ESR; you'll just have to apply them like you do for Firefox 91ESR. Meanwhile, I'll keep trying to get the last major bugs out as I have time, inclination and energy, but although I know people want this really badly, we need more eyes on the code than just me.

Spidermonkey Development Blog: SpiderMonkey Newsletter (Firefox 100-101)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 100 and 101 Nightly release cycles.

👷🏽‍♀️ New features

⚙️ Modernizing JS modules

We’re working on improving our implementation of modules. This includes supporting modules in Workers, adding support for Import Maps, and ESM-ification (replacing the JSM module system for Firefox internal JS code with standard ECMAScript modules).

  • We added support for caching module scripts in the bytecode cache.
  • We landed support for caching modules in the StartupCache.
  • We’ve landed many changes to improve the browser’s module loader code.
  • We’ve added a common base class to work towards supporting modules in Workers.
  • We’ve landed a cleanup for the Worker Script Loader.

🚀 JS Performance

  • We’ve added support for functions with simple try-finally blocks to the optimizing JIT, fixing a very old performance cliff. Support for more complicated cases will be added in a later release.
  • We’ve improved instanceof performance by removing the non-standard JSClass hook, proxy trap, and by optimizing polymorphic cases better in the JITs.
  • We changed how function.call and function.apply are optimized. This is more robust and fixes some performance cliffs.
  • We added support for optimizing builtin functions in CacheIR when called through function.call.
  • We used this to optimize the common slice.call(arguments) pattern in the JITs.
  • We optimized certain calls into C++ from optimized JIT code by removing (unnecessary) dynamic stack alignment.
  • We improved CacheIR support for property-init operations.
  • We reimplemented new.target as a real binding.
  • We added support for scalar replacement of array iterator objects.

🏎️ WebAssembly Performance

  • We moved trap instructions out-of-line to improve branch prediction.
  • We merged wasm::Instance and TlsData. This eliminates some memory loads.
  • We improved Baseline code performance by pinning the Instance/TLS register on most platforms.
  • We fixed some DevTools performance issues: opening the web console no longer results in using slower debugging code for Wasm modules and we fixed debugging support to not clone Wasm code.
  • We optimized a common instruction sequence with SIMD instructions.
  • We added AVX support for all binary SIMD operations.
  • We enabled support for AVX instructions for Wasm SIMD on Nightly.
  • We optimized table.get/set for anyref tables.
  • We optimized memory.copy/fill for Memory64.

📚 Miscellaneous

  • We fixed a number of compiler errors when compiling in C++20 mode.
  • We’ve updated ICU to version 71.
  • We landed a workaround for a glibc pthreads bug that was causing intermittent hangs in CI for JS shell tests on Linux.
  • We stopped using extended functions for most class methods, to reduce memory usage.

This Week In RustThis Week in Rust 441

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research

Crate of the Week

This week's crate is shuttle, a rustic declarative deployment solution for and at your service.

Thanks to Nodar Daneliya for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

343 pull requests were merged in the last week

Rust Compiler Performance Triage

Performance overall improved in the last week, but some of this is due to fixing regressions from prior weeks. This week also brings an average of 4% improvement in memory usage across all profiles due to #95171 bumping the LLVM/clang used on x86_64-unknown-linux-gnu to compile C and C++ code linked into rustc.

Triage done by @simulacrum. Revision range: 1c988cfa..468492

Summary:

            Regressions 😿   Regressions 😿   Improvements 🎉   Improvements 🎉   All 😿 🎉
            (primary)        (secondary)      (primary)         (secondary)       (primary)
count       13               1                78                29                91
mean        0.8%             0.3%             -0.9%             -0.8%             -0.7%
max         1.5%             0.3%             -2.7%             -2.1%             -2.7%

4 Regressions, 3 Improvements, 1 Mixed; 1 of them in rollups

52 artifact comparisons made in total

See the full report for more.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2022-05-04 - 2022-06-01 🦀

Virtual
North America
Europe

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Element

Meilisearch

Aembit

PropelAuth

ANIXE

Stockly

Parity

Kraken

Rust Foundation

Tempus Ex

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

"Ah but logic errors can happen with all languages" yes and I'm sure trains occasionally run into trees as well, but cars are way more likely to. 🙃

amos on twitter

Thanks to Jacques “erelde” Rimbault for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdOpenPGP keys and SHA-1

As you may know, Thunderbird offers email encryption and digital email signatures using the OpenPGP technology and uses Ribose’s RNP library that provides the underlying functionality.

To strengthen the security of the OpenPGP implementation, a recent update of the RNP library included changes to refuse the use of several unsafe algorithms, such as MD5 and SHA-1. The Thunderbird team delivered RNP version 0.16.0 as part of the Thunderbird 91.8.0 update.

Unfortunately, this change resulted in some users no longer being able to use their OpenPGP keys. We learned that the affected users still depend on keys that were created or modified with OpenPGP software that used SHA-1 for the signatures that are part of OpenPGP keys.

After analyzing and discussing the issue, we decided to continue to allow SHA-1 for this use of signatures, also known as binding signatures. This matches the behavior of other popular OpenPGP software like GnuPG. Thunderbird 91.9.0 includes this fix and will be released today.

While some attacks on SHA-1 are possible, the currently known attacks are difficult to apply to OpenPGP binding signatures. In addition, RNP 0.16.0 includes SHA-1 collision detection code, which should make it even more difficult for an attacker to abuse the fact that Thunderbird accepts SHA-1 in binding signatures.

More details on the background, on the affected and future versions, and considerations for other OpenPGP software, can be found in the following knowledge base article:

https://support.mozilla.org/en-US/kb/openpgp-unsafe-key-properties-ignored

 

Follow Thunderbird on Mastodon.

The post OpenPGP keys and SHA-1 appeared first on The Thunderbird Blog.

The Mozilla BlogCredit card autofill now enabled in the United Kingdom, France and Germany

With the rapid growth of e-commerce in the last few years, more people are expecting their browser to help them out when shopping online. Firefox has supported credit card autofill in the United States and Canada since 2018. Starting today, Firefox can also remember and fill credit card details in the United Kingdom, France and Germany.

How to use credit card autofill

In order to use credit card autofill, enter your credit card information when checking out online, as you normally do. After submitting your payment details, Firefox will prompt you to save your card information if it is a new credit card, or to update if you have saved this card before.

After saving or updating your details, you will be able to quickly fill out payment forms in the future.

For more on Firefox:

The post Credit card autofill now enabled in the United Kingdom, France and Germany appeared first on The Mozilla Blog.

The Mozilla BlogCelebrating Firefox: How we got to 100

Whether it’s celebrating the first 100 days of school or turning 100 years old, reaching a 100th milestone is a big deal worthy of confetti, streamers and cake, and of course, reflection. Today, Firefox is releasing its 100th version to our users and we wanted to take a moment to pause and look back on how we got to where we are today together as well as what features we are releasing in our 100th version. 

In 2004, we announced the release of Firefox 1.0 with a crowdfunded New York Times ad, which listed the names of every single person who contributed to building that first release — hundreds of people. Our goal with Firefox 1.0 was to provide a robust, user-friendly and trustworthy web experience. We received praise for our features that helped users avoid pop-ups, increased online fraud protection, made tabbed browsing more efficient and gave people the ability to customize their browser with add-ons. Our goal was to put our users first and personalize their web experience and that goal still holds true today. 

Since then our community of users  — past and current — has grown to thousands of people and continues to shape who we are today. We are so grateful for all the work they have done from filing bugs, submitting ideas through Mozilla Connect, downloading and testing Firefox Nightly and Beta releases and continuing to choose Firefox as their browser. Thank you!

Before we delve into the new features for Firefox 100, we wanted to share artwork we received when we sent out a call for fan art inspired by Firefox. Over the next couple of weeks, we will be sharing the art we received on our social media channels. Have your own Firefox-inspired art you'd like to share? Tag us @Firefox for a chance to be featured. In the meantime, here is one piece we received that we wanted to share.

Credit: @gr.af.ik

Here are the new features for Firefox 100: 

Picture-in-Picture now available with subtitles – a Mozilla Connect community requested feature

In 2019 we released Picture-in-Picture and it quickly became a favorite among users. Since its release we continued to improve it, first, by making it available across Windows, Mac and Linux, then making multiple Picture-in-Pictures (which coincidentally was a plus for sports enthusiasts), and today with subtitles and captions. This feature just keeps getting better and we owe it to our users sharing their feedback. 

Earlier this year we launched Mozilla Connect, a collaborative space for users to share product feedback, submit ideas for new features and participate in meaningful discussions that help shape future releases. It’s where we received the idea to enhance Picture-in-Picture with subtitles and captions support. 

The subtitles and captions in Picture-in-Picture will initially be available on three websites — YouTube, Prime Video and Netflix — plus websites that support WebVTT format like coursera.org and Twitter. We hope to expand the feature to even more sites. Now whether you’re hard-of-hearing, a multi-tasker or a multilingual user, we have you covered with Picture-in-Picture subtitles. To learn more about the impact of captions and how Picture-in-Picture will become more usable to a range of users with varying needs, we’ve written an article here.

To hear more about the journey PiP subtitles followed from community idea to shipping feature, or contribute ideas and thoughts of your own, join us over in Mozilla Connect. We’d love to hear from you.

Dress up your Firefox mobile browsers with new browser wallpapers 

Do you ever walk into a room with plain white solid walls and just feel, well, meh? We understand the feeling. There’s something about bright colors and fun patterns that bring us joy. Whether you’re a minimalist or a maximalist, we’ve got a wallpaper for you. In March, we announced new wallpapers on Firefox Android and iOS for the US. Today, we added two new wallpapers — Beach vibes and Twilight hills — available globally. You can find the wallpapers feature in your Settings and pick the one that suits your style. On iOS, we will roll out wallpapers this week.

Beach Vibe
Twilight Hills

Firefox on Android and iOS: Your history and tabs are now clutter-free 

Today, we have clutter-free history and clutter-free tabs available on Android, and they will roll out on iOS this week. These are two separate features with the same intention: to simplify and organize your tabs and history so you can jump back into the things you care about.

For clutter-free history, instead of an endless sea of URLs leaving you feeling overwhelmed, we’ve organized your history for you in an intuitive way. One way we’ve done this is by grouping history items under the original search: for example, if you’re looking for shoes and have viewed several models, you can find them grouped in one folder under your search term. Another way is by removing duplicate sites, reducing the visual clutter. Lastly, you can now search within your history. Keep in mind that we do this in a privacy-preserving way, so you can rest assured that your data isn’t being sent back to Mozilla.

Clutter-free history

For clutter-free tabs, we heard from users who struggle with “tab clutter” but keep tabs open so they can go back to them later. Now, we will focus on your recent tabs in your tab tray so you can easily switch between the tabs depending on what you’re working on. Once you haven’t visited a tab within 14 days, we will move it to an “inactive state” in your tab tray. This means that you can still hold on to and go back to those tabs, but they will no longer clutter your view and you can easily close them when you’re ready. Clutter-free tabs was released on Android last year and will roll out on iOS this week.

Clutter-free tabs

Firefox for Android: Get more private with HTTPS-only mode

In March, we added HTTPS-only mode to Firefox Focus, our simple, privacy by default companion app. We thought that our Firefox for Android app users could also benefit from this privacy-first feature, and today HTTPS-only mode is available on Android. We break down HTTPS in detail here.

TL;DR: Firefox for Android now automatically opts into HTTPS for the best available security and privacy – supporting life’s get-in-and-out moments where you may want to do quick searches with no distractions or worries.

Plus, new features for our users around the world including our first-run language switcher and credit card auto fill

The first-run language switcher is a new feature that makes it easier for people to use their preferred language. On first run, Firefox will recognize your device language preference and, where it differs from Firefox’s default language, ask whether you’d like to switch to one of the more than 100 languages we have available. The new credit card auto fill feature was launched previously in North America and is now enabled in the United Kingdom, France and Germany. For more information on how the new credit card auto fill works, please visit here.

Check out these new features today and download them on your desktop and mobile devices: Firefox, Firefox on Android and Firefox for iOS.

For more on Firefox:

The post Celebrating Firefox: How we got to 100 appeared first on The Mozilla Blog.

The Mozilla BlogFirefox’s Picture-in-Picture rolls out subtitles – a Mozilla Connect community requested feature

There are so many different ways that Firefox users count on Picture-in-Picture for their best browsing and content viewing experiences:

  • Toddler duty? Don’t sweat it! Stream your child’s favorite movie to watch on your desktop while you hold down your job and the title of “favorite parent!”
  • Learning new coding skills? Open an online tutorial video while keeping your developer software full screen where you test the code.
  • Trying a new recipe? With Picture-in-Picture you can watch Emily Mariko’s viral salmon rice bowl how-to while also having the in-text recipe open on your main browser tab.

Beginning with Firefox 1.0, we’ve continued to put our users first to develop and deliver on the features most important to them. Our mission then – to build an internet open and accessible to all – still remains the same today. That’s why, nearly 99 releases later, we’re excited to introduce subtitles and captions to Firefox’s Picture-in-Picture (PiP). 

A community-led effort

In 2019, we released Picture-in-Picture and it soon topped our best features lists. This Firefox browser feature allows you to seamlessly pop any video out from the webpage into a floating, size-adjustable window in just one click. After hearing from Firefox users who wanted more than just one picture-in-picture view, we launched Multiple Picture-in-Picture (multi-PiP), which allows users to pop out and play multiple videos at the same time and on top of other tabs and windows. Now, we’re rolling out PiP with subtitles and captions – another highly requested feature straight from our community!

In fact, our product managers engaged our recently launched Mozilla Connect community – a collaborative space for users to share product feedback, submit ideas for new features, and participate in meaningful discussions that help shape future releases – to help develop exciting updates to the Firefox video and PiP experiences. This is where we received an astounding amount of feedback to enhance Picture-in-Picture with subtitles and captions support to make it more functional for a wider range of users.

So, how does the subtitles and captions feature on Picture-In-Picture work? 

The subtitles and captions feature in Picture-in-Picture is available on three major streaming sites – YouTube, Prime Video, and Netflix – plus all websites that support the emerging W3C standard called WebVTT (Web Video Text Track) format, such as Twitter and Coursera. It’s really as simple as turning on the subtitles or captions on the in-page video player of your favorite website, and they will appear in PiP when launched. 
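
For context on what it means for a website to “support WebVTT”, here is a hypothetical sketch (not from this post) of how a page can attach a WebVTT caption track to its video player; tracks exposed this way are what Firefox can then display in the Picture-in-Picture window:

// Hypothetical example: attaching a WebVTT captions track to a <video> element.
// The file path and labels below are made up for illustration.
const video = document.querySelector("video");

const track = document.createElement("track");
track.kind = "captions";
track.label = "English";
track.srclang = "en";
track.src = "/captions/episode-1.en.vtt"; // a WebVTT (.vtt) file served by the site
track.default = true;                      // show these captions by default

video.appendChild(track);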

There are many sites that don’t yet support the emerging web standard, so if you don’t see captions when using PiP on your favorite streaming site, let the website know that you want them to support WebVTT or vote on the community page to help Firefox select the streaming sites we should support next by building special site-specific adapters for these websites. If you would like to contribute your own site-specific adapter, you can do it here (how-to included), so that more websites support subtitles in PiP. 

How do captions/subtitles make videos more usable, and therefore more accessible? 

It’s true that adding captions and subtitles can make videos more accessible to a wider range of users. According to the U.S. Census Bureau, almost 20% of North Americans and more than 55% of Europeans speak more than one language.  It can prove handy to watch videos with subtitles in your second language, or when you’re interested in watching something in a language you don’t know at all. For viewers that are non-native speakers, subtitles help them learn a new language faster and with a greater depth of understanding.

Another highly visible example is that Firefox users who experience a degree of hearing loss may rely on captions to follow along with video content. Interestingly, the majority of people who like watching videos with captions are hearing, yet still opt in for their own benefit! Reading captions can improve comprehension and memorization, and it can be helpful to use them while multitasking, studying at a library or shared workspace, or in any other situation where sound may be distracting or inappropriate. There are also many other situations when users would want to watch videos with the sound off and captions on.

“Today users with disabilities have a choice when it comes to accessible browsers, but accessibility is never actually ‘done’ and there’s still a lot more browsers could do to better support users with disabilities. So, with that, Firefox is looking to see what innovations can be provided to improve the accessibility of web browsing and empower users with disabilities in more ways.”

Asa Dotzler, product manager for Firefox Accessibility

As Firefox continues to make its products more usable to a range of users with varying needs, the accessibility of our products will only increase, and we have our community to thank for that! Now, whether you are hard-of-hearing, a non-native speaker, or simply someone with an affinity for keeping multiple tabs open at one time, we have you covered with Picture-in-Picture subtitles.

Got things to do and things to watch? Go ahead and download the latest version of Firefox here to use Picture-in-Picture in Firefox.

The post Firefox’s Picture-in-Picture rolls out subtitles – a Mozilla Connect community requested feature appeared first on The Mozilla Blog.

William DurandDeveloping Firefox in Firefox with Gitpod

Gitpod provides Linux-based development environments on demand along with a web editor frontend (VS Code). There is apparently no limit on what you can do in a Gitpod workspace, e.g., I ran my own toy kernel in QEMU in the browser.

I like Gitpod because it…

  • avoids potential issues when setting up a new project, which is great for the maintainers (e.g., it is easier to reproduce an issue when you have access to the same environment) and the newcomers (e.g., no fatal error when trying to contribute for the first time)
  • allows anyone to have access to a (somewhat) powerful machine because not everyone can afford a MacBook Pro. I suppose one needs a pretty reliable Internet connection, though

Motivations

My main machine runs MacOS. I also have a Windows machine on my desk (useful to debug annoying Windows-only intermittent failures), which I access with Microsoft Remote Desktop.

It looks like Gitpod, runs like Gitpod, and quacks like Gitpod but it isn’t Gitpod!

Except for my ever-growing collection of single-board computers, I don’t have a Linux-based machine at home^W work, though. I haven’t used Virtual Machines on Apple Silicon yet and I’d rather keep an eye on Asahi Linux instead.

This is why I wanted to give Gitpod a try and see if it could become my Linux environment for working on Firefox. Clearly, this is a nice to have for me (I don’t necessarily need to build Firefox on Linux) but that might be useful to others.

Assuming a Gitpod-based setup would work for me, this could possibly become a strong alternative for other contributors as well. Then, mayyybe I could start a conversation about it internally in the (far) future.

Note that this isn’t a novel idea, Jan Keromnes was already working on a similar tool called “Janitor” for Mozilla needs in 2015.

Firefox development with Gitpod = ❤️

I recently put together a proof of concept and played with it since then. This GitHub repository contains the Gitpod configuration to checkout the Firefox sources as well as the tools required to build Firefox.

It takes about 7 minutes to be able to run ./mach run in a fresh workspace (artifact mode). It is not super fast, although I already improved the initial load time by using a custom Docker image. It is also worth mentioning that re-opening an existing workspace is much faster!

./mach run executed in a Gitpod workspace

Gitpod provides a Docker image with X11 and VNC, which I used as the base for my custom Docker image. This is useful to interact with Firefox as well as observing some of the tests running.

A “mochitest” running in a Gitpod workspace

I don’t know if this is the right approach, though. My understanding is that Gitpod works best when its configuration lives next to the sources. For Firefox, that would mean having the configuration in the official Mozilla Mercurial repositories but then we would need hgpod.io 😅

On the other hand, we develop Firefox using a stacked diff workflow. Therefore, we probably don’t need most of the VCS features and Gitpod does not want to be seen as an online IDE anyway. I personally rely on git with the excellent git-cinnabar helper, and moz-phab to submit patches to Phabricator. Except for the latter, which can easily be installed with ./mach install-moz-phab, these tools are already available in Gitpod workspaces created with my repository.

In terms of limitations, cloning mozilla-unified is the most time-consuming task at the moment. Gitpod has a feature called Prebuilds that could help but I am not sure how that would work when the actual project repository isn’t the one that contains the Gitpod configuration.

In case you’re wondering, I also started a full build (no artifact) and it took about 40 minutes to finish 🙁 I was hoping for better performance out of the box, even if it isn’t that bad. For comparison, it takes ~15min on my MacBook Pro with M1 Max (and 2 hours on my previous Apple machine).

There are other things that I’d like to poke around. For instance, I would love to have rr support in Gitpod. I gave it a quick try and it does not seem possible so far, maybe because of how Docker is configured.

gitpod /workspace/gitpod-firefox-dev/mozilla-unified (bookmarks/central) $ rr record -n /usr/bin/ls
[FATAL /tmp/rr/src/PerfCounters.cc:224:start_counter() errno: EPERM] Failed to initialize counter

After a few messages exchanged on Twitter, Jan Keromnes (yeah, same as above) from Gitpod filed a feature request to support rr 🤞

Next steps

As I mentioned previously, this is a proof of concept but it is already functional. I’ll personally continue to evaluate this Linux-based development environment. If it gets rr support, this might become my debugging environment of choice!

Now, if you are interested, you can go ahead and create a new workspace automatically. Otherwise please reach out to me and let’s discuss!

One more thing: Mozilla employees (and many other Open Source contributors) qualify for the Gitpod for Open Source program.

Cameron KaiserApril patch set for TenFourFox

I've had my hands full with the POWER9 64-bit JIT (a descendant of TenFourFox's 32-bit JIT), but I had a better idea about the lazy image loader workaround in February's drop and got the rest of the maintenance patches down at the same time. These patches include the standard security fixes and updates to timezones, pinned certificates and HSTS, as well as another entry for the ATSUI font blacklist. In addition, a bug in the POWER9 JIT turns out to affect TenFourFox as well (going back to at least TenFourFox 38), which I ported a fix for. It should correct some residual issues with IonPower-NVLE on a few sites, though it may allow code to run that couldn't run before that may have its own issues, of course. A clobber is not required for this update. The commits are already hot and live on Github.

The next ESR, version 102, is due in June. I'll change the EV and certificate roots over around that time as usual, but we might also take the opportunity to pull up some of the vendored libraries like zlib, so it might be a somewhat bigger update than it would ordinarily be.

William DurandMoziversary #4

Today is my fourth Moziversary 🎂 I have been working at Mozilla as a full-time employee for 4 years. I blogged two times before: in 2020 and 2021. What happened in 2019? I. Don’t. Know.

I was hired as a Senior Web Developer on addons.mozilla.org (AMO). I am now a Staff Software Engineer in the Firefox WebExtensions team. I officially joined this team in January. Since then, I became a peer of the Add-ons Manager and WebExtensions modules.

Farewell AMO!

As mentioned above, I moved to another team after many years in the AMO team. If I had to summarize my time in this team, I would probably say: “I did my part”.

Earlier this year, I transferred ownership of more than 10 projects that I either created or took over to my former (AMO) colleagues 😬 I was maintaining these projects in addition to the bulk of my work, which has been extremely diverse. As far as I can remember, I…

  • worked on countless user-facing features on AMO, quickly becoming the top committer on addons-frontend (for what it’s worth)
  • contributed many improvements to the AMO backend (Django/Python). For example, I drastically improved the reliability of the Git extraction system that we use for signed add-ons
  • developed a set of anti-malware scanning and code search tools that have been “a game changer to secure the add-ons ecosystem for Firefox”
  • introduced AWS Lambda to our infrastructure for some (micro) services
  • created prototypes, e.g., I wrote a CLI tool leveraging SpiderMonkey to dynamically analyze browser extension behaviors
  • almost integrated Stripe with AMO 🙃
  • shipped entire features spanning many components end-to-end like the AMO Statistics (AMO frontend + backend, BigQuery/ETL with Airflow, and some patches in Firefox)
  • created various dev/QA tools

Last but not least, I started and developed a collaboration with Mozilla’s Data Science team on a Machine Learning (security-oriented) project. This has been one of my best contributions to Mozilla to date.

What did I do over the last 12 months?

I built the “new” AMO blog. I played with Eleventy for the first time and wrote some PHP to tweak the WordPress backend for our needs. I also “hijacked” our addons-frontend project a bit to reuse some logic and components for the “enhanced add-on cards” used on the blog (screenshot below).

The AMO blog with an add-on card for the “OneTab” extension. The “Add to Firefox” button is dynamic like the install buttons on the main AMO website.

Next, I co-specified and implemented a Firefox feature to detect search hijacking at scale. After that, I started to work on some of the new Manifest V3 APIs in Firefox and eventually joined the WebExtensions team full-time.

I also…

This period has been complicated, though. The manager who hired me and helped me grow as an engineer left last year 😭 Some other folks I used to work with are no longer at Mozilla either. That got me thinking about my career and my recent choices. I firmly believe that moving to the WebExtensions team was the right call. Yet, most of the key people who could have helped me grow further are gone. Building trustful relationships takes time and so does having opportunities to demonstrate capabilities.

Sure, I still have plenty of things to learn in my current role but I hardly see what the next milestone will be in my career at the moment. That being said, I love what I am doing at Mozilla and my team is fabulous ❤️

Thank you to everyone in the Add-ons team as well as to all the folks I had the pleasure to work with so far!

Hacks.Mozilla.OrgCommon Voice dataset tops 20,000 hours

The latest Common Voice dataset, released today, has achieved a major milestone: More than 20,000 hours of open-source speech data that anyone, anywhere can use. The dataset has nearly doubled in the past year.

Why should you care about Common Voice?

  • Do you have to change your accent to be understood by a virtual assistant? 
  • Are you worried that so many voice-operated devices are collecting your voice data for proprietary Big Tech datasets?
  • Are automatic subtitles unavailable for you in your language?

Automatic Speech Recognition plays an important role in the way we can access information; however, of the 7,000 languages spoken globally today, only a handful are supported by most products.

Mozilla’s Common Voice seeks to change the language technology ecosystem by supporting communities to collect voice data for the creation of voice-enabled applications for their own languages. 

Common Voice Dataset Release 

This release wouldn’t be possible without our contributors — from voice donations to initiating their language in our project, to opening new opportunities for people to build voice technology tools that can support every language spoken across the world.

Access the dataset: https://commonvoice.mozilla.org/datasets

Access the metadata: https://github.com/common-voice/cv-dataset 

Highlights from the latest dataset:

  • The new release also features six new languages: Tigre, Taiwanese (Minnan), Meadow Mari, Bengali, Toki Pona and Cantonese.
  • Twenty-seven languages now have at least 100 hours of speech data. They include Bengali, Thai, Basque, and Frisian.
  • Nine languages now have at least 500 hours of speech data. They include Kinyarwanda (2,383 hours), Catalan (2,045 hours), and Swahili (719 hours).
  • Nine languages now all have at least 45% of their gender tags as female. They include Marathi, Dhivehi, and Luganda.
  • The Catalan community fueled major growth. The Catalan community’s Project AINA — a collaboration between Barcelona Supercomputing Center and the Catalan Government — mobilized Catalan speakers to contribute to Common Voice. 
  • Deeper community participation in decision making than ever. The Common Voice Language Rep Cohort has contributed feedback and learnings about optimal sentence collection, the inclusion of language variants, and more.

 Create with the Dataset 

How will you create with the Common Voice Dataset?

Take some inspiration from technologists who are creating conversational chatbots, spoken language identifiers, research papers and virtual assistants with the Common Voice Dataset by watching this talk: 

https://mozilla.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=6492f3ae-3a0d-4363-99f6-adc00111b706 

Share how you are using the dataset with us on social media using #CommonVoice, or share it on our Community Discourse.

 

The post Common Voice dataset tops 20,000 hours appeared first on Mozilla Hacks - the Web developer blog.

Hacks.Mozilla.OrgMDN Plus now available in more countries

Almost a month ago, we announced MDN Plus, a new premium service on MDN that allows users to customize their experience on the website.

We are very glad to announce today that it is now possible for MDN users around the globe to create an MDN Plus free account, no matter where they are.

Click here to create an MDN Plus free account*.

Also starting today, the premium version of the service will be available in 16 more countries: Austria, Belgium, Finland, France, United Kingdom, Germany, Ireland, Italy, Malaysia, the Netherlands, New Zealand, Puerto Rico, Sweden, Singapore, Switzerland, and Spain. We continue to work towards expanding this list even further.

Click here to create an MDN Plus premium account**.

* Now available to everyone

** You will need to subscribe from one of the countries mentioned above to be able to have an MDN Plus premium account at this time

The post MDN Plus now available in more countries appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 440

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is czkawka, a GTK-based duplicate finder.

Despite a lack of nominations, llogiq is pleased with his pick.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

278 pull requests were merged in the last week

Rust Compiler Performance Triage

This was, in general, a positive week for compiler performance. There were many concentrated efforts on improving rustdoc performance with a lot of real world crates showing ~4-7% improvements in full build times. Additionally, there was further improvement to macro_rules! performance with many real world crates improving performance by as much as 18% in full builds! On the other hand, the regressions were mostly minor and largely relegated to secondary benchmarks.

Triage done by @rylev. Revision range: 4ca19e0..1c988cf

4 Regressions, 6 Improvements, 3 Mixed; 1 of them in rollups

45 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2022-04-27 - 2022-05-25 🦀

Virtual
North America
Europe

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Element

Bytewax

Cambrian Works

HashCloak

NXLog

Zcash Foundation

RDX Works

KidsLoop

Stockly

Enso

Kraken

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

This is the most fundamental philosophy of both the Rust language and the Rust project: we don't think it's sufficient to build robust systems by only including people who don't make mistakes; we think it's better to provide tooling and process to catch and prevent mistakes.

Jane Lusby on the inside Rust blog

Thanks to farnbams for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox NightlyThese Weeks In Firefox: Issue 114

Highlights

  • The devtools console is now significantly faster! If you are a developer who heavily uses the console, this should be a substantial quality of life improvement. This was a great collaboration between the devtools team and the performance team, and further performance improvements are inbound. – Bug 1753177
  • Thank you to Max, who added several video wrappers for Picture-in-Picture subtitles support
  • Font size of Picture-in-Picture subtitles can be adjusted using the preference media.videocontrols.picture-in-picture.display-text-tracks.size. Options are small, medium, and large. – Bug 1757219
  • Starting in Firefox >= 101, there will be a change to our WebExtension storage API. Each storage area (storage.local, storage.sync, etc.) will provide its own onChanged event (e.g. browser.storage.local.onChanged and browser.storage.sync.onChanged), in addition to the browser.storage.onChanged API event (see the sketch after this list) – Bug 1758475
  • Daisuke has fixed an issue in the URL bar where the caret position would move on pages with long load times
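
As a rough illustration of the storage API change mentioned above (a hypothetical extension background script, not code from the bug), per-area listeners can now be registered alongside the existing global event:

// Hypothetical WebExtension background script targeting Firefox >= 101.

// New: listen for changes on a single storage area directly.
browser.storage.local.onChanged.addListener((changes) => {
  for (const [key, { oldValue, newValue }] of Object.entries(changes)) {
    console.log(`local "${key}" changed from`, oldValue, "to", newValue);
  }
});

// Existing: the global event still fires for every area and passes its name.
browser.storage.onChanged.addListener((changes, areaName) => {
  console.log(`storage area "${areaName}" changed:`, changes);
});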

Friends of the Firefox team

Introductions/Shout-Outs

  • Thanks to everyone that has helped mentor and guide Outreachy applicants so far, and a huge shout-out to the applicants!

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • av
  • irwp
  • Janvi Bajoria [:janvi01]
  • Max
  • Oriol Brufau [:Oriol]
  • sayuree
  • serge-sans-paille
  • Shane Hughes [:aminomancer]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • As part of the ongoing ManifestVersion 3 work:
    • Menu API support for event pages – Bug 1748558 / Bug 1762394 / Bug 1761814
    • Prevent event pages from being terminated on idle while the Add-on Debugging devtools toolbox is attached to the extension – Bug 1748530
    • Deprecated tabs APIs have been hidden (for add-ons with manifest_version explicitly set to 3) – Bug 1764566
    • Relax validation of the manifest properties unsupported for manifest_version 3 add-ons – Bug 1723156
      • Firefox will be showing warnings in “about:debugging” for the manifest properties deprecated in manifest_version 3, but the add-on will install successfully.
      • The deprecated manifest properties will be set as undefined in the normalized manifest data.
  • Thanks to Jon Coppeard’s work in Bug 1761938, a separate module loader is used for WebExtensions content scripts’ dynamic module imports in Firefox >= 101
  • Thanks to Emilio’s work in Bug 1762298, WebExtensions popups and sidebars will use the preferred color scheme inherited from the one set for the browser chrome
    • Quoting from Bug 1762298 comment 1: “The prefers-color-scheme of a page (and all its subframes) loaded in a <browser> element depends on the used color-scheme property value of that <browser> element”.

Developer Tools

  • Toolbox
    • Fixed pin-to-bottom issue with tall messages in the Console panel (e.g. error with large stacktrace) (bug)
    • Fixed CSS Grid/Flexbox highlighter in Browser Toolbox (bug)
      • Flex structure of a flexbox highlighted correctly.
  • WebDriver BiDi
    • Started working on a new command, browsingContext.create, which is used to open new tabs and windows. This command is important both for end users as well as for our own tests, to remove another dependency on WebDriver HTTP.
    • We landed a first implementation of the browsingContext.navigate command. Users can rely on this command to navigate tabs or frames, with various wait conditions (a rough sketch of the wire format follows this list).
    • On April 11th geckodriver 0.31.0, which is our proxy for W3C WebDriver compatible clients to interact with Gecko-based browsers, was released including some fixes and improvements in our WebDriver BiDi support and a new command for retrieving a DOM node’s ShadowRoot.
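
For readers unfamiliar with WebDriver BiDi, here is a rough, hypothetical sketch of what sending the browsingContext.navigate command looks like on the wire; the WebSocket endpoint and context id are made up, and a real client library would normally handle this for you:

// Hypothetical raw WebDriver BiDi command sent over the session's WebSocket.
const ws = new WebSocket("ws://localhost:9222/session"); // made-up endpoint

ws.addEventListener("open", () => {
  ws.send(JSON.stringify({
    id: 1,
    method: "browsingContext.navigate",
    params: {
      context: "example-context-id", // id of an existing tab or frame
      url: "https://example.com/",
      wait: "complete",              // wait condition: "none", "interactive" or "complete"
    },
  }));
});

ws.addEventListener("message", (event) => {
  console.log("BiDi response:", JSON.parse(event.data));
});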

Form Autofill

Lint, Docs and Workflow

  • There are various mentored patches in work/landing to fix ESLint no-unused-vars issues in xpcshell-tests. Thank you to the following who have landed fixes so far:
    • Roy Christo
    • Karnik Kanojia
  • Patches have been posted for upgrading to the ESLint v8 series.
    • One thing of note is that ESLint will now catch cases of typeof foo == undefined. These always return false, since typeof returns a string; the comparison should be against the string “undefined”, i.e. typeof foo == “undefined” (see the short example after this list).
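
A tiny, made-up example of the mistake this rule now flags:

// Buggy: typeof always returns a string, so comparing it to the value
// undefined is always false, and ESLint now flags it.
let maybeDefined;
if (typeof maybeDefined == undefined) {
  console.log("never reached");
}

// Correct: compare against the string "undefined".
if (typeof maybeDefined == "undefined") {
  console.log("maybeDefined is undefined");
}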

Password Manager

Picture-in-Picture

Search and Navigation

James added additional flexibility to generating locale and region names based on the user’s locale/region in search-config.json

The Mozilla BlogInternet spring cleaning: How to delete Instagram, Facebook and other accounts

So you’ve washed your sheets and vacuumed under the couch in the name of spring cleaning. But what about your online clutter?

Apps that delight, inform and keep you connected deserve room in your digital space. But if you haven’t used your Facebook account in years, or you’re looking to permanently take back some time from doomscrolling, you might consider getting rid of some accounts once and for all. Of course, like other services that profit off of you – in this case, off your user information – social media and other online platforms can make it hard to cut ties. Here’s a guide to that elusive delete button for when you’ve made up your mind.

How to delete Facebook 

On Firefox, you can limit the social media giant tracking your online activities through the Facebook Container extension without deleting your Facebook account. Though, *wildly gestures at news headlines over the past couple of years*, we don’t blame you if you want to stop using the app for good. You may have even deactivated your Facebook account before and found that you can log right back in. Here’s how to actually delete Facebook through the mobile app:

  • Find the menu icon on the bottom right.
  • Scroll all the way down.
  • From there, go to settings & privacy > personal and account information > account ownership and control > deactivation and deletion.
  • Select delete account, then continue to account deletion > continue to account deletion > delete account.
  • Enter your password, hit continue and click on delete account. Facebook will start deleting your account in 30 days, just make sure not to log back in before then.

Here’s how to delete Facebook through your browser:

  • Find the account icon (it looks like an upside-down triangle in the top right corner).
  • From there, go to settings & privacy > settings > your Facebook information > deactivation and deletion.
  • Select delete account, then continue to account deletion > delete account > confirm deletion.

It may take up to 90 days to delete everything you’ve posted, according to Facebook. Also note that the company says after that period, it may keep copies of your data in “backup storage” to recover in some instances such as software error and legal issues. 

More information from Facebook

How to delete Instagram 

So you’ve decided to shut down your Instagram account. Maybe you want to cleanse your digital space from its parent company, Meta. Perhaps you’re tired of deleting the app only to reinstall it later. Whatever your reason, here’s how to delete your Instagram:

  • Visit https://instagram.com/accounts/remove/request/permanent and log in, if you aren’t already logged in.
  • You’ll see a question about why you want to delete your account. Pick an option from the dropdown menu. 
  • Re-enter your password.
  • Click on delete [username].
  • When prompted, confirm that you want to delete your account. 
  • You’ll see a page saying your account will be deleted after a month. You’ll be able to log in before then if you choose to keep your account.

More information from Instagram

How to delete Snapchat

Whether you’ve migrated to another similar social media platform, or have simply outgrown it, you may be tempted to just delete the Snapchat app from your phone and get on with it. But you’ll want to scrub your data, too. Here’s how to delete your Snapchat account from an iOS app:

  • Click on the profile icon on the top left, then the settings icon on the top right.
  • Scroll all the way down and hit delete account.
  • Enter your password then continue. Your account will be deleted in 30 days.

Here’s how to delete your Snapchat account from your browser:

More information from Snapchat

How to delete Twitter

Twitter can be a trove of information. It can also enable endless doomscrolling. If you’d rather get your news and debate people on the latest hot take elsewhere, here’s how to delete your Twitter account from a browser:

  • Once you’re logged in, click on more on the left-hand side of your Twitter homepage. 
  • Click on settings & privacy > your account > deactivate your account > deactivate.
  • Enter your password and hit confirm.

Here’s how to delete your Twitter account from the app:

  • Click on the profile icon, then go to settings and privacy > your account > deactivate your account > deactivate.
  • Enter your password.

More information from Twitter

How to delete Google

Google’s share of the global search market stands at about 85%. While the tech giant will likely continue to loom large over our lives, from search to email to our calendars, we can delete inactive or unnecessary Google accounts. Here’s how to do that:

More information from Google

How to delete Amazon

Amazon has had its fair share of controversies, particularly about data collection and how the retail giant treats its workers. If you’ve decided that easy access and quick deliveries aren’t worth the price anymore, here’s how to delete your Amazon account:

  • Go to https://www.amazon.com/privacy/data-deletion.
  • Make sure to read which Amazon services you won’t have access to after you delete your account. 
  • Check “Yes, I want to permanently close my Amazon Account and delete my data.”
  • Hit close my account.
  • Check your text messages or emails for a notification from Amazon.
  • Click on the confirm account closure link. 
  • Enter your password. 

More information from Amazon

How to delete Venmo

Payment app Venmo has made it easier to split bills and pay for things without cash. But if you’ve decided to use other ways to do that, you’ll want to delete your account along with your bank information with it. You’ll first have to transfer any funds in your Venmo account to your bank account. Another option: return the funds to sender. If you have any pending transactions, you’ll need to address them before you can close your account. Once you’re set, here’s how to delete your Venmo account on your browser:

Here’s how to close your Venmo account in the app:

  • On the bottom right, click on the person icon.
  • On the top right, go to settings by clicking on the gear icon
  • Click on account > close Venmo account > continue > confirm

More information from Venmo

How to delete TikTok

TikTok has exploded in popularity, surpassing Twitter and Snapchat’s combined ad revenue in February. If you’ve tried the app and decided it’s not for you, here’s how to delete your TikTok account: 

  • In the app, click the profile icon on the bottom right.
  • Click the three-line icon on the top right.
  • Click on settings and privacy > manage account > delete account.
  • Follow the prompts.

More information from TikTok

How to delete Spotify 

Whether you want to follow in Neil Young’s footsteps or are already streaming music and podcasts through another service, deleting your stagnant Spotify account is a good idea. If you have a subscription, you’ll need to cancel that first. Once you’re ready, here’s how to delete your Spotify account. 

More information from Spotify

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. While you’re off to a good start with our one-stop shop for deleting online accounts, it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to. 

The post Internet spring cleaning: How to delete Instagram, Facebook and other accounts appeared first on The Mozilla Blog.

The Mozilla BlogHow to delete your Spotify account

Whether you want to follow in Neil Young’s footsteps or are already streaming music and podcasts through another service, deleting your stagnant Spotify account is a good idea. If you have a subscription, you’ll need to cancel that first. Once you’re ready, here’s how to delete your Spotify account.

More information from Spotify

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. You’ll be off to a good start with our one-stop shop for deleting online accounts, but it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to.

The post How to delete your Spotify account appeared first on The Mozilla Blog.

The Mozilla BlogHow to delete your Facebook account

On Firefox, you can limit the social media giant tracking your online activities through the Facebook Container extension without deleting your Facebook account. Though, *wildly gestures at news headlines over the past couple of years*, we don’t blame you if you want to stop using the app for good. You may have even deactivated your Facebook account before and found that you can log right back in. Here’s how to actually delete Facebook through the mobile app:

  • Find the menu icon on the bottom right.
  • Scroll all the way down.
  • From there, go to settings & privacy > personal and account information > account ownership and control > deactivation and deletion.
  • Select delete account, then continue to account deletion > continue to account deletion > delete account.
  • Enter your password, hit continue and click on delete account. Facebook will start deleting your account in 30 days, just make sure not to log back in before then.

Here’s how to delete Facebook through your browser:

  • Find the account icon (it looks like an upside-down triangle in the top right corner).
  • From there, go to settings & privacy > settings > your Facebook information > deactivation and deletion.
  • Select delete account, then continue to account deletion > delete account > confirm deletion.

It may take up to 90 days to delete everything you’ve posted, according to Facebook. Also note that the company says after that period, it may keep copies of your data in “backup storage” to recover in some instances such as software error and legal issues. 

More information from Facebook

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. You’ll be off to a good start with our one-stop shop for deleting online accounts, but it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to.

The post How to delete your Facebook account appeared first on The Mozilla Blog.

The Mozilla BlogHow to delete your Instagram account

So you’ve decided to shut down your Instagram account. Maybe you want to cleanse your digital space from its parent company, Meta. Perhaps you’re tired of deleting the app only to reinstall it later. Whatever your reason, here’s how to delete your Instagram:

  • Visit https://instagram.com/accounts/remove/request/permanent and log in, if you aren’t already logged in.
  • You’ll see a question about why you want to delete your account. Pick an option from the dropdown menu.
  • Re-enter your password.
  • Click on delete [username].
  • When prompted, confirm that you want to delete your account.
  • You’ll see a page saying your account will be deleted after a month. You’ll be able to log in before then if you choose to keep your account.

More information from Instagram

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. You’ll be off to a good start with our one-stop shop for deleting online accounts, but it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to.

The post How to delete your Instagram account appeared first on The Mozilla Blog.

The Mozilla BlogHow to delete your Snapchat account

Whether you’ve migrated to another similar social media platform, or have simply outgrown it, you may be tempted to just delete the Snapchat app from your phone and get on with it. But you’ll want to scrub your data, too. Here’s how to delete your Snapchat account from an iOS app:

  • Click on the profile icon on the top left, then the settings icon on the top right.
  • Scroll all the way down and hit delete account.
  • Enter your password then continue. Your account will be deleted in 30 days.

Here’s how to delete your Snapchat account from your browser:

More information from Snapchat

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. You’ll be off to a good start with our one-stop shop for deleting online accounts, but it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to.

The post How to delete your Snapchat account appeared first on The Mozilla Blog.

The Mozilla BlogHow to delete your Twitter account

Twitter can be a trove of information. It can also enable endless doomscrolling.

If you’d rather get your news and debate people on the latest hot take elsewhere, or have any other concerns about the platform, here’s how to delete your Twitter account from a browser:

  • Once you’re logged in, click on more on the left-hand side of your Twitter homepage.
  • Click on settings & privacy > your account > deactivate your account > deactivate.
  • Enter your password and hit confirm.

Here’s how to delete your Twitter account from the app on your phone or mobile device:

  • Click on the profile icon, then go to settings and privacy > your account > deactivate your account > deactivate.
  • Enter your password.

More information from Twitter

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. You’ll be off to a good start with our one-stop shop for deleting online accounts, but it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to.

The post How to delete your Twitter account appeared first on The Mozilla Blog.

The Mozilla BlogHow to delete your Google account

Google’s share of the global search market stands at about 85%. While the tech giant will likely continue to loom large over our lives, from search to email to our calendars, we can delete inactive or unnecessary Google accounts. Here’s how to do that:

More information from Google

With our lives so online, our digital space can get messy with inactive and unnecessary accounts — and forgetting about them can pose a security risk. You’ll be off to a good start with our one-stop shop for deleting online accounts, but it’s far from exhaustive. So here’s a bonus tip: Sign up for Firefox Monitor. It alerts you when your data shows up in any breaches, including on websites that you’ve forgotten giving your information to.

The post How to delete your Google account appeared first on The Mozilla Blog.

Support.Mozilla.OrgIntroducing Dayana Galeano

Hi everybody, 

I’m excited to welcome Dayana Galeano, our new Community Support Advocate, to the Customer Experience team.

Here’s a short introduction from Dayana: 

Hi everyone! My name is Dayana and I’ll be helping out with mobile support for Firefox. I’ll be pitching in to help respond to app reviews and identifying trends to help track feedback. I’m excited to join this community and work alongside all of you!

Since the Community Support Advocate role is new for Mozilla Support, we’d like to take a moment to describe the role and how it will enhance our current support efforts. 

Open-source culture has been at the center of Mozilla’s identity since the beginning, and this has been our guide for how we support our products. Our “peer to peer” support model, powered by the SUMO community, has enabled us to support Firefox and other products through periods of rapid growth and change, and it’s been a crucial strategy to our success. 

With the recent launches of premium products like Mozilla VPN and Firefox Relay, we’ve adapted our support strategy to meet the needs and expectations of subscribers. We’ve set up processes to effectively categorize and identify issues and trends, enabling us to pull meaningful insights out of each support interaction. In turn, this has strengthened our relationships with product teams and improved our influence when it comes to improving customer experience. With this new role, we hope to apply some of these processes  to our peer to peer support efforts as well.

To be clear about our intentions, this is not a step away from peer to peer support at Mozilla. Instead, we are optimistic that this will deepen the impact our peer to peer support strategy will have with the product teams, enabling us to better segment our support data, share more insightful reports on releases, and showcase the hard work that our community is putting into SUMO each and every day. This can then pave the way for additional investment into resources, training, and more effective onboarding for new members of the community. 

Dayana’s primary focus will be supporting the mobile ecosystem, including Firefox for Android, Firefox for iOS, Firefox Focus (Android and iOS), as well as Firefox Klar. The role will initially emphasize support question moderation, including tagging and categorizing our inbound questions, and the primary support channels will be app reviews on iOS and Android. This will evolve over time, and we will be sure to communicate about these changes.

And with that, please join me to give a warm welcome to Dayana! 

Mozilla Open Policy & Advocacy BlogThe FTC and DOJ merger guidelines review is an opportunity to reset competition enforcement in digital markets

As the internet becomes increasingly closed and centralized, consolidation and the opportunity for anti-competitive behavior rises. We are encouraged to see legislators and regulators in many jurisdictions exploring how to update consumer protection and competition policies. We look forward to working together to advance innovation, interoperability, and consumer choice.

Leveling the playing field so that any developer or company can offer innovative new products and people can shape their own online experiences has long been at the core of Mozilla’s vision and of our advocacy to policymakers. Today, we focus on the call for public comments on merger enforcement from the US Federal Trade Commission (FTC) and the US Department of Justice (DOJ) – a key opportunity for us to highlight how existing barriers to competition and transparency in digital markets can be addressed in the context of merger rules.

Our submission focuses on the below key themes, viewed particularly through the lens of increasing competition in browsers and browser engines – technologies that are central to how consumers engage on the web.

  • The Challenge of Centralization Online: For the internet to fulfill its promise as a driver for innovation, a variety of players must be able to enter the market and grow. Regulators need to be agile in their approach to tackle walled gardens and vertically-integrated technology stacks that tilt the balance against small, independent players.
  • The Role of Data: Data aggregation can be both the motive and the effect of a merger, with potential harms to consumers from vertically integrated data sharing being increasingly recognised and sometimes addressed. The roles of privacy and data protection in competition should be incorporated into merger analysis in this context.
  • Greater Transparency to Inform Regulator Interventions: Transparency tools can provide insight into how data is being used or how it is shared across verticals and are important both for consumer protection and to ensure effective competition enforcement. We need to create the right regulatory environment for these tools to be developed and used, including safe harbor access to data for public interest researchers.
  • Enabling Effective Interoperability as a Remedy: Interoperability should feature as an essential tool in the competition enforcer’s toolkit. In particular, web compatibility – ensuring that services and websites work equally well no matter which operating system, browser, or device a person is using – may prove useful in addressing harms arising from a vertically integrated silo of technologies.
  • Critical Role of Open Standards in Web Compatibility: The role of Standards Development Organizations and standards processes is vital to an open and decentralized internet.
  • Harmful Design Practices Impede Consumer Choice: The FTC and the DOJ should ban design practices that inhibit consumer control. This includes Dark Patterns and Manipulative Design Techniques used by companies to trick consumers into doing something they don’t mean to do.

 

The post The FTC and DOJ merger guidelines review is an opportunity to reset competition enforcement in digital markets appeared first on Open Policy & Advocacy.

Hacks.Mozilla.OrgAdopting users’ design feedback

On March 1st, 2022, MDN Web Docs released a new design and a new brand identity. Overall, the community responded to the redesign enthusiastically and we received many positive messages and kudos. We also received valuable feedback on some of the things we didn’t get quite right, like the browser compatibility table changes as well as some accessibility and readability issues.

For us, MDN Web Docs has always been synonymous with the term Ubuntu, “I am because we are.” Translated in this context, “MDN Web Docs is the amazing resource it is because of our community’s support, feedback, and contributions.”

Since the initial launch of the redesign and of MDN Plus afterwards, we have been humbled and overwhelmed by the level of support we received from our community of readers. We do our best to listen to what you have to say and to act on suggestions so that together, we make MDN better. 

Here is a summary of how we went about addressing the feedback we received.

Eight days after the redesign launch, we started the MDN Web Docs Readability Project. Our first task was to triage all issues submitted by the community that related to readability and accessibility on MDN Web Docs. Next up, we identified common themes and collected them in this meta issue. Over time, this grew into 27 unique issues and several related discussions and comments. We collected feedback on GitHub and also from our communities on Twitter and Matrix.

With the main pain points identified, we opened a discussion on GitHub, inviting our readers to follow along and provide feedback on the changes as they were rolled out to a staging instance of the website. Today, roughly six weeks later, we are pleased to announce that all these changes are in production. This was not the effort of any one person but is made up of the work and contributions of people across staff and community.

Below are some of the highlights from this work.

Dark mode

We updated the color palette used in dark mode in particular.

  • We reworked the initial color palette to use colors that are slightly more subtle in dark mode while ensuring that we still meet AA accessibility guidelines for color contrast.
  • We reconsidered the darkness of the primary background color in dark mode and settled on a compromise that improved the experience for the majority of readers.
  • We cleaned up the notecards that indicate notices such as warnings, experimental features, items not on the standards track, etc.

Readability

We got a clear sense from some of our community folks that readers found it more difficult to skim content and find sections of interest after the redesign. To address these issues, we made the following improvements:

Browser compatibility tables

Another area of the site for which we received feedback after the redesign launch was the browser compatibility tables. Almost its own project inside the larger readability effort, the work we invested here resulted, we believe, in a much-improved user experience. All of the changes listed below are now in production:

  • We restored version numbers in the overview, which are now color-coded across desktop and mobile.
  • The font size has been bumped up for easier reading and skimming.
  • The line height of rows has been increased for readability.
  • We reduced the table cells to one focusable button element.
  • Browser icons have been restored in the overview header.
  • We reordered support history chronologically to make the version range that the support notes refer to visually unambiguous.

We also fixed the following bugs:

  • Color-coded pre-release versions in the overview
  • Consistent mouseover titles showing release dates
  • The missing footnote icon restored in the overview
  • Correct support status for edge cases (e.g., omitting the prefix symbol when both prefixed and unprefixed support exist)
  • A streamlined mobile dark mode

We believe this is a big step in the right direction, but we are not done. We can, and will, continue to improve site-wide readability, the functionality of page areas such as the sidebars, and general accessibility. As with the current improvements, we invite you to provide us with your feedback and always welcome your pull requests to address known issues.

This was a collective effort, but we’d like to mention folks who went above and beyond. Schalk Neethling and Claas Augner from the MDN Team were responsible for most of the updates. From the community, we’d like to especially thank Onkar Ruikar, Daniel Jacobs, Dave King, and Queen Vinyl Da.i’gyu-Kazotetsu.

 

The post Adopting users’ design feedback appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 439

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is ttrpc, a GRPC-like protocol implementation for memory-constrained environments.

Thanks to George Hahn for the suggestion.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

388 pull requests were merged in the last week

(note: We now also track rust-analyzer, which recently joined the rust-lang org)

Rust Compiler Performance Triage

A week with a large amount of changes in rollups, which makes performance triage difficult. The performance team and the infra team are working on finding ways to automate marking PRs as likely a poor choice for rolling up. Otherwise, the week overall saw a ~1% improvement in incremental check builds, with smaller improvements to incremental debug and release builds. A number of benchmarks have been updated in the last few weeks, which has meant a decrease in the automated noise assessment's algorithm performance, but that should settle out to steady state behavior on its own in the next few days.

Triage done by @simulacrum. Revision range: 949b98ca..4e1927d

5 Regressions, 4 Improvements, 7 Mixed; 7 of them in rollups. 50 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2022-04-20 - 2022-05-18 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Element

GMTO Organization

NXLog

KidsLoop

Timescale

Stockly

Kollider

Tempus Ex

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Alas, this week went by without any memorable quote.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Wladimir PalantAdobe Acrobat hollowing out same-origin policy

It’s unclear whether all the countless people who have the Adobe Acrobat browser extension installed actually use it. Since the extension is installed automatically along with the Adobe Acrobat application, chances are that they don’t even know about it. But security-wise it doesn’t matter: an extension that’s installed and unused can still be exploited by malicious actors. So a few months ago I decided to take a look.

A PDF file displayed in the browser. The address bar says: Adobe Acrobat. Adobe Acrobat icon is also visible in the browser’s toolbar.

To my surprise, the extension itself did almost nothing despite having a quite considerable code size. It’s in fact little more than a way to present Adobe Document Cloud via an extension, with all of the user interface being hosted on Adobe’s servers. To make this work more smoothly, the Adobe Acrobat extension grants the documentcloud.adobe.com website access to some of its functionality, in particular a way to circumvent the browser’s same-origin policy (SOP). And that’s where the trouble starts: it’s hard to keep these privileges restricted to Adobe properties.

Companies don’t usually like security reports pointing out that something bad could happen. So I went out on a quest to find a Cross-site Scripting (XSS) vulnerability allowing third-party websites to abuse the privileges granted to documentcloud.adobe.com. While I eventually succeeded, this investigation yielded a bunch of dead ends that are interesting on their own. These have been reported to Adobe, and I’ll outline them in this article as well.

TL;DR: Out of six issues reported, only one is resolved. The main issue received a partial fix, two more got fixes that didn’t quite address the issue. Two (admittedly minor) issues haven’t been addressed at all within 90 days from what I can tell.

Why does same-origin policy matter?

The same-origin policy is the most fundamental security concept of the web. It mandates that example.com cannot simply access your data on other websites like google.com or amazon.com, at least not without the other websites explicitly allowing it by means of CORS for example. So even if you visit a malicious website, that website is limited to doing mischief within its own bounds – or exploiting websites with security vulnerabilities.
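
As a quick illustration (my own sketch, not something from the extension or Adobe’s code), this is the kind of cross-origin read that the same-origin policy normally blocks unless the other site opts in via CORS:

// Running on https://example.com - google.com has not opted in via CORS,
// so the response body is not readable and the promise rejects:
fetch("https://www.google.com/", { credentials: "include" })
  .then(response => response.text())
  .then(text => console.log("never reached", text.length))
  .catch(error => console.log("blocked by the same-origin policy", error));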

What happens if that security boundary breaks down? Suddenly a malicious website can impersonate you towards other websites, even if these don’t have any known vulnerabilities. Are you logged into Gmail for example? A malicious website can request your data from Gmail, downloading all your email conversations. And then it can ask Gmail to send out emails in your name. Similarly if you are logged into Twitter or Facebook, your private messages are no longer private. And your active online banking session will allow that malicious website to check your transaction history (luckily not making any transfers, that usually requires authorization via a second factor).

Now you hopefully get an idea why a hole in the same-origin policy is a critical vulnerability and needs to be prevented at any cost. Next: Adobe Acrobat extension.

SOP circumvention in the Adobe Acrobat extension

As I mentioned before, the Adobe Acrobat extension doesn’t actually do anything by itself. So when you edit a PDF file for example, you aren’t actually in the extension – you are in Adobe’s Document Cloud. You are using a web application.

Now that web application has a problem: in order to do something with a PDF file, it needs to access its data. And with the file hosted anywhere on the web, the same-origin policy gets in the way. The usual solution would be using a proxy: let some Adobe server download the PDF file and provide the data to the web application. The downside: a proxy server cannot access PDF files hosted on a company intranet, nor PDF files that require the user to be logged in. These can only be accessed via the user’s browser.

So Adobe went with another solution: let the extension “help” the web application by downloading the PDF data for it. How this works:

  • When you navigate to a PDF file like https://example.com/test.pdf in your browser, the extension redirects you to its own page: chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/viewer.html?pdfurl=https://example.com/test.pdf.
  • The extension’s viewer.html is merely a shell for https://documentcloud.adobe.com/proxy/chrome-viewer/index.html that it loads in a frame.
  • The extension page will attempt to download data from the address it received via pdfurl parameter and send it to the frame via window.postMessage().

This would be mostly fine if navigating to some PDF file were a necessary step of the process. But viewer.html is listed under web_accessible_resources in the extension’s manifest. This means that any website is allowed to load chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/viewer.html and pass it whatever value it likes for pdfurl. For example, ?pdfurl=https://www.google.com/ would result in the extension downloading the Google homepage, and intercepting the resulting data would give attackers access to your Google user name.
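
To make this concrete, here is a minimal sketch (my own, not part of Adobe’s code or the original proof of concept) of how an arbitrary web page could point the viewer at a URL of its choosing:

// Any website can load the extension page listed under web_accessible_resources
// and hand it an arbitrary pdfurl - here the Google homepage instead of a PDF.
// Actually reading the data that viewer.html then posts is a separate problem,
// covered in the next paragraph.
const viewer = document.createElement("iframe");
viewer.src = "chrome-extension://efaidnbmnnnibpcajpcglclefindmkaj/viewer.html" +
  "?pdfurl=" + encodeURIComponent("https://www.google.com/");
document.body.appendChild(viewer);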

The good news: only a page that is itself hosted on documentcloud.adobe.com can intercept the message exchange here. The bad news: Cross-site Scripting (XSS) vulnerabilities are very common, and any such vulnerability in a page on documentcloud.adobe.com would give attackers this access. Even worse news: while documentcloud.adobe.com uses the Content Security Policy (CSP) mechanism to protect against XSS vulnerabilities, it doesn’t do so consistently. Even where CSP is used, its protection is weakened considerably by allowing scripts from a multitude of different services and by using keywords like 'unsafe-eval'.

The fix

I made sure that Adobe received a complete proof of concept: a page abusing an XSS vulnerability to get into documentcloud.adobe.com. That access is then leveraged to download google.com and extract your user name. That should demonstrate the issue nicely; what could possibly go wrong? Well, for one, Adobe could fix the XSS vulnerability before even looking at my proof of concept for this issue. And that’s exactly what they did, of course. More than a month after the report they asked me why they couldn’t reproduce the issue.

In their defense, they didn’t give up on this issue even though I couldn’t deliver a new proof of concept. As of Adobe Acrobat 15.1.3.10, it is partially resolved. I could confirm that exploiting it to download regular pages no longer works. Now malicious websites exploiting an XSS vulnerability on documentcloud.adobe.com can only download PDF files, even if doing so requires the user’s privileges (files hidden on a company intranet or behind a login screen).

The reason is a change to the page’s _sendMessage function:

var readyReceived, seenPdf;

_sendMessage = (message, origin) =>
{
  if (this.iframeElement && isValidOrigin(origin))
  {
    const timeout = 10000;
    var startTime = Date.now();
    new Promise(function check(resolve, reject)
    {
      if (readyReceived && seenPdf)
        resolve();
      else if (timeout && Date.now() - startTime >= timeout)
        reject(new Error("timeout"));
      else
        setTimeout(check.bind(this, resolve, reject), 30);
    }).then(() => this.iframeElement.contentWindow.postMessage(message, origin));
  }
};

The part waiting for readyReceived and seenPdf variables to be set is new. Now responses will be delayed until documentcloud.adobe.com frame loads and the code deems the file to be a valid PDF. Note that the logic recognizing PDF files isn’t terribly reliable:

function isPdf(request, url)
{
  const type = request.getResponseHeader("content-type");
  const disposition = request.getResponseHeader("content-disposition");
  if (type)
  {
    const typeTrimmed = type.toLowerCase().split(";", 1)[0].trim();
    // Yes, this checks disposition.value which should be always undefined
    if (disposition && /^\s*attachment[;]?/i.test(disposition.value))
      return false;
    if ("application/pdf" === typeTrimmed)
      return true;
    if ("application/octet-stream" === typeTrimmed)
    {
      if (url.toLowerCase().indexOf(".pdf") > 0)
        return true;
      if (disposition && /\.pdf(["']|$)/i.test(disposition.value))
        return true;
    }
  }
  return false;
}

So any file with MIME type application/octet-stream can be considered a PDF file; all it takes is adding #file.pdf to the URL.
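
To illustrate (my own example, reusing the isPdf() logic quoted above and assuming request is a response with a content-type of application/octet-stream and no content-disposition header):

isPdf(request, "https://example.com/some-data");           // false - no ".pdf" in the URL
isPdf(request, "https://example.com/some-data#file.pdf");  // true - the fragment is enough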

There was one more notable change to the code, a check in a message event handler:

function isValidSource(event)
{
  try
  {
    return event && event.source &&
        event.source.top.location.origin === "chrome-extension://" + chrome.runtime.id;
  }
  catch (e)
  {
    return false;
  }
}

if (event.data && event.origin && isValidOrigin(event.origin) && isValidSource(event))
  ...

The isValidSource() function is new and essentially boils down to checking event.source.top == window – only events coming from within the viewer’s own frame hierarchy are accepted. This is probably meant to address my proof of concept, where the message source happened to be an external page. It doesn’t provide any value beyond what isValidOrigin() already does however. If there is an external documentcloud.adobe.com page sending messages, this page has full access to the documentcloud.adobe.com frame within the viewer by virtue of being same-origin. This access can be used to run code in the frame and thus send messages with the frame being the message source.

Open Redirect via the fallback mechanism

Before we delve into my search for XSS vulnerabilities, there is another interesting aspect of this viewer.html page, namely its fallback mechanism. The extension developers thought: what should we do if we cannot download that PDF file after all? Rather than displaying an error message of their own, they decided to leave this scenario to the browser. So in case of a download error the page will redirect back to the PDF file. Which might not be a PDF file because, as we already learned, a malicious website can open the viewer with any value for the pdfurl parameter.

What happens if a page loads viewer.html?pdfurl=javascript:alert(1) for example? The page will run the following code:

window.location.href = "javascript:alert(1)";

This would have been an XSS vulnerability in the extension (very bad), but luckily the extension’s Content Security Policy stops this attack.

So this Open Redirect vulnerability seems fairly boring. Still, there is another way to exploit it: viewer.html?pdfurl=data/js/options.html. This will make the viewer redirect to the extension’s options page. The options page isn’t listed under web_accessible_resources and normally shouldn’t be exposed to attacks by websites. Well, thanks to this vulnerability it is. It’s pure luck that it was coded in a way that didn’t quite allow malicious websites to change extension settings.

At the time of writing this vulnerability was still present in the latest extension version.

The (not so) clever message origin check

When looking for XSS vulnerabilities, I tend to focus on the client-side code. This has multiple reasons. First of all, it’s impossible to accidentally cause any damage if you don’t mess with any servers. Second: client-side code is out there in the open, you only need to go through it looking for signs of vulnerabilities rather than blindly guessing which endpoints might be exploitable and how. And finally: while server-side vulnerabilities are reasonably understood by now, the same isn’t true for the client side. Developers tend to be unaware of security best practices when it comes to the client-side code of their web applications.

Now Adobe uses React for their client side code, a framework where introducing an XSS vulnerability takes effort and determination. Still, I started checking out message event handlers, these being notorious sources of security vulnerabilities. It didn’t take long to find the first issue, in a library called Adobe Messaging Client:

this.receiveMessage = function(event)
{
  var origin = event.origin || event.originalEvent.origin;
  if (getMessagingUIURL(mnmMode).substr(0, origin.length) !== origin)
  {
    log("Ignoring message received as event origin and expected origin do not match");
    return;
  }
  ...
}

The getMessagingUIURL(mnmMode) call returns an address like https://ui.messaging.adobe.com/2.40.3/index.html. Normally, one would parse that address, get the origin and compare it to the message origin. But somebody found a clever shortcut: just check whether this address starts with the origin! And in fact, the address https://ui.messaging.adobe.com/2.40.3/index.html starts with the valid origin https://ui.messaging.adobe.com but it doesn’t start with the wrong origin https://example.com. Nice trick, and it saves calling new URL() to parse the address.

Except that this address also happens to start with https://ui.messaging.ad and with https://ui.me, so these origins would be considered valid as well. And neither the messaging.ad nor the ui.me domain is registered, so anyone wishing to abuse this code could register them.
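
To make the flaw concrete, here is the check’s comparison applied by hand (my own illustration, not Adobe’s code):

const expected = "https://ui.messaging.adobe.com/2.40.3/index.html";

// The handler accepts any origin that is a prefix of the expected address:
expected.substr(0, "https://ui.messaging.adobe.com".length) === "https://ui.messaging.adobe.com"; // true (intended)
expected.substr(0, "https://ui.messaging.ad".length) === "https://ui.messaging.ad"; // true (unintended)
expected.substr(0, "https://ui.me".length) === "https://ui.me"; // true (unintended)
expected.substr(0, "https://example.com".length) === "https://example.com"; // false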

Registering those domains is probably not worth the effort however. None of the actions performed by this message handler seem particularly dangerous. A dead end. Still, I reported it to Adobe so that they can replace this with a proper origin check.

The fix

Fifty days later Adobe reported having fixed this issue. And they in fact did. The new check looks like this:

this.receiveMessage = function(_feeble_board_)
{
  var origin = event.origin || event.originalEvent.origin;
  var url = getMessagingUIURL(mnmMode);
  var expectedOrigin = "";
  if (url.startsWith("https://") || url.startsWith("http://"))
  {
    var parts = url.split("/");
    expectedOrigin = parts[0] + "//" + parts[2];
  }
  if (expectedOrigin !== origin)
  {
    log("Ignoring message received as event origin and expected origin do not match");
    return;
  }
}

I’m not sure why Adobe is using this crude parsing approach instead of calling new URL(). This approach would certainly be a concern if used with untrusted data. But they use it on their own data, so it will do here. And they are now expecting an exact origin match, as they should.

The insecure single sign-on library

On most Adobe properties, you can log in with your Adobe ID. This is handled by a library called imslib. In fact, two versions of this library exist on Adobe websites: imslib 1.x and imslib v2. The latter seems to be a rewrite of the former, and it’s where I found another vulnerable message event handler. This one doesn’t check the event origin at all:

this.receiveMessage = function(event)
{
  if (this.onProcessLocation)
    this.onProcessLocation(event.data);
};

There are several levels of indirection for the onProcessLocation handler but it essentially boils down to:

this.onProcessLocation = function(url)
{
  window.location.replace(url);
}

Here we have our XSS vulnerability. Any page can do wnd.postMessage("javascript:alert(1)", "*") and this code will happily navigate to the provided address, executing arbitrary JavaScript code in the process.

There is a catch however: this message event handler isn’t always present. It’s being installed by the openSignInWindow() function, executed when the user clicks the “Sign In” button. It is meant to reload the page once the login process succeeds.

Tricking the user into clicking the “Sign In” button might be possible via Clickjacking but there is another catch. The library has two ways of operating: the modal mode where it opens a pop-up window and the redirect mode where the current page is replaced by the login page. And all Adobe pages I’ve seen used the latter which isn’t vulnerable. Another dead end.

At the time of writing imslib v2 received at least two version bumps since I reported the issue. The vulnerability is still present in the latest version however.

Increasing the attack surface

I got somewhat stuck, so I decided to check out what else is hosted on documentcloud.adobe.com. That’s when I discovered this embed API demo. And that suddenly made my job much easier:

  • This page contains a View SDK frame with an address like https://documentcloud.adobe.com/view-sdk/<version>/iframe.html.
  • This frame is meant to be embedded by any website, so there is no framing protection.
  • The frame is in fact meant to communicate with arbitrary websites, and it will accept all kinds of messages.

In fact, I learned that initializing the frame would make it set document.domain. All I needed to do was sending it the following message:

frame.postMessage({
  sessionId: "session",
  type: "init",
  typeData: {
    config: {
      serverEnv: "prod"
    }
  }
}, "*");

And it would change document.domain to adobe.com.

I hope that you’ve never actually heard about document.domain before. It’s a really old and a really dangerous mechanism for cross-origin communication. The idea is that a page from subdomain.example.com could declare: “I’m no longer subdomain.example.com, consider me to be just example.com.” And then a page from anothersubdomain.example.com could do the same. And since they now have the same origin, these pages could do whatever they want with each other: access each other’s DOM and variables, run code in each other’s context and so on.

The effect of setting document.domain to adobe.com is a massively increased attack surface. Now the requirement is no longer to find an XSS vulnerability on documentcloud.adobe.com. Finding an XSS vulnerability anywhere on the adobe.com domain is sufficient. Once you are running JavaScript code somewhere on adobe.com, you can set document.domain to adobe.com. You can then load the View SDK frame and make it do the same. Boom, you now have full access to the View SDK frame and can run your code inside a documentcloud.adobe.com page.
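
Putting the pieces together, here is a rough sketch (my own reconstruction, not a tested exploit) of what attacker code running on any vulnerable adobe.com page could do at this point:

// Attacker code running somewhere on *.adobe.com via an XSS vulnerability:
document.domain = "adobe.com";

const frame = document.createElement("iframe");
frame.src = "https://documentcloud.adobe.com/view-sdk/<version>/iframe.html";
frame.addEventListener("load", () => {
  // The init message from above makes the View SDK frame set
  // document.domain = "adobe.com" as well...
  frame.contentWindow.postMessage({
    sessionId: "session",
    type: "init",
    typeData: { config: { serverEnv: "prod" } }
  }, "*");
  // ...after which both pages are same-origin and the attacker can reach
  // directly into the documentcloud.adobe.com frame and run code there.
});
document.body.appendChild(frame);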

The “fix”?

When I reported this issue I recommended dropping document.domain usage altogether. Really, there is exactly zero reason to use it in a modern web application highly reliant on window.postMessage() which is the modern replacement. I’m not sure whether Adobe attempted to address this issue but they didn’t remove document.domain usage. Instead, they buried it deeper in their code and added a check.

if (this._config.noRestriction && this._config.documentDomain)
{
  window.document.domain = this._config.documentDomain;
}

And the noRestriction flag is reserved for trusted websites:

function isTrustedOrigin(origin)
{
  if (!isValidUrl(origin))
    return false;
  try
  {
    var trustedDomains = [
      ".acrobat.com",
      ".adobe.com",
      ".adobeprojectm.com"
    ];
    var hostname = new URL(origin).hostname;
    var trusted = false;
    trustedDomains.forEach(function(domain)
    {
      if (-1 !== hostname.indexOf(domain, hostname.length - domain.length))
        trusted = true;
    });
    return trusted;
  }
  catch (error)
  {
    return false;
  }
};

So any vulnerable website hosted under adobe.com will be considered a trusted origin and can trick the View SDK into setting document.domain to adobe.com. If this change was supposed to be a fix, it doesn’t really achieve anything.

XSS via config injection

But there are way more issues in this View SDK frame, making it the final destination of my journey. I mean, in the init message above we gave it a configuration. What other configuration values are possible beyond serverEnv? Turns out, there are plenty. So some validation is meant to prevent abuse.

For example, there are these configuration presets which depend on the server environment:

var configPresets = {
  ...
  prod: {
    dcapiUri: "https://dc-api.adobe.io/discovery",
    floodgateUri: "https://p13n.adobe.io/fg/api",
    floodgateApiKey: "dc-prod-virgoweb",
    loggingUri: "https://dc-api.adobe.io",
    licenseUri: "https://viewlicense.adobe.io/viewsdklicense/jwt",
    internalLogToConsoleEnabled: false,
    internalLogToServerEnabled: true,
    floodgateEnabled: true,
    defaultNoRestriction: false,
    viewSDKAppVersion: "2.22.1_2.8.0-5d611c6",
    sdkDocumentationUrl: "https://www.adobe.com/go/dcviewsdk_docs",
    documentDomain: "adobe.com",
    brandingUrl: "https://documentcloud.adobe.com/link/home",
    otDomainId: "7a5eb705-95ed-4cc4-a11d-0cc5760e93db"
  }
}

And this code makes sure these presets take precedence over any config options received:

var finalConfig = Object.assign({}, config, configPresets[config.serverEnv]);

Wait, it chooses the presets based on the server environment we give it? Then we could choose local for example and we’d get defaultNoRestriction set to true. Sounds nice. But why choose a preset at all if we can pass dummy for serverEnv and none of our configuration settings will be overwritten? Yes, this protection isn’t actually working.
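
A quick illustration of why this fails (my own example): Object.assign() simply ignores an undefined source, so a serverEnv value with no matching preset leaves the attacker-supplied configuration untouched.

const config = {
  serverEnv: "dummy",              // no such entry in configPresets
  defaultNoRestriction: true,
  documentDomain: "adobe.com"
};

// configPresets["dummy"] is undefined, and Object.assign() skips undefined sources:
const finalConfig = Object.assign({}, config, configPresets[config.serverEnv]);
// finalConfig still has defaultNoRestriction: true and documentDomain: "adobe.com"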

The initialization itself doesn’t do much; we need to start the app. This requires an additional message like the following:

frame.postMessage({
  sessionId: "session",
  type: "preview",
  typeData: {
    fileInfo: [{
      metaData: {
        fileName: "Hi there.pdf"
      }
    }],
    previewConfig: {
      embedMode: "INTEGRATION"
    }
  }
}, "*");

Looks like we have an additional piece of configuration here. But the app also applies some additional restrictions:

PRESET_FORCE_CONFIG = {
  INTEGRATION: {
    config: {
      showTopBar: true,
      leftAlignFileName: false,
      backgroundColor: "#eaeaea",
      externalJSComponentURL: ""
    },
    actionConfig: {
      exitPDFViewerType: "",
      enableBookmarkAPIs: true,
      enableAttachmentAPIs: true,
      showFullScreen: false,
      dockPageControls: false,
      showDownloadPDFInPageControl: false,
      showFullScreenInHUD: false,
      enableLinearization: false
    }
  },
  ...
};

Here is how they are enforced:

var forceConfig = PRESET_FORCE_CONFIG[actionConfig.embedMode];
actionConfig = Object.assign({}, actionConfig, forceConfig.actionConfig);
config = Object.assign({}, config, forceConfig.config);

Again, embedMode is a value we can choose. We cannot set it to some invalid value to avoid the preset values; validation is stricter here. But we can choose a value like FULL_WINDOW where externalJSComponentURL isn’t being overwritten. And this value in fact does exactly what you think: it loads external JavaScript code. So the final message combination is:

frame.postMessage({
  sessionId: "session",
  type: "init",
  typeData: {
    config: {
      serverEnv: "production",
      defaultNoRestriction: true,
      externalJSComponentURL: "data:text/javascript,alert(1)"
    }
  }
}, "*");

frame.postMessage({
  sessionId: "session",
  type: "preview",
  typeData: {
    fileInfo: [{
      metaData: {
        fileName: "Hi there.pdf"
      }
    }],
    previewConfig: {
      embedMode: "FULL_WINDOW"
    }
  }
}, "*");

Yes, this will run arbitrary JavaScript code. No, this clearly isn’t the only way to abuse the configuration; at the very least there is also:

  • A bunch of addresses such as brandingUrl will be displayed in the user interface without any additional checks; it’s possible to pass javascript: URLs here.
  • Some endpoints such as loggingUrl can be redirected to the attacker’s own servers, potentially resulting in leaking private information and security tokens.
  • The localizationStrings configuration allows overwriting the default localization. There is again potential for XSS here, as some of these strings are interpreted as HTML code.

The fix

Adobe changed the logic here to make sure serverEnv is no longer passed in by the caller but rather deduced from the frame address. They also implemented additional restrictions on the externalJSComponentURL value; only trusted (meaning: Adobe’s own) websites are supposed to set this value now. Both of these provisions turned out to be flawed, and I could set externalJSComponentURL from a third-party website. At the time of writing, Adobe has yet to address this issue properly.

With these provisions, however, presets can no longer be avoided. So passing in malicious values for brandingUrl or various server endpoints is no longer possible. I’m not sure whether or how the issue of passing malicious localization strings has been addressed, the code being rather complicated here. The code still uses FormattedHTMLMessage however, a feature removed from the react-intl package two years ago due to the risk of introducing XSS vulnerabilities.

Niko MatsakisCoherence and crate-level where-clauses

Rust has been wrestling with coherence more-or-less since we added methods; our current rule, the “orphan rule”, is safe but overly strict. Roughly speaking, the rule says that one can only implement foreign traits (that is, traits defined by one of your dependencies) for local types (that is, types that you define). The goal of this rule was to help foster the crates.io ecosystem — we wanted to ensure that you could grab any two crates and use them together, without worrying that they might define incompatible impls that can’t be combined. The rule has served us well in that respect, but over time we’ve seen that it can also have a kind of chilling effect, unintentionally working against successful composition of crates in the ecosystem. For this reason, I’ve come to believe that we will have to weaken the orphan rule. The purpose of this post is to write out some preliminary exploration of ways that we might do that.

So wait, how does the orphan rule protect composition?

You might be wondering how the orphan rule ensures you can compose crates from crates.io. Well, imagine that there is a crate widget that defines a struct Widget:

// crate widget
#[derive(PartialEq, Eq)]
pub struct Widget {
    pub name: String,
    pub code: u32,
}

As you can see, the crate has derived Eq, but neglected to derive Hash. Now, I am writing another crate, widget-factory that depends on widget. I’d like to store widgets in a hashset, but I can’t, because they don’t implement Hash! Today, if you want Widget to implement Hash, the only way is to open a PR against widget and wait for a new release.1 But if we didn’t have the orphan rule, we could just define Hash ourselves:

// Crate widget-factory
impl Hash for Widget {
    fn hash(&self) {
        // PSA: Don’t really define your hash functions like this omg.
        self.name.hash() ^ self.code.hash()
    }
}

Now we can define our WidgetFactory using HashSet<Widget>

pub struct WidgetFactory {
    produced: HashSet<Widget>,
}

impl WidgetFactory {
    fn take_produced(&mut self) -> HashSet<Widget> {
        std::mem::take(&mut self.produced)
    }
}

OK, so far so good, but what happens if somebody else defines a widget-delivery crate and they too wish to use a HashSet<Widget>? Well, they will also define Hash for Widget, but of course they might do it differently — maybe even very badly:

// Crate widget-delivery
impl Hash for Widget {
    fn hash(&self) {
        // PSA: You REALLY shouldn’t define your hash functions this way omg
        0
    }
}

Now the problem comes when I try to develop my widget-app crate that depends on widget-delivery and widget-factory. I now have two different impls of Hash for Widget, so which should the compiler use?

There are a bunch of answers we might give here, but most of them are bad:

  • We could have each crate use its own impl, in theory: but that wouldn’t work so well if the user tried to take a HashSet<Widget> from one crate and pass it to another crate.
  • The compiler could pick one of the two impls arbitrarily, but how do we know which one to use? In this case, one of them would give very bad performance, but it’s also possible that some code is designed to expect the exact hash algorithm it specified.
    • This is even harder with associated types.
  • Users could tell us which impl they want, which is maybe better, but it also means that the widget-delivery crates have to be prepared that any impl they are using might be switched to another one by some other crate later on. This makes it impossible for us to inline the hash function or do other optimizations except at the very last second.

Faced with these options, we decided to just rule out orphan impls altogether. Too much hassle!

But the orphan rules make it hard to establish a standard

The orphan rules work well at ensuring that we can link two crates together, but ironically they can also work to make actual interop much harder. Consider the async runtime situation. Right now, there are a number of async runtimes, but no convenient way to write code that works with any runtime. As a result, people writing async libraries often wind up writing directly against one specific runtime. The end result is that we cannot combine libraries that were written against different runtimes, or at least that doing so can result in surprising failures.

It would be nice if we could implement some traits that allowed for greater interop. But we don’t quite know what those traits should look like (we also lack support for async fn in traits, but that’s coming!), so it would be nice if we could introduce those traits in the crates.io ecosystem and iterate a bit there — this was indeed the original vision for the futures crate! But if we do that, in practice, then the same crate that defines the trait must also define an implementation for every runtime. The problem is that the runtimes won’t want to depend on the futures crate, as it is still unstable; and the futures crate doesn’t want to have to depend on every runtime. So we’re kind of stuck. And of course if the futures crate were to take a dependency on some specific runtime, then that runtime couldn’t later add futures as a dependency, since that would result in a cycle.

Distinguishing “I need an impl” from “I prove an impl”

At the end of the day, I think we’re going to have to lift the orphan rule, and just accept that it may be possible to create crates that cannot be linked together because they contain overlapping impls. However, we can still give people the tools to ensure that composition works smoothly.

I would like to see us distinguish (at least) two cases:

  • I need this type to implement this trait (which maybe it doesn’t, yet).
  • I am supplying an impl of a trait for a given type.

The idea would be that most crates can just declare that they need an impl without actually supplying a specific one. Any number of such crates can be combined together without a problem (assuming that they don’t put inconsistent conditions on associated types).

Then, separately, one can have a crate that actually supplies an impl of a foreign trait for a foreign type. These impls can be isolated as much as possible. The hope is that only the final binary would be responsible for actually supplying the impl itself.

Where clauses are how we express “I need an impl” today

If you think about it, expressing “I need an impl” is something that we do all the time, but we typically do it with generic types. For example, when I write a function like so…

fn clone_list<T: Clone>(v: &[T]) {
    
}

I am saying “I need a type T and I need it to implement Clone”, but I’m not being specific about what those types are.

In fact, it’s also possible to use where-clauses to specify things about non-generic types…

fn example()
where 
    u32: Copy,
{
}

…but the compiler today is a bit inconsistent about how it treats those. The plan is to move to a model where we “trust” what the user wrote — e.g., if the user wrote where String: Copy, then the function would treat the String type as if it were Copy, even if we can’t find any Copy impl. It so happens that such a function could never be called, but that’s no reason you can’t define it2.

Where clauses at the crate scope

What if we could put where clauses at the crate scope? We could use that to express impls that we need to exist without actually providing those impls. For example, the widget-factory crate from our earlier example might add a line like this into its lib.rs:

// Crate widget-factory
where Widget: Hash;

As a result, people would not be able to use that crate unless they either (a) supplied an impl of Hash for Widget or (b) repeated the where clause themselves, propagating the request up to the crates that depend on them. (Same as with any other where-clause.)

The intent would be to do the latter, propagating the dependencies up to the root crate, which could then either supply the impl itself or link in some other crate that does.

Allow crates to implement foreign traits for foreign types

The next part of the idea would be to allow crates to implement foreign traits for foreign types. I think I would convert the orphan check into a “deny by default” lint. The lint text would explain that these impls are not permitted because they may cause linker errors, but a crate could mark the impl with #[allow(orphan_impls)] to ignore that warning. Best practice would be to put orphan impls into their own crate that others can use.

Another idea: permit duplicate impls (especially those generated via derive)

Josh Triplett floated another interesting idea, which is that we could permit duplicate impls. One common example might be if the impl is defined via a derive (though we’d have to extend derive to permit one to derive on a struct definition that is not local somehow).

Conflicting where clauses

Even if you don’t supply an actual impl, it’s possible to create two crates that can’t be linked together if they contain contradictory where-clauses. For example, perhaps widget-factory defines Widget as an iterator over strings…

// Widget-factory
where Widget: Iterator<Item = String>;

…whilst widget-lib wants Widget to be an iterator over UUIDs:

// Widget-lib
where Widget: Iterator<Item = UUID>;

At the end of the day, at most one of these where-clauses can be satisfied, not both, so the two crates would not interoperate. That seems inevitable and ok.

Expressing target dependencies via where-clauses

Another idea that has been kicking around is the idea of expressing portability across target architectures via traits and some kind of Platform type. As an example, one could imagine having code that says where Platform: NativeSimd to mean “this code requires native SIMD support”, or perhaps where Platform: Windows to mean “this must support various Windows APIs”. This is just a “kernel” of an idea, I have no idea what the real trait hierarchy would look like, but it’s quite appealing and seems to fit well with the idea of crate-level where-clauses. Essentially the idea is to allow crates to “constrain the environment that they are used in” in an explicit way.

Module-level generics

In truth, the idea of crate-level where clauses is kind of a special case of having module-level generics, which I would very much like. The idea would be to allow modules (like types, functions, etc) to declare generic parameters and where-clauses.3 These would be nameable and usable from all code within the module, and when you referenced an item from outside the module, you would have to specify their value. This is very much like how a trait-level generic gets “inherited” by the methods in the trait.

I have wanted this for a long time because I often have modules where all the code is parameterized over some sort of “context parameter”. In the compiler, that is the lifetime 'tcx, but very often it’s some kind of generic type (e.g., Interner in salsa).

Conclusion

I discussed a few things in this post:

  • How coherence helps composability by ensuring that crates can be linked together, but harms composability by making it much harder to establish and use interoperability traits.
  • How crate-level where-clauses can allow us to express “I need someone to implement this trait” without actually providing an impl, providing for the ability to link things together.
  • A sketch of how crate-level where-clauses might be generalized to capture other kinds of constraints on the environment, such as conditions on the target platform, or to module-level generics, which could potentially be an ergonomic win.

Overall, I feel pretty excited about this direction. I feel like more and more things are becoming possible if we think about generalizing the trait system and making it more uniform. All of this, in my mind, builds on the work we’ve been doing to create a more precise definition of the trait system in a-mir-formality and to build up a team with expertise in how it works (see the types team RFC). I’ll write more about those in upcoming posts though! =)

  1. You could also create a newtype and make your hashmap key off the newtype, but that’s more of a workaround, and doesn’t always work out. 

  2. It might be nice of us to give a warning. 

  3. Fans of ML will recognize this as “applicative functors”. 

William Lachance90 days out and in

The 90 day mark just passed at my new gig at Voltus, feels like a good time for a bit of self-reflection.

In general, I think it’s been a good change and that it was the right time to leave Mozilla. Since I left, a few people have asked me why I chose to do so: while the full answer is pretty complicated (these things are never simple!), I think it does ultimately come down to wanting to try something new after 10+ years. I’ve accumulated a fair amount of expertise in web development and data engineering and I wanted to see if I could apply them to a new area that I cared about— in this case, climate change and the energy transition.

Voltus is a much younger and different company than Mozilla was, and there’s no shortage of things to learn and do. Energy markets are a rather interesting technical domain to work in— a big intersection between politics, technology, and business. Lots of very old and very new things all at once. As a still-relatively young company, there is definitely more of a feeling that it’s possible to shape Voltus’s culture and practices, which has been interesting. There’s a bit of a balancing act between sharing what you’ve learned in previous roles while having the humility to recognize that there’s much you still don’t understand in a new workplace.

On the downside, I have to admit that I do miss being able to work in the open. Voltus is currently in the process of going public, which has made me extra shy about saying much of anything about what I’ve been working on in a public forum.

To some extent I’ve been scratching this itch by continuing to work on Irydium when I have the chance. I’ve done up a few new releases in the last couple of months, which I think have been fairly well received inside my very small community of people doing like-minded things. I’m planning on attending (at least part of) a pyodide sprint in early May, which I think should be a lot of fun as well as an opportunity to push browser-based data science forward.

I’ve also kept more of a connection with Mozilla than I thought I would have: some video meetings with former colleagues, answering questions on Element (chat.mozilla.org), even some pull requests where I felt like I could make a quick contribution. I’m still using Firefox, which has actually given me more perspective on some problems that people at Mozilla might not experience (e.g. this screensharing bug which you’d only see if you’re using a WebRTC-based video conferencing solution like Google Meet).

That said, I’m not sure to what extent this will continue: even if the source code to Firefox and the tooling that supports it is technically “open source”, outsiders like myself really have very limited visibility into what Mozilla is doing these days. This makes it difficult to really connect with much of what’s going on or know how I might be able to contribute. While it might be theoretically possible to join Mozilla’s Slack (at least last I checked), that feels like a rabbit hole I’d prefer not to go down. While I’m still interested in supporting Mozilla’s mission, I really don’t want more than one workplace chat tool in my life: there’s a lot of content there that is no longer relevant to me as a non-employee and (being honest) I’d rather leave behind. There’s lots more I could say about this, but probably best to leave it there: I understand that there’s reasons why things are the way they are, even if they make me a little sad.

Alex GibsonMy ninth year working at Mozilla

April 15th marks my ninth year working for Mozilla! Last year’s mozillaversary post was a bit of a stop gap. Truth be told, I just didn’t have the energy to write about what I had been doing at work given all the unrest that was happening in the world. This year, despite the world still being in ongoing states of WTF, I’m going to try and talk a bit more about what I’ve been keeping my brain busy with at work. Here goes:

Mozilla VPN

Supporting Mozilla VPN has continued to be one of my main focus areas. After a successful launch in Germany and France in 2021, we have continued to expand into new markets (now available in 17 countries!). By working closely with the product and marketing teams, we have developed a technical framework to help us roll out availability in new countries relatively easily. This has been achieved by implementing features such as a flexible subscription ID matrix based on language & currency, as well as by adding new geo-location based features to the site.

I also spent a good amount of my time working on attribution and analytics functions to help support newly generated subscriptions and referrals. Subscription-based products are something that Mozilla is still relatively new at, so making sure we can understand where new customers are coming from has been an important area of focus for numerous teams. We still have some way to go here, but we are making good progress.

Bedrock Technical Roadmap

I’ve continued my efforts to document bedrock’s ongoing technical roadmap. We had a bunch of new folks join our team over the last year, so I moved the roadmap to a public GitHub wiki page to try to make it easier to contribute to (and so other teams at Mozilla can see at a glance what’s in there). We’ve made a lot of progress over the last couple of years paying off technical debt, and are finally starting to make some good modernisation efforts.

Build System Improvements

One of my personal goals for H2 2021 was to replace bedrock’s ageing front-end build system (Gulp) with a more modern alternative. This doesn’t sound like the most exciting thing to talk about in a blog post perhaps, but given the size of bedrock it was actually a pretty daunting task. The site has literally thousands of web pages, and hundreds of individual JS / CSS bundles to compile. Our team had made some efforts toward migrating to Webpack in the past (which we decided was the most suitable alternative given our bundling requirements), but never quite managed to get it across the finish line due to various technical hurdles and time constraints.

After a re-evaluation of our options and some work to remove various blockers, this time we finally managed to migrate bedrock to Webpack. Whilst Webpack is still more complicated than I would like sometimes, switching has reduced a lot of complex boilerplate code we had previously. It has also made it much easier to take advantage of more modern tooling options. Since migrating, we’ve also incorporated things such as Babel (at last!) for transpiling our JS, and Prettier for formatting.

Glean

Another area of focus I’d like to spend some time on this year is web analytics. We’ve used Google Analytics in most of our projects for years now; however, it’s good to look at other solutions and the benefits they might offer. Mozilla’s own Glean telemetry platform is now available for the web (exciting!), so I’m currently exploring what it might look like for us to use it in some of our projects.

Hacks.Mozilla.OrgMozilla partners with the Center for Humane Technology

We’re pleased to announce that we have partnered with the Center for Humane Technology, a nonprofit organization that is radically reimagining our digital infrastructure. Its mission is to drive a comprehensive shift toward humane technology that supports collective well-being, democracy and a shared information environment. Many of you may remember the Center for Humane Technology from the Netflix documentary ‘The Social Dilemma’, which popularized the saying “If you’re not paying for the product, then you are the product”. The Social Dilemma is all about the dark side of technology, focusing on the individual and societal impact of algorithms.

The decision to partner was an easy one: it supports our efforts for a safe and open web that is accessible and joyful for all. Many people do not understand how AI and algorithms regularly touch our lives and feel powerless in the face of these systems. We are dedicated to making sure the public understands that we can and must have a say in when machines are used to make important decisions – and shape how those decisions are made.

Over the last few years, our work has been increasingly focused on building more trustworthy AI and safe online spaces. That ranges from challenging YouTube’s algorithms, where Mozilla research shows that the platform keeps pushing harmful videos and its algorithm recommends videos with misinformation, violent content, hate speech and scams to its over two billion users, to developing Enhanced Tracking Protection in Firefox, which automatically protects your privacy while you browse, and Pocket, which recommends high-quality, human-curated articles without collecting your browsing history or sharing your personal information with advertisers.

Let’s face it: most, if not all, people would probably prefer to use social media platforms that are safer, and technologists should design products that reflect all users and are free of bias. As we collectively continue to think about our role in these areas, now and in the future, this course from the Center for Humane Technology is a great addition to the many tools necessary for change to take place.

The course, aptly titled ‘Foundations of Humane Technology’, launched out of beta in March of this year after rave reviews from hundreds of beta testers!

It explores the personal, societal, and practical challenges of being a humane technologist. Participants will leave the course with a strong conceptual framework, hands-on tools, and an ecosystem of support from peers and experts. Topics range from respecting human nature to minimizing harm to designing technology that deliberately avoids reinforcing inequitable dynamics of the past. 

The course is completely free of charge and is centered on building awareness and self-education through an online, at-your-own-pace (or binge-worthy) set of eight modules. It is aimed at professionals, with or without a technical background, who are involved in shaping tomorrow’s technology.

It includes interactive exercises and reflections to help you internalize what you’re learning, as well as regular optional Zoom sessions to discuss course content, connect with like-minded people and learn from experts in the field. It even awards a credential upon completion that can be shared with colleagues and prospective employers.

The problem with tech is not a new one, but this course is a stepping stone in the right direction.

The post Mozilla partners with the Center for Humane Technology appeared first on Mozilla Hacks - the Web developer blog.

Data@MozillaThis Week in Glean: What Flips Your Bit?

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

The idea of “soft-errors”, particularly “single-event upsets” often comes up when we have strange errors in telemetry. Single-event upsets are defined as: “a change of state caused by one single ionizing particle (ions, electrons, photons…) striking a sensitive node in a micro-electronic device, such as in a microprocessor, semiconductor memory, or power transistors. The state change is a result of the free charge created by ionization in or close to an important node of a logic element (e.g. memory “bit”)”. And what exactly causes these single-event upsets? Well, from the same Wikipedia article: “Terrestrial SEU arise due to cosmic particles colliding with atoms in the atmosphere, creating cascades or showers of neutrons and protons, which in turn may interact with electronic circuits”. In other words, energy from space can affect your computer and turn a 1 into a 0 or vice versa.

There are examples in the data collected by Glean from Mozilla projects like Firefox that appear to differ by a single bit from the value we would expect. In almost every case we cannot find any plausible explanation or bug in any of the infrastructure from client to analysis, so we often shrug and say “oh well, it must be cosmic rays”. A totally fantastical explanation for an empirical measurement of some anomaly that we cannot explain.

What if it wasn’t just some fantastical explanation? What if there was some grain of truth in there and somehow we could detect cosmic rays with browser telemetry data? I was personally struck by these questions recently, as I became aware of a recently filed bug describing just these sorts of errors in the data. These errors were showing up as strings with a single character different in the data (well, a single bit actually). At about the same time, I read an article about a geomagnetic storm that hit at the end of March. Something clicked and I started to really wonder if we could possibly have detected a cosmic event through these single-event upsets in our telemetry data.

I did a little research to see if there was any data on the frequency of these events and found a handful of articles (for instance) that kept referring to a study done by IBM in the 1990’s that referenced 1 cosmic ray bit flip per 256MB of memory per month. After a little digging, I was able to come up with two papers by J.F. Ziegler, an IBM researcher. The first paper, from 1979, on “The Effects of Cosmic Rays on Computer Memories”, goes into the mechanisms by which cosmic rays can affect bits in computer memory, and makes some rough estimates on the frequency of such events, as well as the effect of elevation on the frequency. The later article from the 1990’s, “Accelerated Testing For Cosmic Soft-Error Rate”, went more in detail in measuring the soft-error rates of different chips by different manufacturers. While I never found the exact source of the “1 bit-flip per 256MB per month” quote in either of these papers, the figure could possibly be generalized from the soft-error rate data in the papers. So, while I’m not entirely sure that that number for the rate is accurate, it’s probably close enough for us to do some simple calculations.

So, now that I had checked out the facts behind cosmic ray induced errors, it was time to see if there was any evidence of this in our data. First of all, where could I find these errors, and where would I most likely find these sorts of errors? I thought about the types of data that we collect and decided that it would be nearly impossible to detect a bit-flip within a numeric field, unless the field had a very limited expected range. String fields seemed like easier candidates to search, since single bit flips tend to make strings look a little weird due to a single unexpected character. There are also some good places to go looking for bit flips in our error streams, such as when a column or table name is affected. Secondly, I had to make a few hand-wavy assumptions in order to crunch some numbers. The main assumption is that every bit in our data has the same chance of being flipped as any other bit in any other memory. The secondary assumption is that the bits are getting flipped at the client side of the connection, and not while on our servers.
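
As a rough illustration of what “differs by a single bit” means in practice (this is just a sketch, not the actual analysis code), a check like the following flags two equal-length strings whose bytes differ in exactly one bit:

fn differs_by_one_bit(a: &str, b: &str) -> bool {
    if a.len() != b.len() {
        return false;
    }
    // XOR each pair of bytes and count how many bits differ in total.
    let flipped_bits: u32 = a
        .bytes()
        .zip(b.bytes())
        .map(|(x, y)| (x ^ y).count_ones())
        .sum();
    flipped_bits == 1
}

fn main() {
    // 'a' (0x61) and 'c' (0x63) differ in a single bit.
    assert!(differs_by_one_bit("label", "lcbel"));
    assert!(!differs_by_one_bit("label", "table"));
}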

We have a lot of users, and the little bit of data we collect from each client really adds up. Let’s convert that error rate to some more convenient units. Using the 1/256MB/month figure from the article, that’s 4096 cosmic soft-errors per terabyte per month. According to my colleague, chutten, we receive about 100 terabytes of data per day, or 2800 TB in a 4 week period. If we multiply that out, it looks like we have the potential to find 11,468,800 bit flips in a given 4 week period of our data. WHAT! That seemed like an awful lot of possibilities, even if I suspect a good portion of them to be undetectable just due to not being an “obvious” bit flip.

Looking at the Bugzilla issue that had originally sparked my interest in this, it contained some evidence of labels embedded in the data being affected by bit-flips. This was pretty easy to spot because we knew what labels we were expecting and the handful of anomalies stood out. Not only that, the effect seemed to be somewhat localized to a geographical area. Maybe this wasn’t such a bad place to try and correlate this information with space-weather forecasts. Back to the internet and I find an interesting space-weather article that seems to line up with the dates from the bug. I finally hit a bit of a wall in this fantastical investigation when I found it difficult to get data on solar radiation by day and geographical location. There is a rather nifty site, SpaceWeatherLive.com which has quite a bit of interesting data on solar radiation, but I was starting to hit the limits of my current knowledge and the limits on time that I had set out for myself to write this blog post.

So, rather reluctantly, I had to set aside any deeper investigations into this for another day. I do leave the search here feeling that not only is it possible that our data contains signals for cosmic activity, but that it is very likely that it could be used to correlate or even measure the impact of cosmic ray induced single-event upsets. I hope that sometime in the future I can come back to this and dig a little deeper. Perhaps someone reading this will also be inspired to poke around at this possibility and would be interested in collaborating on it, and if you are, you can reach me via the Glean Channel on Matrix as @travis. For now, I’ve turned something that seemed like a crazy possibility in my mind into something that seems a lot more likely than I ever expected. Not a bad investigation at all.

Niko MatsakisImplied bounds and perfect derive

There are two ergonomic features that have been discussed for quite some time in Rust land: perfect derive and expanded implied bounds. Until recently, we were a bit stuck on the best way to implement them. Recently though I’ve been working on a new formulation of the Rust trait checker that gives us a bunch of new capabilities — among them, it resolved a soundness formulation that would have prevented these two features from being combined. I’m not going to describe my fix in detail in this post, though; instead, I want to ask a different question. Now that we can implement these features, should we?

Both of these features fit nicely into the less rigamarole part of the lang team Rust 2024 roadmap. That is, they allow the compiler to be smarter and require less annotation from you to figure out what code should be legal. Interestingly, as a direct result of that, they both also carry the same downside: semver hazards.

What is a semver hazard?

A semver hazard occurs when you have a change which feels innocuous but which, in fact, can break clients of your library. Whenever you try to automatically figure out some part of a crate’s public interface, you risk some kind of semver hazard. This doesn’t necessarily mean that you shouldn’t do the auto-detection: the convenience may be worth it. But it’s usually worth asking yourself if there is some way to lessen the semver hazard while still getting similar or the same benefits.

Rust has a number of semver hazards today.1 The most common example is around thread-safety. In Rust, a struct MyStruct is automatically deemed to implement the trait Send so long as all the fields of MyStruct are Send (this is why we call Send an auto trait: it is automatically implemented). This is very convenient, but an implication of it is that adding a private field to your struct whose type is not thread-safe (e.g., an Rc<T>) is potentially a breaking change: if someone was using your library and sending MyStruct to run in another thread, they would no longer be able to do so.
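
To make the hazard concrete, here is a minimal sketch (the struct and field names are invented for illustration): version 1.1 of a library adds a private Rc field, and downstream code that relied on the type being Send stops compiling.

use std::rc::Rc;

// Version 1.0: MyStruct is automatically Send, because all of its fields are Send.
pub struct MyStruct {
    name: String,
}

// Version 1.1: a private, non-thread-safe field is added. MyStructV2 silently
// stops being Send, even though the public API looks unchanged.
pub struct MyStructV2 {
    name: String,
    cache: Rc<Vec<u8>>, // Rc is not Send
}

fn requires_send<T: Send>(_value: T) {}

fn main() {
    requires_send(MyStruct { name: String::new() }); // compiles
    // requires_send(MyStructV2 { name: String::new(), cache: Rc::new(vec![]) });
    // ^ would not compile: `Rc<Vec<u8>>` is not `Send`
}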

What is “perfect derive”?

So what is the perfect derive feature? Currently, when you derive a trait (e.g., Clone) on a generic type, the derive just assumes that all the generic parameters must be Clone. This is sometimes necessary, but not always; the idea of perfect derive is to change how derive works so that it instead figures out exactly the bounds that are needed.

Let’s see an example. Consider this List<T> type, which creates a linked list of T elements. Suppose that List<T> can be deref’d to yield its &T value. However, lists are immutable once created, and we also want them to be cheaply cloneable, so we use Rc<T> to store the data itself:

#[derive(Clone)]
struct List<T> {
    data: Rc<T>,
    next: Option<Rc<List<T>>>,
}

impl<T> Deref for List<T> {
    type Target = T;

    fn deref(&self) -> &T { &self.data }
}

Currently, derive is going to generate an impl that requires T: Clone, like this…

impl<T> Clone for List<T>
where
    T: Clone,
{
    fn clone(&self) -> Self {
        List {
            data: self.data.clone(),
            next: self.next.clone(),
        }
    }
}

If you look closely at this impl, though, you will see that the T: Clone requirement is not actually necessary. This is because the only T in this struct is inside of an Rc, and hence is reference counted. Cloning the Rc only increments the reference count, it doesn’t actually create a new T.
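
A quick way to convince yourself of that (NotClone below is just an illustrative type): an Rc<T> can be cloned even when T itself has no Clone impl, and doing so only bumps the reference count.

use std::rc::Rc;

// A type that deliberately does not implement Clone.
struct NotClone;

fn main() {
    let a: Rc<NotClone> = Rc::new(NotClone);

    // Cloning the Rc needs no `NotClone: Clone` impl and creates no new
    // `NotClone` value; it just increments the reference count.
    let b = Rc::clone(&a);
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b);
    assert_eq!(Rc::strong_count(&a), 1);
}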

With perfect derive, we would change the derive to generate an impl with one where clause per field, instead. The idea is that what we really need to know is that every field is cloneable (which may in turn require that T be cloneable):

impl<T> Clone for List<T>
where
    Rc<T>: Clone, // type of the `data` field
    Option<Rc<List<T>>>: Clone, // type of the `next` field
{
    fn clone(&self) -> Self { /* as before */ }
}

Making perfect derive sound was tricky, but we can do it now

This idea is quite old, but there were a few problems that have blocked us from doing it. First, it requires changing all trait matching to permit cycles (currently, cycles are only permitted for auto traits like Send). This is because checking whether List<T> is Send requires checking whether Option<Rc<List<T>>> is Send. If you work that through, you’ll find that a cycle arises. I’m not going to talk much about this in this post, but it is not a trivial thing to do: if we are not careful, it would make Rust quite unsound indeed. For now, though, let’s just assume we can do it soundly.

The semver hazard with perfect derive

The other problem is that it introduces a new semver hazard: just as Rust currently commits you to being Send so long as you don’t have any non-Send types, derive would now commit List<T> to being cloneable even when T: Clone does not hold.

For example, perhaps we decide that storing a Rc<T> for each list wasn’t really necessary. Therefore, we might refactor List<T> to store T directly, like so:

#[derive(Clone)]
struct List<T> {
    data: T,
    next: Option<Rc<List<T>>>,
}

We might expect that, since we are only changing the type of a private field, this change could not cause any clients of the library to stop compiling. With perfect derive, we would be wrong.2 This change means that we now own a T directly, and so List<T>: Clone is only true if T: Clone.
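
To see the breakage concretely, here is a sketch in which the impl that perfect derive would generate for the original Rc<T>-based definition is written out by hand (perfect derive itself does not exist yet), together with a hypothetical client type Thing that does not implement Clone. This compiles today; change data: Rc<T> to data: T, as in the refactor above, and duplicate stops compiling.

use std::rc::Rc;

// The original definition, with the impl that perfect derive would
// generate written out by hand.
struct List<T> {
    data: Rc<T>,
    next: Option<Rc<List<T>>>,
}

impl<T> Clone for List<T>
where
    Rc<T>: Clone,               // always true
    Option<Rc<List<T>>>: Clone, // always true
{
    fn clone(&self) -> Self {
        List { data: self.data.clone(), next: self.next.clone() }
    }
}

// "Client" code: `Thing` does not implement Clone, yet `List<Thing>: Clone`
// holds, so this function is accepted.
struct Thing;

fn duplicate(list: &List<Thing>) -> List<Thing> {
    list.clone()
}

fn main() {
    let list = List { data: Rc::new(Thing), next: None };
    let _copy = duplicate(&list);
}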

Expanded implied bounds

An implied bound is a where clause that you don’t have to write explicitly. For example, if you have a struct that declares T: Ord, like this one…

struct RedBlackTree<T: Ord> {  }

impl<T: Ord> RedBlackTree<T> {
    fn insert(&mut self, value: T) {  }
}

…it would be nice if functions that worked with a red-black tree didn’t have to redeclare those same bounds:

fn insert_smaller<T>(red_black_tree: &mut RedBlackTree<T>, item1: T, item2: T) {
    // Today, this function would require `where T: Ord`:
    if item1 < item2 {
        red_black_tree.insert(item1);
    } else {
        red_black_tree.insert(item2);
    }   
}

I am saying expanded implied bounds because Rust already has two notions of implied bounds: expanding supertraits (T: Ord implies T: PartialOrd, for example, which is why the fn above can contain item1 < item2) and outlives relations (an argument of type &'a T, for example, implies that T: 'a). The most maximal version of this proposal would expand those implied bounds from supertraits and lifetimes to any where-clause at all.

Implied bounds and semver

Expanding the set of implied bounds will also introduce a new semver hazard — or perhaps it would be better to say that it expands an existing semver hazard. It’s already the case that removing a supertrait from a trait is a breaking change: if the stdlib were to change trait Ord so that it no longer extended Eq, then Rust programs that just wrote T: Ord would no longer be able to assume that T: Eq, for example.
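
For instance, a generic function like the one below (not from the post, just an illustration) compiles only because Ord's supertraits Eq and PartialOrd come along for free; if Ord stopped extending them, it would break:

// `T: Ord` implies `T: Eq` and `T: PartialOrd`, so both `==` and `<`
// are available without spelling those bounds out.
fn min_and_equal<T: Ord>(a: T, b: T) -> (T, bool) {
    let equal = a == b;                  // relies on the implied `T: Eq`
    let min = if a < b { a } else { b }; // relies on the implied `T: PartialOrd`
    (min, equal)
}

fn main() {
    assert_eq!(min_and_equal(3, 5), (3, false));
}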

Similarly, at least with a maximal version of expanded implied bounds, removing the T: Ord from RedBlackTree<T> would potentially stop client code from compiling. Making changes like that is not that uncommon. For example, we might want to introduce new methods on RedBlackTree that work even without ordering. To do that, we would remove the T: Ord bound from the struct and just keep it on the impl:

struct RedBlackTree<T> {  }

impl<T> RedBlackTree<T> {
    fn len(&self) -> usize { /* doesn’t need to compare `T` values, so no bound */ }
}

impl<T: Ord> RedBlackTree<T> {
    fn insert(&mut self, value: T) {  }
}

But, if we had a maximal expansion of implied bounds, this could cause crates that depend on your library to stop compiling, because they would no longer be able to assume that RedBlackTree<X> being valid implies X: Ord. As a general rule, I think we want it to be clear what parts of your interface you are committing to and which you are not.

PSA: Removing bounds not always semver compliant

Interestingly, while it is true that you can remove bounds from a struct (today, at least) and remain semver compliant3, this is not the case for impls. For example, if I have

impl<T: Copy> MyTrait for Vec<T> { }

and I change it to impl<T> MyTrait for Vec<T>, this is effectively introducing a new blanket impl, and that is not a semver compliant change (see RFC 2451 for more details).
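
A sketch of why (MyTrait, Notes and both impls are made up, and everything is shown in one file for brevity; in the real scenario the trait and the bounded impl would live in an upstream crate):

pub trait MyTrait {}

// Version 1.0 of the "upstream" crate: a bounded blanket impl.
impl<T: Copy> MyTrait for Vec<T> {}

// "Downstream" code: `Notes` is not Copy, so the blanket impl above cannot
// apply to Vec<Notes>, and coherence allows this impl to fill the gap.
struct Notes(String);
impl MyTrait for Vec<Notes> {}

fn assert_impl<T: MyTrait>() {}

fn main() {
    assert_impl::<Vec<u8>>();    // via the blanket impl (u8 is Copy)
    assert_impl::<Vec<Notes>>(); // via the hand-written impl
}

// If version 1.1 relaxes the blanket impl to `impl<T> MyTrait for Vec<T>`,
// the two impls overlap for Vec<Notes> and the downstream impl above is
// rejected with a "conflicting implementations" error.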

Summary

So, to summarize:

  • Perfect derive is great, but it reveals details about your fields: sure, you can clone your List<T> for any type T now, but maybe you want the right to require T: Clone in the future?
  • Expanded implied bounds are great, but they prevent you from “relaxing” your requirements in the future: sure, you only ever have a RedBlackTree<T> for T: Ord now, but maybe you want to support more types in the future?
  • But also: the rules around semver compliance are rather subtle and quick to anger.

How can we fix these features?

I see a few options. The most obvious of course is to just accept the semver hazards. It’s not clear to me whether they will be a problem in practice, and Rust already has a number of similar hazards (e.g., adding a Box<dyn Write> makes your type no longer Send).

Another extreme alternative: crate-local implied bounds

Another option for implied bounds would be to expand implied bounds, but only on a crate-local basis. Imagine that the RedBlackTree type is declared in some crate rbtree, like so…

// The crate rbtree
struct RedBlackTree<T: Ord> { .. }

impl<T> RedBlackTree<T> {
    fn insert(&mut self, value: T) {
        
    }
}

This impl, because it lives in the same crate as RedBlackTree, would be able to benefit from expanded implied bounds. Therefore, code inside the impl could assume that T: Ord. That’s nice. If I later remove the T: Ord bound from RedBlackTree, I can move it to the impl, and that’s fine.

But if I’m in some downstream crate, then I don’t benefit from implied bounds. If I were going to, say, implement some trait for RedBlackTree, I’d have to repeat T: Ord:

trait MyTrait { }

impl<T> MyTrait for rbtree::RedBlackTree<T>
where
    T: Ord, // required
{ }

A middle ground: declaring “how public” your bounds are

Another variation would be to add a visibility to your bounds. The default would be that where clauses on structs are “private”, i.e., implied only within your module. But you could declare where clauses as “public”, in which case you would be committing to them as part of your semver guarantee:

struct RedBlackTree<T: pub Ord> { .. }

In principle, we could also support pub(crate) and other visibility modifiers.

Explicit perfect derive

I’ve been focused on implied bounds, but the same questions apply to perfect derive. In that case, I think the question is mildly simpler— we likely want some way to expand the perfect derive syntax to “opt in” to the perfect version (or “opt out” from it).

There have been some proposals that would allow you to be explicit about which parameters require which bounds. I’ve been a fan of those, but now that I’ve realized we can do perfect derive, I’m less sure. Maybe we just want some way to say “add the bounds all the time” (the default today) or “use perfect derive” (the new option), and that’s good enough. We could even add a new attribute, e.g. #[perfect_derive(…)] or #[semver_derive]. Not sure.

Conclusion

In the past, we were blocked for technical reasons from expanding implied bounds and supporting perfect derive, but I believe we have resolved those issues. So now we have to think a bit about semver and decide how explicit we want to be.

Side note: no matter what we pick, I think it would be great to have easy tooling to help authors determine if something is a semver breaking change. This is a bit tricky because it requires reasoning about two versions of your code. I know there is rust-semverver, but I’m not sure how well maintained it is. It’d be great to have a simple GitHub Action one could deploy that would warn you when reviewing PRs.

  1. Rules regarding semver are documented here, by the way. 

  2. Actually, you were wrong before: changing the types of private fields in Rust can already be a breaking change, as we discussed earlier (e.g., by introducing an Rc, which makes the type no longer implement Send).

  3. Uh, no promises — there may be some edge cases, particularly involving regions, where this is not true today. I should experiment. 

Mozilla Open Policy & Advocacy BlogCompetition should not be weaponized to hobble privacy protections on the open web

Recent privacy initiatives by major tech companies, such as Google’s Chrome Privacy Sandbox (GCPS) and Apple’s App Tracking Transparency, have brought into sharp focus a key underlying question – should we maintain pervasive data collection on the web under the guise of preserving competition?

Mozilla’s answer to this is that the choice between a more competitive or a more privacy-respecting web is a false one and should be scrutinized. Many parties on the Internet, including but also beyond the largest players, have built their business models to depend on extensive user tracking. Because this tracking is so baked into the web ecosystem, closing privacy holes necessarily means limiting various parties’ ability to collect and exploit that data. This ubiquity is not, however, a reason to protect a status quo that harms consumers and society. Rather, it is a reason to move away from that status quo to find and deploy better technology that continues to offer commercial value with better privacy and security properties.

None of this is to say that regulators should not intervene to prevent blatant self-preferencing by large technology companies, including in their advertising services. However, it is equally important that strategically targeted complaints not be used as a trojan horse to prevent privacy measures, such as the deprecation of third party cookies (TPCs) or restricting device identifiers, from being deployed more widely. As an example, bundling legitimate competition scrutiny of the GCPS proposals with the deprecation of third party cookies has led to the indefinite delay of this vital privacy improvement in one of the most commonly used browsers. Both the competition and privacy aspects warranted close attention, but leaning too much in favor of the former has left people unprotected.

Rather than asking regulators to look at the substance of privacy features so they do not favor dominant platforms (and there is undoubtedly work to be done on that front), vested interests have instead managed to spin the issue into one with a questionable end goal – to ensure they retain access to exploitative models of data extraction. This access, however, is coming at the cost of meaningful progress in privacy preserving advertising. Any attempt to curtail access to the unique identifiers by which people are tracked online (cookies or device IDs) is being painted as “yet another example” of BigTech players unfairly exercising dominant power. Mozilla agrees with the overall need for scrutiny of concentrated platforms when it comes to the implementation of such measures. However, we are deeply concerned that the scope creep of these complaints to include privacy protections, such as TPC deprecation which is already practiced elsewhere in the industry, is actively harming consumers.

Instead of standing in the way of privacy protection, the ecosystem should instead be working to create a high baseline of privacy protections and an even playing field for all players. That means foreclosing pervasive data collection for large and small parties alike. In particular, we urge regulators to consider advertising related privacy enhancements by large companies with the following goals:

  • Prevent Self-Preferencing: It is crucial to ensure that dominant platforms aren’t closing privacy holes for small players while leaving those holes in place for themselves. Dominant companies shouldn’t allow their services to exploit data at the platform-level that third party apps or websites can no longer access due to privacy preserving measures.
  • Restricting First Party Data Sharing: Regulatory interventions should limit data sharing within large technology conglomerates which have first party relationships with consumers across a variety of services. Privacy regulations already require companies to be explicit with consumers about who has access to their data, how it is shared, etc. Technology conglomerates conveniently escape these rules because the individual products and services are housed within the same company. Some would suggest that third party tracking identifiers are a means to counterbalance the dominance of large, first party platforms. However, we believe competition regulators can tackle dominance in first party data directly through targeted interventions governing how data can be shared and used within the holding structures of large platforms. This leverages classic competition remedies and is far better than using regulatory authority to prop up an outdated and harmful tracking technology like third party cookies.

Consumer welfare is at the heart of both competition and privacy enforcement, and leaving people’s privacy at risk shouldn’t be a remedy for market domination. Mozilla believes that the development of new technologies and regulations will need to go hand in hand to ensure that the future of the web is both private for consumers and remains a sustainable ecosystem for players of all sizes.

The post Competition should not be weaponized to hobble privacy protections on the open web appeared first on Open Policy & Advocacy.

Support.Mozilla.OrgWhat’s up with SUMO – April 2022

Hi everybody,

April is a transition month, with the season starting to change from winter to spring, and a new quarter is beginning to unfold. A lot to plan, but it also means a lot of things to be excited about. With that spirit, let’s see what the Mozilla Support community has been up to these days:

Welcome note and shout-outs

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • The result of the Mozilla Support Contributor Survey 2022 is out. You can check the summary and recommendations from this deck.
  • The TCP/ETP project has been running so well. The KB changes are on the way, and we finished the forum segmentation and found 2 TCP-related bugs. The final report of the project is underway.
  • We’re one version away from Firefox 100. Check out what to expect in Firefox 100!
  • For those of you who experience problems with media upload in SUMO, check out this contributor thread to learn more about the issue.
  • Mozilla Connect was officially soft-launched recently. Check out the Connect Campaign and learn more about how to get involved!
  • The buddy forum is now archived and replaced with the contributor introduction forum. However, due to a permission issue, we’re hiding the new introduction forum at the moment until we figure out the problem.
  • Previously, I mentioned that we’re hoping to finish the onboarding project implementation by the end of Q1. But we should expect a delay for this project as our platform team is stretched thin at the moment.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in February and March! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel too shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only
Month Page views Vs previous month
Feb 2022 6,772,577 -14.56%
Mar 2022 7,501,867 10.77%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Pierre Mozinet
  3. Bithiah
  4. Danny C
  5. Seburo

KB Localization

Top 10 locales based on total page views

Locale Feb 2022 pageviews (*) Mar 2022 pageviews (*) Localization progress (per Apr, 11)(**)
de 9.56% 8.74% 97%
fr 6.83% 6.84% 89%
es 6.79% 6.56% 32%
zh-CN 5.65% 7.28% 100%
ru 4.30% 6.12% 86%
pt-BR 3.91% 4.61% 56%
ja 3.81% 3.82% 52%
It 2.64% 2.45% 99%
pl 2.51% 2.28% 87%
zh-TW 1.42% 1.19% 4%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Jim Spentzos
  2. Michele Rodaro
  3. TyDraniu
  4. Mark Heijl
  5. Milupo

Forum Support

Forum stats

-TBD-

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Jscher2000
  3. Cor-el
  4. Seburo
  5. Sfhowes
  6. Davidsk

Social Support

Month Total incoming conv Conv interacted Resolution rate
Feb 2022 229 217 64.09%
Mar 2022 360 347 66.14%

Top 5 Social Support contributors in the past 2 months: 

  1. Bithiah K
  2. Christophe Villeneuve
  3. Kaio Duarte
  4. Tim Maks
  5. Felipe Koji

Play Store Support

Channel | Feb 2022: priority reviews / priority replied / total replied | Mar 2022: priority reviews / priority replied / total replied
Firefox for Android | 1464 / 58 / 92 | 1387 / 346 / 411
Firefox Focus for Android | 45 / 11 / 54 | 142 / 11 / 94
Firefox Klar Android | 0 / 0 / 0 | 2 / 0 / 2

Top 3 Play Store contributors in the past 2 months: 

  1. Paul Wright
  2. Tim Maks
  3. Selim Şumlu

Product updates

Firefox desktop

  • V99 landed on Apr 5, 2022
    • Enable CC autofill UK, FR, DE
  • V100 is set for May 3, 2022
    • Picture in Picture
    • Quality Foundations
    • Privacy Segmentation (promoting Fx Focus)

Firefox mobile

  • Mobile V100 set to land on May 3, 2022
  • Firefox Android V100 (unconfirmed)
    • Wallpaper foundations
    • Task Continuity
    • New Tab Banner – messaging framework
    • Clutter-Free History
  • Firefox iOS V100 (unconfirmed)
    • Clutter Free History
  • Firefox Focus V100 (unconfirmed)
    • Unknown

 

Other products / Experiments

  • Pocket Android (End of April) [Unconfirmed]
    • Sponsored content
  • Relay Premium V22.03 staggered release cadence
    • Sign in with Alias Icon (April 27th)
    • Sign back in with Alias Icon (April 27th)
    • Promotional email blocker to free users (April 21st)
    • Non-Premium Waitlist (April 21st)
    • Replies count surfaced to users (unknown)
    • History section of News (unknown)
  • Mozilla VPN V2.8 (April 18)
    • Mobile onboarding/authentication flow improvements
    • Connection speed
    • Tunnel VPN through Port 53/DNS

 

Useful links:

Firefox NightlyThese Weeks In Firefox: Issue 113

Highlights

Friends of the Firefox team

Introductions/Shout-Outs
  • Please welcome Stephanie Cunnane to her first Firefox Desktop meeting today. She’s our newest team member on the Search Team and started with us on March 21st! 🎉🎉🎉 Welcome Stephanie!
Resolved bugs (excluding employees)
Volunteers that fixed more than one bug
  • Claudia Batista [:claubatista]
  • gliu20
  • Masatoshi Kimura [:emk]
  • Mathew Hodson
  • mattheww
New contributors (🌟 = first patch)

General triage

Project Updates

Add-ons / Web Extensions
WebExtensions Framework
WebExtension APIs
  • Fixed missing “title” property in the bookmarks.onRemoved event details – Bug 1556427
  • Fixed browser.sessions.getRecentlyClosed API when a closed windows had a tab with empty history – Bug 1762326
  • Support overriding the heuristic that Firefox uses to decide whether a theme is dark or light using the new “theme.properties.color_scheme” and “theme.properties.content_color_scheme” theme properties – Bug 1750932
Developer Tools
  •  Toolbox
    • Wartmann fixed an annoying Debugger + React DevTools webextension bug, where you had to click the resume button twice when paused because of a “debugger” statement (bug)
    • Yury and Alex improved debugging of asm.js/wasm projects (bug) by turning debug code on only when the Debugger is in use, making console-only usage faster
    • Julian fixed a bug when using the picker on UA Widgets (e.g. <video> elements)
    • Storage Inspector wasn’t reflecting Cookies being updated in private tabs; this was fixed in Bug 1755220
    • We landed a few patches that improved Console performance in different scenarios (bug, bug and bug), and we’re getting close to landing the virtualization patch (bug). Overall the Console should be _much_ faster in the coming weeks; we’ll compile some numbers once everything has landed
  • WebDriver BiDi
    • Support for the browsingContext.close command landed (bug) which allows users to close a given top-level browsing context (aka tab). The browser testing group still needs to agree on what should happen when the last tab (window) gets closed.
    • Optional hosts and origins should now be set as command line arguments, and not from preferences anymore (bug). This will raise user awareness when adding these additional hosts and origins that need to be accepted for new WebSocket connections by WebDriver BiDi clients.
    • Most of the existing WebDriver tests on Android are now enabled (bug), which will prevent regressions on this platform. More tests can be enabled once Marionette supports opening new tabs.
Fluent
Form Autofill
Lint, Docs and Workflow
  • There are various mentored patches in work/landing to fix ESLint no-unused-vars issues in xpcshell-tests. Thank you to the following who have landed fixes so far:
    • Leslie Orellana
  • Gijs has landed a patch to suggest (via ESLint) using add_setup rather than add_task in mochitests, and updated many existing instances to use add_setup.
  • Standard8 landed a patchset that did a few things:
    • Fixed an issue when running with ESLint 8.x which we’ll be upgrading to soon.
    • Completed documentation for ESLint rules where it was missing previously.
    • Upgraded all the Mozilla rules to use a newer definition format, which also includes a link to the documentation.
      • Editors should now be able to link you to the documentation if you need more info, e.g. in Atom.

Picture-in-Picture
Performance
Performance Tools (aka Firefox Profiler)
Privacy/Security
Search and Navigation
Screenshots
Community
  • Lots of Outreachy applicants are showing up! Keep your eyes peeled for Bugzilla comments asking to be assigned to good-first-bugs. Respond ASAP to questions from applicants.
    • Remember to set `good-first-bug` in the Bugzilla keyword
    • And then add [lang=js], [lang=css], and/or [lang=html] in the whiteboard to indicate what skills will be used
    • Finally, set yourself in the Mentor field

 

The Talospace ProjectFirefox 99 on POWER

Firefox 99 is out. The major change here is that the Linux sandbox has been strengthened to eliminate direct access to X11 (which is important because many of us do not live in the Wayland Wasteland). Note that the sandbox apparently doesn't work currently on ppc64le; this is something I intend to look at later when I'm done with the JIT unless someone™ gets to it first.

Unfortunately, Fx99 does not build from source on ppc64 or ppc64le and I was too busy on the JIT to do my usual smoke tests early. The offender is bug 1758610 but the patches do not apply cleanly to 99, so I have provided a consolidated diff for your convenience. You will also need a tweaked PGO-LTO patch; with those applied the .mozconfigs from Firefox 95 will work.

All three stages of the JIT (Baseline Interpreter, Baseline Compiler and Ion, as well as Wasm) now function and pass tests on POWER9 except for a couple depending on memory layout oddities; that last unexpected test failure took me almost a week and a half to find. (TenFourFox users will be happy because a couple of these bugs exist in TenFourFox and I'll generate patches to fix them there for source builders.) However, it isn't shippable because when I built a browser with it there were regressions compared to Baseline Compiler alone (Google Maps fonts look fuzzier in Ion, the asm.js DOSBOX dies with a weird out of range error, etc.). The shippable state is that Ion should be a strict superset of Baseline Compiler: it may not make everything work, but anything that worked in Baseline Compiler should also work with Ion enabled, just faster. These problems don't seem to have coverage in the test suite. You can build the browser and try it for yourself from the new branch, but make sure that you set the Ion options in about:config back to true. Keep in mind that this is 97.0a1, so it has some unrelated bugs, and you shouldn't use it with your current Firefox profile.

Smoking out these failures is going to be a much harder lift because debugging a JIT in a heavily multi-threaded browser is a nightmare, especially on a non-tier-1 platform with more limited tooling; a lot of the old options to disable Electrolysis and Fission seem to be ignored in current releases. With that in mind and with the clock counting down to Firefox 102 (the next ESR) I've decided to press on and pull down Firefox 101 as a new branch, drag the JIT current with mozilla-central and see if any of the added tests can shed light on the regressions. A potential explanation is that we could have some edges where 32-bit integer quantities still have the upper word set (64-bit PowerPC doesn't really have comprehensive 32-bit math and clearing the upper bits all the time is wasteful, so there are kludges in a few places to intercept those high-word-set registers where they matter), but changing these assumptions would be major surgery, and aside from the delays may not actually be what's wrong: after all, it doesn't seem to break Baseline. One other option is to deliberately gimp the JIT so that Ion is never activated and submit it as such to Mozilla to make the ESR, and we'd have to do this soon, but indefinitely emasculating our Ion implementation would really suck to me personally and may not pass code review. I'm sure I've made stupid and/or subtle errors somewhere and a lot of code just isn't covered by the test suite (we have tons of controlled crashes, asserts and traps in untested paths, and none of it is triggered in the test suite), so I really need more eyes on the code to see what I can't.

Mozilla Localization (L10N)L10n Report: April 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Firefox 100 is now in beta, and will be released on May 3, 2022. The deadline to update localization is April 24.

As part of this release, users will see a special celebration message.

You can test this dialog by:

  • Opening about:welcome in Nightly.
  • Copying and pasting the following code in the Browser Console:
    Cc["@mozilla.org/browser/browserglue;1"].getService().wrappedJSObject._showUpgradeDialog()

If you’re not familiar with the Browser Console, take a look at these old instructions to set it up, then paste the command provided above.

What’s new or coming up in mobile

Just like Firefox desktop, v100 is right around the corner for mobile.

  • Firefox for Android and Focus for Android: deadline is April 27.
  • Firefox for iOS and Focus for iOS: deadline is April 24.

Some strings landed late in the cycle – but everything should have arrived by now.

What’s new or coming up in web projects

Relay Website and add-on

The next release is on April 19th. This release includes new strings along with massive updates to both projects thanks to key terminology changes:

  • alias to mask
  • domain to subdomain
  • real email to true email

To learn more about the change, please check out this Discourse post. If you can’t complete the updates by the release date, there will be subsequent updates soon after the deadline so your work will be in production soon. Additionally, the obsolete strings will be removed once the products have caught up with the updates in most locales.

What’s new or coming up in SuMo

What’s new or coming up in Pontoon

Review notifications

We added a notification for suggestion reviews, so you’ll now know when your suggestions have been accepted or rejected. These notifications are batched and sent daily.

Changes to Fuzzy strings

Soon, we’ll be making changes to the way we treat Fuzzy strings. Since they aren’t used in the product, they’ll be displayed as Missing. You will no longer find Fuzzy strings on the dashboards and in the progress charts. The Fuzzy filter will be moved to Extra filters. You’ll still see the yellow checkmark in the History panel to indicate that a particular translation is Fuzzy.

Newly published localizer facing documentation

Events

  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Image by Elio Qoshi

  • Thanks to everybody on the TCP/ETP contributor focus group. You’re all amazing and the Customer Experience team can’t thank you enough for everyone’s collaboration on the project.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Mozilla Performance BlogPerformance Sheriff Newsletter (March 2022)

In March there were 175 alerts generated, resulting in 21 regression bugs being filed on average 5.4 days after the regressing change landed.

Welcome to the March 2022 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by a review of the year. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.1 days
  • 96% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 2.4 days
  • 83% of valid regressions were associated with bugs within 5 days
  • 8% of regression bugs had the culprit bug corrected

Sheriffing Efficiency (March 2022)

Regression culprit accuracy

This month a new metric is being reported in the sheriffing efficiency section, which relates to the accuracy of our identification of culprit regression bugs. When a sheriff opens a regression bug, the ‘regressed by’ field is used to identify the bug that introduced the regression. This is determined by the sheriffs and informed by our regression detection tools. Sometimes we get this wrong, and the ‘regressed by’ field is updated to reflect the correct culprit. The new metric measures the percentage of regression bugs where this field has been modified, and we’ve established an initial target of <15%. This isn’t a perfect reflection of accuracy, and for several reasons won’t be used as a sheriffing KPI at this time. We believe this metric can be improved by working on our sheriffing guidelines around identifying culprits, but also by improving our test scheduling and regression detection algorithms.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for March can be found here (for those with access).

The Rust Programming Language BlogAnnouncing Rust 1.60.0

The Rust team is happy to announce a new version of Rust, 1.60.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.60.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.60.0 on GitHub. If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.60.0 stable

Source-based Code Coverage

Support for LLVM-based coverage instrumentation has been stabilized in rustc. You can try this out on your code by rebuilding your code with -Cinstrument-coverage, for example like this:

RUSTFLAGS="-C instrument-coverage" cargo build

After that, you can run the resulting binary, which will produce a default.profraw file in the current directory. (The path and filename can be overridden by an environment variable; see documentation for details).

The llvm-tools-preview component includes llvm-profdata for processing and merging raw profile output (coverage region execution counts); and llvm-cov for report generation. llvm-cov combines the processed output, from llvm-profdata, and the binary itself, because the binary embeds a mapping from counters to actual source code regions.

rustup component add llvm-tools-preview
$(rustc --print sysroot)/lib/rustlib/x86_64-unknown-linux-gnu/bin/llvm-profdata merge -sparse default.profraw -o default.profdata
$(rustc --print sysroot)/lib/rustlib/x86_64-unknown-linux-gnu/bin/llvm-cov show -Xdemangler=rustfilt target/debug/coverage-testing \
    -instr-profile=default.profdata \
    -show-line-counts-or-regions \
    -show-instantiations

The above commands on a simple helloworld binary produce this annotated report, showing that each line of the input was covered.

    1|      1|fn main() {
    2|      1|    println!("Hello, world!");
    3|      1|}

For more details, please read the documentation in the rustc book. The baseline functionality is stable and will exist in some form in all future Rust releases, but the specific output format and LLVM tooling which produces it are subject to change. For this reason, it is important to make sure that you use the same version for both the llvm-tools-preview and the rustc binary used to compile your code.

cargo --timings

Cargo has stabilized support for collecting information on build timings with the --timings flag.

$ cargo build --timings
   Compiling hello-world v0.1.0 (hello-world)
      Timing report saved to target/cargo-timings/cargo-timing-20220318T174818Z.html
    Finished dev [unoptimized + debuginfo] target(s) in 0.98s

The report is also copied to target/cargo-timings/cargo-timing.html. A report on the release build of Cargo has been put up here. These reports can be useful for improving build performance. More information about the timing reports may be found in the documentation.

New syntax for Cargo features

This release introduces two new changes to improve support for Cargo features and how they interact with optional dependencies: Namespaced dependencies and weak dependency features.

Cargo has long supported features along with optional dependencies, as illustrated by the snippet below.

[dependencies]
jpeg-decoder = { version = "0.1.20", default-features = false, optional = true }

[features]
# Enables parallel processing support by enabling the "rayon" feature of jpeg-decoder.
parallel = ["jpeg-decoder/rayon"]

There are two things to note in this example:

  • The optional dependency jpeg-decoder implicitly defines a feature of the same name. Enabling the jpeg-decoder feature will enable the jpeg-decoder dependency.
  • The "jpeg-decoder/rayon" syntax enables the jpeg-decoder dependency and enables the jpeg-decoder dependency's rayon feature.

Namespaced features tackle the first issue. You can now use the dep: prefix in the [features] table to explicitly refer to an optional dependency without implicitly exposing it as a feature. This gives you more control on how to define the feature corresponding to the optional dependency, including hiding optional dependencies behind more descriptive feature names.

Weak dependency features tackle the second issue where the "optional-dependency/feature-name" syntax would always enable optional-dependency. However, often you want to enable the feature on the optional dependency only if some other feature has enabled the optional dependency. Starting in 1.60, you can add a ? as in "package-name?/feature-name" which will only enable the given feature if something else has enabled the optional dependency.

For example, let's say we have added some serialization support to our library, and it requires enabling a corresponding feature in some optional dependencies. That can be done like this:

[dependencies]
serde = { version = "1.0.133", optional = true }
rgb = { version = "0.8.25", optional = true }

[features]
serde = ["dep:serde", "rgb?/serde"]

In this example, enabling the serde feature will enable the serde dependency. It will also enable the serde feature for the rgb dependency, but only if something else has enabled the rgb dependency.

Incremental compilation status

Incremental compilation is re-enabled for the 1.60 release. The Rust team continues to work on fixing bugs in incremental, but no problems causing widespread breakage are known at this time, so we have chosen to reenable incremental compilation. Additionally, the compiler team is continuing to work on long-term strategy to avoid future problems of this kind. That process is in relatively early days, so we don't have anything to share yet on that front.

Instant monotonicity guarantees

On all platforms Instant will try to use an OS API that guarantees monotonic behavior if available (which is the case on all tier 1 platforms). In practice such guarantees are -- under rare circumstances -- broken by hardware, virtualization, or operating system bugs. To work around these bugs and platforms not offering monotonic clocks, Instant::duration_since, Instant::elapsed and Instant::sub now saturate to zero. In older Rust versions this led to a panic instead. Instant::checked_duration_since can be used to detect and handle situations where monotonicity is violated, or Instants are subtracted in the wrong order.
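
For illustration, here is a small sketch of the new behavior, forcing the "wrong order" by constructing a later instant by hand:

use std::time::{Duration, Instant};

fn main() {
    let earlier = Instant::now();
    let later = earlier + Duration::from_secs(1);

    // As of 1.60, subtracting in the wrong order saturates to zero
    // instead of panicking.
    assert_eq!(earlier.duration_since(later), Duration::ZERO);

    // checked_duration_since makes the wrong order (or a violated
    // monotonicity guarantee) observable by returning None.
    assert_eq!(earlier.checked_duration_since(later), None);
    assert_eq!(
        later.checked_duration_since(earlier),
        Some(Duration::from_secs(1))
    );
}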

This workaround obscures programming errors where earlier and later instants are accidentally swapped. For this reason future Rust versions may reintroduce panics in at least those cases, if possible and efficient.

Prior to 1.60, the monotonicity guarantees were provided through mutexes or atomics in std, which can introduce large performance overheads to Instant::now(). Additionally, the panicking behavior meant that Rust software could panic in a subset of environments, which was largely undesirable, as the authors of that software may not be able to fix or upgrade the operating system, hardware, or virtualization system they are running on. Further, introducing unexpected panics into these environments made Rust software less reliable and portable, which is of higher concern than exposing typically uninteresting platform bugs in monotonic clock handling to end users.

Stabilized APIs

The following methods and trait implementations are now stabilized:

Other changes

There are other changes in the Rust 1.60.0 release. Check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.60.0

Many people came together to create Rust 1.60.0. We couldn't have done it without all of you. Thanks!

Mozilla Open Policy & Advocacy BlogPhilippines’ SIM Card Registration Act will expose users to greater privacy and security risks online

While well-intentioned, the Philippines’ Subscriber Identity Module (SIM) Card Registration Act (2022) will set a worrying precedent for the privacy and anonymity of people on the internet. In its current state, approved by the Philippine Congress (House of Representatives and Senate) but awaiting Presidential assent, the law contains provisions requiring social media companies to mandatorily verify the real names and phone numbers of users that create accounts on their platforms. Such a move will not only limit the anonymity that is essential online (for example, for whistleblowing and protection from stalkers) but also reduce the privacy and security users can expect from private companies.

These provisions raise a number of concerns, both in principle as well as regarding implementation, which merit serious reconsideration of the law.

  • Sharing sensitive personal data with technology companies: Implementing the real name and phone number requirement in practice would entail users sending photos of government issued IDs to the companies. This will incentivise the collection of sensitive personal data from government IDs that are submitted for this verification, which can then be used to profile and target users. This is not hypothetical conjecture – we have already seen phone numbers collected for security purposes being used for profiling by some of the largest technology players in the world.
  • Harming smaller players: Such a move would entrench power in the hands of large players in the social media space who can afford to build and maintain such verification systems, harming the ability of smaller, more agile startups to innovate and compete effectively within the Philippines. The broad definition of “social media” in the law also leaves open the possibility that it will apply to many more players than intended, further harming the innovation economy.
  • Increased risk of data breaches: As we have seen from the deployment of digital identity systems around the world, such a move will also increase the risk of data breaches by creating large, single points of failure: the systems in which private social media companies store the identification documents used to verify real-world identity. As evidence from far better protected systems has shown, such breaches are just a matter of time, with disastrous consequences for users that will extend far beyond their online interactions on such platforms.
  • Inadequate Solution: There is no evidence to prove that this measure will help fight crimes, misinformation or scams (its motivating factor), and it ignores the benefits that anonymity can bring to the internet, such as whistleblowing and protection from stalkers. Anonymity is an integral aspect of free speech online and such a move will have a chilling effect on public discourse.

For all of these reasons, it is critical that the Subscriber Identity Module (SIM) Card Registration Act not be approved into binding law and that these provisions be reconsidered, so that the residents of the Philippines can continue to enjoy an open internet.

The post Philippines’ SIM Card Registration Act will expose users to greater privacy and security risks online appeared first on Open Policy & Advocacy.

Firefox Add-on ReviewsExtensions for cleaning up a chaotic desktop

Clutter isn’t just material stuff scattered about your floor and shelves. Clutter can consume us in digital form, too — from an overabundance of browser bookmarks and open tabs to navigating a world wide web that’s littered with junk. The right browser extension, however, can really help clean things up… 

Tranquility Reader

Want to sweep away all the mess around your website reading material, like images, ads, and links to other content you don’t care about? Tranquility Reader does just that. Hit the extension’s toolbar button and presto — everything but the words disappears. 

The extension offers a few other nice features as well, like the ability to save content offline for later reading, customize font sizes and colors, add annotations to saved pages, and more.

Turn Off the Lights for Firefox

Clear out everything on your desktop except the video you want to watch. With the flick of a button, Turn Off the Lights for Firefox fades out everything on screen except your video player. 

More than just a fancy light switch, the extension offers a bunch of great customization features, including… 

  • Auto HD for YouTube
  • Mouse-wheel volume control
  • Atmosphere lighting or background imagery
  • Lots of fun fade out/in visual effects

Unhook

Speaking of uncluttering your video experience, Unhook elegantly removes a bunch of distracting YouTube elements. 

Enjoy YouTube with more breathing room once Unhook removes all those “rabbit hole” temptations like related or recommended videos, trending content, homepage suggestions, user comments, and much more. Unhook features 20+ customization options. 

Minimal Theme for Twitter

Keep streamlining your social media. Minimal Theme for Twitter offers a few new looks for a stripped down scrolling experience. 

Customize the size of Twitter’s timeline and alter various site buttons, activate superior color palettes, remove Trends, and more. 

Link Gopher

Have you ever kept a web page open for days or maybe even weeks just because it contains a bunch of links you’ve been meaning to save and organize in some fashion, some day? Link Gopher can help. 

Simple but such a time saver — with a single click Link Gopher grabs all links from within a web page and sorts them in a new tab, so you can easily copy/paste anywhere for proper organization. Any duplicate links are automatically removed. 

OneTab

This is like an emergency escape hatch when you find yourself overwhelmed with open browser tabs. Click the toolbar button and OneTab grabs all of your open links and lists them on a single scrollable page. You’ll save a lot of CPU and memory once your previously open tabs go dormant on a listing page. 

OneTab is a simple and effective way of dealing with sudden tab overload. But if you need more robust tab management tools, you might be interested in checking out these other great tab manager extensions.

We hope some of these extensions help you achieve a less cluttered, more serene online experience. If you’d like to keep exploring, you can find thousands of extensions on addons.mozilla.org

Hacks.Mozilla.OrgPerformance Tool in Firefox DevTools Reloaded

In Firefox 98, we’re shipping a new version of the existing Performance panel. This panel is now based on the Firefox profiler tool that can be used to capture a performance profile for a web page, inspect visualized performance data and analyze it to identify slow areas.

The icing on the cake of this already extremely powerful tool is that you can upload collected profile data with a single click and share the resulting link with your teammates (or anyone really). This makes it easier to collaborate on performance issues, especially in a distributed work environment.

The new Performance panel is available in the Firefox DevTools Toolbox by default and can be opened with the Shift+F5 keyboard shortcut.

Usage

The only thing the user needs to do to start profiling is to click the big blue button – Start recording. Check out the screenshot below.

As indicated by the onboarding message at the top of the new panel, the previous profiler will be available for some time and will eventually be removed entirely.

When profiling is started (i.e. the profiler is gathering performance data) the user can see two more buttons:

  • Capture recording – Stop recording, get what’s been collected so far and visualize it
  • Cancel recording – Stop recording and throw away all collected data

When the user clicks on Capture recording all collected data are visualized in a new tab. You should see something like the following:

The inspection capabilities of the UI are powerful and let the user inspect every bit of the performance data. You might want to follow this detailed UI Tour presentation created by the Performance team at Mozilla to learn more about all available features.

Customization

There are many options that can be used to customize how and what performance data should be collected to optimize specific use cases (see also the Edit Settings… link at the bottom of the panel).

To make customization easier, some presets are available, and the Web Developer preset is selected by default. The profiler can also be used for profiling Firefox itself, and Mozilla uses it extensively to make Firefox fast for millions of users. The Web Developer preset is intended for profiling standard web pages; the rest are for profiling Firefox.

The Profiler can also be used directly from the Firefox toolbar without the DevTools Toolbox being opened. The Profiler button isn’t visible in the toolbar by default, but you can enable it by loading https://profiler.firefox.com/ and clicking “Enable Firefox Profiler Menu Button” on the page.

This is what the button looks like in the Firefox toolbar.

As you can see from the screenshot above the UI is almost exactly the same (compared to the DevTools Performance panel).

Sharing Data

Collected performance data can be shared publicly. This is one of the most powerful features of the profiler since it allows the user to upload data to the Firefox Profiler online storage. Before uploading a profile, you can select the data that you want to include, and what you don’t want to include to avoid leaking personal data. The profile link can then be shared in online chats, emails, and bug reports so other people can see and investigate a specific case.

This is great for team collaboration and that’s something Firefox developers have been doing for years to work on performance. The profile can also be saved as a file on a local machine and imported later from https://profiler.firefox.com/

There are many more powerful features available and you can learn more about them in the extensive documentation. And of course, just like Firefox itself, the profiler tool is an open source project and you might want to contribute to it.

There is also a great case study on using the profiler to identify performance issues.

More is coming to DevTools, so stay tuned!

The post Performance Tool in Firefox DevTools Reloaded appeared first on Mozilla Hacks - the Web developer blog.

Niko Matsakisdyn*: can we make dyn sized?

Last Friday, tmandry, cramertj, and I had an exciting conversation. We were talking about the design for combining async functions in traits with dyn Trait that tmandry and I had presented to the lang team on Friday. cramertj had an insightful twist to offer on that design, and I want to talk about it here. Keep in mind that this is a piece of “hot off the presses”, in-progress design and hence may easily go nowhere – but at the same time, I’m pretty excited about it. If it works out, it could go a long way towards making dyn Trait user-friendly and accessible in Rust, which I think would be a big deal.

Background: The core problem with dyn

dyn Trait is one of Rust’s most frustrating features. On the one hand, dyn Trait values are absolutely necessary. You need to be able to build up collections of heterogeneous types that all implement some common interface in order to implement core parts of the system. But working with heterogeneous types is just fundamentally hard because you don’t know how big they are. This implies that you have to manipulate them by pointer, and that brings up questions of how to manage the memory that these pointers point at. This is where the problems begin.

Problem: no memory allocator in core

One challenge has to do with how we factor our allocation. The core crate that is required for all Rust programs, libcore, doesn’t have a concept of a memory allocator. It relies purely on stack allocation. For the most part, this works fine: you can pass ownership of objects around by copying them from one stack frame to another. But it doesn’t work if you don’t know how much stack space they occupy!1

Problem: Dyn traits can’t really be substituted for impl Trait

In Rust today, the type dyn Trait is guaranteed to implement the trait Trait, so long as Trait is dyn safe. That seems pretty cool, but in practice it’s not all that useful. Consider a simple function that operates on any kind of Debug type:

fn print_me(x: impl Debug) {
    println!("{x:?}");
}

Even though the Debug trait is dyn-safe, you can’t just change the impl above into a dyn:

fn print_me(x: dyn Debug) { .. }

The problem here is that stack-allocated parameters need to have a known size, and we don’t know how big dyn is. The common solution is to introduce some kind of pointer, e.g. a reference:

fn print_me(x: &dyn Debug) {  }

That works ok for this function, but it has a few downsides. First, we have to change existing callers of print_me — maybe we had print_me(22) before, but now they have to write print_me(&22). That’s an ergonomic hit. Second, we’ve now hardcoded that we are borrowing the dyn Debug. There are other functions where this isn’t necessarily what we wanted to do. Maybe we wanted to store that dyn Debug into a data structure and return it — for example, this function print_me_later returns a closure that will print x when called:

fn print_me_later(x: &dyn Debug) -> impl FnOnce() + '_ {
    move || println!("{x:?}")
}

Imagine that we wanted to spawn a thread that will invoke print_me_later:

fn spawn_thread(value: usize) {
   let closure = print_me_later(&value);
   std::thread::spawn(move || closure()); // <— Error, ‘static bound not satisfied
}

This code will not compile because closure references value on the stack. But if we had written print_me_later with an impl Debug parameter, it could take ownership of its argument and everything would work fine.

Of course, we could solve this by writing print_me_later to use Box but that’s hardcoding memory allocation. This is problematic if we want print_me_later to appear in a context, like libcore, that might not even have access to a memory allocator.

fn print_me_later(x: Box<dyn Debug>) -> impl FnOnce() + '_ {
    move || println!("{x:?}")
}

In this specific example, the Box is also kind of inefficient. After all, the value x is just a usize, and a Box is also a usize, so in theory we could just copy the integer around (the usize methods expect an &usize, after all). This is sort of a special case, but it does come up more than you would think at the lower levels of the system, where it may be worth the trouble to try and pack things into a usize — there are a number of futures, for example, that don’t really require much state.

The idea: What if the dyn were the pointer?

In the proposal for “async fns in traits” that tmandry and I put forward, we had introduced the idea of dynx Trait types. dynx Trait types were not an actual syntax that users would ever type; rather, they were an implementation detail. Effectively a dynx Future refers to a pointer to a type that implements Future. They don’t hardcode that this pointer is a Box; instead, the vtable includes a “drop” function that knows how to release the pointer’s referent (for a Box, that would free the memory).

Better idea: What if the dyn were “something of known size”?

After the lang team meeting, tmandry and I met with cramertj, who proceeded to point out to us something very insightful.2 The truth is that dynx Trait values don’t have to be a pointer to something that implements Trait — they just have to be something pointer-sized. tmandry and I actually knew that, but what we didn’t see was how critically important this was:

  • First, a number of futures, in practice, consist of very little state and can be pointer-sized. For example, reading from a file descriptor only needs to store the file descriptor, which is a 32-bit integer, since the kernel stores the other state. Similarly the future for a timer or other builtin runtime primitive often just needs to store an index.
  • Second, a dynx Trait lets you write code that manipulates values which may be boxed without directly talking about the box. This is critical for code that wants to appear in libcore or be reusable across any possible context.
    • As an example of something that would be much easier this way, the Waker struct, which lives in libcore, is effectively a hand-written dynx Waker struct.
  • Finally, and we’ll get to this in a bit, a lot of low-level systems code employs clever tricks where they know something about the layout of a value. For example, you might have a vector that contains values of various types, but (a) all those types have the same size and (b) they all share a common prefix. In that case, you can manipulate fields in that prefix without knowing what kind of data is contained within, and use a vtable or discriminant to do the rest.
    • In Rust, this pattern is painful to encode, though you can sometimes approximate it with a Vec<S>, where S is some struct that contains the prefix fields plus an enum (as sketched below). Enums work ok, but if you have a more open-ended set of types, you might prefer trait objects.
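
To illustrate the enum-based encoding mentioned in the last bullet, here is a rough sketch (all of the names are hypothetical, not taken from the original example):

// Every element shares a common prefix (`id`, `flags`), plus a payload whose
// variants all happen to have the same size.
struct Node {
    id: u32,
    flags: u32,
    payload: Payload,
}

enum Payload {
    Timer { deadline: u64 },
    Socket { fd: u64 },
}

// The prefix fields can be manipulated without matching on the payload.
fn mark_all(nodes: &mut Vec<Node>) {
    for node in nodes.iter_mut() {
        node.flags |= 1;
    }
}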

A sketch: The dyn-star type

To give you a sense for how cool “fixed-size dyn types” could be, I’m going to start with a very simple design sketch. Imagine that we introduced a new type dyn* Trait, which represents the pair of:

  • a pointer-sized value of some type T that implements Trait (the * is meant to convey “pointer-sized”3)
  • a vtable for T: Trait; the drop method in the vtable drops the T value.

For now, don’t get too hung up on the specific syntax. There’s plenty of time to bikeshed, and I’ll talk a bit about how we might truly phase in something like dyn*. For now let’s just talk about what it would be like to use it.

Creating a dyn*

To coerce a value of type T into a dyn* Trait, two constraints must be met:

  • The type T must be pointer-sized or smaller.
  • The type T must implement Trait

Converting an impl to a dyn*

Using dyn*, we can convert impl Trait directly to dyn* Trait. This works fine, because dyn* Trait is Sized. To be truly equivalent to impl Trait, you do actually want a lifetime bound, so that the dyn* can represent references too:

// fn print_me(x: impl Debug) {…} becomes
fn print_me(x: dyn* Debug + '_) {
    println!("{x:?}");
}

fn print_me_later(x: dyn* Debug + '_) -> impl FnOnce() + '_ {
    move || println!("{x:?}")
}

These two functions can be directly invoked on a usize (e.g., print_me_later(22) compiles). What’s more, they work on references (e.g., print_me_later(&some_type)) or boxed values (print_me_later(Box::new(some_type))).

They are also suitable for inclusion in a no-std project, as they don’t directly reference an allocator. Instead, when the dyn* is dropped, we will invoke its destructor from the vtable, which might wind up deallocating memory (but doesn’t have to).

More things are dyn* safe than dyn safe

Many things that were hard for dyn Trait values are trivial for dyn* Trait values:

  • By-value self methods work fine: a dyn* Trait value is sized, so you can move ownership of it just by copying its bytes.
  • Returning Self, as in the Clone trait, works fine.
    • Similarly, the fact that trait Clone: Sized doesn’t mean that dyn* Clone can’t implement Clone, although it does imply that dyn Clone: Clone cannot hold.
  • Function arguments of type impl ArgTrait can be converted to dyn* ArgTrait, so long as ArgTrait is dyn*-safe
  • Returning an impl ArgTrait can return a dyn* ArgTrait.

In short, a large number of the barriers that make traits “not dyn-safe” don’t apply to dyn*. Not all, of course. Traits that take parameters of type Self won’t work (we don’t know that two dyn* Trait types have the same underlying type) and we also can’t support generic methods in many cases (we wouldn’t know how to monomorphize)4.
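
To make the first bullet above concrete, here is a tiny sketch using the hypothetical dyn* syntax from this proposal (it is not valid Rust today, and it assumes each closure is pointer-sized or smaller):

// Hypothetical dyn* syntax -- not accepted by any current compiler.
// FnOnce::call_once takes `self` by value, which is fine here because a
// dyn* value is Sized: moving it just copies its pointer-sized payload
// (plus the vtable pointer).
fn run_all(tasks: Vec<dyn* FnOnce() + 'static>) {
    for task in tasks {
        task();
    }
}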

A catch: dyn* Foo requires Box<impl Foo>: Foo and friends

There is one catch from this whole setup, but I like to think of it as an opportunity. In order to create a dyn* Trait from a pointer type like Box<Widget>, you need to know that Box<Widget>: Trait, whereas creating a Box<dyn Trait> just requires knowing that Widget: Trait (this follows directly from the fact that the Box is now part of the hidden type).

At the moment, annoyingly, when you define a trait you don’t automatically get any sort of impls for “pointers to types that implement the trait”. Instead, people often define such impls manually — for example, the Iterator trait has impls like

impl<I> Iterator for &mut I
where
    I: ?Sized + Iterator

impl<I> Iterator for Box<I>
where
    I: ?Sized + Iterator

Many people forget to define such impls, however, which can be annoying in practice (and not just when using dyn).
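
For a trait you control, such forwarding impls are mechanical to write. Here is a minimal sketch for a hypothetical Draw trait (the Iterator impls above follow the same shape; the real standard-library versions forward more methods):

trait Draw {
    fn draw(&self);
}

// Forward through shared references...
impl<T: ?Sized + Draw> Draw for &T {
    fn draw(&self) {
        (**self).draw()
    }
}

// ...and through boxes, so that both Box<dyn Draw> and Box<SomeWidget> implement Draw.
impl<T: ?Sized + Draw> Draw for Box<T> {
    fn draw(&self) {
        (**self).draw()
    }
}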

I’m not totally sure the best way to fix this, but I view it as an opportunity because if we can supply such impls, that would make Rust more ergonomic overall.

One interesting thing: the impls for Iterator that you see above include I: ?Sized, which makes them applicable to Box<dyn Iterator>. But with dyn* Iterator, we are starting from a Box<impl Iterator> type — in other words, the ?Sized bound is not necessary, because we are creating our “dyn” abstraction around the pointer, which is sized. (The ?Sized is not harmful, either, of course, and if we auto-generate such impls, we should include it so that they apply to old-style dyn as well as slice types like [u8].)

Another catch: “shared subsets” of traits

One of the cool things about Rust’s Trait design is that it allows you to combine “read-only” and “modifier” methods into one trait, as in this example:

trait WidgetContainer {
    fn num_components(&self);
    fn add_component(&mut self, c: WidgetComponent);
}

I can write a function that takes a &mut dyn WidgetContainer and it will be able to invoke both methods. If that function takes &dyn WidgetContainer instead, it can only invoke num_components.

If we don’t do anything else, this flexibility is going to be lost with dyn*. Imagine that we wish to create a dyn* WidgetContainer from some &impl WidgetContainer type. To do that, we would need an impl of WidgetContainer for &T, but we can’t write that code, at least not without panicking:

impl<W> WidgetContainer for &W
where
    W: WidgetContainer,
{
    fn num_components(&self) {
        W::num_components(self) // OK
    }

    fn add_component(&mut self, c: WidgetComponent) {
        W::add_component(self, c) // Error!
    }
}

This problem is not specific to dyn — imagine I have some code that just invokes num_components but which can be called with a &W or with a Rc<W> or with other such types. It’s kind of awkward for me to write a function like that now: the easiest way is to hardcode that it takes &W and then lean on deref-coercions in the caller.

One idea that tmandry and I have been kicking around is the idea of having “views” on traits. The idea would be that you could write something like T: &WidgetContainer to mean “the &self methods of WidgetContainer”. If you had this idea, then you could certainly have

impl<W> &WidgetContainer for &W
where
    W: WidgetContainer

because you would only need to define num_components (though I would hope you don’t have to write such an impl by hand).

Now, instead of taking a &dyn WidgetContainer, you would take a dyn &WidgetContainer. Similarly, instead of taking an &impl WidgetContainer, you would probably be better off taking a impl &WidgetContainer (this has some other benefits too, as it happens).

A third catch: dyn safety sometimes puts constraints on impls, not just the trait itself

Rust’s current design assumes that you have a single trait definition and we can determine from that trait definition whether or not the trait ought to be dyn safe. But sometimes there are constraints around dyn safety that actually don’t affect the trait but only the impls of the trait. That kind of situation doesn’t work well with “implicit dyn safety”: if you determine that the trait is dyn-safe, you have to impose those limitations on its impls, but maybe the trait wasn’t meant to be dyn-safe.

I think overall it would be better if traits explicitly declared their intent to be dyn-safe or not. The most obvious way to do that would be with a declaration like dyn trait:

dyn trait Foo { }

As a nice side benefit, a declaration like this could also auto-generate impls like impl Foo for Box<impl Foo + ?Sized> and so forth. It would also mean that dyn-safety becomes a semver guarantee.

My main concern here is that I suspect most traits could and should be dyn-safe. I think I’d prefer if one had to opt out from dyn safety instead of opting in. I don’t know what the syntax for that would be, of course, and we’d have to deal with backwards compatibility.

Phasing things in over an edition

If we could start over again, I think I would approach dyn like this:

  • The syntax dyn Trait means a pointer-sized value that implements Trait. Typically a Box or & but sometimes other things.
  • The syntax dyn[T] Trait means “a value that is layout-compatible with T that implements Trait”; dyn Trait is thus sugar for dyn[*const ()] Trait, which we might write more compactly as dyn* Trait.
  • The syntax dyn[T..] Trait means “a value that starts with a prefix of T but has unknown size and implements Trait”.
  • The syntax dyn[..] Trait means “some unknown value of a type that implements Trait”.

Meanwhile, we would extend the grammar of a trait bound with some new capabilities:

  • A bound like &Trait<P…> refers to “only the &self methods from Trait”;
  • A bound like &mut Trait<P…> refers to “only the &self and &mut self methods from Trait”;
    • Probably this wants to include Pin<&mut Self> too? I’ve not thought about that.
  • We probably want a way to write a bound like Rc<Trait<P…>> to mean self: Rc<Self> and friends, but I don’t know what that looks like yet. Those kinds of traits are quite unusual.

I would expect that most people would just learn dyn Trait. The use cases for the dyn[] notation are far more specialized and would come later.

Interestingly, we could phase in this syntax in Rust 2024 if we wanted. The idea would be that we move existing uses of dyn to the explicit form in prep for the new edition:

  • &dyn Trait, for example, would become dyn* Trait + '_
  • Box<dyn Trait> would become dyn* Trait (note that a 'static bound is implied today; this might be worth reconsidering, but that’s a separate question).
  • other uses of dyn Trait would become dyn[..] Trait

Then, in Rust 2024, we would rewrite dyn* Trait to just dyn Trait with an “edition idiom lint”.

Conclusion

Whew! This was a long post. Let me summarize what we covered:

  • If dyn Trait encapsulated some value of pointer size that implements Trait and not some value of unknown size:
    • We could expand the set of things that are dyn safe by quite a lot without needing clever hacks:
      • methods that take by-value self: fn into_foo(self, …)
      • methods with parameters of impl Trait type (as long as Trait is dyn safe): fn foo(…, impl Trait, …)
      • methods that return impl Trait values: fn iter(&self) -> impl Iterator
      • methods that return Self types: fn clone(&self) -> Self
  • That would raise some problems we have to deal with, but all of them are things that would be useful anyway:
    • You’d need dyn &Trait and things to “select” sets of methods.
    • You’d need a more ergonomic way to ensure that Box<impl Trait>: Trait and so forth.
  • We could plausibly transition to this model for Rust 2024 by introducing two syntaxes, dyn* (pointer-sized) and dyn[..] (unknown size) and then changing what dyn means.

There are a number of details to work out, but among the most prominent are:

  • Should we declare dyn-safe traits explicitly? (I think yes)
    • What “bridging” impls should we create when we do so? (e.g., to cover Box<impl Trait>: Trait etc)
  • How exactly do &Trait bounds work — do you get impls automatically? Do you have to write them?

Appendix A: Going even more crazy: dyn[T] for arbitrary prefixes

dyn* is pretty useful. But we could actually generalize it. You could imagine writing dyn[T] to mean “a value whose layout can be read as T”. What we’ve called dyn* Trait would thus be equivalent to dyn[*const ()] Trait. This more general version allows us to package up larger values — for example, you could write dyn[[usize; 2]] Trait to mean a “two-word value”.

You could even imagine writing dyn[T] where the T meant that you can safely access the underlying value as a T instance. This would give access to common fields that the implementing type must expose or other such things. Systems programming hacks often lean on clever things like this. This would be a bit tricky to reconcile with cases where the T is a type like usize that is just indicating how many bytes of data there are, since if you are going to allow the dyn[T] to be treated like a &mut T, the user could go crazy overwriting values in ways that are definitely not valid. So we’d have to think hard about this to make it work, which is why I left it for an appendix.

Appendix B: The “other” big problems with dyn

I think that the designs in this post address a number of the big problems with dyn:

  • You can’t use it like impl
  • Lots of useful trait features are not dyn-safe
  • You have to write ?Sized on impls to make them work

But it leaves a few problems unresolved. One of the biggest to my mind is the interaction with auto traits (and lifetimes, actually). With generic parameters like T: Debug, I don’t have to talk explicitly about whether T is Send or not or whether T contains lifetimes. I can just write a generic type like struct MyWriter<W> where W: Write { w: W, ... }. Users of MyWriter know what W is, so they can determine whether or not MyWriter<Foo>: Send based on whether Foo: Send, and they can also understand that MyWriter<&'a Foo> includes references with the lifetime 'a. In contrast, if we did struct MyWriter { w: dyn* Write, ... }, that dyn* Write type is hiding the underlying data. As Rust currently stands, it implies that MyWriter is not Send and that it does not contain references. We don’t have a good way for MyWriter to declare that it is “send if the writer you gave me is send” and use dyn*. That’s an interesting problem! But orthogonal, I think, from the problems addressed in this blog post.

  1. But, you are thinking, what about alloca? The answer is that alloca isn’t really a good option. For one thing, it doesn’t work on all targets, but in particular it doesn’t work for async functions, which require a fixed size stack frame. It also doesn’t let you return things back up the stack, at least not easily. 

  2. Also, cramertj apparently had this idea a long time back but we didn’t really understand it. Ah well, sometimes it goes like that — you have to reinvent something to realize how brilliant the original inventor really was. 

  3. In truth, I also just think “dyn-star” sounds cool. I’ve always been jealous of the A* algorithm and wanted to name something in a similar way. Now’s my chance! Ha ha! 

  4. Obviously, we would be lifting this partly to accommodate impl Trait arguments. I think we could lift this restriction in more cases but it’s going to take a bit more design. 

William DurandSome non-production tools I wrote

This is a short article about 3 different tools I authored for my needs at Mozilla.

I worked on AMO for almost 4 years and created various libraries like pino-mozlog, pino-devtools or an ESLint plugin to name a few. These libraries have been created either to improve our developer experience or to fulfill some production requirements.

These aren’t the kind of projects I want to focus on in the rest of this article, though. Indeed, I also wrote “non-production” tools, i.e. some side projects to improve my day-to-day work. These tools have been extremely useful to me and, possibly, other individuals as well. I use most of them on a weekly basis and I maintain them on my own.

amo-info

I wrote a browser extension named amo-info. This extension adds a page action button when we open the web applications maintained by the AMO/Add-ons team. Clicking on this button reveals a pop-up with relevant information like the environment, git tag, feature flags, etc.

The amo-info extension displaying information about addons.mozilla.org.

If this sounds familiar to you, it might be because I already mentioned this project in my article about feature flags. Anyway, knowing what is currently deployed in any environment at any given time is super valuable, and this extension makes it easy to find out!

I recently added support for Firefox for Android but it only works in Nightly.

git npm-release

A different tool I use every week is the git npm-release command, which automates my process to release new versions of our JavaScript packages.

$ git npm-release -h
usage: git npm-release [help|major|minor|patch]

git npm-release help
        print this help message.
git npm-release major
        create a major version.
git npm-release minor
        create a minor version.
git npm-release patch
        create a patch version.

For most of our JavaScript projects, we leverage a Continuous Integration (CI) platform (e.g., CircleCI) to automatically publish new versions on the npm registry when a git tag is pushed to a GitHub repository.

The git npm-release command is built on top of hub, npm version and a homemade script to format release notes. Running this command will (1) update the package.json file with the right version, (2) make a git tag, (3) prepare the release notes and open an editor, (4) push the commit/tag to GitHub, and (5) create a GitHub release.

This process isn’t fully automated because (3) opens an editor with the pre-formatted release notes. I usually provide some more high level information in the notes, which is why this step requires manual intervention.

fx-attribution-data-reader

This is a tool I created not too long ago after telling a QA engineer that “he could simply open the binary in a hex editor” 😅 I can find my way around hex dumps because my hobbies are weird, but I get that it isn’t everyone else’s cup of tea.

The fx-attribution-data-reader web application takes a Firefox for Windows binary and displays the attribution data that may be contained in it. Everything is performed locally (in the browser) but the results can also be shared (URLs are “shareable”).

The fx-attribution-data-reader tool with a binary loaded and parsed.

Currently, this is mainly useful to debug some add-on related features, and it is very niche. As such, this tool isn’t used very often, but it is a good example of a very simple UI built to hide some (unnecessary) complexity.

Conclusion

I introduced three different tools that I am happy to use and maintain. Is it worth the time? I think so because it isn’t so much about the time shaved off in this case.

It is more about the simplicity and ease of use. Also, writing new tools is fun! I often use these ideas as excuses to learn more about new topics, be it programming languages, frameworks, libraries, or some internals.

Firefox NightlyThese Weeks In Firefox: Issue 112

Highlights

  • Picture-in-Picture captions/subtitles support is now enabled by default on Nightly! Supported sites include YouTube, Netflix, Prime Video, and others that use WebVTT
  • The Firefox Profiler supports date format changes according to your locale when viewing your list of published profiles (#3928)
Gif of the Firefox Profiler showing different date formats and locales

Date formats now change depending on the locale.

 

  • The WebExtensions Framework shows the background event page status in about:debugging and allows you to forcefully terminate the background event page for temporarily installed addons – Bug 1748529
Screenshot of the about:debugging extension card as rendered for an extension with a background event page installed temporarily

View the status of a background script on temporarily installed add-ons, or simply terminate the script if needed.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug
  • Claudia Batista [:claubatista]
  • Oriol Brufau [:Oriol]
  • Shane Hughes [:aminomancer]
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Upcoming changes to the add-on install flow:
    • Introduced some additional cross-origin checks to the add-on installation flows triggered from webpages (both originated by InstallTrigger calls and/or navigations to an XPI URL) – Bug 1755950
    • In addition (not landed yet, but should land soon), we also plan to introduce in Firefox >= 100 a user activation requirement to successfully trigger the add-on installation flows – Bug 1759737
  • SitePermission add-on type (SitePermission doc page from extensionworkshop.com):
    • Allow SitePermission add-ons to be installed from subdomains (along with disallowing the install origin associated with the SitePermission add-on XPI file from being set to a known eTLD) – Bug 1759764

 

WebExtensions Framework

 

WebExtension APIs

 

Developer Tools

  • WebDriver BiDi – bidirectional protocol for browser automation designed to make cross-browser testing easier. It’s based on a spec.
  • Support for the browsingContext.getTree command has landed, which allows users to get information about all the browsing contexts currently available in the browser.
  • We added support for custom hosts and origins headers for incoming BiDi connections. Thanks to this, the BiDi implementation in Firefox is now compatible with most clients, which is great for end-users as well as for testing. Note that more work is still required for complex setups, for instance using docker.
  • Browser Toolbox – The Browser Toolbox enables you to debug add-ons and the browser’s own JavaScript code rather than just web pages like the normal Toolbox. The Browser Toolbox’s context is the whole browser rather than just a single page on a single tab.
  • Seeking feedback that would help us to understand what features/bugs/workflows are important for its users (mostly folks working on Firefox UI/Add-ons) and prioritize.

Downloads Panel

Form Autofill

Desktop Integrations (Installer & Updater)

  • Thanks to :bhearsum: Landed a good portion of the private browsing mode entry point work. Aiming for fx-101. Currently pref’d off.
  • Closed bug with background task leaving parts of profile on disk – Kudos to :nrishel:

Password Manager

Picture-in-Picture

Performance Tools (aka Firefox Profiler)

  • Move instant markers to the top line in the marker chart panel. (#3927)
  • Capture IPC markers from the unprofiled threads. They will show up in the main thread of their process. (Bug 1709104 and Bug 1755965)
  • Add thread information to the IPC markers that are coming from other threads. (Bug 1758099)
  • Expose frame labels that are interesting to the JS view like “Paint”, “Styles” and GC/CC operations (Bug 1752861)
  • Add SetAnimation, SampleAnimation and ClearAnimation markers on the compositor thread and add more information to the existing animation markers. (Bug 1757202)
  • Reminder: Joy Of Profiling matrix channel and meetings (every monday): come and share your profiles!

Search and Navigation

Hacks.Mozilla.OrgIntroducing MDN Plus: Make MDN your own

MDN is one of the most trusted resources for information about web standards, code samples, tools, and everything you need as a developer to create websites. In 2015, we explored how we could expand beyond documentation to provide a structured learning experience. Our first foray was the Learning Area, with the goal of providing a useful addition to the regular MDN reference and guide material. In 2020, we added the first Front-end developer learning pathway. We saw a lot of interest and engagement from users, and the learning area contributed to about 10% of MDN’s monthly web traffic. These two initiatives were the start of our exploration into how we could offer more learning resources to our community. Today, we are launching MDN Plus, our first step to providing a personalized and more powerful experience while continuing to invest in our always free and open webdocs.

Build your own MDN Experience with MDN Plus

In 2020 and 2021 we surveyed over 60,000 MDN users and learned that many of the respondents  wanted a customized MDN experience. They wanted to organize MDN’s vast library in a way that worked for them. For today’s premium subscription service, MDN Plus, we are releasing three new features that begin to address this need: Notifications, Collections and MDN Offline. More details about the features are listed below:

  • Notifications: Technology is ever changing, and we know how important it is to stay on top of the latest updates and developments. From tutorial pages to API references, you can now get notifications for the latest developments on MDN. When you follow a page, you’ll get notified when the documentation changes, CSS features launch, and APIs ship. Now, you can get a notification for significant events relating to the pages you want to follow. Read more about it here.

Screenshot of a list of notifications on mdn plus

  • Collections: Find what you need fast with our new collections feature. Not only can you pick the MDN articles you want to save, we also automatically save the pages you visit frequently. Collections help you quickly access the articles that matter the most to you and your work. Read more about it here.

Screenshot of a collections list on mdn plus

  • MDN offline: Sometimes you need to access MDN but don’t have an internet connection. MDN offline leverages a Progressive Web Application (PWA) to give you access to MDN Web Docs even when you lack internet access so you can continue your work without any interruptions. Plus, with MDN offline you can have a faster experience while saving data. Read more about it here.

Screenshot of offline settings on mdn plus

Today, MDN Plus is available in the US and Canada. In the coming months, we will expand to other countries including France, Germany, Italy, Spain, Belgium, Austria, the Netherlands, Ireland, United Kingdom, Switzerland, Malaysia, New Zealand and Singapore. 

Find the right MDN Plus plan for you

MDN is part of the daily life of millions of web developers. For many of us, MDN helped with getting that first job or landing a promotion. During our research we found many users who got so much value from MDN that they wanted to contribute financially. We were both delighted and humbled by this feedback. To provide folks with a few options, we are launching MDN Plus with three plans, including a supporter plan for those that want to spend a little extra. Here are the details of those plans:

  • MDN Core: For those who want to do a test drive before purchasing a plan, we created an option that lets you try a limited version for free.  
  • MDN Plus 5:  Offers unlimited access to notifications, collections, and MDN offline with new features added all the time. $5 a month or an annual subscription of $50.
  • MDN Supporter 10:  For MDN’s loyal supporters the supporter plan gives you everything under MDN Plus 5 plus early access to new features and a direct feedback channel to  the MDN team. It’s $10 a month or $100 for an annual subscription.  

Additionally, we will offer a 20% discount if you subscribe to one of the annual subscription plans.

We invite you to try the free trial version or sign up today for a subscription plan that’s right for you. MDN Plus is only available in selected countries at this time.

 

The post Introducing MDN Plus: Make MDN your own appeared first on Mozilla Hacks - the Web developer blog.

Firefox Add-on ReviewsFirefox extensions for creatives

From designers to writers, multi-media producers and more—if you perform creative work on a computer there’s a good chance you can find a browser extension to improve your process. Here’s a mix of practical Firefox extensions for a wide spectrum of creative uses… 

Extensions for visual artists, animators & designers

Extended Color Management

Built in partnership between Mozilla and Industrial Light & Magic, this niche extension performs an invaluable function for animation teams working remotely. Extended Color Management calibrates colors on Firefox so animators working from different home computer systems (which might display colors differently based on their operating systems) can trust the whole team is looking at the same exact shades of color through Firefox. 

Like other browsers, Firefox by default utilizes color management (i.e. the optimization of color and brightness) from the distinct operating systems of the computers it runs on. The problem here for professional animators working remotely is they’re likely collaborating from different operating systems—and seeing slight but critically different variations in color rendering. Extended Color Management simply disables the default color management tools so animators with different operating systems are guaranteed to see the same versions of all colors, as rendered by Firefox. 

Measure-it

What a handy tool for designers and developers—Measure-it lets you draw a ruler across any web page to get precise dimensions in pixels. 

Access the ruler from a toolbar icon or keyboard shortcut. Other customization features include setting overlay colors, background opacity, and pop-up characteristics. 

Font Finder 

Every designer has seen a beautiful font in the wild and thought—I need that font for my next project! But how to track it down? Try Font Finder. 

Investigating your latest favorite font doesn’t require a major research project anymore. Font Finder gives you quick point-and-click access to: 

  • Typography analysis. Font Finder reveals all relevant typographical characteristics like color, spacing, alignment, and of course font name
  • Copy information. Any portion of the font analysis can be copied to a clipboard for easy pasting anywhere
  • Inline editing. All font characteristics (e.g. color, size, type) on an active element can be changed right there on the page
Search by Image

If you’re a designer who scours the web looking for images to use in your work, but gets bogged down researching aspects like intellectual property ownership or subject matter context, you might consider an image search extension like Search by Image

If you’re unfamiliar with the concept of image search, it works like text-based search, except your search starts with an image instead of a word or phrase. The Search by Image extension leverages the power of 30+ image search engines from the likes of Tineye, Google, Bing, Yandex, Getty Images, Pinterest, and others. This tool can be an incredible time saver when you can’t leave any guesswork to images you want to repurpose. 

<figcaption>Search by Image makes it simple to find the origins of almost any image you encounter on the web.</figcaption>

Extensions for writers

LanguageTool

It’s like having a copy editor with you wherever you write on the web. Language Tool – Grammar and Spell Checker will make you a better writer in 25+ languages.

More than just a spell checker, LanguageTool also…

  • Recognizes common misuses of similar sounding words (e.g. there/their, your/you’re)
  • Works on social media sites and email
  • Offers alternate phrasing and style suggestions for brevity and clarity
Dark Background and Light Text

If you spend all day (and maybe many nights) staring at a screen to scribe away, Dark Background and Light Text may ease some strain on your eyes. 

By default the extension flips the colors of every web page you visit, so your common light colored backgrounds become text colors and vice versa. But all color combinations are customizable, freeing you to adjust everything to taste. You can also set exceptions for certain websites that have a native look you prefer. 

Dictionary Anywhere

It’s annoying when you have to navigate away from a page just to check a word definition elsewhere. Dictionary Anywhere fixes that by giving you instant access to word definitions without leaving the page you’re on. 

Just double-click any word to get a pop-up definition right there on the page. Available in English, French, German, and Spanish. You can even save and download word definitions for later offline reference. 

<figcaption>Dictionary Anywhere — no more navigating away from a page just to get a word check.</figcaption>
LeechBlock NG

Concentration is key for productive writing. Block time-wasting websites with LeechBlock NG

This self-discipline aid lets you select websites that Firefox will restrict during time parameters you define—hours of the day, days of the week, or general time limits for specific sites. Even cooler, LeechBlock NG lets you block just portions of websites (for instance, you can allow yourself to see YouTube video pages but block YouTube’s homepage, which sucks you down a new rabbit hole every time!). 

Gyazo

If your writing involves a fair amount of research and cataloging content, consider Gyazo for a better way to organize all the stuff you clip and save on the web. 

Clip entire web pages or just certain elements, save images, take screenshots, mark them up with notes, and much more. Everything you clip is automatically saved to your Gyazo account, making it accessible across devices and collaborative teams. 

<figcaption>With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages.</figcaption>

We hope one of these extensions improves your creative output on Firefox! Explore more great media extensions on addons.mozilla.org

Mike TaylorChrome 100 Breakage Playbook

If you somehow found this blog post because you googled or binged “site not working Chrome 100”, well, congrats my SEO trap worked successfully.

Also, don’t panic.

The quickest way to test if your site is broken due to a 3-digit version parsing bug is to temporarily enable the chrome://flags/#force-major-version-to-minor flag and restart the browser. This will change the version that Chrome reports in the User-Agent string and header from 100.0.4896.45 (or whatever the real version number will be) to 99.100.4896.45. If the site works again, you know you have a UA string parsing bug. Congrats again!

(Also, test your site in Firefox Nightly - not all three digit parsing bugs will affect both Chromium browsers and Firefox, but it’s good to verify in case you need to fix your bugs in multiple places.)

At this point, please file a bug at crbug.com/new. That will automatically cc me. Or just feel free to tweet at me or email me. Swing by the house if you want, but we have dinner around 6pm. After dinner is better.

Or, best yet, just fix your site bugs without me being in the loop and I will be so proud of you.

a hand-drawn star that says good job

Mozilla Addons BlogA new API for submitting and updating add-ons

The addons.mozilla.org (AMO) external API has offered add-on developers the ability to submit new add-on versions for signing for a number of years, in addition to providing data about published add-ons, both directly and internally inside Firefox.

Current “signing” API

Currently, the signing API offers some functionality, but it’s limited – you can’t submit the first listed version of an add-on (extra metadata needs to be collected via the developer hub); you can’t edit existing submissions; you can’t submit or edit extra metadata about the add-on/version; and you can’t share the source code for an add-on when it’s needed to comply with our policies. For all of those tasks you need to use the forms on the appropriate developer hub web pages.

New Add-on “submission” API

The new add-on submission API aims to overcome these limitations and (eventually) allow developers to submit and manage all parts of their add-on via the API. It’s available now in our v5 API, and should be considered beta quality for now.

Submission Workflow

The submission workflow is split into uploading the file for validation, and then attaching the validated file either to a new add-on or as a new version of an existing add-on.

  1. The add-on file to be distributed is uploaded via the upload create endpoint, along with the channel, returning an upload uuid.
  2. The upload detail endpoint can be polled for validation status.
  3. Once the response has "valid": true, it can be used to create either a new add-on, or a new version of an existing add-on. Sources may be attached if required.
Uploading the add-on file

Regardless of if you are creating a new add-on or adding a new version to an existing add-on, you will need to upload the file for validation first. Here you will decide if the file will be associated with a public listing (listed), or will be self-hosted (unlisted). See our guide on signing and distribution for further details.

# Send a POST request to the upload create endpoint
# Pass addon.xpi as a file using multipart/form-data, along with the
# distribution channel.
curl -XPOST "https://addons.mozilla.org/api/v5/addons/upload/" \
  -H "Authorization: <JWT blob>" \
  -F "source=@addon.xpi" -F "channel=listed" 

The response will provide information on successful validation; if valid is set to true, you will be able to use the uuid in the next submission steps. The recommended polling interval is 5-10 seconds, making sure your code times out after a maximum of 10 minutes.

Creating a new add-on

When creating a new add-on, we require some initial metadata to describe what the add-on does, as well as some optional fields that will allow you to create an appealing listing. Make a request to the add-ons create endpoint to attach the uploaded file to a new add-on:

# Send a POST request to the add-ons create endpoint
# Include the add-on metadata as JSON.
curl -XPOST "https://addons.mozilla.org/api/v5/addons/addon/" \
  -H "Authorization: <JWT blob>" \
  -H "Content-Type: application/json" -d @- <<EOF
{
  "categories": {
    "firefox": ["bookmarks"]
  },
  "summary": {
    "en-US": “This add-on does great things!”
  },
  "version": {
    "upload": "<upload-uuid>",
    "license": "MPL-2.0"
  }
}
EOF

When submitting to the self-hosted channel, you can omit extra metadata such as categories, summary or license.

Adding a version to an existing add-on

If instead you are adding a version to an existing add-on, the metadata has already been provided in the initial submission. The following request can be made to attach the version to the add-on:

# Send a POST request to the versions create endpoint.
# Include the upload uuid from the previous add-on upload
curl -XPOST "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/versions/" \
  -H "Authorization: <JWT blob>" -H "Content-Type: application/json" \
  -d '{ "upload": <upload-uuid> }'

Updating existing add-on or version metadata

Metadata on any existing add-ons or versions can be updated, regardless of how they have been initially submitted. To do so, you can use the add-on edit or version edit endpoints. For example:

# Send a PATCH request to the add-ons edit endpoint
# Set the slug and tags as JSON data.
curl -XPATCH "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/" \ \
  -H "Authorization: <JWT blob>" -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "slug": "new-slug",
  "tags": ["chat", "music"]
}
EOF

Providing Source code

When an add-on/version submission requires source code to be submitted, it can either be uploaded while creating the version, or as an update to an existing version. Files are always uploaded as multipart/form-data rather than JSON, so setting source can’t be combined with fields that require complex JSON data.

# Send a PATCH request to the version edit endpoint
# Pass source.zip as a file using multipart/form-data, along with the license field.
curl -XPATCH "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/versions/<version-id>/"  \
  -H "Authorization: <JWT blob>" \
  -F "source=@source.zip" -F "license=MPL-2.0"

You may also provide the source code as part of adding a version to an existing add-on. Fields such as compatibility, release_notes or custom_license can’t be set at the same time because complex data structures (lists and objects) can only be sent as JSON.

# Send a POST request to the version create endpoint
# Pass source.zip as a file using multipart/form-data,
# along with the upload field set to the uuid from the previous add-on upload.
curl -XPOST "https://addons.mozilla.org/api/v5/addons/addon/<add-on id>/versions/" \
  -H "Authorization: <JWT blob>" \
  -F "source=@source.zip" -F "upload=500867eb-0fe9-47cc-8b4b-4645377136b3"

 

Future work and known limitations

There may be bugs – if you find any please file an issue! – and the work is still in progress, so there are some known limitations where not all add-on/version metadata that is editable via developer hub can be changed yet, such as adding/removing add-on developers, or uploading icons and screenshots.

Right now the web-ext tool (or sign-addon) doesn’t use the new submission API (they use the signing api); updating those tools is next on the roadmap.

Longer term we aim to replace the existing developer hub and create a new webapp that will use the add-on submission apis directly, and also deprecate the existing signing api, leaving a single method of uploading and managing all add-ons on addons.mozilla.org.

The post A new API for submitting and updating add-ons appeared first on Mozilla Add-ons Community Blog.

Mozilla Performance Blog: Performance Sheriff Newsletter (February 2022)

In February there were 122 alerts generated, resulting in 19 regression bugs being filed on average 4.3 days after the regressing change landed.

Welcome to the February 2022 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by a review of the year. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1 day
  • 94% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 2.5 days
  • 100% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (February 2022)

 

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for February can be found here (for those with access).

Hacks.Mozilla.Org: Mozilla and Open Web Docs working together on MDN

For both MDN and Open Web Docs (OWD), transparency is paramount to our missions. With the upcoming launch of MDN Plus, we believe it’s a good time to talk about how our two organizations work together and if there is a financial relationship between us. Here is an overview of how our missions overlap, how they differ, and how a premium subscription service fits all this.

History of our collaboration

MDN and Open Web Docs began working together after the creation of Open Web Docs in 2021. Our organizations were born out of the same ethos, and we constantly collaborate on MDN content, contributing to different parts of MDN and even teaming up for shared projects like the conversion to Markdown. We meet on a weekly basis to discuss content strategies and maintain an open dialogue on our respective roadmaps.

MDN and Open Web Docs are different organizations; while our missions and goals frequently overlap, our work is not identical. Open Web Docs is an open collective, with a mission to contribute content to open source projects that are considered important for the future of the Web. MDN is currently the most significant project that Open Web Docs contributes to.

Separate funding streams, division of labor

Mozilla and Open Web Docs collaborate closely on sustaining the Web Docs part of MDN. The Web Docs part is and will remain free and accessible to all. Each organization shoulders part of the costs of this labor, from our distinct budgets and revenue sources.

  • Mozilla covers the cost of infrastructure, development and maintenance of the MDN platform including a team of engineers and its own team of dedicated writers.
  • Open Web Docs receives donations from companies like Google, Microsoft, Meta, Coil and others, and from private individuals. These donations pay for Technical Writing staff and help finance Open Web Docs projects. None of the donations that Open Web Docs receives go to MDN or Mozilla; rather they pay for a team of writers to contribute to MDN.

Transparency and dialogue but independent decision-making

Mozilla and OWD have an open dialogue on content related to MDN. Mozilla sits on the Open Web Docs’ Steering Committee, sharing expertise and experience but does not currently sit on the Open Web Docs’ Governing Committee. Mozilla does not provide direct financial support to Open Web Docs and does not participate in making decisions about Open Web Docs’ overall direction, objectives, hiring and budgeting.

MDN Plus: How does it fit into the big picture?

MDN Plus is a new premium subscription service by Mozilla that allows users to customize their MDN experience. 

As with so much of our work, our organizations engaged in a transparent dialogue regarding MDN Plus. When requested, Open Web Docs has provided Mozilla with feedback, but it has not been a part of the development of MDN Plus. The resources Open Web Docs has are used only to improve the free offering of MDN. 

The existence of a new subscription model will not detract from MDN’s current free Web Docs offering in any way. The current experience of accessing web documentation will not change for users who do not wish to sign up for a premium subscription. 

Mozilla’s goal with MDN Plus is to help ensure that MDN’s open source content continues to be supported into the future. While Mozilla has incorporated its partners’ feedback into their vision for the product, MDN Plus has been built only with Mozilla resources. Any revenue generated by MDN Plus will stay within Mozilla. Mozilla is looking into ways to reinvest some of these additional funds into open source projects contributing to MDN but it is still in early stages.

A subscription to MDN Plus gives paying subscribers extra MDN features provided by Mozilla while a donation to Open Web Docs goes to funding writers creating content on MDN Web Docs, and potentially elsewhere. Work produced via OWD will always be publicly available and accessible to all. 

Open Web Docs and Mozilla will continue to work closely together on MDN for the best possible web platform documentation for everyone!

Thanks for your continuing feedback and support.

 

 

The post Mozilla and Open Web Docs working together on MDN appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Performance Blog: Performance Sheriff Newsletter (January 2022)

In January there were 161 alerts generated, resulting in 20 regression bugs being filed on average 13.4 days after the regressing change landed.

Welcome to the January 2022 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by a review of the year. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 0.7 days
  • 99% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 8.8 days
  • 81% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (January 2022)

As you can see there has been a huge increase in the average number of days before valid regressions were associated with bugs. We have identified a number of regression alerts from January that were handled incorrectly. For most of these, a comment was left on the culprit bug instead of a new regression bug being opened. We have been taking corrective measures, including revisiting all recent regression alerts and reviewing our sheriffing workflow and training material. We’re also exploring ways to improve our tools to reduce the risk of this recurring, and looking into how we can detect such issues more expediently in the future.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for January can be found here (for those with access).

Wladimir Palant: Party time: Injecting code into Teleparty extension

Teleparty, formerly called Netflix Party, is a wildly popular browser extension with at least 10 million users on Google Chrome (likely many more, since the Chrome Web Store displays anything beyond 10 million as “10,000,000+”) and 1 million users on Microsoft Edge. It lets people from different locations join a video viewing session, watching a movie together and also chatting while at it. A really nifty extension actually, particularly in times of a pandemic.

Screenshot of the extension’s welcome page, asking you to choose the streaming services you have an account with. The available choices include Netflix, Hulu and Disney+.

While this extension’s functionality shouldn’t normally be prone to security vulnerabilities, I realized that websites could inject arbitrary code into its content scripts, largely thanks to using an outdated version of the jQuery library. Luckily, the internal messaging of this extension didn’t allow for much mischief. I found some additional minor security issues in the extension as well.

The thing with jQuery

My expectation with an extension like Teleparty would be: worst-case scenario is opening up vulnerabilities in websites that the extension interacts with, exposing these websites to attacks. That changed when I realized that the extension used jQuery 2.1.4 to render its user interface. This turned all of the extension into potentially accessible attack surface.

When jQuery processes HTML code, it goes beyond what Element.innerHTML does. The latter essentially ignores <script> tags: the code contained there doesn’t execute. To compensate for that, jQuery extracts the code from <script> tags and passes it to jQuery.globalEval(). And while in current jQuery versions jQuery.globalEval() will create an inline script in the document, in older versions like jQuery 2.1.4 it’s merely an alias for the usual eval() function.
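
To illustrate, here is a minimal sketch, assuming a page that has jQuery 2.1.4 loaded and some existing #container element (both details are made up):

// Sketch only, assuming jQuery 2.1.4 is loaded and #container exists.
// When jQuery inserts this HTML it extracts the <script> contents and runs
// them through jQuery.globalEval(), which in 2.x boils down to eval().
jQuery("#container").append(
  '<div><script>console.log("executed via eval() in jQuery 2.1.4")<\/script></div>'
);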

And that makes a huge difference. The Content Security Policy of the Teleparty extension pages didn’t allow inline scripts, yet it contained 'unsafe-eval' keyword for some reason, so eval() calls would be allowed. And while this Content Security Policy doesn’t apply to content scripts, inline scripts created by content scripts execute in page context – yet eval() calls execute code in the context of the content script itself.

Finding an HTML injection point

Now Teleparty developers clearly aren’t clueless about the dangers of working with jQuery. It’s visible that they largely avoided passing dynamic data to jQuery. In cases where they still did it, they mostly used safe function calls. Only a few places in the extension actually produce HTML code dynamically, and the developers took considerable care to escape HTML entities in any potentially dangerous data.

They almost succeeded. I couldn’t find any exploitable issues with the extension pages. And the content scripts only turned out exploitable because of another non-obvious jQuery feature that the developers probably weren’t even aware of. The problem was the way the content scripts added messages to the chat next to the viewed video:

getMessageElementWithNickname(userIconUrl, userNickname, message)
{
  return jQuery(`
    <div class="msg-container">
      <div class="icon-name">
        <div class="icon">
          <img src="${escapeStr(userIconUrl)}">
        </div>
      </div>
      <div class="msg-txt message${message.isSystemMessage ? "-system" : "-txt"}">
        <h3>${userNickname}</h3>
        <p>${message.body}</p>
      </div>
    </div>
  `);
}

You can see that HTML entities are explicitly escaped for the user icon but not the nickname or message body. These are escaped by the caller of this function however:

addMessage(message, checkIcons) {
  ...
  const userIcon = this.getUserIconURL(message.permId, message.userIcon)
  const userNickname = this.getUserNickname(message.permId, message.userNickname);
  message.body = escapeStr(message.body);
  const messageElement =
      this.getMessageElementWithNickname(userIcon, userNickname, message);
  this._addMessageToHistory(messageElement, message, userIcon, userNickname);
  ...
}

Actually, for the nickname you’d have to look into the getUserNickname() method but it does in fact escape HTML entities. So it’s all safe here. Except that there is another caller, method _refreshMsgContainer() that is called to update existing messages whenever a user changed their name:

_refreshMsgContainer(msgContainer) {
  const permId = msgContainer.data("permId");
  ...
  const userNickname = this.getUserNickname(permId);
  if (userNickname !== msgContainer.data("userNickname"))
  {
    const message = msgContainer.data("message"),
        userIcon = this.getUserIconURL(permId),
        nicknameMessage =
            this.getMessageElementWithNickname(userIcon, userNickname, message);
    msgContainer.replaceWith(nicknameMessage);
    ...
  }
}

Note how jQuery.data() is used to retrieve the nickname and message associated with this element. This data was previously attached by _addMessageToHistory() method after HTML entities have been escaped. No way for the website to mess with this data either as it is stored in the content script’s “isolated world.”

Except, if jQuery.data() doesn’t find any data attached there is a convenient fallback. What does it fall back to? HTML attributes of course! So a malicious website merely needs to produce its own fake message with the right attributes. And make sure Teleparty tries to refresh that message:

<div class="msg"
     data-perm-id="rand"
     data-user-nickname="hi"
     data-message='{"body":"<script>alert(chrome.runtime.id)</script>"}'
     data-user-icon="any.svg">
</div>

Note that jQuery will parse JSON data from these attributes. That’s very convenient as the only value usable to inject malicious data is message, and it needs a message.body property.
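
To make the fallback concrete, this is roughly what the content script would see for such a fake element (a sketch, not actual Teleparty code):

// Sketch of jQuery's data() fallback for the fake element above
// (illustration only, not actual Teleparty code).
const msgContainer = jQuery("div.msg");
// Nothing was attached via .data() by the content script, so jQuery falls
// back to the element's data-* attributes and JSON-parses them where possible:
msgContainer.data("permId");       // "rand"
msgContainer.data("userNickname"); // "hi"
msgContainer.data("message");      // { body: "<script>alert(chrome.runtime.id)</script>" }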

Making sure the payload fires

Now it isn’t that easy to make Teleparty call _refreshMsgContainer() on our malicious message. There has to be an active Teleparty session on the page first. Luckily, Teleparty isn’t very picky as to what websites are considered streaming sites. For example, any website with .amazon. in the host name and a <video> tag inside a container with a particular class name is considered Amazon Prime Video. Easy, we can run this attack from www.amazon.malicious.com!

Still, a Teleparty session is required. So a malicious website could trick the user into clicking the extension icon and starting a session in the pop-up.

Extension bubble opening on icon click, featuring a prominent 'Start the party' button.

Probably doable with a little social engineering. But why ask users to start a session, potentially rendering them suspicious, when we can have them join an existing session? For that they need to go to https://redirect.teleparty.com/join/0123456789abcdef where 0123456789abcdef is the session identifier and click the “Join the Party” button. This website has no HTTP headers to prevent being loaded in a frame, so it seems to be a perfect target for a Clickjacking attack.

Except that there is a bug in the way the extension integrates with this page, and the communication fails if it isn’t the top-level document. No, this clearly isn’t intentional, but it means no clickjacking for you. But rather:

  1. The malicious website creates a Teleparty session (requires communicating via WebSockets, no big deal).
  2. It then opens https://redirect.teleparty.com/join/0123456789abcdef with the correct session ID, asking the user to join (social engineering after all).
  3. If the user clicks “Join the Party,” they will be redirected back to the malicious page.
  4. Teleparty initiates a session: Boom.
An alert message originating from www.amazon.malicious.com displays the ID of the Teleparty extension

One point here needs additional clarification: the malicious website isn’t Amazon Prime Video, so how come Teleparty redirected to it? That’s actually an Open Redirect vulnerability. With Amazon (unlike the other streaming services) having 21 different domains, Teleparty developers decided to pass a serviceDomain parameter during session creation. And with this parameter not being checked at all, a malicious session could redirect the user anywhere.

The impact

While the background page of the Teleparty extension usually has access to all websites, its content scripts do not. In addition to being able to access their webpage (which the attackers control anyway) they can only access content script data (meaning only tab ID here) and use the extension’s internal messaging. In case of Teleparty, the internal messaging mostly allows messing with chats which isn’t too exciting.

The only message which seems to have significant impact is reInject. Its purpose is injecting content scripts into a given tab, and it will essentially call chrome.tabs.executeScript() with the script URL from the message. And this would have been pretty bad if not for an additional security mechanism implemented by the browsers: only URLs pointing to files from the extension are allowed.

And so the impact here is limited to things like attempting to create Teleparty sessions for all open tabs, in the hopes that the responses will reveal some sensitive data about the user.

Additional issues

Teleparty earns money by displaying ads that it receives from Kevel a.k.a. adzerk.net. Each advertisement has a URL associated with it that will be navigated to on click. Teleparty doesn’t perform any validation here, meaning that javascript: URLs are allowed. So a malicious ad could run JavaScript code in the context of the page that Teleparty runs in, such as Netflix.

It’s also generally a suboptimal design solution that the Teleparty chat is injected directly into the webpage rather than being isolated in an extension frame. This means that your streaming service can see the name you are using in the chat or even change it. They could also read out all the messages you exchange with your friends or send their own in your name. But we all trust Netflix, don’t we?

The fix

After I reported the issue, Teleparty quickly changed their server-side implementation to allow only actual Amazon domains as serviceDomain, thus resolving the Open Redirect vulnerability. Also, in Teleparty 3.2.5 the use of jQuery.data() was replaced by usual expando properties, fixing the code injection issue. As an additional precaution, 'unsafe-eval' was removed from the extension’s Content Security Policy.
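
A sketch of why that change closes the hole (the property name below is made up, not Teleparty’s actual code):

// Sketch only, the property name is made up. An expando property lives on the
// content script's isolated-world view of the element, so a fake element
// created by the website starts out with nothing attached, and there is no
// fallback to attacker-controlled data-* attributes.
function storeMessageData(element, data) {
  element._telepartyMessage = data;
}
function readMessageData(element) {
  return element._telepartyMessage; // undefined for elements the script didn't set up
}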

At the time of writing, Teleparty still uses the outdated jQuery 2.1.4 library. The issues listed under Additional issues haven’t been addressed either.

Timeline

  • 2022-01-24: Reported vulnerability via email.
  • 2022-01-25: Reached out to a staff member via Discord server: the email got sorted into spam as I suspected.
  • 2022-01-26: Received a response via email stating that the Open Redirect vulnerability is resolved and a new extension version is about to be released.

The Talospace Project: Firefox 98 on POWER

Firefox 98 is released, with a new faster downloads flow (very welcome), better event debugging, and several pre-release HTML features that are now official. One thing that hasn't gotten a lot of airplay is that navigator.registerProtocolHandler() now allows registration for the FTP family of protocols. I already use this for OverbiteWX and OverbiteNX to restore Gopher support in Firefox; I look forward to someone bolting FTP support back on in the future. It builds out of the box on OpenPOWER using the .mozconfigs and LTO-PGO patch from Firefox 95.

On the JIT front, the Ion-enabled (third stage compiler) OpenPOWER JIT gets about two-thirds of the way through the JIT conformance test suite. Right now I'm investigating an Ion crash in the FASTA portion of SunSpider which I can't yet determine to be either an i-cache problem or a bad jump (the OpenPOWER Baseline Compiler naturally runs it fine). We need to make Firefox 102 before it merges to beta on May 26 to ride the trains and get the JIT into the next Extended Support Release; this is also important for Thunderbird, which, speaking as a heavy user of it, probably needs JIT acceleration even more than Firefox. This timeframe is not impossible and it'll get finished "sometime", but making 102 is going to be a little tight with what needs doing. The biggest need is for people to help smoke out those last failures and find fixes. You can help.

Spidermonkey Development Blog: SpiderMonkey Newsletter (Firefox 98-99)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 98 and 99 Nightly release cycles.

👷🏽‍♀️ JS features

⚡ WASM features

  • We landed more changes adding support for AVX2 instructions.
  • Relaxed SIMD is now enabled by default in Firefox Nightly builds.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve web-browsing performance, simplify a lot of code and improve bytecode caching.

  • We landed more changes to simplify and optimize the JS string and atom code after we completed the switch to Stencil.
  • We added a mechanism to allow delazifying functions off-thread based on the Stencil.

🚿DOM Streams

We’re moving our implementation of the Streams specification out of SpiderMonkey into the DOM. This lets us take advantage of Gecko’s WebIDL machinery, making it much easier for us to implement this complex specification in a standards-compliant way and stay up-to-date.

  • We’ve switched Firefox to use the DOM implementation of ReadableStream.
  • We’ve removed the incomplete implementation of WritableStream and pipeTo in SpiderMonkey, because we’ll implement these features outside the JS engine too.

🚀 JIT optimizations

  • Contributors from Loongson landed a new JIT/Wasm backend for LoongArch64.
  • We added a new property caching mechanism to optimize megamorphic property lookups from JIT code better. This improves performance for frameworks like React.
  • We improved CacheIR optimization support for null/undefined/bool values for unary and binary arithmetic operators.
  • We reimplemented Array.prototype.indexOf (and lastIndexOf, includes) in C++.

🏎️ Performance

  • We optimized the representation of Wasm exceptions and Wasm tag objects.
  • We reverted a number of Wasm call_indirect changes after we discovered various problems with it and then landed a simpler optimization for it.
  • We improved heuristics for nursery collection to shrink the nursery if collections take a long time.
  • We removed more unnecessary checks for permanent atoms from the string marking code.
  • We now trigger major GCs during idle time if we are nearing a memory usage threshold, to avoid forcing a later GC at a bad time when we hit the actual threshold.
  • We optimized certain Firefox DevTools operations with a new debugger API.

📚 Miscellaneous

  • We fixed a memory leak involving FinalizationRegistry that affected certain websites.
  • We improved the rooting hazard static analysis to avoid a class of false positives involving reference counted values.
  • We switched the atomic operation intrinsics to inline assembly. This allowed us to add a mechanism to disable the JIT backend completely in certain Firefox processes, which let us improve the sandbox.

Firefox Nightly: These Weeks In Firefox: Issue 111

Highlights

  • The Firefox Profiler team has added a dynamic language switcher! (#3905, #3910)
Gif of dynamic language switcher in Firefox Profiler

Switching languages in Firefox Profiler has never been easier!

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug
  • Claudia Batista [:claubatista]
  • Shane Hughes [:aminomancer]
  • Zachary Svoboda :zacnomore
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  •  As part of the ongoing Manifest Version 3 work:
    • new “scripting” WebExtensions API namespace:
    • More of the ongoing work related to Event Pages (non-persistent background pages) and persistent API Events listeners landed in Firefox 99 – Bug 1748550, Bug 1755589
  • Internal refactoring to consolidate addons’ computed isPrivileged property into a single place (blocker for other internal refactorings, e.g. moving builtin themes localization to fluent, tracked by Bug 1733466) – Bug 1734987

Developer Tools

  • Compatibility Panel: The browser data is now stored in RemoteSettings (bug). We’re working on having it synced automatically from @mdn/browser-compat-data to give our users the freshest data we can.
  • Debugger: We are expanding our test coverage and taking this opportunity to make the tests better overall (added eslint, more logs, cleanup helpers, …)

Lint, Docs and Workflow

macOS Spotlight

Picture-in-Picture

Performance Tools (aka Firefox Profiler)

  • Share the libs object between threads — this reduces the size of profiles
  • Add the number of operations to the memory track’s tooltip (#3915)
Image of Firefox Profiler's memory track tooltip displaying number of operations made

The memory track tooltip now displays the number of operations made in Firefox Profiler

  • Improved lock contention when capturing cross-thread markers (Bug 1755823)
  • Starting/stopping/pausing/resuming the profiler now returns a promise, and we use this in tests; this should lead to fewer intermittent failures (Bug 1668867)
  • Reminder: Joy Of Profiling matrix channel and meetings (every Monday): come and share your profiles!

Search and Navigation

  • Best Match feature MVP:
    • Drew fixed the Top Pick preference being visible in about:preferences even if the feature is disabled – Bug 1756162
    • Drew temporarily disabled the blocking option for MVP – Bug 1757488
    • Drew implemented logic to decide whether Quick Suggest results are good candidates for Best Match – Bug 1752604
    • Drew fixed the SuMo URL – Bug 1757622
    • Drew fixed Nimbus and contextual services to better keep track of Best Match results – Bug 1757658, Bug 1754622
    • Daisuke made it so blocked suggestions are no longer shown – Bug 1754595
    • Daisuke fixed the Preferences UI for Best Match – Bug 1754634, Bug 1756917
  • Mark fixed search engine configuration writes, which were no longer atomic – Bug 1758014

Firefox Add-on Reviews: The pandemic changed everything — even the way we use browser extensions

On March 11, 2020 the World Health Organization declared COVID-19 a global pandemic. Within days, practically the entire planet was on lockdown. We went indoors and online. 

So how did the sudden mass migration online impact browser extension usage? Pretty dramatically, it turns out. On this two-year mark of the start of the pandemic we looked back at Firefox extension installs and usage data to discover several compelling trends.  

We wanted to see the types of extensions Firefox users were drawn to during the early days of the lockdown, so we compared average monthly installs for three months at the start of the lockdown (March – May ‘20) to average monthly installs for the three months prior (Dec. ‘19 – Feb. ‘20). For this exercise we only looked at Firefox extensions with a minimum of 10,000 users. Here are some things we found… 

We need all the help we can get working and educating from home 

As much of the world suddenly transitioned their work and schooling to home computers in March 2020, Firefox users flocked to a handful of notable extensions to make life a little easier.

Which extension got the biggest install boost during the first few months of lockdown?

Zoom Scheduler

Of course it’s a Zoom extension. Zoom Scheduler installs increased 1,522%. 

Created by Zoom, their extension integrates Google Calendar with the Zoom app so you can conveniently schedule or start Zoom meetings directly from your Google Calendar on Firefox. 

Dark Background and Light Text

When you’re suddenly doing everything on a computer, you need to take care of those precious peepers. Dark Background and Light Text installs jumped an eye-popping 351%. 

By default the extension flips the colors of every web page you visit, so your common light colored backgrounds become text colors and vice versa. But all color combinations are customizable, freeing you to adjust everything to taste. You can also set exceptions for certain websites that have a native look you prefer. 

Tree Style Tab

Apparently we suffered from too many open tabs at the start of the pandemic (work tabs! school tabs! breaking news!). Tree Style Tab (+126%) gives Firefox users a great way to cope with tab overload.  

The extension helps you organize all of your open tabs into a cascading “tree” format, so you can group tabs by topic and get a clean visual layout of everything. 

To Google Translate

This translation tool was already very popular when the lockdown started, so it’s curious its install rate still climbed a whopping 126%, going from 222,000 installs/month to more than 504,000. 

To Google Translate provides easy right-click mouse access to the Google Translate service, eliminating the nuisance of copying text and navigating away from the page you’re on just to translate. 

We can only speculate why Firefox users wanted translation extensions when the pandemic started (To Google Translate wasn’t an aberration; all of the top translation extensions had install increases), but it’s worth wondering if a big factor wasn’t a common desire to get broader perspectives, news and information about the emerging virus. Perhaps Firefox users who sought out international news coverage would explain the increased appetite for translation extensions? 

To Google Translate had particularly impressive install gains in China (+164%), the U.S. (+134%), France (+101%), Russia (+76%), and Germany (+75%).

We started taking our digital privacy more seriously

Privacy extensions are consistently the most popular type of Firefox add-on. Even so, the pandemic pushed a few notable extensions to new heights. 

Cookie AutoDelete

Already averaging an impressive 42,000 monthly installs before the lockdown, Cookie AutoDelete skyrocketed 386% to averaging more than 206,000 installs/month between March – May 2020. 

The extension automatically eliminates any unused cookies whenever you close a tab, unless you specify sites you trust and wish to maintain cookie contact.

Facebook Container

Naturally a lot of people spent more time on the world’s largest social media platform to stay connected during lockdown. But many folks also want to enjoy this sense of connectedness without Facebook following them around the internet. So it makes sense Mozilla’s very own Facebook Container was among the most popular extensions at the start of the lockdown—installs climbed 211%. 

The extension isolates your Facebook identity into a separate “container” so Facebook can’t track your moves around the web. Indeed the social media giant wants to learn everything it can about your web habits outside of Facebook. 

Privacy Badger

No sophisticated setup required. Just install Privacy Badger and it will silently work in the background to block some of the web’s sneakiest trackers. Privacy Badger actually gets better at its job the longer you have it installed; it “learns” more about hidden trackers the more you naturally encounter them navigating the web. 

Privacy Badger installs leapt 80% globally during those first few months of lockdown, with particularly keen interest from Italy (+135%) and Brazil (+119%).

We found ways to stay connected, entertained and inspired

It wasn’t all work and no play online during the dreadful early days of the lockdown. 

BetterTTV

Installs of this top Twitch extension were up 46% as we turned to each other for live streaming entertainment. BetterTTV can radically alter the look and feel of Twitch with new emoticons, a more focused interface, content filters, and a reimagined chat experience (including Anonymous Chat so you can join a channel without drawing attention). 

BetterTTV was particularly popular in Germany, where installs soared 76%. 

Watch2gether extension

A lot of people became “watch party” animals during lockdown. If you haven’t tried social streaming, it’s a fun way to enjoy synced videos while chatting with friends online. Watch2gether extension became a popular choice for social stream parties (+82%). 

You don’t need the extension to use the web-based Watch2gether platform, but the extension provides a few added perks when used in conjunction with the web service, such as easy browser access to your watch rooms and the ability to stream videos that aren’t directly supported by the Watch2gether website (e.g. the video source doesn’t offer an embeddable version).

YouTube Non-Stop

A 45% install increase means we started listening to a lot more music on YouTube when the lockdown hit. YouTube Non-Stop solves the problem of that annoying “Video paused. Continue watching?” prompt by automatically clicking it in the background so your groove never comes to a grinding halt. 

Two years into this pandemic, our day-to-day lives — and how we rely on browsers — have permanently shifted. As we continue to adjust to new life and work routines, these incredible extensions are as useful as ever. If you want to explore more, please visit addons.mozilla.org to browse thousands of Firefox extensions. 

Data@Mozilla: Documenting outages to seek transparency and accountability

Mozilla Opens Access to Dataset on Network Outages

The internet doesn’t just have a simple on/off switch — rather, there are endless ways connectivity can be ruptured or impaired, both intentionally (cyber attacks) and unintentionally (weather events). While a difficult task, knowing more about how connectivity is affected and where can help us better understand the outages of today, as well as who (or what) is behind them to prevent them in the future.

Today, Mozilla is opening access to an anonymous telemetry dataset that will enable researchers to explore signals of network outages around the world. The aim of the release is to create more transparency around outages, a key step towards achieving accountability for a more open and resilient internet for all. We believe this data, which is anonymized and aggregated to ensure user privacy, will be valuable to a range of actors, from technical communities working on network resilience to digital rights advocates documenting internet outages.

While a number of outage measurements rely on hardware installations or require people experiencing outages to initiate their own measurements, Mozilla’s data originates from everyday use of Firefox browsers around the world, essentially creating a timeline of both regular and irregular connectivity patterns across large populations of internet users. In practice, this means that when significant numbers of Firefox clients experience connection failures for any reason, this registers in Mozilla’s telemetry once a connection is restored. At a country or city level, this can provide indications of whether an outage occurred.

In addition to being able to see city-specific outages, Mozilla’s dataset also offers a comparatively high degree of technical granularity which allows researchers to isolate different types of connectivity issues in a given time frame. Because outages are often shrouded in secrecy, researchers can sometimes only estimate the exact nature of a local outage. Combined with other data sources, for instance from companies like Google and Cloudflare, Mozilla’s dataset will be a valuable source to corroborate reports of outages.

Whenever internet connections are cut, the safety, security and health of millions of people may be at stake. Documenting outages is an important step in seeking transparency and accountability, particularly in contexts of uncertainty or insecurity around recent events.

“Mozilla is excited to make our relevant telemetry data available to researchers around the world to aid efforts toward transparency and accountability. Internet outages can be hard to measure and it is very fortunate that there is a dedicated international community that is focused on this crucial task. We look forward to interesting ways in which the community will use this anonymous dataset to help keep the internet an open, global public resource,” says Daniel McKinley, VP, Data Science and Analytics at Mozilla.

Over the course of 2020 and 2021, researchers from Internet Outage Detection and Analysis (IODA) of the Center for Applied Internet Data Analysis (CAIDA), Open Observatory of Network Interference (OONI), RIPE Network Coordination Center (RIPE NCC), Measurement Lab (M-Lab), Internews and Access Now joined a collaborative effort to compare existing data on outages with Mozilla’s dataset. Their feedback has uniformly stated that this data would be helpful to the internet outage measurement community in critical work across the world.

“We are thrilled that Mozilla’s dataset on outages is being published. Our own analysis of the data demonstrated that it is a valuable resource for investigating Internet outages worldwide, complimenting other public datasets. Unlike other datasets, it provides geographical granularity with novel insights and new research opportunities. We are confident that it will serve as an extremely valuable resource for researchers, human rights advocates, and the broader Internet freedom community,” says Maria Xynou, the Research and Partnerships Director of OONI.

In order to gain access to the dataset, which is licensed under the Creative Commons Public Domain license (CC0) and contains data from January 2020 onward, researchers can apply via this Google Form, after which Mozilla representatives will reach out with next steps. More information and background on the project and the dataset can be found on the Mozilla Wiki.

We look forward to seeing the exciting work that internet outage researchers will produce with this dataset and hope to inspire more use of aggregated datasets for public good.

This post was co-authored by Solana Larsen, Alessio Placitelli, Udbhav Tiwari.

The Rust Programming Language Blog: Security advisory for the regex crate (CVE-2022-24713)

This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG was notified that the regex crate did not properly limit the complexity of the regular expressions (regex) it parses. An attacker could use this security issue to perform a denial of service, by sending a specially crafted regex to a service accepting untrusted regexes. No known vulnerability is present when parsing untrusted input with trusted regexes.

This issue has been assigned CVE-2022-24713. The severity of this vulnerability is "high" when the regex crate is used to parse untrusted regexes. Other uses of the regex crate are not affected by this vulnerability.

Overview

The regex crate features built-in mitigations to prevent denial of service attacks caused by untrusted regexes, or untrusted input matched by trusted regexes. Those (tunable) mitigations already provide sane defaults to prevent attacks. This guarantee is documented and it's considered part of the crate's API.

Unfortunately a bug was discovered in the mitigations designed to prevent untrusted regexes from taking an arbitrary amount of time during parsing, and it's possible to craft regexes that bypass such mitigations. This makes it possible to perform denial of service attacks by sending specially crafted regexes to services accepting user-controlled, untrusted regexes.

Affected versions

All versions of the regex crate before or equal to 1.5.4 are affected by this issue. The fix is included starting from regex 1.5.5.

Mitigations

We recommend that everyone accepting user-controlled regexes upgrade immediately to the latest version of the regex crate.

Unfortunately there is no fixed set of problematic regexes, as there are practically infinite regexes that could be crafted to exploit this vulnerability. Because of this, we do not recommend denying known problematic regexes.

Acknowledgements

We want to thank Addison Crump for responsibly disclosing this to us according to the Rust security policy, and for helping review the fix.

We also want to thank Andrew Gallant for developing the fix, and Pietro Albini for coordinating the disclosure and writing this advisory.

Jan-Erik Rediger: Four-year Moziversary

It's my fourth Moziversary. It's been 4 years (and three days) now since I joined Mozilla as a Firefox Telemetry Engineer in March 2018. I've blogged about this three times already: 2019, 2020, 2021.

The past year continued to be challenging. Except for a brief 3-week period the Berlin office stayed closed, so we all continue to work from home. I haven't met (most of) my team mates in person since 2020. I hope that in 2022 I will have the chance to meet some of them again, maybe even all at once.

I already spent some time on looking back on most of the work that happened on the Glean project last year in a This Week in Glean post, so no need to reiterate that.

For 2022 Glean will be about stabilizing, some new features and more widespread adoption across our products. I'm still excited to continue that work. We will see what else I pick up along the way.

Thank you

Thanks to my team mates Alessio, Bea, Chris, Travis, and Mike, and also thanks to the bigger data engineering team within Mozilla. And thanks to all the other people at Mozilla I work with.

Hacks.Mozilla.Org: Announcing Interop 2022

A key benefit of the web platform is that it’s defined by standards, rather than by the code of a single implementation. This creates a shared platform that isn’t tied to specific hardware, a company, or a business model.

Writing high quality standards is a necessary first step to an interoperable web platform, but ensuring that browsers are consistent in their behavior requires an ongoing process. Browsers must work to ensure that they have a shared understanding of web standards, and that their implementation matches that understanding.

Interop 2022

Interop 2022 is a cross-browser initiative to find and address the most important interoperability pain points on the web platform. The end result is a public metric that will assess progress toward fixing these interoperability issues.

Interop 2022 scores. Chrome/Edge 71, Firefox 74, and Safari 73.

In order to identify the areas to include, we looked at two primary sources of data:

  • Web developer feedback (e.g., through developer facing surveys including MDN’s Web DNA Report) on the most common pain points they experience.
  • End user bug reports (e.g., via webcompat.com) that could be traced back to implementation differences between browsers.

During the process of collecting this data, it became clear there are two principal kinds of interoperability problems which affect end users and developers:

  • Problems where there’s a relatively clear and widely accepted standard, but where implementations are incomplete or buggy.
  • Problems where the standard is missing, unclear, or doesn’t match the behavior sites depend on.

Problems of the first kind have been termed “focus areas”. For these we use web-platform-tests: a large, shared testsuite that aims to ensure web standards are implemented consistently across browsers. It accepts contributions from anyone, and browsers, including Firefox, contribute tests as part of their process for fixing bugs and shipping new features.

The path to improvement for these areas is clear: identify or write tests in web-platform-tests that measure conformance to the relevant standard, and update implementations so that they pass those tests.

Problems of the second kind have been termed “investigate areas”. For these it’s not possible to simply write tests as we’re not really sure what’s necessary to reach interoperability. Such unknown unknowns turn out to be extremely common sources of developer and user frustration!

We’ll make progress here through investigation. And we’ll measure progress with more qualitative goals, e.g., working out what exact behavior sites depend on, and what can be implemented in practice without breaking the web.

In all cases, the hope is that we can move toward a future in which we know how to make these areas interoperable, update the relevant web standards for them, and measure them with tests as we do with focus areas.

Focus areas

Interop 2022 has ten new focus areas:

  • Cascade Layers
  • Color Spaces and Functions
  • Containment
  • Dialog Element
  • Forms
  • Scrolling
  • Subgrid
  • Typography and Encodings
  • Viewport Units
  • Web Compat

Unlike the others the Web Compat area doesn’t represent a specific technology, but is a group of specific known problems with already shipped features, where we see bugs and deviations from standards cause frequent site breakage for end users.

There are also five additional areas that have been adopted from Google and Microsoft’s “Compat 2021” effort:

  • Aspect Ratio
  • Flexbox
  • Grid
  • Sticky Positioning
  • Transforms

A browser’s test pass rate in each area contributes 6% of its Interop 2022 score, totaling 90% across the fifteen areas.

We believe these are areas where the standards are in good shape for implementation, and where improving interoperability will directly improve the lives of developers and end users.

Investigate areas

Interop 2022 has three investigate areas:

  • Editing, contentEditable, and execCommand
  • Pointer and Mouse Events
  • Viewport Measurement

These are areas in which we often see complaints from end users, or reports of site breakage, but where the path toward solving the issues isn’t clear. Collaboration between vendors is essential to working out how to fix these problem areas, and we believe that Interop 2022 is a unique opportunity to make progress on historically neglected areas of the web platform.

The overall progress in these areas will contribute 10% to the overall score of Interop 2022. This score will be the same across all browsers. This reflects the fact that progress on the web platform requires browsers to collaborate on new or updated web standards and accompanying tests, to achieve the best outcomes for end users and developers.
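
As a rough sketch of the weighting described above (an illustration only, not the actual dashboard code):

// Illustration of the Interop 2022 weighting: fifteen focus areas at 6% each,
// plus a shared investigate score worth 10%. Not the actual dashboard code.
function interop2022Score(focusAreaPassRates, investigateProgress) {
  // focusAreaPassRates: fifteen values between 0 and 1, one per focus area.
  // investigateProgress: a single value between 0 and 1, identical for all browsers.
  const focusPart = focusAreaPassRates.reduce((sum, rate) => sum + rate * 6, 0); // up to 90
  const investigatePart = investigateProgress * 10; // up to 10
  return Math.round(focusPart + investigatePart);
}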

Contributions welcome!

Whilst the focus and investigate areas for 2022 are now set, there is still much to do. For the investigate areas, the detailed targets need to be set, and the complex work of understanding the current state of the art, and assessing the options to advance it, is just starting. Additional tests for the focus areas might be needed as well to address particular edge cases.

If this sounds like something you’d like to get involved with, follow the instructions on the Interop 2022 Dashboard.

Finally, it’s also possible that Interop 2022 is missing an area you consider to be a significant pain point. It won’t be possible to add areas this year but, if the effort is a success, we may end up running further iterations. Feedback on browser differences that are making your life hard as a developer or end user is always welcome and will be helpful for identifying the correct focus and investigate areas for any future edition.

Partner announcements

Bringing Interop 2022 to fruition was a collaborative effort and you might be interested in the other announcements:

The post Announcing Interop 2022 appeared first on Mozilla Hacks - the Web developer blog.

Hacks.Mozilla.Org: A new year, a new MDN

If you’ve accessed the MDN website today, you probably noticed that it looks quite different. We hope it’s a good different. Let us explain!

MDN has undergone many changes in its sixteen-year history, from its early beginning as a wiki to the recent migration to a static site backed by GitHub. During that time MDN grew organically, with over 45,000 contributors and numerous developers and designers. It’s no surprise that the user experience became somewhat inconsistent throughout the website.

In mid-2021 we started to think about modernizing MDN’s design, to create a clean and inviting website that makes navigating our 44,000 articles as easy as possible. We wanted to create a more holistic experience for our users, with an emphasis on improved navigability and a universal look and feel across all our pages. 

A new Homepage, focused on community

The MDN community is the reason our content can be counted on to be both high quality and trustworthy. MDN content is scrutinized, discussed, and yes, in some cases argued about. Anyone can contribute to MDN, either by writing content, suggesting changes or fixing bugs.

We wanted to acknowledge and celebrate our awesome community and our homepage is the perfect place to do so.

The new homepage was built with a focus on the core concepts of community and simplicity. We made an improved search a central element on the page, while also showing users a selection of the newest and most-read articles. 

We will also show the most recent contributions to our GitHub content repo and have added a contributor spotlight where we will highlight MDN contributors.

Redesigned article pages for improved navigation

It’s been years—five of them, in fact—since MDN’s core content presentation has received a comprehensive design review. In those years, MDN’s content has evolved and changed, with new ways of structuring content, new ways to build and write docs, and new contributors. Over time, the documentation’s look and feel had become increasingly disconnected from the way it’s read and written.

While you won’t see a dizzying reinvention of what documentation is, you’ll find that most visual elements on MDN did get love and attention, creating a more coherent view of our docs. This redesign gives MDN content its due, featuring:

  • More consistent colors and theming
  • Better signposting of major sections, such as HTML, CSS, and JavaScript
  • Improved accessibility, such as increased contrast
  • Added dark mode toggle for easy switching between modes

 

We’re especially proud of some subtle improvements and conveniences. For example, in-page navigation is always in view to show you where you are in the page as you scroll:

We’re also revisiting the way browser compatibility data appears, with better at-a-glance browser support. So you don’t have to keep version numbers in your head, we’ve put more emphasis on yes and no iconography for browser capabilities, with the option to view the detailed information you’ve come to expect from our browser compatibility data. We think you should check it out. 

And we’re not stopping there. The work we’ve done is far-reaching and there are still many opportunities to polish and improve on the design we’re shipping.

A new logo, chosen by our community

As we began working on both the redesign and expanding MDN beyond WebDocs we realized it was also time for a new logo. We wanted a modern and easily customizable logo that would represent what MDN is today while also strengthening its identity and making it consistent with Mozilla’s current brand.

We worked closely with branding specialist Luc Doucedame, narrowed down our options to eight potential logos and put out a call to our community of users to help us choose and invited folks to vote on their favorite. We received over 10,000 votes in just three days and are happy to share with you “the MDN people’s choice.”

The winner was Option 4, an M monogram using underscore to convey the process of writing code. Many thanks to everyone who voted!

What you can expect next with MDN

Bringing content to the places where you need it most

In recent years, MDN content has grown more sophisticated for authors, such as moving from a wiki to Git and converting from HTML to Markdown. This has been a boon to contributors, who can use more powerful and familiar tools to create more structured and consistent content.

With better tools in place, we’re finally in a position to build more visible and systematic benefits to readers. For example, many of you probably navigate MDN via your favorite search engine, rather than MDN’s own site navigation. We get it. Historically, a wiki made large content architecture efforts impractical. But we’re now closer than ever to making site-wide improvements to structure and navigation.

Looking forward, we have ambitious plans to take advantage of our new tools to explore improved navigation, generated standardization and support summaries, and embedding MDN documentation in the places where developers need it most: in their IDE, browser tools, and more.

Coming soon: MDN Plus

MDN has built a reputation as a trusted and central resource for information about standards, codes, tools, and everything you need as a developer to create websites. In 2015, we explored ways to be more than a central resource through creating a Learning Area, with the aim of providing a useful counterpart to the regular MDN reference and guide material.

In 2020, we added the first Front-end developer learning pathway to it. We saw a lot of interest and engagement from users, with the learning area currently responsible for 10% of MDN’s monthly web traffic. This started us on a path to see what more we can do in this area for our community.

Last year we surveyed users and asked them what they wanted out of their MDN experience. The top requested features included notifications, article collections and an offline experience on MDN. The overall theme we saw was that users wanted to be able to organize MDN’s vast library in a way that worked for them. 

We are always looking for ways to meet our users’ needs whether it’s through MDN’s free web documentation or personalized features. In the coming months, we’ll be expanding MDN to include a premium subscription service based on the feedback we received from web developers who want to customize their MDN experience. Stay tuned for more information on MDN Plus.

Thank you, MDN community

We appreciate the thousands of people who voted for the new logo as well as everyone who participated in the early beta testing phase since we started this journey. Also, many thanks to our partners from the Open Web Docs, who gave us valuable feedback on the redesign and continue to make daily contributions to MDN content. Thanks to you all we could make this a reality and we will continue to invest in improving even further the experience on MDN.

The post A new year, a new MDN appeared first on Mozilla Hacks - the Web developer blog.

Wladimir Palant: Skype extension: All functionality broken? Still exploitable!

One of the most popular Chrome extensions is Skype, a browser extension designed as a helper for the Skype application. Back when I reported the issues discussed here it was listed in Chrome Web Store with more than 10 million users; at the time of writing more than 9 million users still remain. What these users apparently didn’t realize: the extension was unmaintained, with the latest release being more than four years old. All of its functionality was broken, reducing it to little more than a bookmark for Skype for Web.

Yet despite being essentially useless, the Skype extension remained a security and privacy risk. One particularly problematic issue allowed every website to trivially learn your identity if you were logged into your Microsoft account, affecting not merely Skype users but also users of Office 365 for example.

Last week Microsoft, after a lengthy period of communication silence, finally published an update to resolve the issues. In fact, the new release shares no functionality with the old extension and is essentially a completely new product. Hopefully this one will no longer be abandoned.

[Screenshot: a browser window with the Skype extension icon; the extension pop-up is open, displaying two menu items: Share on Skype and Launch Skype]

Leaking your identity

One piece of functionality still working in the Skype extension was keeping track of your identity. The extension would notice if you logged into a Microsoft account, be it on skype.com, outlook.com or any other Microsoft website. It would recognize your identity and call the following function:

setUserId: function(userId)
{
  this.currentUserId(userId);
  if (!userId)
    sessionStorage.removeItem("sxt-user");
  else
    sessionStorage.setItem("sxt-user", userId);
}

So the user identifier is stored in the extension’s session storage where the extension can look it up later. Except that, from the look of it, the extension never bothered to use this value later. And that the code above executed in the extension’s content scripts as well.

In a content script context, sessionStorage is no longer the extension’s storage, it’s the website’s. So the website can read it out trivially:

console.log(sessionStorage["sxt-user"]);

This will produce an output like “8:live:.cid.0123456789abcdef.” And the part after “8:” is your Skype ID. That anybody can put into the Skype search to see your name and avatar. Available to each and every website you visit, because the Skype extension would run its content scripts everywhere despite only integrating with a select few sites.
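
To make the leak concrete, here is a minimal sketch of what any page could have done with that value (the “8:” prefix handling simply follows the example output above; actually sending the identifier anywhere is left out):

// Minimal sketch: read the value the extension's content script left behind
// and strip the "8:" prefix to get the bare Skype ID.
const rawUser = sessionStorage.getItem("sxt-user");
if (rawUser && rawUser.startsWith("8:")) {
  const skypeId = rawUser.slice(2); // e.g. "live:.cid.0123456789abcdef"
  console.log("Visitor's Skype ID:", skypeId);
}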

Letting anyone create conversations with you

Speaking of website integration, the Skype extension integrated with Gmail, Google Calendar, Google Inbox (yes, the service shut down in 2019), Outlook and Twitter. The idea was to add a button that, when clicked, would create a Skype conversation and add the corresponding link to your message. Except that these websites evolved, and these days the extension could only somewhat add its button on Gmail.

The configuration used for Gmail looks as follows:

{
  id: "gmail",
  host: "mail.google.*",
  allowGuests: true,
  allowProspects: true,
  silentLoginOnInjectEnabled: false,
  injectionHandles: [{
    method: "insert-after",
    selector: "div[command=\"+emoticon\"]",
    titleSelector: "input[name=\"subjectbox\"]",
    descriptionSelector: "div[role=\"textbox\"]"
  }],
  ...
}

Yes, the host value is a terribly permissive regular expression that will match lots of different websites. But, despite its name, this value is actually applied to the full website address. So even https://example.com/#mail.google.com counts as Gmail.
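
A quick sketch (not code from the extension) shows how permissive that combination is, assuming the host value is used as an unanchored regular expression against the full address:

// Sketch: the "host" value treated as a regular expression and tested
// against full page addresses rather than just the host name.
const hostPattern = new RegExp("mail.google.*");
console.log(hostPattern.test("https://mail.google.com/mail/u/0/"));    // true, as intended
console.log(hostPattern.test("https://example.com/#mail.google.com")); // also true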

Then a malicious page merely needs to have the right elements and the Skype extension will inject its button. That the page can click programmatically. Resulting in a Skype conversation being created. And the extension putting a link to it into a text field that the website can read out. And then the website can use the link to spam you for example. And, unlike with regular spam, you don’t even need to accept these messages. Because it’s “you” who created this conversation, despite the whole process working without you doing or noticing anything.

Actually, that’s how this would have worked. And how it did work until around mid-2021, when Microsoft shut down api.scheduler.skype.com, one of the old Skype backend servers that this extension relied on. So more recently a malicious website would still succeed in creating a conversation, but the extension would fail to retrieve the link to it, meaning that the website would no longer get it.

The disclosure process

Microsoft has the MSRC Researcher Portal, the system that vulnerability reports should be submitted through. I submitted both issues on December 1, 2021. Three days later the submission status changed to “Reviewing.” By mid-January 2022 Microsoft was still “reviewing” this issue, and I could see that the proof-of-concept pages on my server had not been accessed. So on January 19, 2022 I asked on MSRC about the issue’s progress, noting the disclosure deadline.

As there was still no reaction, I decided that naming Microsoft publicly was an acceptable risk, them having a large number of different products. On February 2, 2022 I asked for help on Twitter and Mastodon, maybe someone else could bring Microsoft’s attention to this issue. A reporter then asked Microsoft’s PR department about it. Whether there is a connection or not, on February 7, 2022 my submission was finally accepted as a valid vulnerability and status changed to “under development.”

On February 14, 2022 I again noted the impending deadline on MSRC and asked about the issue’s status. Again no reaction. Only on February 23 did I finally get a response apologizing for the delay and promising a release soon. On February 24 this release indeed happened. And on February 25 my proof-of-concept pages were accessed, for the first time.

While the communication was problematic to say the least, the fix is as thorough as it can get: all the problematic code is completely gone. In fact, the extension no longer has content scripts at all. All functionality is now located in the extension’s pop-up, and it’s all new functionality as well. It’s essentially a completely different product.

Microsoft would still need to update their website which currently says:

Read a good article? Now you can share a site directly with your Skype contacts.

The Skype extension also makes calling a number from search results easy.

Neither feature is part of the current release, and in fact the second feature listed already wasn’t part of the previous, four-year-old release either.

This page seems to be maintained as badly as the extension itself. The Skype extension was removed from the Mozilla Add-ons website a few years ago due to being incompatible with Firefox 57 (released in 2017) and above. Microsoft didn’t bother uploading their Chrome extension there, even though it would have been compatible with a few minor tweaks to the extension manifest. Yet the “Get Skype extension for Firefox” link is still present on the page and leads nowhere.

Robert KaiserConnecting the Mozilla Community

After some behind-the-scenes discussions with Michael Kohler on what I could contribute at this year's FOSDEM, I ended up doing a presentation about my personal Suggestions for a Stronger Mozilla Community (video is available on the linked page). While figuring out the points I wanted to talk about and assembling my slides for that talk, I realized that one of the largest issues I'm seeing is that the Mozilla community nowadays feels very disconnected to me: like several islands, within each of which good stuff is being done, but with most people not knowing much about what's happening elsewhere. That feeling has been reinforced by a number of interesting projects being split off Mozilla into separate projects in recent years (see e.g. Coqui, WebThings, and others), which often takes them off the radar of many people, even though I still consider them part of this wider community around the Mozilla Manifesto and the Open Web.

Following the talk, I brought that topic to the Reps Weekly Call this last week (see linked video), esp. focusing on one slide from my FOSDEM talk about finding some kind of communication channel to cross-connect the community. As Reps are already a somewhat cross-functional community group, my hope is that a push from that direction can help get such a channel in place - and figure out what exactly is a good idea and doable with the resources we have available (I for example like the idea of a podcast, as I like how those can be listened to while traveling, cooking, doing house work, and other things - but it would be a ton of work to organize and produce one).
Some ideas that came up in the Reps Call were for example a regular newsletter on Mozilla Discourse in the style of the MoCo-internal "tl;dr" (which Reps have access to via NDA), but as something that is public, as well as from and for the community - or maybe morphing some Reps Calls regularly into some sort of "Community News" calls that would highlight activities around the wider community, even bringing in people from those various projects/efforts there. But there may be more, maybe even better ideas out there.

To get this effort to the next level, we agreed that we'll first get the discussion rolling on a Discourse thread that I started after the meeting and then probably do a brainstorming video call. Then we'll take all that input and actually start experimenting with the formats that sound good and are practically achievable, to find what works for us the best way.

If you have ideas or other input on this, please join the conversation on Discourse - and also let us know if you can help in some form!

Jan-Erik RedigerThis Week in Glean: Your personal Glean data pipeline

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.


On February 11th, 2022 we hosted a Data Club Lightning Talk session. There I presented my small side project of setting up a minimal data pipeline & data storage for Glean.

The premise:

Can I build and run a small pipeline & data server to collect telemetry data from my own usage of tools?

To which I can now answer: Yes, it's possible. The complete ingestion server is a couple hundred lines of Rust code. It's able to receive pings conforming to the Glean ping schema, transform them and store them in an SQLite database. It's been very robust, not crashing once on me (except when I created an infinite loop within it).
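
For a rough idea of what such an ingestion endpoint involves, here is a minimal Node.js sketch (the actual server is written in Rust and not public; the route shape, table layout and port here are assumptions made for illustration):

// Minimal sketch of a personal ping-ingestion server, not the author's Rust implementation.
// Assumes express and better-sqlite3 are installed; the table layout is made up for illustration.
const express = require("express");
const Database = require("better-sqlite3");

const db = new Database("pings.sqlite");
db.exec("CREATE TABLE IF NOT EXISTS pings (received_at TEXT, doc_type TEXT, body TEXT)");

const app = express();
app.use(express.json());

// Accept a Glean-style submission URL and store the raw ping body.
app.post("/submit/:appId/:docType/:version/:docId", (req, res) => {
  db.prepare("INSERT INTO pings (received_at, doc_type, body) VALUES (?, ?, ?)").run(
    new Date().toISOString(),
    req.params.docType,
    JSON.stringify(req.body)
  );
  res.sendStatus(200);
});

app.listen(3000, () => console.log("Ping ingestion listening on port 3000"));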

You can watch the lightning talk here:


[Video: Your personal Glean data pipeline]

Instead of creating some slides for the talk I created an interactive report. The full report can be read online.

Besides actually writing a small pipeline server this was also an experiment in trying out Irydium and Datasette to produce an interactive & live-updated data report.

Irydium is a set of tooling designed to allow people to create interactive documents using web technologies, started by wlach a while back. Datasette is an open source multi-tool for exploring and publishing data, created and maintained by simonw. Combining both makes for a nice experience, even though there are still some things that could be simplified.

My pipeline server is currently not open source. I might publish it as an example at a later point.