Firefox NightlyWebExtensions Mv3, WebMIDI, OpenSearch, PiP updates and more! – These Weeks in Firefox: Issue 128

Highlights

Image of an opensearch result appearing on Firefox's URL bar.

Site-specific searches can be executed on the URL bar for sites like Wikipedia, for example.

Image of a Picture-in-Picture window with playback controls displayed, including a brand new video scrubber.

The video scrubber allows you to seek at a certain duration from the PiP window with ease.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Brian Pham
  • Itiel
  • Sebastian Zartner [:sebo]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • As part of the work related to “Origin Controls “ and “Unified Extensions UI”:
    • The Unified Extensions UI is also riding the 109 release train – Bug 1801129
    • Follow-ups and bug fixes
    • Fixed a regression that was preventing the “disabled” state of an extension action to be applied correctly. This bug was also affecting Beta (108) – Bug 1802411
    • Extensions actions can now be pinned/unpinned from the Unified Extensions panel – Bug 1782203
    • Default area of extension actions is now the unified extensions panel – Bug 1799947
  • Niklas fixed a regression on opening a link from the context menu options on an extension sidebar page (regressed in Firefox 107 by Bug 1790855), the fix has been landed in nightly 109 and uplifted to beta 108 – Bug 1801360
  • Emilio fixed a regression related to the windows.screen properties in extension background pages returning physical screen dimensions (regressed in Firefox 103 by Bug 1773813) – Bug 1798213
WebExtension APIs
  • As part of the ongoing work on the declarativeNetRequest API: the initial implementation of the declarativeNetRequest rule engine is being hooked to the networking (and so rules with actions and conditions already supported are now being applied to actual intercepted network requests) – Bug 1745761
Addon Manager & about:addons
  • SitePermsAddonProvider: a new Add-ons Manager provider used to provision virtual add-ons which unlock dangerous permissions for specific sites. We are experimenting with using an add-on install flow to gate site access to WebMIDI in order to convey to users that granting such access entails trusting the site.

Developer Tools

DevTools
  • Zac Svoboda tweaked the JSON viewer so the toolbar matches the one of the toolbox  (bug)
  • Karntino Areros made watchpoint more legible (bug)
  • Clinton Adeleke fixed padding in the debugger “welcome box” (bug)
  • Sean Feng made a change so opening DevTools does not trigger PerformanceObserver callback (bug)
  • Julian fixed adding new rules in inspector on pages with CSP (bug)
WebDriver BiDi
  • Opening a new tab with WebDriver:NewWindow now properly sets the focus on the page (bug)
  • Column numbers in expectations and stacktraces are now 0-based (bug)

ESMification status

  • Please consider migrating your components if you haven’t already. Don’t forget actors as well.
  • ESMified status:
    • browser: 39.7%
    • toolkit: 29.8%
    • Total: 41.3% (up from 38.4%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

Picture-in-Picture

Performance Tools (aka Firefox Profiler)

Clicking a node in the activity graph selects the call tree tab now. (PR #4331)

A gif showing the new clicking behavior on the activity graph for Firefox Profiler.

Clicking a node in the activity graph selects the call tree tab now.

  • Added markers for browsertime visual metrics. (PR #4330) (Example profile)

    Image of Firefox Profiler showing off new markers for browsertime visual metrics.

    New markers in the Test category for various browsertime visual metrics.

  • Improved the vertical scrolling of our treeviews like call tree, marker table panels. (PR #4332)
  • Added a profiler marker for FirstContentfulPaint metric. (Bug 1691820) (Example profile)

    Image of the Firefox Profiler showing an added marker for a metric named FirstContentfulPaint.

    View all details relating to First Contentful Paint via a tooltip.

Search and Navigation

Support.Mozilla.OrgHubs transition

Hi SUMO folks,

I’m delighted to share this news with you. The Hubs team has recently transitioned into a new phase of a product. If in the past, you needed to figure out the hosting and deployment on your own with Hubs Cloud, you now have the option to simply subscribe to unlock more capabilities to customize your Hubs room. To learn more about this transformation, you can read their blog post.

Along with this relaunch, Mozilla has also just acquired Active Replica, a team that shares Mozilla’s passion for 3D development. To learn more about this acquisition, you can read this announcement.

What does this mean for the community?

To support this change, the SUMO team has been collaborating with the Hubs team to update Hubs help articles that we host on our platform. We also recently removed Hubs AAQ (Ask a Question) from our forum, and replaced it with a contact form that is directly linked to our paid support infrastructure (similar to what we have for Mozilla VPN and Firefox Relay).

Paying customers of Hubs will need to be directed to file a support ticket via the Hubs contact form which will be managed by our designated staff members. Though contributors can no longer help with the forum, you are definitely welcome to help with Hubs’ help articles. There’s also a Mozilla Hubs Discord server that contributors can pop into and participate in.

We are excited about the new direction that the Hubs team is taking and hope that you’ll support us along the way. If you have any questions or concerns, we’re always open to discussion.

The Mozilla BlogPulse Joins the Mozilla Family to Help Develop a New Approach to Machine Learning

I’m proud to announce that we have acquired Pulse, an incredible team that has developed some truly novel machine learning approaches to help streamline the digital workplace. The products that Raj, Jag, Rolf, and team have built are a great demonstration of their creativity and skill, and we’re incredibly excited to bring their expertise into our organization. They will spearhead our efforts in applied ethical machine learning, as we invest to make Mozilla products more personal, starting with Pocket. 

Machine learning (ML) has become a powerful driver of product experience. At its best, it helps all of us to have better, richer experiences across the web. Building ML models to drive these experiences requires data on people’s preferences, behaviors, and actions online, and that’s why Mozilla has taken a very cautious approach in applying ML in our own product experiences. It is possible to build machine learning models that act in service of the people on the internet, transparently, respectful of privacy, and built from the start with a focus on equity and inclusion. In short, Mozilla will continue its tradition of DOING: building products that serve as examples of a better way forward for the industry, a way forward that puts people first.

Which explains why we were so excited when we began talking to the Pulse team. It became immediately obvious that we both fundamentally agree that the world needs a model where automated systems are built from day one with individual people as the primary beneficiary. Mozilla, with an almost 25 year history of building products with people and privacy at their core, is the right organization to do that. And with Pulse as part of our team, we can move even more quickly to set a new example for the industry. 

One of the things that makes this marriage such a great fit is Pulse’s history of building products that optimize for the preference of each individual customer. They know how to take things from theory and design and turn them into real product experiences that address actual needs and preferences. That kind of background is going to be critical as we work to enhance the experience across our existing and new products in the coming years. I’m particularly excited to enhance our machine learning capabilities, including personalization, in Pocket, a fantastic product that has only just scratched the surface of its ultimate potential.

We have big plans for the Pulse team’s skills and know-how, and are thrilled to have their contributions to our growing entire portfolio of products. 

So, Raj, Jag, Rolf, and team, welcome aboard! We are energized by the chance to work together, and I can’t wait to see what we build.

The post Pulse Joins the Mozilla Family to Help Develop a New Approach to Machine Learning appeared first on The Mozilla Blog.

Allen Wirfs-BrockHow Smalltalk Became a AI Language

A model pretending to use a Tektronix 4404

This post is based upon a Twitter thread that was originally published on December 2. 2018.

There is a story behind how Tektronix Smalltalk became branded as an AI language in 1984.

In the 1960s-70s, Tektronix Inc had grown to become an industry leading electronics competing head-to-head with Hewlett-Packard.  In the early ’80s Tektronix was rapidly going digital and money was being poured into establishing a Computer Research Lab (CRL) within Tek Labs. Two early successful CRL projects was my effort to create a viable performance Smalltalk virtual machine that ran on Motorola 680xx family processors and Roger Bates/Tom Merrow’s effort to develop an Alto-like 680xx based workstation for use in the lab.

The workstation was called the Magnolia and eventually over 50 of the were built. One for everybody in the fully staffed CRL. Tom’s team ported Unix to Magnolia and started working on Unix window mangers. I got Smalltalk-80 up on it using my virtual machine implementation.

CRL was rapidly staffing up with newly hired PhD-level CS researchers and each of them got a Magnolia. They were confronted with the choice of programming in a multi-window but basically shell-level Unix environment or a graphically rich Smalltalk live dev environment.  Most of them, including most of the AI group, chose to build their research prototypes using Smalltalk— particularly after a little evangelism from Ward Cunningham. Many cool projects were built and demonstrated to Tek executives at the annual Tek Labs research forums (internal “science fairs”) in ’81-’83.

During that time there was a lot of (well deserved) angst within Tek about its seeming inability to timely ship new products incorporating new technologies and addressing new markets. At the fall 1982 research forum Tom, myself, and Rick LeFaive, CRL’s director, (and perhaps Ward) sat down with some very senior Tek execs in front of a couple of Magnolias and ran through the latest demos. The parting words from the execs were: “We have to do something with this!”

Over the next couple months Tom Merrow and I developed the concept for a “low-cost” ($10k) Smalltalk workstation.  Rebecca Wirfs-Brock had been software lead of the recently successful 410x “low cost” graphics terminals and we thought we could leverage its mechanicals for our workstation. Over the first half of ’83 Roger Bates and Chip Schnarel prototyped a 68010-based processor and display that would fit inside a 4105 enclosure. It was code named “Pegasus”.

After much internal politics, in late summer of 1983 we got the go ahead to turn Pagasus into a product. An intrapreneurial “special products unit” (SPU) was formed to take Pegasus to market. The SPU management was largely the team that had initially done the 410x terminals.

So, finally we get to the AI part of the story. Mike Taylor was the marketing manager of the Pegasus SPU. One day in late August of ’83 I was chatting with Mike in a CRL corridor. He says something like: Smalltalk is very cool but to market it we have to tell people what they can use it for?

I initially muttered some words about exploratory programming, objects, software reuse, etc. Then I paused, as wheels turned in my mind. AI was in the news because of Japan’s Fifth Generation Computing Initiative and I had just seen an of issue Time magazine that included coverage of it. I thought: objects, symbolic computing, garbage collection, LISP and responded to Mike: Smalltalk is an AI language.

Mike said: What!?? You mean Pegasus is a $10K AI machine? That’s something I can sell!

Before I knew what happened the Pegasus SPU was rechristened as AIM (AI Machines) and we were trying to figure out how we were going to support Common Lisp and Prolog in addition to Smalltalk.

The Pegasus was announced as the Tektronix 4404 in August 1994 at that year’s AAAI conference. The first production units shipped in January 1995 at a list price of $14,950. Even at that price it was considered a bargain.

You can read more about the history and technology of Tektronix Smalltalk and Tek AI machine at my Tektronix Smalltalk Document Archive.

Demo video of Tek Smalltalk on a Tektronix 4404

Tantek ÇelikRunning For The @W3C Advisory Board (@W3CAB) Special Election

Hi, I’m Tantek Çelik and I’m running for the W3C Advisory Board (AB) to help it reboot W3C as a community-led, values-driven, and more effective organization. I have been participating in and contributing to W3C groups and specifications for over 24 years.

I am Mozilla’s Advisory Committee (AC) representative and have previously served on the AB for several terms, starting in 2013. In the early years I helped lead the movement to offer open licensing of W3C standards, and make it more responsive to the needs of independent websites and open source implementers. In my most recent term I led the AB’s Priority Project for an updated W3C Vision. I set the example of a consensus-based work-mode of summarizing & providing granular proposed resolutions to issues, presenting these to the AB at the August 2022 Berlin meeting, and making edits to the W3C Vision according to consensus.

I co-chaired the W3C Social Web Working Group that produced several widely interoperably deployed Social Web Standards, most notably the ActivityPub specification, which has received renewed attention as the technology behind Mastodon and other implementations growing an open decentralized alternative to proprietary social media networks such as Twitter. ActivityPub was but one of seven W3C Recommendations produced by the Social Web Working Group, six of which are widely adopted by implementations & their users, five of those with still functional test suites today, almost five years later.

Most recently, I’ve focused on the efforts to clarify and operationalize W3C’s core values, and campaigned to add Sustainability to W3C’s Horizontal Reviews in alignment with the TAG’s Ethical Web Principles. I established the Sustainability Community Group and helped organize interested participants at TPAC 2022 into asynchronous work areas.

The next 6-18 months of the Advisory Board are going to be a critical transition period, and will require experienced AB members to actively work in coordination with the TAG and the Board of Directors to establish new models and procedures for sustainable community-driven leadership and governance of W3C.

I have Mozilla’s financial support to spend my time pursuing these goals, and ask for your support to build the broad consensus required to achieve them.

You can follow my posts directly from my tantek.com feed or from Mastodon with: @tantek.com@tantek.com

If you have any questions or want to chat about the W3C Advisory Board, Values & Vision, or anything else W3C related, please reach out by email: tantek at mozilla.com. Thank you for your consideration.

The Mozilla BlogCelebrating Pocket’s Best of 2022

The run-up to December is always my favorite time of year at Pocket. It’s when we sift through our data (always anonymous and aggregated—we’re part of Mozilla, after all 😉), to see which must-read profiles, thought-provoking essays, and illuminating explainers Pocket readers loved best over the past 12 months. 

Today, we’re delighted to bring you Pocket’s Best of 2022. This year’s honor roll is our biggest ever: a whopping 20 lists celebrating the year’s top articles across culture, technology, science, business, and more. All are informed by the saving and reading habits of Pocket’s millions of curious, discerning users.

The stories people save to Pocket reveal something unique—not only about what’s occupying our collective attention, but about what we aspire to be. And what we see again and again from 40 million saves to Pocket every month is the gravitational pull of stories that help us better understand the world around us—and ourselves. 

For the past few years, our most-saved articles have reflected our challenging, unsettling times: how to manage burnout (2019), Covid uncertainty (2020), and the chronic sense of ‘blah’ so many of us felt as the pandemic wore on (2021). This year, we see something different: seeds of renewal. Our data shows people looking to reinvent themselves and redefine what happiness looks like to them. We see readers eager to reset their relationships: with their stuff, with technology, and especially with other people. Articles about how to build deeper connections were some of the most popular stories saved to Pocket this year. 

These are, in many ways, age-old challenges. But what you’ll find in our Best of 2022 collections are all the ways Pocket readers are discovering and embracing new solutions after two long, hard years. To borrow a phrase from a story that resonated deeply with the Pocket community this year, it feels like something of a vibe shift

Nowhere was this more evident than in the author who earned more saves to Pocket than any other this year: Arthur C. Brooks, whose “How to Build a Life” series for The Atlantic was a Pocket favorite month in and month out. Whether it’s shortcuts to contentment or tips for how to want less, you can seek your own vibe shift (and bliss) in a special year-end collection of Arthur’s most popular pieces, with an introduction by the #1 most-saved author himself. 

There are so many more gems to enjoy in the Best of 2022 collections, including:

If the articles featured in Best of 2022 are new to you, save them to your Pocket and dig in over the holidays. (May we suggest making use of our Listen feature while wrapping gifts?) While you’re at it, join the millions of people discovering the thought-provoking articles we curate in our apps, daily newsletter, and in the Firefox browser each and every day. 

With Pocket, you can make active decisions about how you spend your time online—if that isn’t a vibe shift, what is?

From all of us at Pocket, have a joyous and safe holiday season.

Carolyn O’Hara is senior director of content discovery at Pocket.

P.S. For our German Pocket users: Auch für unsere deutschsprachigen Pocket User:innen haben wir das größtes „Best of” bisher in petto – mit den besten Artikeln und Storys, die unsere Community dieses Jahr am meisten geklickt, gelesen und gespeichert hat. Entdecke hier die spannendsten Geschichten von 2022 zu Psychologie, Wissenschaft, Technologie und vielen anderen Themen. Obendrauf hat Autorin Sarah Diehl eine besondere Collection über die Kraft des Alleinseins kuratiert. In diesem Sinne: Happy reading!

Methodology: The Best of 2022 winners were selected based on an aggregated and anonymized analysis of the links saved to Pocket in 2022, with a focus on English- and German-language articles. Results took into account how often a piece of content was saved, opened, read, and shared, among other factors.

Save and discover the best articles, stories and videos on the web

Get Pocket

The post Celebrating Pocket’s Best of 2022 appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 471

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is code-radio-cli, a command line interface for listening to freeCodeCamp's Code Radio music stream.

Thanks to 魏杰 for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

This week we also have a few non-rust-specific needs from your friends at This Week in Rust! Check them out:

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

389 pull requests were merged in the last week

Rust Compiler Performance Triage

A relatively quiet week for performance, with the notable exception of "Avoid GenFuture shim when compiling async constructs #104321" which brought sizeable wins on a number of stress test benchmarks. It probably won't be of huge benefit to most codebases, but should provide smaller wins to folks with large amounts of async-generated futures.

Triage done by @simulacrum. Revision range: a78c9bee..8a09420a

3 Regressions, 3 Improvements, 6 Mixed; 2 of them in rollups 43 artifact comparisons made in total

See the full report for details.

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2022-11-30 - 2022-12-28 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

After many years of writing bugs, and then discovering Rust, I learned to appreciate explicitness in code, and hope you eventually will too.

Artem Borisovskiy on rust-users

Thanks to Árpád Goretity for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla Blog3 ways to use Mozilla Hubs, a VR platform that’s accessible and private by design

A 3D illustration shows human, animal, food and robotic characters floating in a nature setting.<figcaption>Credit: JR Ingram / Mozilla</figcaption>

When NASA’s Webb Space Telescope team and artist Ashley Zelinskie wanted to bring space exploration to everyone, they chose Mozilla Hubs, our open source platform for creating 3D virtual spaces right from your browser. 

Ashley told us that they “didn’t want to cut people out that didn’t have fancy VR headsets or little experience in VR. … If we were going to invite the world to experience the Webb Telescope we wanted everyone to be able to attend.” 

That’s exactly why Mozilla has been investing in the immersive web: We believe that virtual worlds are part of the future of the internet, and we want them to be accessible and safe for all. 

That means each Hubs user controls access to the virtual world they created, which is only discoverable to the people they share it with. Hubs users and their guests can also immerse themselves in this world right from their desktop or mobile browser – no downloads or installations required. And while you can use a VR headset, you can access the same spaces through your phone, tablet or desktop computer.

If you’re curious, take a look at a few ways people have been creating immersive worlds with Mozilla Hubs: 

To create art galleries and portfolios

A screenshot from a Mozilla Hubs room shows an art gallery.<figcaption>Credit: Apart Poster Gallery by Paradowski Creative</figcaption>

It’s not just space art. A virtual museum of art prints put together by the creative agency Paradowski helped raise money for a COVID-19 response fund by the World Health Organization. In St. Louis, Missouri, the American Institute of Graphic Arts showcased artists’ work during the school’s annual design show. In the U.K., the Royal Institute of British Architects presented an exhibition that immersed visitors in architectural milestones over the last five centuries. 

While Mozilla Hubs can host projects on the grandest scale, you can use it for personal projects too: Whether you’re an artist, photographer or a 3D modeler, you can create an immersive portfolio that’s easy to use and accessible to anybody with a browser.

To build spaces for hobbies (and meet new people)

A screenshot from a Mozilla Hubs space shows a group of human and animal characters. <figcaption>Credit: Mozilla Hubs Creator Labs </figcaption>

The website Meetup lets people find local events based on their interests – from pet poultry to coding to learning a new language. In addition to in-person gatherings, the platform allows people to organize online. Those who wish to meet up virtually can do so in a 3D space through Hubs. 

You can create your own immersive space and invite others. You can also just grab an existing room made available by another creator, remix it and make it your own.

To teach and learn 

NYU Langone Health, one of the largest healthcare systems in the northeast U.S., uses Hubs to teach anatomy. Hubs helps instructors immerse medical students in the coursework, including 3D vascular stereoscopic models.

A screenshot from a Mozilla Hubs room shows a drought map.<figcaption>Credit: Screenshot courtesy of Dr. Tutaleni Asino</figcaption>

Oklahoma State University’s Emerging Technologies and Creativity Research Lab created a virtual science expo that showcased different Earth environments and hosted Q&A sessions with scientists.

Olympic medalist Sofía Toro, along with professors from the Universidad Católica San Antonio in Spain, even taught a windsurfing class online using Hubs.

Virtual spaces offer new opportunities for connections and innovation. Through Hubs, Mozilla wants to make those opportunities available to everyone. Learn more and join the Hubs community here.  

The post 3 ways to use Mozilla Hubs, a VR platform that’s accessible and private by design appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird Android Update: K-9 Mail 6.400 Adds Customizable Swipe Actions

In what feels like the blink of an eye, four months have passed since announcing our plans for bringing Thunderbird to Android. The path to bringing you a great email experience on Android devices begins with K-9 Mail, which joined the Thunderbird product family earlier this summer.

As we work towards a modern redesign of desktop Thunderbird, we’re also working towards improving K-9 Mail as it begins its transition to Thunderbird mobile in Summer 2023.

Later this week, we’ll share a preview of K-9 Mail’s beautiful Message View redesign. Today, though, let’s talk about the newly released version 6.400.

K-9 Mail 6.400 Adds Customizable Swipe Features

Version 6.400 of K-9 Mail for Android begins rolling out today on F-Droid and Google’s Play Store. Dedicated to pursuing a modernized interface, the new version introduces customizable swipe actions.

<figcaption>K-9 Mail 6.400 introduces swipe actions, such as Archiving and Deleting. (Shown here in Dark Mode)</figcaption>

As you can see above, an intuitive swipe to the left or right can quickly delete or archive a message. But you’re obviously not limited to just those actions. You can customize both right and left swipe actions with the following options:

  • Toggle Selection
  • Mark as read/unread
  • Add/remove star
  • Archive
  • Delete
  • Spam
  • Move

To configure your own preferences, open the app menu (the 3 vertical lines on the top left of the app), and then: Settings ➡ General Settings ➡ Interaction ➡ Swipe Actions.

Swipe interactions extend to the full message view. When you’re reading a message, simply swipe left to navigate to the next message in your list, or swipe right to back up to the previous message.

<figcaption>K-9 Mail 6.400 introduces swipe actions, such as moving between your messages. (Shown here in Light Mode)</figcaption>

We also recently added integral OAuth 2.0 support for major email account providers like Google, Yahoo Mail, AOL Mail, Hotmail, and Microsoft (Office 365 and personal accounts).

Track Our Progress On The Thunderbird Android Roadmap

In addition to our Thunderbird Supernova roadmap, we’ve recently added a roadmap for Android Thunderbird. Track our progress and see what other features are in development by clicking here.

Then discuss the future of Thunderbird on Android on our public mailing list.

Where To Get K-9 Mail Version 6.400

Version 6.400 will start gradually rolling out today. As always, you can get it on the following platforms:

GitHub | F-Droid | Play Store

(Note that the release will gradually roll out on the Google Play Store, so please be patient if it doesn’t automatically update.)

Try New Features First: Join The K-9 Mail Beta!

As K-9 Mail transforms into Thunderbird for Android, be the first to try out new features and interface improvements by testing the Beta version.

GitHub releases → We publish all of our releases there. Beta versions are marked with a “Pre-release” label.

Play Store → You should be able to join the beta program using the Google Play Store app on the device. Look out for the “Join the beta” section in K-9 Mail’s detail page.

F-Droid → Unlike stable versions, beta versions on F-Droid are not marked with a “Suggested” label. You have to manually select such a version to install it. To get update notification for non-suggested versions you need to check ‘Settings > Expert mode > Unstable updates’ in the F-Droid app.

The post Thunderbird Android Update: K-9 Mail 6.400 Adds Customizable Swipe Actions appeared first on The Thunderbird Blog.

Paul BoneWaiting for web content to do something in a Firefox mochitest

It’s not unusual for a Firefox test to have to wait for various things such as a tab loading. But recently I needed to write a test that loaded a content tab with a web worker and wait for that before observing the result in a different tab. I am writing this for my own reference in the future, and if it helps someone else, that’s extra good. But I don’t think it will be of much interest if you don’t work on Firefox as the problem I’m solving won’t be relevant and the APIs won’t be familiar.

I don’t think of myself as a JavaScript programmer - I’m learning what I need to know when I need to know it, but mainly to write tests. So I’m not sure I’ll pitch this article at any particular level of JS knowledge, sorry.

Web Workers

Web Workers provide web pages a way to execute long-running JavaScript tasks in a separate thread, where it won’t block the main event loop. They solve the same problem, allowing a page to use concurrency. However their programming model is more like processes because they don’t share state (global variables or even functions) and communicate by sending and receiving messages.

I realise this is a tangent but it’s a topic I like and you may have the same questions I did: So if workers are supposed to solve the same problems as threads do in other languages, why are they more like processes? Furthermore, at least in Firefox, each worker instantiates another copy of the JavaScript engine (the JSRuntime class) with its own instantiation of JIT, GC etc. Isn’t this fairly heavy just to add concurrency?

It is, but there are benefits:

  • I’m not certain, but I think this was the easiest way to retrofit concurrency to JavaScript (the language standard) without breaking backwards compatibility with existing web sites.

  • Message-passing concurrency makes the boundary between threads very clear. This makes it a simpler programming model, especially if you’re working on some code that is isolated from the concurrency happening elsewhere.

  • It worked for Erlang, although Erlang likely shares bytecode caches and some other systems. But not garbage collection.

Anyway, the point is that Web Workers are concurrent "process like" things that communicate through message-passing.

about:performance

Firefox has a number of about: pages, used for diagnostics and tweaking. about:config is probably the most infamous (if you touch those settings you can break your browser or make it insecure). about:support is interesting too it contains diagnostic information about Firefox on your computer.

Today we’re looking at about:performance, which is useful when you are thinking "Firefox seems slow, I wonder why..". about:performance will show your busiest tabs, how much CPU time/power and memory they’re using.

Measuring memory usage can be tricky at the best of times (more on this in an upcoming article). We can’t afford to count every allocation since that is too slow for a page like about:performance. Although about:memory comes closer to doing this. For about:performance we can ask major subsystems how much memory they’re using and rely on their counters. This isn’t accurate but it’s good enough.

I noticed two major things that weren’t counted:

  • Malloc memory used by JS objects was not counted.

  • Web workers were not counted.

I fixed them in Bug 1760920.

So I wanted to write a test that would verify that we are indeed counting memory belonging to web workers.

My web worker

To make it easier to see if we’re counting a component’s memory, it’s great of our test causes that component to use a lot of memory then we can test for that.

Here’s a Web Worker that uses about 40MB of memory using an array with 4 million elements.

var big_array = [];
var n = 0;

onmessage = function(e) {
  var sum = 0;
  if (n == 0) {
    for (let i = 0; i < 4 * 1024 * 1024; i++) {
      big_array[i] = i * i;
    }
  } else {
    for (let i = 0; i < 4 * 1024 * 1024; i++) {
      sum += big_array[i];
      big_array[i] += 1;
    }
  }
  self.postMessage(`Iter: ${n}, sum: ${sum}`);
  n++;
};

It registers an onmessage event hander. When the page sends it a message it will execute the anonymous function. The first time this happens the function will create the array, the next time it will manipulate the array. Since the array is a global and is also captured by the handler I doubt the GC would free it. But I also don’t want an optimiser (now or in the future) from reducing the whole program to a large summation, or caching an answer. Which is why the array is manipulated each time the event handler is called. It doesn’t matter that it’s ridiculous - it’s a test - just that it uses "enough" memory.

From the main page it can be started like this:

  var worker = new Worker("workers_memory_script.js");
  worker.postMessage(n);

But that’s not enough to make a working test.

The test

Our test needs to open this page in one tab, and in another tab look at about:performance and observe that the memory is being used. Opening and managing multiple tabs and is standard faire for a browser test, but what we need is for our test to wait for the tab with the worker to be /ready/.

Waiting for a tab to be loaded is also very easy, which means that the tab will have executed worker.postMessage(n) by the time the test code checks. But that doesn’t mean that the worker has received the message.

So we need to make our test wait for the worker to start and complete one iteration (creating its array).

In the test we can add code such as:

  let tabContent = BrowserTestUtils.addTab(gBrowser, url);

  // Wait for the browser to load the tab.
  await BrowserTestUtils.browserLoaded(tabContent.linkedBrowser);

  // For some of these tests we have to wait for the test to consume some
  // computation or memory.
  await SpecialPowers.spawn(tabContent.linkedBrowser, [], async () => {
    await content.wrappedJSObject.waitForTestReady();
  });

The last three lines here are the interesting ones. SpecialPowers.spawn allows us to execute code in the context of the tab. In which we wait on a promise that the test is ready.

Now we need to add this promise to the page that owns the worker:

  var result = document.querySelector('#result');
  var worker = new Worker("workers_memory_script.js");
  var n = 1;

  var waitPromise = new Promise(ready => {
    worker.onmessage = function(event) {
      result.textContent = event.data;
      ready();

      // We seem to need to keep the worker doing something to keep the
      // memory usage up.
      setTimeout(() => {
        n++;
        worker.postMessage(n);
      }, 1000);
    };
  });

  worker.postMessage(n);

  window.waitForTestReady = async () => {
    await waitPromise;
  };

Starting at the bottom. For some reason I had to wrap the promise up in a function, I can’t remember why! I’m tempted to complain about JavaScript and it’s inconsistent rules here, but it could also be my limited understanding preventing me from getting it. What I do know is that this function must be in the window object so that the test code above can find it in wrappedJSObject.

The promise wrapped here (waitPromise I could have picked a better name) is resolved when ready() is called, which happens after we receive the worker’s response. Finally we use setTimeout() to post another message to keep memory usage up. I don’t know why this was necessary either. Was the worker completely terminated without it?

One more thing

Our test almost works. For whatever reason when the test accesses the right part of the about:performance page there’s no value for how much memory is being used. Waiting for a single update fixes this:

  if (!memCell.innerText) {
    info("There's no text yet, wait for an update");
    await new Promise(resolve => {
      let observer = new row.ownerDocument.ownerGlobal.MutationObserver(() => {
        observer.disconnect();
        resolve();
      });
      observer.observe(memCell, { childList: true });
    });
  }
  let text = memCell.innerText;

For the complete code for this test checkout Bug 1760920 and toolkit/components/aboutperformance/tests/browser.

There’s things I don’t know

There’s three places here where I’ve said "it needs this code, I don’t know why". I hate programming like this, and I feel shameful writing it in a blog post and calling myself an engineer. I don’t want to spin it as a joke on JavaScript, or myself "lol, that’s programming! AMIRITE?!" There’s obviously some further subtleties I don’t know the rules for, and JavaScript does have some pretty inconsistent rules, throw in a browser, two tabs and a web worker and feeling like you don’t know how something works is relatable.

Do I wish I knew? Sure, I’m uncomfortable not knowing, but I’ve already spent enough time on this. But this is also why I wrote down what I do know. Next time I’ll be able to find this much and solve my problem quicker.

The Mozilla BlogNew phone? Give yourself the gift of privacy with these 5 tips

An illustration shows two gift boxes and a padlock.<figcaption>Credit: Nick Velazquez</figcaption>

So you’ve unboxed a shiny new phone, peeled the sticker off the screen and transferred your data. If you’re reading this, you’ve made the smart decision to take another important step: Setting up your device for privacy and security

Here are five steps you can take to help keep your data safe. Your future self thanks you.

1. Set up privacy controls on your new devices

Do you know which apps know your location, track your online activity and have access to your contacts? Check your privacy settings to make sure you’re only sharing what you’re comfortable sharing with your apps. Here’s how to get to your phone’s privacy settings:

  • iPhone: Settings > Privacy & Security
  • Android: Settings > Privacy > Permission Manager

2. Turn on auto-update

Updates can be disruptive, but they’re also vital in keeping your device safe from hackers who take advantage of security holes that updates are intended to patch. Here’s where you can turn on auto-update:

  • iPhone: Settings > General > Software Update > Automatic Updates
  • Android: Settings > Software Updates

3. Opt out of the default browser

Sure it’s convenient to use the browser that’s already on your phone. But you do have a choice. Download Firefox to use a browser that’s backed by a nonprofit and that will always put you and your privacy first. Once you’ve installed Firefox, here’s how to make it your default browser:

  • iPhone: Settings > Firefox > Default Browser App > Firefox
  • Android: Settings > Set as default browser > Firefox for Android > Set as default

Another benefit: If you already use Firefox on desktop, you’ll get to see your bookmarks, history and saved credit card information and passwords on your phone too. Just log into your Firefox account to move seamlessly between your devices. 

A table compares major browsers' security and privacy features, including private browsing, blocking third-party tracking cookies by default, blocking cryptomining scripts and blocking social trackers. Firefox checks the boxes for all.

4. Prevent spam texts and calls with Firefox Relay

Want fewer spam text messages and calls? Sign up for Firefox Relay, which gives you a phone number mask (i.e. not your true digits) when website forms ask for your number. That way, when you’re making restaurant reservations or signing up for discount codes, you lessen the chance of companies selling your phone number to third parties. Bonus: You can even give your phone number mask to people when you don’t want to give them your true number just yet. Phone calls and texts will automatically get forwarded to you. Learn more about how Firefox Relay works here.

5. Consider using a VPN

Many mobile apps don’t implement encryption properly, leaving the data on your phone vulnerable to hackers. Using a VPN encrypts your connection and conceals your IP address, shielding your identity and location from prying eyes. The Mozilla VPN, unlike some services, will never log and sell your data. (P.S. We’ve made it more accessible to take advantage of both Firefox Relay and Mozilla VPN. Learn more about it here.)

Staying secure and private online isn’t hard, but it does take some effort. Mozilla is always here to help. For more tips about living your best online life, check out our #AskFirefox series on YouTube

Layer on even more protection with phone number masking

Sign up for Firefox Relay

The post New phone? Give yourself the gift of privacy with these 5 tips appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 470

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is graph, a collection of high-performance graph algorithms.

Thanks to Knutwalker for the (partial self-) suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

There were no calls for participation submitted this week. If you would like to submit, please check the guidelines.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

388 pull requests were merged in the last week

Rust Compiler Performance Triage

A fairly quiet week with regressions unfortunately slightly outweighing improvements. There was not any particular change of much note. Many of the regressions were justifiable since they were for critical bug fixes.

Triage done by @rylev. Revision range: 96ddd32c..a78c9bee

Summary:

(instructions:u) mean range count
Regressions ❌
(primary)
0.7% [0.2%, 3.0%] 76
Regressions ❌
(secondary)
1.5% [0.3%, 8.4%] 69
Improvements ✅
(primary)
-0.7% [-1.8%, -0.2%] 18
Improvements ✅
(secondary)
-1.4% [-3.2%, -0.2%] 35
All ❌✅ (primary) 0.4% [-1.8%, 3.0%] 94

7 Regressions, 4 Improvements, 6 Mixed; 5 of them in rollups 47 artifact comparisons made in total

Full report here

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2022-11-23 - 2022-12-21 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

While working on these userspace Mesa changes today, I did not hit a single GPU kernel driver bug. Not. A. Single. Bug.

This is thanks to Lina's phenomenal efforts. She took a gamble writing the kernel driver in Rust, knowing it would take longer to get to the first triangle but believing it would make for a more robust driver in the end. She was right.

A few months of Lina's Rust development has produced a more stable driver than years of development in C on certain mainline Linux GPU kernel drivers.

I think... I think I have Rust envy 🦀

....Or maybe just Lina envy 😊

Alyssa Rosenzweig tooting on Mastodon

Thanks to Brian Kung for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox NightlyMore improvements than can fit into this title – These Weeks in Firefox: Issue 127

Highlights

  • Screenshots is getting an overhaul for Nightly! Check it out by flipping the `screenshots.browser.component.enabled`

  • We’re starting to assemble and line up ideas for next steps for Screenshots. If you are a heavy screenshots-taker – or think you would be if only Screenshots did x – please tell us about x. We’re in #fx-screenshots in matrix, or you can post to connect.mozilla.org or file a bug in bugzilla.
  • Thanks to Nolan Ishii from CalState LA for adding new migrators for Opera and for Vivaldi!
    • You can test these by setting `browser.migrate.opera.enabled` and `browser.migrate.vivaldi.enabled` to true, and then opening the importer dialog from (Alt-f > Import from another browser on Windows, File > Import from another browser on macOS and Linux)
    • Note that you’ll need Opera or Vivaldi installed in order for them to appear in the migrator dialog.
    • Notice any bugs? You can file them here.
  • HBO Max and TubiTV captions support have been fixed for Picture-in-Picture. Thanks to Niklas (1,2) and kpatenio (1) for their patches.
  • The Performance Tools team put together a great blog post summarising their progress in Q3
  • A huge thank you to Francis (:mckenfra) who landed a patch that speeds up deleting from history by 7,000% in some cases!
    • On Francis’ machine, deleting 2000 entries went from taking about 73 seconds down to roughly 1 second! Wow!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Thanks Johannes Bechberger for your continued work on the Firefox Profiler.
  • ben.freist
  • Itiel
  • Nolan Ishii
  • Rob Lemley [:rjl]
  • Zach Harris

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • As part of the work related to “Origin Controls “ and “Unified Extensions UI”:
  • “applications” manifest key is fully deprecated (and not supported anymore) in Manifest Version 3 extensions, superseded by the existing “browser_specific_settings” manifest.json property – Bug 1797050
    • Please do not use the “applications” key in manifest.json files anymore
    • Bug 1797777 – Tracking meta for updating all non-test extensions in tree to use “browser_specific_settings” in their manifest
  • In Firefox >= 108, a simplified extension version string format is now recommended (warnings only but AMO/addons-linter will enforce this new format for Manifest Version 3 extensions) – Bug 1793925
  • Fixed a regression on calling “confirm” from extension action popups (regression originally introduced in Firefox 107 from Bug 1791972, fix landed Firefox 108 and uplifted to 107) – Bug 1796972
  • Investigated and identified underlying issues of the regression reported on LastPass in Firefox Nightly 108 Bug 1791415 – LastPass extension not working
    • Turned out that the regression was triggered by the addition of the implementation for the JS Array grouping proposal (Bug 1739648)
    • Regression currently fixed by disabling the Array grouping proposal by default in Nightly (Bug 1799619)
WebExtension APIs
  • Lifting user activation requirement on the action.openPopup API method (to align it with Chromium and Safari per agreement between browser vendors part of the W3C WebExtensions Community Group) – Bug 1755763
    • Currently limited on Nightly builds (locked behind the “extensions.openPopupWithoutUserGesture.enabled” about:config pref)
    • Bug 1799344 is tracking removing the pref and enabling the new behavior on all channels.
  • Fixed an issue with browser.runtime.onStartup not being fired on event pages after the event page got suspended once – Bug 1796586
  • As part of the ongoing work on the declarativeNetRequest API, Bug 1745758 has introduced an initial version of the declarativeNetRequest rule evaluation logic (Bug 1745761 is tracking hooking up the rule evaluation logic to the network)

Developer Tools

DevTools
  • Big thanks go out to:
    • :luke.swiderski, DOM mutation breakpoints are correctly synchronised between Inspector and Debugger panels (bug).
    • :zacnomore who fixed a JSON Viewer bug, which now correctly checks key modifiers before handling a keyboard shortcut (bug).
    • :zacnomore updated the toolbar height of the JSON Viewer to be consistent with the rest of our Toolbox UI (bug).
    • Emilio, who fixed several UI issues around flexbox in DevTools as soon as they were reported (bug 1, bug 2, bug 3)
  • Container queries
    • Nicolas (:nchevobbe) improved Container Queries support in the Inspector’s Rule View:
    • Nicolas also updated the Style Editor’s “Media Queries” sidebar to show all “At-rules”: @container, @media and @support (bug).

      https://snipboard.io/xSo2IN.jpg
  • Improvements
    • Hubert (:bomsy) improved the Debugger’s Pretty Print availability. The feature should now be more consistently available, and in the few cases where it really can’t be provided, the icon is still visible but disabled with an explanatory tooltip (bug).

      https://snipboard.io/0FQnwv.jpg
    • In the Network Monitor, Hubert added a feature to copy requests in a new format: “Copy as PowerShell” (bug)

      https://snipboard.io/kRM8Na.jpg
  • Maintenance
    • Alex (:ochameau) removed the devtools-source-map bundle (bug, bug), which was manually built and checked-in by our team, and used the occasion to add performance tests for our sourcemap usage (bug, bug)
    • Hubert also fixed a memory leak in our sourcemap implementation (bug).
    • Alex fixed a regression where DevTools would no longer close when Firefox was closed (bug).
    • Julian (:jdescottes) fixed a bug where the Inspector’s RuleView was blank when trying to render very long base64 URLs (bug).
    • Julian fixed the Add New Rule feature of the Inspector, which no longer worked on websites with CSPs (bug).
WebDriver BiDi
  • Sasha (:sasha) implemented a new event browsingContext.domContentLoaded, which is emitted when the document becomes interactive (bug).
  • Henrik (:whimboo) changed WebDriver BiDi to write the connection information to a JSON file called “WebDriverBiDiServer.json”, which provides more details than the previous “WebDriverBiDiActivePort” (bug).
  • Henrik contributed many new web platform tests for “no such element” errors (bug)
  • Henrik fixed WebDriver:NewWindow to open new tabs on about:blank instead of about:newtab (bug)
  • Henrik updated the serialisation of Document objects to follow the current WebDriver Classic specification and stop serialising them as WebElements (bug).

ESMification status

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

Picture-in-Picture

Performance

Performance Tools (aka Firefox Profiler)

  • added 2 new transforms: collapse indirect recursion, and focus category
    • Reminder: transforms are available from the right-click menu in the call tree, flame graph or stack chart. They transform the call tree to make it simpler to read.
    • (#4232) Collapse indirect recursion: this removes the nodes if they end up calling the same function. For example: From A -> B -> C -> A -> D, this keeps only A -> D. Thanks Simmo Saan!
    • (#4212) Focus category: this will keep only the nodes that belong to the same category as the currently right clicked node. Thanks Johannes Bechberger!

  • (#4286) add an item to show all local tracks for a process, and make that double clicking on the global track in the menu also shows all its local tracks.

  • (#4292) show the CPU model in the profile info panel

  • (#4295) sort the extensions list in the profile info panel
  • (#4296) fix a bug in the linux perf converter where single-letter executables wouldn’t be parsed. From our contributor Kitsu.
  • (#4305) display a blue border for selected tracks

  • (#4193) display categories in the flame graph’s tooltip — only available for converters, not available for gecko profiles yet. The style for the graphs in the tooltip changed for everybody though. From our contributor Johannes Bechberger.

  • (#4261) The sourceview is now focusable, which makes it possible to copy paste text out of it.
  • (#4199) When moving between panels, the sidebar categories stays open (previously the state was forgotten). Thanks Johannes Bechberger!
  • (Bug 1788647) When we profile browsertime runs in our CI, the names in the generated zip were previously pretty bad. Now they should be much clearer.

  • Then your usual crash fixes and dependency updates, and some other minor or invisible changes.

Search and Navigation

Screenshots

  • We’re starting to identify and burn down the list of blocking issues to ship the new component implementation and replace the extension implementation

Mozilla ThunderbirdHelp Keep Thunderbird Alive and Thriving In 2023

A few short years ago Thunderbird was on the verge of extinction. But you saved us! This year we began work on an Android version of Thunderbird, made excellent progress toward next year’s “Supernova” release, and hired more talented software engineers, developers, and designers to help us make Thunderbird better than ever in 2023.

Putting YOU In Control — Not A Corporation

Since 2003, part of our mission has been giving you a customizable communication experience full of powerful features. The other part of Thunderbird’s mission is more personal: Respecting your privacy and putting you in control – not a corporation. 

We never show advertisements, and we never sell your data. That’s because Thunderbird is completely funded by gifts from generous people just like you. You keep this great software free, and you keep us thriving! 

But accomplishing this mission is expensive. Consistently improving Thunderbird and keeping it competitive means ensuring your security in a constantly changing landscape of mail providers. It means maintaining complex server infrastructure. It means fixing bugs and updating old code. It means striving for full accessibility and a refreshing, modern design. 

Help Thunderbird Thrive In 2023

So today, we’re asking for your help. Did you know that development of Thunderbird is funded by less than 1% of the people who use and enjoy it? 

If you find value in using Thunderbird, please consider giving a gift to support it. Your contributions make a huge difference. And if you’ve already donated this year, THANK YOU!

Thank you for using Thunderbird, and thank you for trusting us with your important daily communications. 

The post Help Keep Thunderbird Alive and Thriving In 2023 appeared first on The Thunderbird Blog.

Hacks.Mozilla.OrgImproving Firefox stability with this one weird trick

The first computer I owned shipped with 128 KiB of RAM and to this day I’m still jarred by the idea that applications can run out of memory given that even 15-year-old machines often shipped with 4 GiB of memory. And yet it’s one of the most common causes of instability experienced by users and in the case of Firefox the biggest source of crashes on Windows.

As such, at Mozilla, we spend significant resources trimming down Firefox memory consumption and carefully monitoring the changes. Some extra efforts have been spent on the Windows platform because Firefox was more likely to run out of memory there than on macOS or Linux. And yet none of those efforts had the impact of a cool trick we deployed in Firefox 105.

But first things first, to understand why applications running on Windows are more prone to running out of memory compared to other operating systems it’s important to understand how Windows handles memory.

All modern operating systems allow applications to allocate chunks of the address space. Initially these chunks only represent address ranges that aren’t backed by physical memory unless data is stored in them. When an application starts using a bit of address space it has reserved, the OS will dedicate a chunk of physical memory to back it, possibly swapping out some existing data if need be. Both Linux and macOS work this way, and so does Windows except that it requires an extra step compared to the other OSes.

After an application has requested a chunk of address space it needs to commit it before being able to use it. Committing a range requires Windows to guarantee it can always find some physical memory to back it. Afterwards, it behaves just like Linux and macOS. As such Windows limits how much memory can be committed to the sum of the machine’s physical memory plus the size of the swap file.

This resource – known as commit space – is a hard limit for applications. Memory allocations will start to fail once the limit is reached. In operating system speech this means that Windows does not allow applications to overcommit memory.

One interesting aspect of this system is that an application can commit memory that it won’t use. The committed amount will still count against the limit even if no data is stored in the corresponding areas and thus no physical memory has been used to back the committed region. When we started analyzing out of memory crashes we discovered that many users still had plenty of physical memory available – sometimes gigabytes of it – but were running out of commit space instead.

Why was that happening? We don’t really know but we made some educated guesses: Firefox tracks all the memory it uses and we could account for all the memory that we committed directly.

However, we have no control over Windows system libraries and in particular graphics drivers. One thing we noticed is that graphics drivers commit memory to make room for textures in system memory. This allows them to swap textures out of the GPU memory if there isn’t enough and keep them in system memory instead. A mechanism that is similar to how regular memory can be swapped out to disk when there is not enough RAM available. In practice, this rarely happens, but these areas still count against the limit.

We had no way of fixing this issue directly but we still had an ace up our sleeve: when an application runs out of memory on Windows it’s not outright killed by the OS, its allocation simply fails and it can then decide what it does by itself.

In some cases, Firefox could handle the failed allocation, but in most cases, there is no sensible or safe way to handle the error and it would need to crash in a controlled way… but what if we could recover from this situation instead? Windows automatically resizes the swap file when it’s almost full, increasing the amount of commit space available. Could we use this to our advantage?

It turns out that the answer is yes, we can. So we adjusted Firefox to wait for a bit instead of crashing and then retry the failed memory allocation. This leads to a bit of jank as the browser can be stuck for a fraction of a second, but it’s a lot better than crashing.

There’s also another angle to this: Firefox is made up of several processes and can survive losing all of them but the main one. Delaying a main process crash might lead to another process dying if memory is tight. This is good because it would free up memory and let us resume execution, for example by getting rid of a web page with runaway memory consumption.

If a content process died we would need to reload it if it was the GPU process instead the browser would briefly flash while we relaunched it; either way, the result is less disruptive than a full browser crash. We used a similar trick in Firefox for Android and Firefox OS before that and it worked well on both platforms.

This little trick shipped in Firefox 105 and had an enormous impact on Firefox stability on Windows. The chart below shows how many out-of-memory browser crashes were experienced by users per active usage hours:

Firefox trick

You’re looking at a >70% reduction in crashes, far more than our rosiest predictions.

And we’re not done yet! Stalling the main process led to a smaller increase in tab crashes – which are also unpleasant for the user even if not nearly as annoying as a full browser crash – so we’re cutting those down too.

Last but not least we want to improve Firefox behavior in low-memory scenarios by responding differently to cases where we’re low on commit space and cases where we’re low on physical memory, this will reduce swapping and help shrink Firefox footprint to make room for other applications.

I’d like to send special thanks to my colleague Raymond Kraesig who implemented this “trick”, carefully monitored its impact and is working on the aforementioned improvements.

The post Improving Firefox stability with this one weird trick appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogThe best gift for anyone who wants to feel safer when they go online: Mozilla privacy products

The holidays are a wonderful time of the year where we are happily shopping for unique gifts for loved ones online. It also means we’re sharing our personal information online like giving out email addresses or phone numbers to sign up for discount programs or creating new accounts. Whenever we go online, we are asked to give our personal information, which can end up in the wrong hands. Once our information is out there and publicly available it’s even tougher to get it back. 

Here at Mozilla, a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet, we get that. Our privacy products, Firefox Relay and Mozilla VPN, have helped people feel safer when they go online and have blocked more than 1.5 million unwanted emails from people’s inboxes while keeping their real email addresses safe from trackers across the web. So, wherever you go online with Mozilla’s trusted products and services, your information is safer. 

Mozilla’s privacy products include Firefox Relay which hides your real email address and masks your phone number, and Mozilla VPN, our fast and easy-to-use VPN service, that helps protect the privacy of your network traffic. Together they help you keep what you do online private. And now, we are making it easier to get both Firefox Relay and Mozilla VPN together — for $6.99 a month when you sign up for an annual subscription. Whether you currently use one or none of these products, here’s more information on what makes these products a must-have whenever you go online. 

Mozilla privacy product #1: Firefox Relay

Since the launch of Firefox Relay, thousands of users have signed up for our smart, easy solution that hides their real email address to help protect their identity. This year, we continued to look to our users to improve and shape their Firefox Relay experience. In 2022, we added user-requested features which included increasing the email limit size to 10 MB and making Firefox Relay available as a Chrome extension. For Firefox Relay Premium users, we added a phone number mask feature to protect personal phone numbers. Whether you are signing up for loyalty programs, booking a restaurant reservation, or making purchases that require your phone number, now you can feel confident that your personal phone number won’t fall in the wrong hands. You can read more about the phone number mask feature here. Firefox Relay has helped keep thousands of people’s information safe. Check out the great coverage in The Verge, Popular Science, Consumer Reports and PCMag

Mozilla privacy product  #2: Mozilla VPN 

This year, Mozilla VPN, our fast and easy-to-use Virtual Private Network service, integrated with one of our users’ favorite Firefox Add-ons, Multi-Account Containers, to offer a unique, privacy solution that is only available in Firefox. We also included the ability to multi-hop, which means that you can use two VPN servers instead of one for extra protection. You can read more about this feature here. To date, thousands of people have signed up to Mozilla VPN, which provides device-level network traffic protection as you go on the web. Besides our loyal users, there are numerous news articles (Consumer Reports, Washington Post, KTLA-TV and The Verge) that can tell you more about how a VPN can help whenever you use the web. 

Better Together: Firefox Relay and Mozilla VPN

If there’s one person you shouldn’t forget on your list, it’s giving yourself the gift of privacy with Mozilla’s products. And now we’re offering Firefox Relay and Mozilla VPN together at $6.99 a month, when you sign up for an annual subscription. 

Developed by Mozilla, we are committed to innovate and deliver new products like Mozilla VPN and Firefox Relay. We know that it’s more important than ever for you to be safe, and for you to know that what you do online is your own business. By subscribing to our products, users support both Mozilla’s product development and our mission to build a better web for all. 

Subscribe today either from the Mozilla VPN or Firefox Relay site.

The post The best gift for anyone who wants to feel safer when they go online: Mozilla privacy products  appeared first on The Mozilla Blog.

The Talospace ProjectFirefox 107 on POWER

Firefox 107 is out, a modest update, though there are some developer-facing changes. As before linking still requires Dan Horák's patch from bug 1775202 or the browser won't link on 64-bit Power ISA (alternatively put --disable-webrtc in your .mozconfig if you don't need WebRTC). Otherwise the build works with the .mozconfigs from Firefox 105 and the PGO-LTO patch from Firefox 101.

Mozilla Release Management TeamFirefox Regional feedback: Let's start with Europe

We work hard as an organization to ship the best browser possible every 4 weeks with about 1000 new patches per release.

We ship new features to make our browser useful and easy to use. We also do platform work to be able to render new sites and web applications while remaining compatible with millions of websites created a decade (or more) ago.

This ongoing work also includes updating our translations in more than 100 languages thanks to our impressive community of localizers.

Yes, we want to make sure that Firefox can be used everywhere by everybody.

But could we maybe do better with a tighter feedback loop from our local communities?

Let’s give a few examples from our bug tracker:

Usually, major issues that may impact users in a specific country are fixed before we ship the final release, but occasionally we discover them after shipping and have to ship a fix in a dot release.

We talked about significant breakage with a regional impact, but what about papercuts?

Web compatibility, incorrect translations, internationalization issues, PiP subtitles support, certificates… The list of potential problems in a region that may affect our users is long and we may not know about them.

Maybe these issues are discussed in places we don’t know about, in languages we don’t speak. Maybe these issues are already filed in our bug tracker but don’t get prioritized correctly because we don’t know about their regional impact. Maybe a handful of specific regional issues are making Firefox hard to use in a specific country and the information is out there. Maybe all we need is somebody who understands these issues to surface these bugs in Bugzilla to our developers.

In a nutshell, we don’t know what we don’t know.

That is why I intend to work on studying and setting up basic feedback mechanisms to evaluate the health of Firefox in a few European countries so as to help my team (Release Management) prioritize product fixes for existing bugs which have the highest impact on our users and also to get help identifying regressions on our pre-release channels (Nightly, Beta, Developer Edition).

My very first goal is to make contacts with Mozillians in a handful of European countries (France, Germany, Italy, Poland, Spain) 1 that could help me identify issues that affect them locally, identify their top web compatibility issues, and maybe relay a general message for community feedback on pre-release channels.

To that effect, I created a Local Firefox room on the Mozilla Matrix instance. If you are interested in collaborating with me on this project, you are very welcome to join it and say hello (my nick is Pascal). I can speak with you in French and Spanish as well if you don’t feel comfortable speaking in English.

Thanks!

Pascal

  1. I am focusing on a few European countries for timezone and bandwidth reasons since I’ll do that alongside my role as a Firefox Release Manager, but I am open to feedback from other regions as well of course. 

Mozilla Addons BlogManifest v3 signing available November 21 on Firefox Nightly

Starting November 21, 2022 add-on developers are welcome to upload their Firefox Manifest version 3 (MV3) compatible extensions to addons.mozilla.org (AMO) and have them signed as MV3 extensions. Getting an early jump on MV3 signing enables you to begin testing your extension’s future functionality on Nightly to ensure a smooth eventual transition to MV3 in Firefox.

To be clear, Firefox will continue to support MV2 extensions for the foreseeable future, even as we welcome MV3 extensions in the release to general availability in Firefox 109 (January 17, 2023). Our goal has been to ensure a seamless transition from MV2 to MV3 for extension developers. Taking a gradual approach and gathering feedback as MV3 matures, we anticipate opportunities will emerge over time to modify our initial MV3 offering. In these instances, we intend to take the time necessary to make informed decisions about our approach.

Towards the end of 2023 — once we’ve had time to evaluate and assess MV3’s rollout (including identifying important MV2 use cases that will persist into MV3) — we’ll decide on an appropriate timeframe to deprecate MV2. Once this timeframe is established, we’ll communicate MV2’s closure process with advance notice. For now, please see this guide for supporting both MV2 and MV3 versions of your extension on AMO.

Mozilla’s vision for Firefox MV3

Firefox MV3 offers simplified and consolidated APIs, enhanced security and privacy mechanisms, and functionality to better support mobile platforms. As we continue to collaborate with other browser vendors and the developer community to shape MV3, we recognize cross-browser compatibility as a fundamental consideration. That said, we’re also implementing distinct elements to suit Firefox’s product and community needs. We want to give extension developers creative flexibility and choice, while ensuring users maintain access to the highest standards of extension customization and security. Firefox MV3 stands apart from other iterations of MV3 in two critical ways:

  1. While other browser vendors introduced declarativeNetRequest (DNR) in favor of blocking Web Request in MV3, Firefox MV3 continues to support blocking Web Request and will support a compatible version of DNR in the future. We believe blocking Web Request is more flexible than DNR, thus allowing for more creative use cases in content blockers and other privacy and security extensions. However, DNR also has important performance and compatibility characteristics we want to support.
  2. Firefox MV3 offers Event Pages as the background script in lieu of service workers, though we plan to support service workers in the future for compatibility. Event Pages offer benefits like DOM and Web APIs that aren’t available to service workers, while also generally providing a simpler migration path.

Over subsequent releases next year, we’ll continue to expand Firefox MV3 compatibility.

MV3 also ushers an exciting user interface change in the form of the new Unified Extensions button (already available on Firefox Nightly). This will give users direct control over which extensions can access specific web sites.

The Unified Extensions button will give Firefox users direct control over website specific extension permissions.

Users are able to review, grant, or revoke MV3 extension access to any website. MV2 extensions will display in the button interface, but permissions access is unavailable. Please see this post for more information about the new Unified Extensions button.

If you’re planning to migrate your MV2 extension to MV3, there are steps you can take today to get started. We always encourage feedback from our developer community, so don’t hesitate to get in touch:

The post Manifest v3 signing available November 21 on Firefox Nightly appeared first on Mozilla Add-ons Community Blog.

Mozilla Addons BlogUnified Extensions Button and how to handle permissions in Manifest V3

Manifest V3 (MV3) is bringing new user-facing changes to Firefox, including the Unified Extensions Button to manage installed and enabled browser extension permissions (origin controls), providing Firefox users control over extension access to their browsers. The first building blocks of this button were added to Nightly in Firefox 107 and will become available with the general release of MV3 in Firefox 109.

Unified Extensions Button

The Unified Extensions button will give Firefox users direct control over website specific extension permissions.

In MV2, host permissions are granted by the user at the time of install and there’s no elegant way for the user to change this setting (short of uninstalling/reinstalling and choosing different permissions). But with the new Unified Extensions Button in MV3 in Firefox, users will have easy access and persistent control over which extensions can access any web page, at any time. Users are free to grant ongoing access to a website, or make a choice per visit. To enable this, MV3 treats host permissions (listed in the extension manifest) as opt-in.

The button panel will display the user’s installed and enabled extensions and their current permission state. In addition to managing host permissions, the panel also allows the user to manage, remove, or report the extension. Extensions with browser actions will behave similarly in the toolbar as in the panel.

Manifest V2 (MV2) extensions will also display in the panel; however users can’t take actions for MV2 host permissions since those were granted at installation and this choice cannot be reversed in MV2 without uninstalling the extension and starting again.

How to deal with opt-in permissions in extension code

The Permissions API provides a way for developers to read and request permissions.

With permissions.request(), you can request specific permissions that have been defined as optional permissions in the manifest:

const permissionsToRequest = {
  permissions: ["bookmarks", "history"],
  origins: ["https://developer.mozilla.org/"]
}

async function requestPermissions() {
  function onResponse(response) {
    if (response) {
      console.log("Permission was granted");
    } else {
      console.log("Permission was refused");
    }

    return browser.permissions.getAll();
  }

  const response = await browser.permissions.request(permissionsToRequest);
  const currentPermissions = await onResponse(response);

  console.log(`Current permissions:`, currentPermissions);
}

This is handy when the request for permissions is tied to a user action like selecting a context menu item. Note that you cannot request for a permission that is not defined in the manifest.

Other times, you’ll want to react to a permission being granted or removed. This can be done with permissions.onAdded and permissions.onRemoved respectively.


function handleAdded(permissions) {
  console.log(`New API permissions: ${permissions.permissions}`);
  console.log(`New host permissions: ${permissions.origins}`);
}

browser.permissions.onAdded.addListener(handleAdded);

Finally, you can check for already existing permissions in two different ways: permissions.getAll() returns a list of all granted permissions and permissions.contains(permissionsToCheck) checks for specific permissions and resolves to true if, and only if, all checked permissions are granted.


// Extension permissions are:
// "webRequest", "tabs", "*://*.mozilla.org/*"

let testPermissions1 = {
  origins: ["*://mozilla.org/"],
  permissions: ["tabs"]
};

const testResult1 = await browser.permissions.contains(testPermissions1);
console.log(testResult1); // true

We always encourage feedback from our developer community, so don’t hesitate to get in touch:

The post Unified Extensions Button and how to handle permissions in Manifest V3 appeared first on Mozilla Add-ons Community Blog.

This Week In RustThis Week in Rust 469

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is lngcnv, a linguistic command line tool.

Thanks to Piotr Bajdek for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

373 pull requests were merged in the last week

Rust Compiler Performance Triage

A light week for triage. The biggest of the three regressions has a (hopeful) fix up already. The second biggest is a regression we are accepting for sake of correctness of incremental-compilation. The third regression is small and may well be removed as the type system internals are improved. max-rss seems stable.

Triage done by @pnkfelix. Revision range: 57d3c58e..96ddd32c

3 Regressions, 4 Improvements, 3 Mixed; 2 of them in rollups 40 artifact comparisons made in total

Full report here

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs

Upcoming Events

Rusty Events between 2022-11-16 - 2022-12-14 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

What you are essentially saying is: "Doctor, I'm writing C in Rust, and it hurts." To which the doctor will reply: "Then don't write C in Rust, and it won't hurt!"

Árpád Goretity on rust-users

Thanks to Michael Bryan for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Open Policy & Advocacy BlogMozilla Comments on FTC’s “Commercial Surveillance and Data Security” Advance Notice of Proposed Rulemaking

Like regulators around the world, the US Federal Trade Commission (FTC) is exploring the possibility of new rules to protect consumer privacy online. We’re excited to see the FTC take this important step and ask key questions surrounding commercial surveillance and data security practices, from advertising and transparency to data collection and deceptive design practices.

Mozilla has a long track record on privacy. It’s an integral aspect of our Manifesto, where we state that individuals’ security and privacy on the internet are fundamental and must not be treated as optional. It’s evidenced in our products and in our collaboration with others in industry to forge solutions to create a better, more private online experience.

But we can’t do it alone. Without rules of the road, sufficient incentive won’t exist to shift the rest of the industry to more privacy preserving practices. To meet that need, we’ve called for comprehensive privacy legislation like the American Data Privacy and Protection Act (ADPPA), greater ad transparency, and strong enforcement around the world. In our latest submission to the FTC, we detail the urgent need for US regulators and policymakers to take action to create a healthier internet.

At a high level, our comments focus on:

Privacy Practices Online: Everyone should have control over their personal data, understand how it’s obtained and used, and be able to access, modify, or delete it. To that end, Mozilla has long advocated for companies to adopt better privacy practices through our Lean Data Practices methodology. It’s also important that rules govern not just the collection of data, but the uses of that data in order to limit harmful effects – from the impact of addictive user interfaces on kids to the use of recommendation systems to discrimination in housing and jobs.

Privacy Preserving Advertising: The way in which advertising is conducted today is broken and causes more harm than good.  At the same time, we believe there’s nothing inherently wrong with digital advertising. It supports a large section of services provided on the web and it is here to stay, in some form. A combination of new research, technical solutions, increased public awareness, and effective regulatory enforcement can reform advertising for the future of the web.

Deceptive Design Practices: Consumers are being tricked into handing over their data with deceptive patterns, then that data is used to manipulate them. The use of deceptive design patterns results in consumer harms including limited or frustrated choice, lower quality, lower innovation, poor privacy, and unfair contracts. This is bread-and-butter deception – the online manifestation of what the FTC was established to address – and it is critical that the FTC has the authority to take action against such deception.

Automated Decision Making Systems (ADMS): For years, research and investigative reporting have uncovered instances of ADMS that cause or enable discrimination, surveillance, or other harms to individuals and communities. The risks stemming from ADMS are particularly grave where these systems affect, for example, people’s livelihoods, safety, and liberties. We need enforceable rules that hold developers and deployers of ADMS to a higher standard, built on the pillars of transparency, accountability, and redress.

Systemic Transparency and Data Sharing: We encourage the FTC to strengthen the mechanisms that empower policymakers and trusted experts to better understand what is happening on major internet platforms. To achieve this, we need greater access to platform data (subject to strong user privacy protections), greater research tooling, and greater protections for researchers.

Practices surrounding consumer data on the internet today, and the resulting societal harms, have put people and trust at risk. The future of privacy online requires industry to step up to protect and empower people online, and demands that lawmakers and regulators implement frameworks that create the ecosystem and incentive for a better internet ahead.

To read Mozilla’s full submission, click here.

The post Mozilla Comments on FTC’s “Commercial Surveillance and Data Security” Advance Notice of Proposed Rulemaking appeared first on Open Policy & Advocacy.

The Mozilla Blog4 ways a Firefox account comes in handy

An illustration shows a Firefox browser window with cycling arrows in the middle, a pop-up hidden password field and the Pocket logo next to the address bar.<figcaption>Credit: Nick Velazquez / Mozilla</figcaption>

Even people who are very online can use some help navigating the internet – from keeping credit card details safe when online shopping to generating a password when one simply doesn’t have any more passwords in them.

Using Firefox as your main browser helps take care of that. Want to level up? With a Firefox account, you can take advantage of the following features whether you’re using your desktop device, tablet or your phone.

1. See your bookmarks across devices

To easily find your go-to places on the web (aka your bookmarks) on your phone or tablet, use Firefox mobile for Android or iOS. Not only will you get the same privacy-first experience you enjoy when using Firefox on desktop, you’ll also have Firefox Sync, which lets you see your bookmarks wherever you log into your Firefox account. Firefox Sync allows you to choose the data you want to take with you. In addition to bookmarks, you also have the option to sync your browsing history, open tabs and your installed add-ons across devices. 

A Firefox browser pop-up shows a window asking the user to choose what they want to sync.

2. Use a secure password manager that goes with you wherever you are

Firefox has a built-in password manager that can generate a secure password when you’re creating a new account on a website. (Just click the password field and hit Use a Securely Generated Password. Firefox will save your login for that site.) When you’re using Firefox on your mobile device and you’re logged into your Firefox account, you’ll see your usernames and passwords right where you saved them.

3. Shop securely across devices with credit card autofill

Firefox will also automatically fill in credit card information that you saved when purchasing something online. You just need to enter your CVV number, which Firefox doesn’t save as a security measure. For extra protection, you can choose to require your device’s password, face ID or fingerprint before Firefox autofills your credit card data. Here’s how to turn that on. 

While this works both on desktop and mobile devices when you’re signed into your Firefox account, you can also opt to start shopping on one device and send your browser tab to another to complete your purchase. For example, you can add items to an online shopping cart on your phone but choose to check out on your laptop. 

4. Stay productive now, save that article or video for later

The internet is full of stories, whether it’s a long read about Gen Z’s internet habits or a video about nerdcore hip-hop. They’re a fun way to learn about the world, but sometimes, we need to set them aside so we can finish that research paper for class or slide deck for work. Just hit the Pocket button in the toolbar to easily save an article or video. When you’re ready, just log into Pocket with your Firefox account and you’ll find everything you’ve saved.

A screenshot from the Firefox browser shows the Pocket logo next to the address bar.

Switching to Firefox on your iOS or Android device is easy

If you already use Firefox on desktop, then you already know how Firefox beats other major browsers on security, privacy and functionality. You can easily enjoy the same benefits with a Firefox account on your phone or tablet by making Firefox your default browser on mobile. Here’s how to do that: 

A table shows a comparison of Firefox's portability vs. other browsers.

The internet can bring us to our favorite online spaces and take us to new, fascinating places at the tip of our fingers. A Firefox account lets you enjoy all the web has to offer while keeping your data safe – wherever you are. 

Firefox browser logo

Get Firefox

Get the browser that protects what’s important

The post 4 ways a Firefox account comes in handy appeared first on The Mozilla Blog.

Mozilla AccessibilitySignificant Improvements for Screen Readers Now in Nightly Firefox

A couple of months ago, we shared an update on our Cache the World project, covering the ongoing re-write of the Firefox accessibility engine. The project aims to improve Firefox’s overall performance for users of assistive technologies (ATs) like screen readers and to reduce crashes and hangs. It will also make the accessibility engine easier to maintain and simplify adding new features going forward.

In our last post, we provided instructions for enabling the new engine in Firefox Nightly by changing an experimental setting and encouraged adventurous Windows OS users to opt in and try it out. Thanks to your testing and feedback, and further work by the engineering team, the new Firefox accessibility engine is now solid enough that we have enabled it for all Firefox Nightly users, starting with the Nightly build on November 15th (id: 20221115095444 or newer). Unless you’ve changed Firefox’s update settings, it will attempt to automatically update twice per day, but you can perhaps speed that up by manually checking for updates. To do that, navigate to the Firefox menu, then to Help, and then to About Nightly. Alternatively, open the Help menu from the Windows menu bar and choose About Nightly. That will do an update check; if an update is available Firefox will download it. After a restart to finish the update, Firefox will be using the new accessibility engine.

We’ve enabled this in Nightly so that we can gather more feedback, both from AT users, as well as from non-AT users or users who may not know Firefox’s accessibility engine has been activated due to an OS feature or third party application. We believe that the experience for most Nightly screen reader users will be a significant improvement over the old engine, as many of you have told us after opting in to the Nightly preview. As well as direct feedback we receive, enabling it by default in Nightly will allow us to gather automatic feedback via crash reports. We will also be receiving feedback from a larger group of users with more diverse use cases which will help us move closer to a beta release and an eventual release to the full Firefox audience.

If you experience slow-downs, more frequent crashes, or missing capabilities that prevent you from using Firefox Nightly as you normally would, you can revert to the old accessibility engine by going to Firefox Settings, entering accessibility cache into the search box, tabbing to the Accessibility cache check box and turning it off. After a restart, you’ll be back on the old engine. If this does become necessary or you have any other feedback to offer, please file a Bugzilla report or stop in at our Matrix chat and let us know.

The post Significant Improvements for Screen Readers Now in Nightly Firefox appeared first on Mozilla Accessibility.

The Mozilla BlogOver a quarter of parents believe their children don’t know how to protect their information online – Firefox can help with that

Parenting has never been easy. But with a generation growing up with groundbreaking technology, families are facing new challenges along with opportunities as children interact with screens everywhere they go — while learning at school, playing with friends and for on-the-go entertainment. 

We are previewing a new Mozilla Firefox survey conducted in partnership with YouGov to better understand families’ needs in the United States, Canada, France, Germany and the United Kingdom that we will release fully in January 2023. We wanted to hear parents’ thoughts around online safety, as well as their biggest concerns and questions when their kids navigate through the sticky parts of the web before getting to the good stuff. Here are the top insights we learned from the survey:

  • Many parents believe their kids have no idea how to protect themselves online. About 1 in 3 parents in France and Germany don’t think their child “has any idea on how to protect themselves or their information online.” In the U.S., Canada and the U.K., about a quarter of parents feel the same way. 

As far as the safety of the internet itself, parents in the U.S. seem to be more trusting across all the countries surveyed: Almost 1 in 10 said they believe the internet is “very safe” for children. Parents in France trust the internet the least, with almost 75% finding it to be unsafe to some degree. 

  • U.S. parents spend the most time online compared to parents in other countries, and so do their children. Survey takers in the U.S. reported an average of 7 hours of daily internet use via web browsers, mobile apps and other means. Asked how many hours their children spend online on a typical day, U.S. parents said an average of 4 hours. That’s compared to 2 hours of internet use among children in France, where parents reported spending about 5 hours online everyday. No matter where a child grows up, they spend more time online a day as they get older.  
  • Yes, toddlers use the web. Parents in North America and Western Europe reported introducing their kids to the internet some time between 2 and 8 years old.  North America and the U.K. skew younger, with kids first getting introduced online between 2 and 5 for about a third of households.  Kids are introduced to the internet in France and Germany when they are older, between 8 to14 years old. 

Overall, the survey showed parents to be content with the time in which they chose to introduce their children to internet safety. Although in retrospect, over 1 in 5 parents in the U.S, Canada and France would have preferred to do so at an even younger age. 

Most parents speak to their children about internet safety between the ages of 5 and 8. Whatever age, these conversations don’t have to be difficult. OK, it may be a teeny-bit awkward – but you can lean on Firefox to help out with a few starter topics to get the conversation started. To find out more about starting a Tech Talk, here is our Firefox guide to help steer the conversation in the right direction. 

Methodology: 

This survey was conducted among parents between the ages of 25 and 55 years old living in the U.S., Canada, Germany, France and the U.K., who have children between 5 and 17 years old. The survey interviewed 3,699 participants between Sept 21 – Sept. 29, 2022. 


The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

The post Over a quarter of parents believe their children don’t know how to protect their information online – Firefox can help with that  appeared first on The Mozilla Blog.

Mozilla ThunderbirdImportant Message For Microsoft Office 365 Enterprise Users

In a coming release of the Thunderbird 102.x series, we will be making some changes to the way we handle OAuth2 authorization with Microsoft accounts, and this may involve some extra work for users currently using Microsoft-hosted accounts through their employer or educational institution.

In order to meet Microsoft’s requirements for publisher verification, it is necessary for us to switch to a new Azure application and application ID. However, some of these accounts are configured to require administrators to approve any applications accessing email.

We have already made the necessary changes in the current Thunderbird beta series.

If you are using a hosted Microsoft account, please temporarily launch Thunderbird 107.0b3 or later (download here) and attempt to log in, making sure to select “OAuth2” as your authentication method.

If you encounter a screen saying “Need admin approval” during the login process, please contact your IT administrators to approve the client ID 9e5f94bc-e8a4-4e73-b8be-63364c29d753 for Mozilla Thunderbird (it may appear to admins as “Mzla Technologies Corporation”).

We request the following permissions:

  • IMAP.AccessAsUser.All
  • POP.AccessAsUser.All
  • SMTP.Send
  • offline_access

After doing this, you may return to using the version you were using previously.

The post Important Message For Microsoft Office 365 Enterprise Users appeared first on The Thunderbird Blog.

Andrew SutherlandAndrew’s Searchfox Roadmap 2022

Searchfox (source, config source) is Mozilla’s primary code searching tool for Firefox introduced by Bill McCloskey in 2016 which built upon prior work on DXR. This roadmap post is the second of two posts attempting to lay out where my personal efforts to enhance searchfox are headed and the decision making framework that guides them. The first post was a more abstract product vision document and can be found here.

Discoverable, Extensible, Powerful Queries

Searchfox has a new “query” endpoint introduced in bug 1762817 which is intended to enable more powerful queries. Queries are parsed using :katsquery-parser crate which allows us to support our existing (secret) key:value syntax in a more rigorous way (and with automatic parse correction for the inevitable typos). In order to have a sane execution model, these key/value pairs are mapped through an extensible configuration file into a pipeline / graph execution model whose clap-based commands form the basis of our testing mechanism and can also be manually built and run from the command-line via searchfox-tool.

A graphviz diagram of the execution graph for the "foo" query.  The diagram captures that the execution graph consists of 2 phases, with the first phase consisting of 3 parallel pipelines: "file-search", "semantic-search", and "text-search".  Those 3 pipelines feed into "compile-results" which then passes its output to the 2nd phase which contains the "display" job.  If you're interested in more details, see below for the "check output for the query" link which links the backing JSON which is the basis for the graph.

Above you will find a diagram rendering the execution pipeline of searching for foo manually created from the JSON insta crate check output for the query. Bug 1763005 will add automatically generated diagrams as well as further expanding on the existing capability to produce markdown explanations of what is happening in each stage of the pipeline and the values at each stage.

While a new query syntax isn’t exciting on its own, what is exciting is that this infrastructure makes it easier to add functionality with confidence (and tests!). Some particular details worth discussing:

Customizable, Shareable Queries

Bug 1799796: Do you really wish that you could issue a query like webidl:CacheStorage to search just our WebIDL files for “CacheStorage”? Does your team have terminology that’s specific to your team and it would be great to have special search terms/aliases but it would feel wrong to use up all the cool short prefixes for your team? The new query mechanism has plans for these situations!

The new searchfox query endpoint looks like /mozilla-central/query/default. You’ll note that default looks like something that implies there are non-default options. And indeed, the plan is to allow files like this example “preset” dom.toml file to layer additional “terms” and “aliases” onto the base query_core.toml file as well as any other presets you want to build off of. You will need to add your preset to the mozsearch-mozilla repository for the tree in question, but the upside is that any query links you share will work for other people as well!

Faceting in Search Results with Shareable URLs

Bug 1799802: The basic idea of faceted search/filtering is:

  • You start with a basic search query.
  • Your results come back, potentially quite a lot of them. Too many, even!
  • The faceting logic looks at the various attributes of the results and classifies or “facets” them. Does that sound too circular? We just throw things in bins. If a bin ends up having a lot of things in it and there’s some hierarchy to its contents, we recursively bin those contents.
  • The UI presents these facets (bins), giving you a high level overview of the shape of your results, and letting you limit your results to only include certain attribute values, or to exclude based on others.
  • The UI is able to react quickly because it already knows about the result set
A barely relevant screenshot of the bugxhibit UI based on the SIMILE exhibit faceting system.  The UI shows a list of bugzilla bugs grouped by date, with a column consisting of the following facets: bugzilla product, bugzilla component, bug status, bug resolution, assignee, whiteboard flags, keywords, patch count, priority, target milestone, version, QA contact, OS, Votes.  Follow the related link below for a list of blog posts with more details on this.

The cool screenshot above is of a SIMILE Exhibit-based faceting UI I created for bugzilla a while back which may help provide a more immediate concept of how faceting works. See my exhibit blog tag for more in the space.

Here are some example facets that search can soon support:

  • Individual result paths: Categorize results by the path in which they happen. Do you not want to look at any results under devtools/? Push a button and filter out all those devtool results in an instant! Do you only care about layout/? Push a button and only see layout results!
  • Subsystem facets: moz.build files labels every file in mozilla-central so that it has an associated Bugzilla Component. As of Bug 1783761 searchfox now also derives a subsystem mapping from the bugzilla components, which really just means that if you have a component that looks like “Core :: Storage: IndexedDB”, searchfox transforms that first colon into a slash so we get “Core/Storage/IndexedDB”. This would let you restrict your results to “Core/Storage” without having to manually select every Storage bugzilla component or path by hand.
  • Symbol relationships: Did you search for a base class or virtual method which has a number of subclasses/overrides? Do you only care about some subset of the class hierarchy? Then restrict your results to whatever the portion of the set you care about.
  • Recency of changes: Do you only care about seeing results whose blame history indicates it happened recently? Can do! Or maybe you only want to see code that hasn’t been touched in a long time? Uh, that might work less well until we improve the blame situation in Bug 1517978, but it’s always nice to have something to dream about.
  • Code coverage: Only want to see results that runs a lot under our tests? Sure thing! Only want to see results that seem like we don’t have test coverage for? By looking at the result you’re now morally obligated to add test coverage!

Key to this enhancement is that the faceting state will be reflected in the URL (likely the hash) so that you can share it or navigate forward and back and the state will be the same. It’s all too common on the web for state like this to be local to the page, but key to my searchfox vision is that URLs are key. If you do a lot of faceting, the URL may become large an unwieldy, but continuing in the style of :arai‘s fantastic work on Bug 1769936 and follow-ups to make it easy to get usable markdown out of searchfox, we can help wrap your URL in markdown link syntax so that when you paste it somewhere markdown-aware, it looks nice.

Additional Query Constraints

A bunch of those facets mentioned above sound like things that it would be neat to query on, right? Maybe even put them in a preset that you can share with others? Yes, we would add explicit query constraints for those as well, as well as to provide a way to convert faceted query results into a specific query that does not need to be faceted in Bug 1799805.

A variety of other additional queries become possible as well:

  • Searching for lines of text that are near each other, or not near each other, or maybe both inside the same argument list.
  • Locating member fields by type (Bug 1733217), like if you wanted to find all member fields that are smart or raw pointer references to nsILoadInfo.
  • Bug 1779340: Function/method argument list magic.

Result Context Lines For All Result Types, Including Automatic Context

<figcaption>Current query results for C:4 AddOrPut</figcaption>

A major limitation for searchfox searches has been a lack of support for context lines. (Disclaimer: in Bug 1414954 I added secret support for fulltext-only queries by prefixing a search with context:4 or similar, but you would then want to force a fulltext query like context:4 text:my actual search or context:4 re:my.*regexp[y]?.*search.) The query mechanism already supports full context, as the above screenshot is taken from the query for C:4 AddOrPut but note that the UX needs more passes and the gathering mechanism currently needs optimization which I have a WIP for in Bug 1794177

Diagrams

A screenshot of the query `calls-between:'FrameLoader::loadInSameDocument' calls-between:'dispatchWindowEvent'` which is linked below.

The above is a live diagram I just generated with the query calls-between:'FrameLoader::loadInSameDocument' calls-between:'dispatchWindowEvent' against our searchfox index of webkit.

screenshot of the graph resulting from the query `calls-between:'ClientSource::Focus' calls-between:'WindowClient_Binding::focus' depth:10`

This next diagram is a live diagram from mozilla-central I just generated with the query calls-between:'ClientSource::Focus' calls-between:'WindowClient_Binding::focus' depth:10 and which demonstrates searchfox’s understanding of our IPDL bindings, as each of the SendP*/RecvP* pairs is capturing the IPC semantics that are only possible because of searchfox’s understanding of both C++ and IPDL.

The next steps in diagramming will happen in Bug 1773165 with a focus on making the graphs interactive and applying heuristics related to graph clustering based on work on the “fancy branch” prototype and my recent work to derive the sub-component mapping for files that can in turn be propagated to classes/methods so that we can automatically collapse edges that cross sub-component boundaries (but which can be interactively expanded). This has involved a bit of yak-shaving on Bug 1776522 and Bug 1783761 and others.

Note that we also support calls-to:'Identifier' in the query endpoint as well, but the graphs look a lot messier without the clustering heuristics, so I’m not including any in this post.

Most of my work on searchfox is motivated by my desire to use diagrams in system understanding, with much of the other work being necessary because to make useful diagrams, you need to have useful and deep models of the underlying data. I’ll try and write more about this in the future, but this is definitely a case where:

  1. A picture is worth a thousand words and iterations on the diagrams are more useful than the relevant prose.
  2. Providing screen-reader accessible versions of the underlying data is fundamental. I have not yet ported the tree-dual version of the diagram logic from the “fancy” branch and I think this is a precondition to an initial release that’s more than just a proof-of-sorta-works.

Documentation Integration

Our in-tree docs rendered at https://firefox-source-docs.mozilla.org/ are fantastic. Searchfox cannot replace human-authored documentation, but it can help you find them! Have you spent hours understanding code only to find that there was documentation that would help clarify what was going on only after the fact? Bug 1763532 will teach searchfox to index markdown so that documentation definitions and references show up in search and that we can potentially expose those in context menus. Subsequent steps could also index comment contents.

Bug 1458882 will teach searchfox how to link to the rendered documentation.

Improved Language Support

New Language Support via SCIP

With the advent of LSIF and SCIP and in particular the work by the team at sourcegraph to add language indexing built on existing analysis tools, there is now a tremendous amount of low hanging fruit in terms of off-the-shelf language indexing that searchfox can potentially ingest. Thanks to Emilio‘s initial work in Bug 1761287 we know that it’s reasonably straightforward to ingest SCIP data from these indexers.

For each additional language we want to index, we expect the primary effort required will be to make the indexer available in a taskcluster task and appropriately configure it to index the potentially many component roots within the mozilla-central mono-repo. There will also be some searchfox-specific effort required to map the symbols into searchfox’s symbol namespace.

Specific languages we can support (better):

  • JavaScript / TypeScript via scip-typescript (Bug 1740290): scip-typescript potentially allows us to expose the same enhanced understanding of JS code, especially module-based JS code, that you experience in VS Code, including type inference/extraction from JSDoc. Additionally, in Bug 1775130 we can leverage the amazing eslint work already done to bring enhanced analysis to more confusing situations like our mochitests, which deal with more complex global situations. Overall, this can allow us to move away from searchfox’s current “soupy” understanding of JS code, where it assumes that all the JS it ever sees is running in a single global.
  • Python via scip-python (Bug 1426456)
  • Java / Kotlin via scip-java (Bug 1490144)

Improved C++ Support

Searchfox’s strongest support is for C++ (and its interactions with XPIDL and IPDL), but there is still more to do here. Thankfully Botond is working to improve C++ template handling in Bug 1781178 and related bugs.

Other enhancements:

Improved Mozilla-Specific Language Support

mozilla-central contains a number of Mozilla-specific Interface Definition Languages (IDLs) and Domain Specific Languages (DSLs). Searchfox has existing support for:

  • XPIDL .idl files: Our C++ support here is pretty good because XPIDL files are not preprocessed (beyond in-language support of #include and the ability to put pass-through C++ code, including preprocessor directives, inside %{C++ and %} demarcated blocks). Bug 1761689 tracks adding support for constants/enums, which are not currently supported, and I have WIPs for this. Bug 1800008 tracks adding awareness of the rust bindings.
  • IPDL .ipdl and .ipdlh files: Our C++ support here is good as long as the file is not pre-processed and the rust IPDL parser hasn’t fallen behind the Python parser. Unfortunately a lot of critical files like PContent.ipdl are pre-processed, so this currently creates massive blind-spots in searchfox’s understanding of the system. Bug 1661067 will move us to having the Python parser/code generator emit data searchfox can ingest.

Searchfox has planned support for:

Pernosco Integration

A timeline visualization of data extracted from a pernosco session using pernosco-bridge.  The specific data is showing IndexedDB database transaction lifetimes happening under the chrome origin with specific calls to AddOrPutRequestOp and CommitOp occurring. (Caption: pernosco-bridge IDB timeline visualization)

Searchfox’s language indexing is inherently a form of static analysis. Consistent with the searchfox vision saying that “searchfox is not the only tool”, it makes sense to attempt to integrate with and build upon the tools that Firefox engineers are already using. Mozilla’s code-coverage data is already integrated with searchfox, and the next logical step is to integrate with pernosco. I created pernosco-bridge as an experimental means of extracting data from pernosco and allowing for interactive visualizations.

The screenshot above is an example of a timeline graph automatically extracted from a config file to show data relevant to IndexedDB. IndexedDB transactions were hierarchically related to their corresponding database and the origin that opened those databases. Within each transaction, ObjectStoreAddOrPutRequestOp and CommitOp operations are graphed. Clicking on the timeline would direct the pernosco tab to jump to those instants in time.

A pernosco-bridge visualization of the sequence of events for DocumentLoadListener handling a redirect. (Caption: pernosco-bridge DocumentChannel visualization)

The above is a different visualization based on a config file for DocumentChannel to help group what’s going on in a pernosco trace and surface the relevant information. If you check out the config file, you will probably find it inscrutable, but with searchfox’s structured understanding of C++ classes landed last year in Bug 1641372 we can imagine leveraging searchfox’s understanding of the codebase to make this process friendly. More importantly, there is the potential to collaboratively build a shared knowledge base of what’s most relevant for classes, so everyone doesn’t need to re-do the same work.

Object graph expressing parent relationships amongst windowGlobalParent and canonicalBrowsingContext, with URI values extracted for the canonicalBrowsingContext.  More detail in the paragraph below.

Using the same information pernosco-bridge used to build the hierarchically organized timelines above with extracted values like URIs, it can also build graphs of the live objects at any moment in time in the trace. Above we can see the relationship between windowGlobalParent instances and their corresponding canonicalBrowsingContexts, plus the URIs of the canonicalBrowsingContexts. We can imagine using this to help visualize representative object graphs in searchfox.

Old screenshot of pecobro source listing with a function `checkIfInRange` with a sparkline showing activity in a trace that is interleaved with source code lines

We can also imagine doing something like the above screenshot from my prior experiment pecobro where we interleave graphs of function activity into source listings.

Token-Centric Blame / “hyperannotate” Support via Microannotate

A screenshot of the microannotate output of https://clicky.visophyte.org/files/microannotate/nsWebBrowserPersist.cpp.html around the call to SaveURIInternal in nsWebBrowserPersist::SerializeNextFile that demonstrates blame tracking occurring on a per-token basis. (Caption: A demonstration of microannotate’s output)

Quoting my dev-platform post about the unfortunate removal of searchfox’s first attempt at blame-skipping: “Revision history and the “annotate” / “blame” UIs for revision control are tricky because they’re built on a sequential, line-centric data-model where moving a function above another function in a file results in a destructive representational decision to treat one function as continuing through history and the other function as removed and then re-added as new code. Reformatting that maintains the overall sequence of tokens but changes how they are distributed across multiple lines also looks like removal of all of the old code and the addition of new code. Tools frequently perform heuristic-based post-passes to help identify intra-line changes which are reflected in diff UIs, as well as (entire) lines of code that are copied/moved in a revision (ex: Phabricator does this).”

The plan to move forward is to move to a token-centric approach using :marco‘s microannotate project as tracked in Bug 1517978. We would also likely want to combine this with heuristics that skip over backout pairs. The screenshot at the top of this section is of example output for nsWebBrowserPersist.cpp where colors distinguish between different blame revision origins. Note that the addition of specific arguments can be seen as well as changes to comments.

Source Listings

  • Bug 1781179: Improved syntax and semantic highlighting in C++ for the tip/head indexed revision.
  • Bug 1583635: Show expansion of C++ macros. Do you ever look at our XPCOM macrology and wish you weren’t about to spend several minutes clicking through those macros to understand what’s happening? This bug, my friend, this bug.
  • Bug 1796870: Adopt use of tree-sitter as a tokenizer, which can improve syntax highlighting for other languages as well as provide position: sticky context, not just for the tip/head indexed revision but also for historical revisions!
  • Bug 1799557: Improved handling of links to source files that no longer exist, by offering to show the last version of the file that existed or trying to direct the user to the successor code.
  • Bug 1697671: Link resource:// and chrome:// URLs in source listings to the underlying source files
  • Test Info Boxes
    • Bug 1785129: Add an info box mechanism to searchfox source listing pages to indicate the need for data collection review (“data review”)
    • Bug 1797855: Joel Maher and friends have been adding all kinds of great test metadata for searchfox to expose, and soon, this bug shall expose that information. Unfortunately there’s some yak shaving related to logging that remains underway.
  • Bug 1797857: Extend the “Go to header file”/”Go to source file” mechanism to support WPT `.headers` files and xpcshell/mochitest `^headers^` files.

Alternate Views of Data

screenshot of a table UI showing the field layout of nsIContent, mozilla::dom::FragmentOrElement, and mozilla::dom::Element across the 4 supported platforms (win64, macosx64, linux64, and android-armv7), showing field offsets and sizes.

Searchfox is able to provide more than source listings. The above screenshot shows searchfox’s understanding of the field layouts of C++ classes across all the platforms searchfox indexes on as rendered by the “fancy” branch prototype. Bug 1468445 tracks implementing a production quality version of this, noting that the data is already available, so this is straightforward. Bug 1799517 is a variation on this which would help us explicitly capture the destructor order of C++ fields.

Bug 1672307 tracks showing the binary size impact of a given file, class, etc.

Source Directory Listings

In the recently landed Bug 1783761 I moved our directory listings into rust after shaving a whole bunch of yaks. Now we can do a bunch of queries on data about files. Would you like to see all the tests that are disabled in your components? We could do this! Would you like to see all the files in your components that have been modified in the last month but have bad coverage? We could also do that! There are many possibilities here but I haven’t filed bugs for them.

Mozilla Development Workflow Improvements

  • Bug 1732585: Provide a way to search (phabricator revision) review/bugzilla comments related to the current file
  • Bug 1657786: Create searchfox taskcluster mode/variant that can run the C++ indexer only against changed files for try builds / phabricator requests
  • Bug 1778802: Consider storing m-c analysis data in a git repo artifact with a bounded history via `git checkout --orphan` to enable try branch/code review features and recent semantic history support

Easier Contributions

The largest hurdle new contributors have faced is standing up a virtual machine. In Bug 1612525 we’ve added core support for docker, and we have additional work to do in that bug to document using docker and add additional support for using docker under WSL2 on Windows. Please feel free to drop by https://chat.mozilla.org/#/room/#searchfox:mozilla.org if you need help getting started.

Deeper Integration with Mozilla Infrastructure

Currently much of searchfox runs as EC2 jobs that exist outside of taskcluster, although C++ and rust indexing artifacts, as well as all coverage data and test info data, come from taskcluster. Bug 1598502 tracks moving more of searchfox into taskcluster, although presumably the web-servers will still need to exist outside of taskcluster.

Frederik BraunNew Methods for Cross-Origin Isolation: Resource, Opener & Embedding Policies with COOP, COEP, CORP and CORB

This document sat in my archives. I originally created it so I would have notes for my participation in the Working Draft podcast - a German podcast for web developers. That's why this article is in German as well. The podcast episode 452 was published in 2020, but I never published this …

The Mozilla BlogHow to talk to kids about social media

An illustration shows a confused face looking up at various social media icons. (Credit: Nick Velazquez / Mozilla)

Joy Cho smiles while posing for a photo.
Joy Cho is the founder and creative director of the lifestyle brand and design company Oh Joy! For two years in a row, she was named one of Time’s 30 Most Influential People on the Internet and has the most followed account on Pinterest with over 15 million followers. You can also follow her on Instagram and learn more about her work on ohjoy.com.

I’m part of the first generation of parents of children who’ve been exposed to the internet since birth. I remember the days without cellphones, email, social media, streaming TV shows and quick access to the web (hello, that AOL dial-up ringtone!). So often, I feel unsure about how to set up my kids for success with their own technology use – something my own parents never had to figure out. 

Social media in particular can be scary to think about when it comes to my kids, but I know it can be great too. I’m a designer and business owner who has made amazing connections online, and it has allowed me to create my own community on the internet.

But I have also seen the toll social media takes: making unhealthy comparisons, doomscrolling and being unable to focus on one thing at a time. Do you often find yourself watching TV while also scrolling through Instagram on your phone? I sure do. 

So how do we talk to our kids about social media?

First, consider your own habits

Every family is different and should handle the topic in a way that works for them. Just like rules for treats, chores, bedtime and allowance, social media is just another part of parenting that involves making the best choices for your family. 

From my experience, as someone who spends a lot of time on social media for work, I believe that we should model the behavior we want our kids to have. Here are some things I’m doing and asking myself: 

1. How much am I on social media? And what am I doing while there?

How much of that time is productive (creating community, catching up with friends, learning something) vs. how much of it feels like I’m on social media just to pass time?

Looking at our own habits first will better inform how we can be role models for our kids. If you feel that you’re on your device more than you’d want to, this is a great time to modify that before establishing what to expect of your kids.

2. How much privacy do I want?

I know some families who don’t post any photos of their kids online. And then some families share what feels like every moment of every day in their kids’ lives on a public social media account. Some people post photos but cover their kids’ faces or only show their kids from the back. 

I used to share my kids’ photos and videos all the time when they were younger, but as they got older, I felt the need to protect their privacy more and more. Now, you will rarely see them on my public feeds. I use social media for business, and my kids are not part of my business, so now I make that distinction because that is what feels right for me and my family.

Your answer might be different. You might have 75 followers of your closest family and friends on a private account and social media is the easiest way to share updates about your family. Whatever you’re doing, make sure it’s intentional and feels right for you.

3. How can I use social media for good?

While social media often gets a bad rap, it also gives us incredible ways to connect with people, offer inspiration and keep us in tune with what’s going on in the world. Think about how you, too, can contribute to that positivity based on what you share and how you share it.

If there are accounts or people that bring up feelings of comparison, anger or negativity in your daily scroll, consider muting or unfollowing those accounts. Don’t let anyone online steal your peace.

Reflect as a family

Once you’ve reflected and maybe changed a few of your own online habits, here are some questions to ask yourself about your kids’ tech use:

1. What devices and apps feel right for my kids at their current ages?

What am I OK with having my kids seeing or using based on their ages? And how will it change as they get older? At what age can they have their own iPad or their own phone?

My kids are 8 and 11 and neither of them have their own phones yet. They have iPads, and they have access to apps that my husband and I have approved and feel are age-appropriate. They both can be on YouTube Kids solo but can only watch YouTube on our family TV (where the adults can more quickly see and hear what they are watching). They have access to streaming channels under the child settings, which help us feel better about the things they may come across on their own.

2. What kind of limits make sense for social media or device time?

Most kids (and adults!) can’t self-limit their time on a device. So it can be good to set limits until kids can regulate themselves. You can choose to change those limits based on their ages, behavior or as a special reward. Whether that’s three hours per day or one hour per week, decide what you think is appropriate, but make sure to have the conversation with them about it, too.

3. Consider parental controls

You can use family settings to limit time on devices and on specific apps. Some kids need their computer for homework, but you can define what they’ll be able to access and for how long, which helps them set boundaries and stay on task. Most streaming services offer pre-set controls for kids as well. Weigh the benefits of these built-in safeguards based on your family’s needs. 

4. How does using social media affect my kids?

Studies that come out on this topic paint a bleak picture. A report by the brand Dove found that idealized beauty advice on social media can cause low self-esteem in 50% of girls. So before your kids start on social platforms, or even if they’re already active on them, you can ask them: Do you find yourself getting cranky when you’re on your device? Or sad, depressed, jealous, confused or angry? How can we prevent that from happening?

This is a good way to start discussing screen time, and having kids ask themselves how much screen time is too much. I find that some children have a major mood shift when they get a lot of screen time, and you can help them recognize that feeling, which will help their ability to regulate themselves.

5. How can my kids use the internet for good?

One thing that I like to remind myself is all the good that can come from the internet. My children have this incredible access to so much knowledge, most of the world’s art, connection to almost every part of the world and so many opportunities for fun and education. Depending on their age, their access and ability to use devices will vary. Can they play educational games or watch educational shows? Can they play games with their friends that allow them to socialize or learn how to work on a team? Can they watch craft videos and learn how to make something for the very first time? Look at the parts that are great about it and guide them in that direction.

Keep safety and privacy in mind

One thing to remember: Whatever I (or my kids) post on social media stays there forever.

While images, videos, tweets and messages can be deleted, others can still see, screenshot or save your content before it goes away, and that information can resurface. We’ve seen so many examples of past social media posts coming back to hurt someone’s career or reputation, so make this clear to your children and keep the conversation going as they grow older.

I hope asking yourself these questions can help prepare you for having the “social media talk” with your own kids. You can always change it up and evolve your family’s take on it. Just as we need to modify things in other aspects of parenting, how our families handle the internet can evolve as technology changes, and as our kids, hopefully, grow up to be the most joyful versions of themselves. 


The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

An illustration reads: The Tech Talk

Talk to your kids about online safety

Get tips

The post How to talk to kids about social media appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird Supernova Preview: The New Calendar Design

In 2023, Thunderbird will reinvent itself with the “Supernova” release, featuring a modernized interface and brand new features like Firefox Sync. One of the major improvements you can look forward to is an overhaul to our calendar UI (user interface). Today we’re excited to give you a preview of what it looks like!

Since this is a work-in-progress, bear with us for a few disclaimers. The most important one is that these screenshots are mock-ups which guide the direction of the new calendar interface. Here are a few other things to consider:

  • We’ve intentionally made this calendar pretty busy to demonstrate how the cleaner UI makes the calendar more visually digestible, even when dealing with many events.
  • Dialogs, popups, tool-tips, and all the companion calendar elements are also being redesigned.
  • Many of the visual changes will be user-customizable.
  • Any inconsistent font sizes you see are only present in the mock-up.
  • Right now we’re showing Light Mode. Dark and High Contrast mode will both be designed and shared in the near future.
  • These current mock-ups were done with the “Relaxed” Density setting in mind, but of course a tighter interface with scalable font-size will be possible.

Thunderbird Supernova Calendar: Monthly, Weekly, Daily Views

Thunderbird 115 Calendar Mockup: Monthly View. (Caption: Thunderbird Supernova Calendar: Monthly View)

The first thing you may notice is that Saturday and Sunday are only partially visible. You can choose to visually collapse the weekends to save space.

But wait, we don’t all work Monday through Friday! That’s why you’ll be able to define what your weekend is, and collapse those days instead.

And do you see that empty toolbar at the top? Don’t worry, all the calendar actions will be reachable in context, and the toolbar will be customizable. Flexibility and customization are what you’ve come to expect from Thunderbird, and we’ll continue to provide that.

Thunderbird Supernova 115 Calendar Weekly View. (Caption: Thunderbird Supernova Calendar: Weekly View)

Speaking of customization, visual customization options for the calendar will be available via a menu popup. Some (but not all) of the options you’ll see here are:

  • Hide calendar color
  • Hide calendar icons
  • Swap calendar color with category color
  • Collapse weekends
  • Completely remove your weekend days
Thunderbird Supernova 115 Calendar Daily View. (Caption: Thunderbird Supernova Calendar: Daily View)

You’ll also see some new hotkey hints in the Search boxes (top middle, top right).

Speaking of Search, we’re moving the “Find Events” area into the side pane. A drop-down will allow choosing which information (such as title, location, and date) you want each event to show.

Thunderbird Supernova Calendar: Event View

Thunderbird 115 Calendar: Event View. (Caption: Thunderbird Supernova Calendar: Event View)

The Event view also gets a decidedly modernized look. The important details have a lot more breathing room, yet subheadings like Location, Organizer and Attendees are easier to spot at a glance. Plus, you’ll be able to easily sort and identify the list of attendees by their current RSVP status.

By default, getting to this event preview screen requires only 1 click. And it’s 2 clicks to open the edit view (which you can do either in a new tab or a separate floating window). Because you love customization, you can control the click behavior. Do you want to skip the event preview screen and open the edit screen with just 1 click? We’ll have an option for that in preferences.

Feedback? Questions?

Life gets busy, so we want our new calendar design to look and feel comfortable. It will help you more efficiently sift, sort, and digest all the crucial details of your day.

Do you have questions or feedback about the new calendar in Thunderbird Supernova? We have a public mailing list specifically for User Interface and User Experience in Thunderbird, and it’s very easy to join.

Just head over to this link on TopicBox and click the “Join The Conversation” button!


The post Thunderbird Supernova Preview: The New Calendar Design appeared first on The Thunderbird Blog.

This Week In RustThis Week in Rust 468

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is enum_delegate, a crate to replace dynamic dispatch with enum dispatch.
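For readers unfamiliar with the pattern, here is a minimal hand-rolled sketch of enum dispatch (my illustration of the general technique, not the enum_delegate crate’s actual macro API):

trait Speak {
    fn speak(&self) -> String;
}

struct Dog;
struct Cat;

impl Speak for Dog {
    fn speak(&self) -> String { "woof".into() }
}

impl Speak for Cat {
    fn speak(&self) -> String { "meow".into() }
}

// Enum dispatch: a closed set of implementors matched statically,
// avoiding the vtable indirection of a Box<dyn Speak>.
enum Animal {
    Dog(Dog),
    Cat(Cat),
}

impl Speak for Animal {
    fn speak(&self) -> String {
        match self {
            Animal::Dog(d) => d.speak(),
            Animal::Cat(c) => c.speak(),
        }
    }
}

fn main() {
    let animals = [Animal::Dog(Dog), Animal::Cat(Cat)];
    for a in &animals {
        println!("{}", a.speak()); // woof, then meow
    }
}

Crates like enum_delegate aim to generate this kind of boilerplate for you.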

Thanks to Devin Brite for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

FOSDEM 2023 Rust devroom CFP

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

396 pull requests were merged in the last week

Rust Compiler Performance Triage

A relatively noisy week (most of those have been dropped below, and comments left on GitHub), but otherwise a quiet one in terms of performance changes, with essentially no significant changes occurring.

Triage done by @simulacrum. Revision range: 822f8c2..57d3c58

2 Regressions, 2 Improvements, 3 Mixed; 3 of them in rollups. 39 artifact comparisons made in total.

See the full report for more details.

Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
  • No Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2022-11-09 - 2022-12-07 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Meanwhile the Rust shop has covers on everything and tag-out to even change settings of the multi-axis laser cutter, but you get trusted with said laser cutter on your first day, and if someone gets hurt people wonder how to make the shop safer.

masklinn on r/rust

Thanks to Anton Fetisov for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Aki Sasakiblue sky: a federation of automation platforms

Premise (Tl;dr)

A federated protocol for automation platforms could usher in a new era of collaboration between open-source projects, corporations, NGOs, and governments. This collaboration could happen, not at the latency of human handoffs, but at the speed of automation.

(I had decided to revive this idea in a blog post before the renewed interest in Mastodon, but the timing is good. I had also debated whether to post this on US election day, but it may be a welcome distraction?)

Background

Once upon a time, an excited computer lab assistant showed my class the world wide web. Left-aligned black text with blue, underlined hypertext on a grey background, interspersed with low-resolution GIFs. Sites, hosted on other people's computers across the country, transferred across analog phone lines at over a thousand baud. "This," he said. "This will change everything."

Some two decades later, I blogged about blue sky, next-gen Release Engineering infrastructure without knowing how we'd get there. Stars aligned, and many teams put in hard work. Today, most of our best ideas made it into taskcluster, the massively scalable, cloud-agnostic automation platform that runs Mozilla's CI and Release Pipelines.

The still-unimplemented idea that's stuck with me the longest is something I referred to as cross-cluster communication.

Simple cases

In the simplest case, what if you could spin up two Taskcluster instances, and one cluster's tasks could depend on the other cluster's tasks and artifacts? Task 2 on Cluster B could remain unscheduled until Task 1 on Cluster A finished. At that point, Task 2 could move to `pending`, then `running`, and use Task 1's artifacts as input.

We might have an upstream dependency that tends to break everything in our pipeline whenever a new release of that dependency ships. Our current solution might involve pinning this dependency and periodically bumping the pin, debugging and bisecting any bustage at that time. But what if we could fire a set of unit and integration tests, not just when the dependency ships a new release, but whenever their CI runs a build off of trunk? We could detect the breaking change much more easily and quickly. Cross-cluster communication would allow for this cross-cluster dependency while leaving the security, billing, and configuration decisions in the hands of the individual cluster owners.

Or we could split up the FirefoxCI cluster. The release pipeline could move to a locked-down cluster for enhanced security monitoring. In contrast, the CI pipeline could remain accessible for easier debugging. We wouldn't want to split the hardware test pools. We have a limited number of those machines. Instead, we could create a third cluster with the hardware test pools, triggering tests against builds generated by both upstream clusters.

Of course, we wouldn't need this layer if we only wanted to connect to one or two upstreams. This concept starts to shine when we scale up. Many-to-many.

Many-to-many

Open source projects could collaborate in ways we haven't yet seen. But this isn't limited to just open source.

If the US were using Taskcluster, municipal offices could collaborate, each owning their own cluster but federating with the others. The state could aggregate each municipality’s data and generate state-wide numbers and reports. The federal government could aggregate the states’ data and federate with other nations. NGOs and corporations could also access public data. Traffic. Carbon. The disappearance and migration of wildlife. The spread of disease outbreaks.

A graph of cluster interdependencies. A matrix. A web, if you will. But instead of connecting machines or individuals, we're connecting automation platforms. Instead of cell-to-cell communication, this is organ-to-organ communication.

(As I mentioned in the Tl;dr): A federated protocol for automation platforms could usher in a new era of collaboration between open-source projects, corporations, NGOs, and governments. This collaboration could happen, not at the latency of human handoffs, but at the speed of automation.




Mozilla Performance BlogUnderstanding Performance Impact

A few years ago, a small group of engineers at Mozilla introduced a process to identify the tasks that would have the greatest impact on the performance of Firefox. They would gather each week to look over user-submitted profiles and discuss bug reports. Each bug would then be assigned a category and a score to reflect its impact on performance. This would help teams to prioritise their work for performance, and proved crucial in delivering the significant speed improvements that were present in Firefox Quantum.

Fast forward to today, and this performance triage process has continued, but you’d be forgiven for not knowing about it. This year we have been making improvements to the way bugs are nominated for triage, how the impact and keywords are determined, and getting more people involved. I’d like to share some of these changes with you, starting with how to request a performance impact review for a bug.

How performance impact reviews are requested

If you believe a bug may have an impact on the performance of our products, you can nominate it for triage. You can do this by editing the bug and setting the “Performance Impact” flag to “?”.

Screenshot of Bugzilla showing the Performance Impact flag with the values expanded.

Performance Impact in Bugzilla

This will cause the bug to show up in the queries used by our triage sheriffs, who will attempt to determine the performance impact at the next meeting. If you have any additional details that may help, we encourage you to mention these as a comment in the bug. If you’re interested in joining the discussion, you can reach out to the lead of the next triage meeting published here.

How performance impact is determined

In the past we relied on the knowledge and experience of the triage sheriffs attending the meeting to determine the performance impact. Whilst efficient, this approach wasn’t great for onboarding new members to the triage rotation, and could also be inconsistent. We resolved this by building a tool to calculate the impact from a series of simple prompts.

Screenshot of the performance impact calculator

Performance Impact Calculator

Starting with a base impact score of zero, each answer in the calculator either increases the score or applies a multiplier. For example (a code sketch of this arithmetic follows the list):

  • Causes noticeable jank: +2
  • Severe page load impact: +10
  • Affects major websites: ×5
  • Reproduces in Chrome: ×0.3
  • Total impact score = (2+10)×5×0.3 = 18
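In code, the arithmetic is simply a sum followed by a product. Here is a toy sketch; the weights mirror the worked example above, not the real tool’s full option set:

fn impact_score(adders: &[f64], multipliers: &[f64]) -> f64 {
    // Additive answers are summed, then multiplicative answers
    // scale the subtotal.
    let base: f64 = adders.iter().sum();
    multipliers.iter().fold(base, |score, m| score * m)
}

fn main() {
    // jank (+2), severe page load impact (+10),
    // affects major websites (x5), reproduces in Chrome (x0.3)
    let score = impact_score(&[2.0, 10.0], &[5.0, 0.3]);
    assert!((score - 18.0).abs() < 1e-9);
    println!("impact score = {score}");
}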

For insight into how each of the options affects the score, you can check the “Debug” button to display a breakdown next to the labels.

Screenshot showing debug information next to some of the calculator options.

Calculator with debug enabled

If the score is greater than zero then the bug is considered to have an impact on performance. The scores relate to the impact flag values as follows (sketched in code after the list):

  • 0: none
  • 0-40: medium
  • 40+: high
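As a sketch of that mapping (the exact boundaries may shift as the calculator is tuned, and the list above leaves the treatment of exactly 40 ambiguous):

fn impact_flag(score: f64) -> &'static str {
    // A score of zero means no performance impact.
    match score {
        s if s <= 0.0 => "none",
        s if s <= 40.0 => "medium",
        _ => "high",
    }
}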

Note that the calculator is being adjusted regularly to ensure it gives an accurate reflection of the performance impact, therefore the values above may at some point be out of date. If you disagree with a performance impact result, please add a comment on the bug with your concerns and either needinfo the triage sheriff or re-nominate the bug for triage. Feel free to try the calculator out for yourself, and if you have any suggestions for improvements, please open an issue.

How performance impact affects you

It’s important to note that the performance team does not directly interfere with the priorities of bugs outside of their own components. During triage, bugs will be reassigned wherever possible to the most appropriate component, and it is the responsibility of the triage owners for those components to set a priority. The goal of the performance impact flag is to provide additional context that may assist the triage owners in their role. That said, a bug with a high performance impact on the product quality and experience could reasonably be expected to cause a Firefox user to switch browsers, and therefore should be considered S2 in severity.

How to get help

The performance triage wiki serves as a guide for running the triage, and goes into some detail on the Bugzilla fields and queries used. Please note that the impact calculator is constantly being tweaked, and your feedback is essential for helping us to improve the formula. If you notice a bug with a performance impact that doesn’t feel quite right, please let us know. You can do this by changing the “performance impact” flag to “?” and adding a comment, or by reaching out in #perf-help on Matrix or Slack.

Mark SurmanMozilla Ventures: Investing in Responsible Tech

Early next year, we will launch Mozilla Ventures, a first-of-its-kind impact venture fund to invest in startups that push the internet — and the tech industry — in a better direction 

___

Many people complain about today’s tech industry. Some say the internet has lost its soul. And some even say it’s impossible to make it better. 

My response: we won’t know unless we try, together. 

Personally, I think it is possible to build successful companies — and great internet products — that put people before profits. Mozilla proves this. But so do WordPress, Hugging Face, ProtonMail, Kickstarter, and a good number of others. All are creating products and technology that respect users — and that are making the internet a healthier place.

I believe that, if we have A LOT more founders creating companies like these, we have a real chance to push the tech industry — and the internet — in a better direction. 

The thing is, the system is stacked against founders like this. It is really, really hard. This struck us when Mozilla briefly piloted a startup support program a couple of years ago. Hundreds of young founders and teams showed up with ideas for products and tech that were ‘very Mozilla’. Yet, we also heard it’s hard to find mission-aligned investors, or mentors and incubators who share their vision for products that put people first.  

Through this pilot, Mozilla found the kinds of mentors these founders were looking for. And, we offered pre-seed investments to dozens of companies. But we also saw the huge need to do more, and to do it systematically over time. Mozilla Ventures will be our first step in filling this need. 

Launching officially in early 2023, Mozilla Ventures will start with an initial $35M, and grow through partnerships with other investors.

The fund will focus on early stage startups whose products are designed to delight users or empower developers — but with the sort of values outlined in the Mozilla Manifesto baked in from day one. Imagine a social network that feels like a truly safe place to connect with your closest family and friends. Or an AI tooling company that makes it easier for developers to detect and mitigate bias when developing digital products and services. Or a company offering a personal digital assistant that is both a joy to use and hyper focused on protecting your privacy. We know there are founders out there who want to build products and companies like these, and that want to do so in a way that looks and feels different than the tech industry of today. 

Process-wise, Mozilla Ventures will look for founders with a compelling product vision and alignment with Mozilla’s values. From there, it will look at their team, their product and their business, just as other investors do. And, where all these things add up, we’ll invest. 

The fund will be led by Managing Partner Mohamed Nanabhay. Mohamed brings a decade of experience investing in digital media businesses designed to advance democracy and free speech where those things are hard to come by. Which perfectly sets him up for the job ahead — finding and funding founders who have the odds stacked against them, and then helping them succeed. 

Over the past few months, Mohamed and I have spent a good amount of time thinking about the basic thesis behind the fund (find great startups that align with the Mozilla Manifesto) — and testing this thesis out through conversations with founders. 

Even before we publicly announced Mozilla Ventures in November 2022, we’d already found three companies that validate our belief that companies like this are out there — Secure AI Labs, Block Party and HeyLogin. They are all companies driven by the idea that the digital world can be private, secure, respectful, and that there are businesses to be built creating this world. We’re honored that these companies saw the same alignment we did. They all opened up space on their cap table for Mozilla. And we invested.

Our first few months of conversations with founders (and other investors) have also underlined this: we have more questions than answers. Almost everyone we’ve talked to is excited by the idea of pushing the tech industry in a different direction, especially younger founders. On the flipside, everyone sees huge challenges — existing tech monopolies, venture funding growth at all costs, public cynicism. It’s important to be honest: we don’t have all the answers. We will (collectively) need to work through these challenges as we go. So, that’s what we will do. Our plan is to continue talking to founders — and making select investments — in the months leading up to the launch of the fund. We will also keep talking to fellow travelers like Lucid Capitalism, Startups and Society, and Responsible Innovation Labs, who have already started asking some of the tough questions. And, we will continue speaking with a select group of potential co-investors (LPs) who share our values. We believe that, together, we have a chance of putting the tech industry on a truly different course in the years ahead.

The post Mozilla Ventures: Investing in Responsible Tech appeared first on Mark Surman.

The Rust Programming Language BlogAnnouncing Rust 1.65.0

The Rust team is happy to announce a new version of Rust, 1.65.0. Rust is a programming language empowering everyone to build reliable and efficient software.


Before going into the details of the new Rust release, we'd like to draw attention to the tragic death of Mahsa Amini and the death and violent suppression of many others, by the religious morality police of Iran. See https://en.wikipedia.org/wiki/Mahsa_Amini_protests for more details. We stand in solidarity with the people in Iran struggling for human rights.


If you have a previous version of Rust installed via rustup, you can get 1.65.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.65.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.65.0 stable

Generic associated types (GATs)

Lifetime, type, and const generics can now be defined on associated types, like so:

trait Foo {
    type Bar<'x>;
}

It's hard to put into few words just how useful these can be, so here are a few example traits, to get a sense of their power:

/// An `Iterator`-like trait that can borrow from `Self`
trait LendingIterator {
    type Item<'a> where Self: 'a;

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

/// Can be implemented over smart pointers, like `Rc` or `Arc`,
/// in order to allow being generic over the pointer type
trait PointerFamily {
    type Pointer<T>: Deref<Target = T>;

    fn new<T>(value: T) -> Self::Pointer<T>;
}

/// Allows borrowing an array of items. Useful for
/// `NdArray`-like types that don't necessarily store
/// data contiguously.
trait BorrowArray<T> {
    type Array<'x, const N: usize> where Self: 'x;

    fn borrow_array<'a, const N: usize>(&'a self) -> Self::Array<'a, N>;
}

As you can see, GATs are quite versatile and enable a number of patterns that could not previously be written. For more information, check out the post announcing the push for stabilization published last year or the stabilization announcement post published last week. The former goes into a bit more depth on a couple of the examples above, while the latter talks about some of the known limitations of this stabilization.

More in-depth reading can be found in the associated types section of the nightly reference or the original RFC (which was initially opened over 6.5 years ago!).
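To make the LendingIterator example above concrete, here is a minimal sketch of an implementor (my illustration, not part of the official release notes) that lends overlapping windows of a slice; the trait is repeated so the snippet compiles standalone:

trait LendingIterator {
    type Item<'a> where Self: 'a;
    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

/// Lends overlapping windows of a borrowed slice. Each returned
/// window borrows from `self`, which plain `Iterator` cannot express.
struct Windows<'s, T> {
    slice: &'s [T],
    size: usize,
    pos: usize,
}

impl<'s, T> LendingIterator for Windows<'s, T> {
    type Item<'a> = &'a [T] where Self: 'a;

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>> {
        let window = self.slice.get(self.pos..self.pos + self.size)?;
        self.pos += 1;
        Some(window)
    }
}

fn main() {
    let data = [1, 2, 3, 4];
    let mut windows = Windows { slice: &data, size: 2, pos: 0 };
    while let Some(w) = windows.next() {
        println!("{w:?}"); // [1, 2], then [2, 3], then [3, 4]
    }
}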

let-else statements

This introduces a new type of let statement with a refutable pattern and a diverging else block that executes when that pattern doesn't match.

let PATTERN: TYPE = EXPRESSION else {
    DIVERGING_CODE;
};

Normal let statements can only use irrefutable patterns, statically known to always match. That pattern is often just a single variable binding, but may also unpack compound types like structs, tuples, and arrays. However, that was not usable for conditional matches, like pulling out a variant of an enum -- until now! With let-else, a refutable pattern can match and bind variables in the surrounding scope like a normal let, or else diverge (e.g. break, return, panic!) when the pattern doesn't match.

use std::str::FromStr;

fn get_count_item(s: &str) -> (u64, &str) {
    let mut it = s.split(' ');
    let (Some(count_str), Some(item)) = (it.next(), it.next()) else {
        panic!("Can't segment count item pair: '{s}'");
    };
    let Ok(count) = u64::from_str(count_str) else {
        panic!("Can't parse integer: '{count_str}'");
    };
    (count, item)
}

fn main() {
    assert_eq!(get_count_item("3 chairs"), (3, "chairs"));
}

The scope of name bindings is the main thing that makes this different from match or if let-else expressions. You could previously approximate these patterns with an unfortunate bit of repetition and an outer let:

    let (count_str, item) = match (it.next(), it.next()) {
        (Some(count_str), Some(item)) => (count_str, item),
        _ => panic!("Can't segment count item pair: '{s}'"),
    };
    let count = if let Ok(count) = u64::from_str(count_str) {
        count
    } else {
        panic!("Can't parse integer: '{count_str}'");
    };

break from labeled blocks

Plain block expressions can now be labeled as a break target, terminating that block early. This may sound a little like a goto statement, but it's not an arbitrary jump, only from within a block to its end. This was already possible with loop blocks, and you may have seen people write loops that always execute only once, just to get a labeled break.

Now there's a language feature specifically for that! Labeled break may also include an expression value, just as with loops, letting a multi-statement block have an early "return" value.

let result = 'block: {
    do_thing();
    if condition_not_met() {
        break 'block 1;
    }
    do_next_thing();
    if condition_not_met() {
        break 'block 2;
    }
    do_last_thing();
    3
};

Splitting Linux debuginfo

Back in Rust 1.51, the compiler team added support for split debug information on macOS, and now this option is stable for use on Linux as well.

  • -Csplit-debuginfo=unpacked will split debuginfo out into multiple .dwo DWARF object files.
  • -Csplit-debuginfo=packed will produce a single .dwp DWARF package alongside your output binary with all the debuginfo packaged together.
  • -Csplit-debuginfo=off is still the default behavior, which includes DWARF data in .debug_* ELF sections of the objects and final binary.

Split DWARF lets the linker avoid processing the debuginfo (because it isn't in the object files being linked anymore), which can speed up link times!

Other targets now also accept -Csplit-debuginfo as a stable option with their platform-specific default value, but specifying other values is still unstable.
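For example, one way to try packed debuginfo on Linux (this invocation is mine, not from the release notes, and assumes a standard cargo project) is:

RUSTFLAGS="-Csplit-debuginfo=packed" cargo build --release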

Stabilized APIs

The following methods and trait implementations are now stabilized:

Of particular note, the Backtrace API allows capturing a stack backtrace at any time, using the same platform-specific implementation that usually serves panic backtraces. This may be useful for adding runtime context to error types, for example.
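As a minimal sketch of using it (capture is gated on the RUST_BACKTRACE or RUST_LIB_BACKTRACE environment variables, so it may report as disabled by default):

use std::backtrace::Backtrace;

fn main() {
    // Capturing is comparatively cheap; symbol resolution is
    // deferred until the backtrace is formatted.
    let bt = Backtrace::capture();
    println!("{bt}");
}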

These APIs are now usable in const contexts:

Compatibility notes

Other changes

There are other changes in the Rust 1.65 release, including:

  • MIR inlining is now enabled for optimized compilations. This provides a 3-10% improvement in compile times for real world crates.
  • When scheduling builds, Cargo now sorts the queue of pending jobs to improve performance.

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.65.0

Many people came together to create Rust 1.65.0. We couldn't have done it without all of you. Thanks!

Support.Mozilla.OrgHow to contribute to Mozilla through user support

SUMO contributors posing in front of the inukshuk statue in Whistler

It is with great pleasure that I am announcing the launch of our new contribute page in SUMO a.k.a SUpport.Mozilla.Org. SUMO is one of the oldest contribution areas in Mozilla, and we want to show you just how easy it is to contribute!

There are many ways you can get involved with SUMO, so getting started can be confusing. However, our new contribute page should help with that, since the pages are now updated with simpler steps to follow and a refreshed design.

We also added two new contribution areas, so now we have five ways to contribute:

  1. Answer questions in the support forum
  2. Write help articles
  3. Localize help articles
  4. Provide support on social media channels (newly added)
  5. Respond to mobile store reviews (newly added)

The first 3 areas are nothing new for SUMO contributors. You can contribute by replying to forum posts, writing help articles (or Knowledge Base articles as we call them here), or translating the help article’s content to your respective locales.

Providing support on social media channels is also nothing new to SUMO. But with the ease of tools that we have now, we are able to invite more contributors to the program. In 2020, we started the @FirefoxSupport account on Twitter and as of now, we have posted 4115 tweets and gained 3336 followers. If you’re a social media enthusiast, the Social Support program is a perfect contribution area for you.

Responding to user reviews on the mobile store is something relatively new that we started a couple of years ago to support the Firefox for Android transition from Fennec to Fenix. We realize that the mobile ecosystem is a different territory with different behavior. We wanted to make sure that we serve people where they need us the most, which means providing support for those who leave us app reviews. If this sounds more like your thing, you should definitely join the Mobile Store Support program.

And if you still can’t decide, you can always start by saying hi to us in our Matrix room or contributor forums.

 

Keep on rocking the helpful web,

Kiki

The Mozilla BlogHow to talk to kids about video games

(Credit: Nick Velazquez / Mozilla)

Dr. Naomi Fisher poses for a photo.
Dr. Naomi Fisher is a U.K.-based clinical psychologist specializing in trauma and autism. She’s the author of “Changing Our Minds: How Children Can Take Control of Their Own Learning.” Her writing has also appeared in the British Psychological Society’s The Psychologist, among other publications. You can follow her on Twitter. Photo: Justine Diamond

I spend a lot of time talking to parents about screens. Most of those conversations are about fear.  

“I’m so worried about my child withdrawing into screens,” they say. “Are they addicted? How can I get them to stop?”  

I understand where they are coming from. I’m a clinical psychologist with 16 years of experience working in the U.K. and France, including for the U.K. National Health Service and in private practice. I’m also the mother of an 11-year-old girl and a teenage boy. 

“Screen time” has become one of the bogeymen of our age. We blame screens for our children’s unhappiness, anger or lack of engagement. We worry about screen time incessantly, so much so that sometimes it seems that the benchmark of a good parent in 2022 is the strictness of your screen time limits.

(Credit: Nick Velazquez / Mozilla)

Defining screen time

The oddest thing about this is that “screen time” doesn’t really exist. You can’t pin it down unless you think that the screen itself – a sheet of glass – has a magical, harmful effect. A screen is merely a portal to many activities that also happen offline. These include gaming, chatting, reading, writing, watching documentaries, coding, learning languages and art – I could go on.  

Just yesterday, my teenager and I played the online word puzzle Redactle together. We’ve been doing it daily for months, and we’ve learned about history, science, poetry and bed bugs along the way. Do word puzzles become damaging because they are accessed via a sheet of glass? 

Still, parents are afraid of “too much” screen time, and they want firm answers. “Is 30 minutes a day too much?” they ask. When in turn I ask them what their children are doing on screens, they rarely have much idea. “Watching rubbish” or “wasting time” are common responses. It’s not often that parents spend time on screens with their children, many of them saying that they don’t want to encourage it.

Stop counting the minutes

I tell parents to stop counting the minutes for a moment, and instead spend some time watching their children without judgment. They return surprised. 

Parents see their children socializing with friends as they play. They’re designing their own mini-games, or memorizing the countries of the world. They’ve built the Titanic in Minecraft. The “screen time bogeyman” starts to melt away.

For me, screens give families an opportunity.

Photo: Lauren Psyk

They are a chance to connect with our children by doing something they love. And for some young people, there are benefits that they can’t find elsewhere.

Some children I meet don’t feel competent elsewhere in their lives, but feel good about themselves when they play video games. They tell me about Plants vs. Zombies and they come alive. We exchange tips on our favorite way to defend the house from marauding zombies.  They love games, but everyone is telling them that they should be doing something else. Often, no other adult seems interested.

I see young people who are really isolated. They have difficulties making friends, or they have been bullied at school. Online gaming can be their first step towards making connections. They don’t have to start with talking: They type on the in-game chat and when they feel ready, move onto voice chat. They emerge in their own time.  

Some of the young people I work with have difficulty keeping calm throughout the day. For them, their devices provide a way to take up space. They put on their headphones and sink into a familiar game. They recharge, letting them cope with their day for a bit longer. It’s a wonderfully portable way to decompress.

Do games cause unhappiness?

I’m not saying that there’s never a reason to worry. I meet some young people who are very unhappy. They use gaming to avoid their thoughts and feelings, and they get very angry when asked to stop. The adults around them usually blame the games for their unhappiness, thinking that banning them would help improve their well-being.

But here’s the issue: Gaming is rarely the cause of the problem. Instead, it’s a solution that a young person has found to cope with the way they are feeling. Sometimes, gaming can seem like the only thing that makes them happy. Banning video games takes that away, causing a child to feel angry with their parents at the same time. 

We need to address the root of their unhappiness rather than ban something they love, and we need to nurture that relationship. Sometimes, dropping the judgment around gaming can lead to parents and children reconnecting rather than fighting.

Valuing our children’s interests

Appreciating our children’s love of screens is far more than just showing an interest in what they do. When they were little, we looked after their most precious toys, even if they were ragged and dirty. They were important because they were important to our child. We didn’t tell them that their teddy bears were rubbish and we’d like to get rid of them (even if we secretly thought exactly that), because we knew that would hurt them.  

Now they’re older, games and digital creations have replaced stuffed toys. When we demonize screens, we demonize the things our children love. We tell them that the things they value aren’t valuable. We tell them the things they enjoy most are a waste of time. That is never going to be a good way to build a strong and supportive relationship.

Instead, I encourage parents to join their kids. Gaming might bore you, but you can be interested in your child and what makes them come alive. You can value their joy, their curiosity and their exploration. You can give the games a go and see what they find so enthralling. Download Brawl Stars, Minecraft or Roblox, and see if your child will show you how to play. If they don’t want to, find a tutorial video for yourself.

Let them see that you are interested in their passions, because you are interested in them. They will see that you value them for who they are. And from that seed, many good things can grow.  


How to talk to kids about video games

Watch what your kids do on screens, even if at first you don’t see the point. Ask them to tell you about it or just observe.

Ask if you can join them, even if that means watching videos together. Resist the urge to denigrate what they are doing. Search for more websites similar to those they find interesting.

Try something different together. Make suggestions and expand their on-screen horizons. There are quizzes galore on Sporcle, or clever spin-offs from Wordle like Quordle, Absurdle and Fibble.

Connect over a board game. Many family board games have virtual editions. Our family loves the Evolution app. Try Carcassonne, Forbidden Island, Settlers of Catan, or the Game of Life.

Hang out together, apart. Online gaming can be a great way to spend time with your kids when you aren’t with them. Kids can struggle to talk remotely but playing Minecraft or Cluedo together can be lots of fun, even miles apart.


The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

An illustration reads: The Tech Talk

Talk to your kids about online safety


The post How to talk to kids about video games appeared first on The Mozilla Blog.

The Mozilla BlogMozilla Ventures: Investing in responsible tech

Early next year, we will launch Mozilla Ventures, a first-of-its-kind impact venture fund to invest in startups that push the internet — and the tech industry — in a better direction.

_____

Many people complain about today’s tech industry. Some say the internet has lost its soul. And some even say it’s impossible to make it better. 

My response: we won’t know unless we try, together. 

Personally, I think it is possible to build successful companies — and great internet products — that put people before profits. Mozilla proves this. But so do WordPress, Hugging Face, ProtonMail, Kickstarter, and a good number of others. All are creating products and technology that respect users — and that are making the internet a healthier place.

I believe that, if we have A LOT more founders creating companies like these, we have a real chance to push the tech industry — and the internet — in a better direction. 

The thing is, the system is stacked against founders like this. It is really, really hard. This struck us when Mozilla briefly piloted a startup support program a couple of years ago. Hundreds of young founders and teams showed up with ideas for products and tech that were ‘very Mozilla’. Yet we also heard how hard it was for them to find mission-aligned investors, or mentors and incubators who shared their vision for products that put people first.

Through this pilot, Mozilla found the kinds of mentors these founders were looking for. And, we offered pre-seed investments to dozens of companies. But we also saw the huge need to do more, and to do it systematically over time. Mozilla Ventures will be our first step in filling this need. 

Launching officially in early 2023, Mozilla Ventures will start with an initial $35M, and grow through partnerships with other investors.

The fund will focus on early stage startups whose products are designed to delight users or empower developers — but with the sort of values outlined in the Mozilla Manifesto baked in from day one. Imagine a social network that feels like a truly safe place to connect with your closest family and friends. Or an AI tooling company that makes it easier for developers to detect and mitigate bias when developing digital products and services. Or a company offering a personal digital assistant that is both a joy to use and hyper focused on protecting your privacy. We know there are founders out there who want to build products and companies like these, and who want to do so in a way that looks and feels different than the tech industry of today. 

Process-wise, Mozilla Ventures will look for founders with a compelling product vision and alignment with Mozilla’s values. From there, it will look at their team, their product and their business, just as other investors do. And, where all these things add up, we’ll invest. 

The fund will be led by Managing Partner Mohamed Nanabhay. Mohamed brings a decade of experience investing in digital media businesses designed to advance democracy and free speech where those things are hard to come by. Which perfectly sets him up for the job ahead — finding and funding founders who have the odds stacked against them, and then helping them succeed. 

Over the past few months, Mohamed and I have spent a good amount of time thinking about the basic thesis behind the fund (find great startups that align with the Mozilla Manifesto) — and testing this thesis out through conversations with founders. 

Even before we publicly announced Mozilla Ventures in November 2022, we’d already found three companies that validate our belief that companies like this are out there — Secure AI Labs, Block Party and HeyLogin. They are all companies driven by the idea that the digital world can be private, secure, respectful, and that there are businesses to be built creating this world. We’re honored that these companies saw the same alignment we did. They all opened up space on their cap table for Mozilla. And we invested.

Our first few months of conversations with founders (and other investors) have also underlined this: we have more questions than answers. Almost everyone we’ve talked to is excited by the idea of pushing the tech industry in a different direction, especially younger founders. On the flip side, everyone sees huge challenges — existing tech monopolies, venture funding that pushes growth at all costs, public cynicism. It’s important to be honest: we don’t have all the answers. We will (collectively) need to work through these challenges as we go.

So, that’s what we will do. Our plan is to continue talking to founders — and making select investments — in the months leading up to the launch of the fund. We will also keep talking to fellow travelers like Lucid Capitalism, Startups and Society and Responsible Innovation Labs, who have already started asking some of the tough questions. And, we will continue speaking with a select group of potential co-investors (LPs) who share our values. We believe that, together, we have a chance of putting the tech industry on a truly different course in the years ahead.

The post Mozilla Ventures: Investing in responsible tech appeared first on The Mozilla Blog.

Mozilla Addons BlogBegin your MV3 migration by implementing new features today

Early next year, Firefox will release Mozilla’s Manifest V3 (MV3), so now is an ideal time to begin migrating your Manifest V2 extensions. Throughout our approach to MV3, one of our goals has been to gradually release new WebExtensions features so you can start implementing MV3-compatible APIs today. To this end, we recently released some exciting new features you should know about…


MV3 changes you can make to your extension right now


Event Pages

In Firefox MV3, we’re providing Event Pages as the background script. Event Pages retain several important features, including access to the DOM and WebAPIs that are not available with the new service worker backgrounds used in Google Chrome.

We enabled Event Pages for MV2 (aka non-persistent background pages that can be closed and respawned based on browser events) in Firefox 106. This update is a major step toward MV3 because all extensions must adopt Event Pages in MV3. But you can make this change today and take advantage of new Event Page benefits such as:

  • Resiliency against unexpected system crashes. Now we can restart a corrupted background page without hindering the user.
  • No need for an extension reboot to reset a background page.
  • Save on memory resources by putting idle background pages to sleep.
How do I implement Event Pages?

To turn your background script into an Event Page, set `"persistent": false` on the background key in your manifest.json. Here’s more info on background scripts with implementation details.
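
For example, a minimal MV2 manifest.json using an Event Page might look like this (extension name and script path are illustrative):

{
  "manifest_version": 2,
  "name": "my-extension",
  "version": "1.0",
  "background": {
    "scripts": ["background.js"],
    "persistent": false
  }
}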

Now that your background script is non-persistent, you need to tell Firefox when to wake up the page if it’s suspended. There are two methods available:

  1. Use an event listener like `browser.tabs.onCreated` in your background script. Event listeners must be added at the top-level execution of your script. This way, if your background page is sleeping, Firefox knows to wake the script whenever a new tab is spawned (see the sketch after this list). This works with nearly all events in the WebExtensions API. Here’s more info on adding listeners. (Note that Firefox recognizes arguments passed to addListener and does not create multiple listeners for the same set of arguments.)
  2. Use `browser.runtime.getBackgroundPage` if you need a background page to run processes unrelated to events. For instance, you may need a background script to run a process while the user is involved with a browser action or side panel. Use this API anytime you need direct access to a background page that may be suspended or closed. Here’s more info on background script functions.
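
As a minimal sketch of the first method (the logging is illustrative): because the listener is registered at the top level of the script, Firefox can wake a suspended Event Page whenever the event fires.

// background.js
// Registering the listener at top level lets Firefox wake this
// Event Page whenever a tab is created, even if it was suspended.
browser.tabs.onCreated.addListener((tab) => {
  console.log(`Tab ${tab.id} was created`);
});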

Menus and Scripting APIs also support persistent data:

  • Menu items created by an event page are available after they’re registered — even if the event page is terminated. The event page respawns as necessary to handle menu events.
  • Registered scripts can be injected into matching web pages without the need for a running Event Page.

Scripting

You can take another big step toward MV3 by switching to the new Scripting API. This API consolidates several scripting-related APIs — contentScripts.register(), tabs.insertCSS(), tabs.removeCSS(), and tabs.executeScript() — and adds capabilities to register, update, and unregister content scripts at runtime.

Also, arbitrary strings can no longer be executed, because the code parameter has been removed. You’ll need to move any arbitrary strings executed as scripts into files contained within the extension, or into the func property, used with the args parameter if necessary.

This API requires the scripting permission.
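
Here’s a hedged sketch of the migration (the function body and tab lookup are illustrative): instead of passing a code string to tabs.executeScript, pass a real function and serializable arguments to scripting.executeScript.

// MV2 style — no longer possible, since the `code` parameter is gone:
// browser.tabs.executeScript({ code: "document.title = 'Hello';" });

// Scripting API style: a function plus arguments.
async function setTitle(tabId, title) {
  await browser.scripting.executeScript({
    target: { tabId },                     // which tab to run in
    func: (t) => { document.title = t; },  // runs in the page
    args: [title],                         // passed to `func`
  });
}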

Preparing for MV3 restrictions

MV3 will impose enhanced restrictions on several features. Most of these restrictions are outlined in the MV3 migration guide, which details steps you can take now to bring your MV2 extension closer to MV3 requirements. A few noteworthy areas include…

Conform to MV3’s Content Security Policy

Mozilla’s long-standing add-on policies prohibit remote code execution. In keeping with these policies, the content_security_policy field no longer supports sources that permit remote code in script-related directives such as script-src, and `'unsafe-eval'` is no longer allowed. The only permitted values for the `script-src` directive are `'self'` and `'wasm-unsafe-eval'`; `'wasm-unsafe-eval'` must be specified in the CSP if an extension wants to use WebAssembly. In MV3, content scripts are subject to the same CSP as other parts of the extension.
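
For instance, a minimal sketch of an MV3 manifest entry that enables WebAssembly:

"content_security_policy": {
  "extension_pages": "script-src 'self' 'wasm-unsafe-eval';"
}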

Historically, a custom extension CSP required object-src to be specified. This is not required in MV3 and was removed from MV2 in Firefox 106 (see object-src in content_security_policy on MDN). This change makes it easier for extensions to customize the CSP with minimal boilerplate.

The Content Security Policy (CSP) is more restrictive in MV3. If you are using a custom CSP in your MV2 add-on, you can validate the CSP by temporarily running it as an MV3 extension.  See the MV3 migration guide for details.

Upgrade insecure requests – https by default

When communicating with external servers, extensions will use https by default. Extensions should replace the “http:” and “ws:” schemes in their source code with the secure alternatives, “https:” and “wss:”. The default MV3 CSP includes the upgrade-insecure-requests directive, which enforces the use of secure schemes even if an insecure scheme was used.

Extensions can opt out of this https requirement by overriding the content_security_policy and omitting upgrade-insecure-requests, provided that no user data is transmitted insecurely through the extension.
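
As a sketch, the opt-out is simply a custom CSP that leaves the directive out — again, only appropriate when no user data travels insecurely:

"content_security_policy": {
  "extension_pages": "script-src 'self';"
}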

Opt-in permissions

All MV3 permissions, including host permissions, are opt-in for users. This necessitated a significant Firefox design change — the introduction of the Unified Extensions button — so users can easily grant or deny website specific permissions at any time (the button is enabled on Firefox Nightly for early testing and feedback).

The Unified Extensions button gives Firefox users direct control over website specific extension permissions.

Therefore, you must ensure your extension has permission whenever it accesses APIs covered by a permission, accesses a tab, or uses the Fetch API. MV2 already has APIs that enable you to check for permissions and watch for changes in permissions. When necessary, you can get the current permission status. However, rather than always checking, use the permissions.onAdded and permissions.onRemoved event APIs to watch for changes.
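
A rough sketch of that pattern (the origin is illustrative): check once at startup, then keep the cached answer fresh via the permission events rather than querying before every request.

// Track whether we currently hold a host permission.
let hasHostAccess = false;

async function refreshPermissionState() {
  hasHostAccess = await browser.permissions.contains({
    origins: ["https://example.com/*"],
  });
}

browser.permissions.onAdded.addListener(refreshPermissionState);
browser.permissions.onRemoved.addListener(refreshPermissionState);
refreshPermissionState();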

Update content scripts

While content scripts continue to have access to the same extension APIs in MV3 as in MV2, most of the special exceptions and extension specific capabilities have been removed from the web platform APIs (DOM APIs). In particular, the extension’s host permissions no longer apply to Fetch and XMLHttpRequest.

CSP for content scripts

With MV2, no CSP is applied to content scripts. In MV3, content scripts are subject to the same CSP as other parts of the extension (see CSP for content scripts on MDN). Notably, this means that remote code cannot be executed from the content script. Some existing uses can be replaced with functionality from the Scripting API such as func and args (see the “Scripting” section above), which is available to MV2 extensions.

XHR and Fetch

With MV2 you also have access to some APIs, such as XMLHttpRequest and Fetch, from both extension and web page contexts. This allows for cross origin requests in a way that is not available in MV3. In MV3, XHR and Fetch operate as if the web page itself was using them, and are subject to cross origin controls.

Content scripts can continue using XHR and Fetch by first making requests to background scripts. A background script can then use Fetch to get the data and return the necessary information to the content script. To avoid privacy issues, set the “credentials” option to “omit” and the “cache” option to “no-cache”. In the future, we may offer an API to support the make-request-on-behalf-of-a-document-in-a-tab use case.
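
A hedged sketch of that relay (message shape and URL are illustrative):

// content-script.js — ask the background script to fetch for us.
browser.runtime
  .sendMessage({ type: "fetch-json", url: "https://example.com/data.json" })
  .then((data) => console.log("got data", data));

// background.js — perform the fetch and return the parsed result.
// Returning a Promise from the listener resolves sendMessage's Promise.
browser.runtime.onMessage.addListener((message) => {
  if (message.type === "fetch-json") {
    return fetch(message.url, { credentials: "omit", cache: "no-cache" })
      .then((response) => response.json());
  }
});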

Will Chrome MV3 extensions work in Firefox MV2?

The release of MV3 in Firefox is distinct from Chrome. Add-ons intended to work across different browsers will, in most cases, require some level of adjustment to be compatible in both Firefox and Chrome. That said, we are committed to a high level of compatibility. We will be providing additional APIs and features in the near future. If you’ve converted your Chrome extension to Google’s MV3, you may be able to consolidate some of those changes into your Firefox MV2 extension. Here are a few areas to investigate:

  • Service Workers are not yet available in Firefox; however, many scripts may work interchangeably between Service Workers and Event Pages, depending on functionality. To get things working, you may need to remove service worker specific APIs. See Service Worker Global Scope for more information.
  • The declarativeNetRequest API (DNR) is not yet available in Firefox. Firefox retains WebRequest blocking in MV3, which can be used in place of DNR. When DNR is available, simple request modifications can be moved over to it.
  • The storage.session API is not yet available in Firefox. You can use other storage mechanisms in the meantime (see the sketch after this list).
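
For example, a sketch (the key name is illustrative) that approximates session semantics with storage.local by clearing the data when the browser starts:

// background.js — emulate session-scoped storage with storage.local.
browser.runtime.onStartup.addListener(() => {
  // Drop "session" data left over from the previous browser session.
  browser.storage.local.remove("sessionCache");
});

async function setSessionValue(value) {
  await browser.storage.local.set({ sessionCache: value });
}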

Hopefully, we’ve provided helpful information so you can use the new MV2 features to start your migration to MV3. As always, we appreciate your feedback and welcome questions. Here are the ways to get in touch:

The post Begin your MV3 migration by implementing new features today appeared first on Mozilla Add-ons Community Blog.

Andrew HalberstadtHow to Work on Taskcluster Github

Taskcluster Github is the Taskcluster service responsible for kick-starting tasks on Github repositories. At a high level:

  1. You install a Taskcluster app from the Github marketplace.
  2. This app sends webhooks to the Github service.
  3. Upon receiving a webhook, the Github service processes your repository’s .taskcluster.yml file.
  4. The Github service schedules tasks (if any) and updates the Github checks suite, or comments on your push / pull-request if there is an error.

While the service itself is relatively simple, testing it locally can be a pain! One approach might be to try and synthesize Github’s webhook events, and then intercept the network requests that the Github service makes in response. But this is tricky to do, and without actually seeing the results in a proper Github repo, it’s hard to be sure that your changes are working as intended.

Ideally you would have a real repo, with a development version of the app listed in the Github Marketplace, hooked up to a Taskcluster Github service running on your local machine. This way you could trigger webhooks by performing real actions in your repo (such as opening a pull-request). Better yet, you could see exactly how your Github service changes react!

Thanks to a lot of great work from Yarik, this is easier than ever and is all documented (or linked to) from this page. If you are already familiar with Taskcluster development, or enjoy figuring things out yourself, you may wish to skip this post and read the docs instead. But if you are a Taskcluster newbie and would appreciate some hand-holding, follow along for a step-by-step tutorial on how to work on and test Taskcluster Github!

Hacks.Mozilla.OrgRevamp of MDN Web Docs Contribution Docs

The MDN Web Docs team recently undertook a project to revamp and reorganize the “Contribution Docs”. These are all the pages on MDN that describe what’s what – the templates and page structures, how to perform a task on MDN, how to contribute to MDN, and the community guidelines to follow while contributing to this massive open source project.

The contribution docs are an essential resource that helps authors navigate the MDN project. Both the community and the partner and internal teams reference them regularly to cross-check our policies or how-tos. Therefore, it was becoming important that we spruce up these pages to keep them relevant and up to date.

Cleanup

This article describes the updates we made to the “Contribution Docs”.

Reorganization

To begin with, we grouped and reorganized the content into two distinct buckets – Community guidelines and Writing guidelines. This is what the project outline looks like now:

  • You’ll now find all the information about open source etiquette, discussions, process flows, users and teams, and how to get in touch with the maintainers in the Community guidelines section.
  • You’ll find the information about how to write for MDN, what we write, what we regard as experimental, and so on in the Writing guidelines section.

Next, we shuffled the information around a bit so that logically similar pieces sit together. We also collated information that was scattered across multiple pages into more logical chunks.

For example, the Writing style guide now also includes information about “Write with SEO in mind”, which was earlier a separate page elsewhere.

We also restructured some documents, such as the Writing style guide. This document is now divided into the sections “General writing guidelines”, “Writing style”, and “Page components”. In the previous version of the style guide, everything was grouped under “Basics”.

Updates and rewrites

In general, we reviewed and removed outdated as well as repeated content. The cleanup effort also involved doing the following:

  • Removing and redirecting common procedural instructions, such as setting up Git and using Github, to Github docs, instead of repeating the steps on MDN.
  • Moving some repository-specific information to the respective repository. For example, a better home for the content about “Matching web features to browser release version numbers” is in the mdn/browser-compat-data repository.
  • Rewriting a few pages to make them relevant to the currently followed guidelines and processes.
  • Documenting our process flows for issues and pull requests on mdn/content vs other repositories on mdn. This is an ongoing task as we tweak and define better guidelines to work with our partners and community.

New look

As a result of the cleanup effort, the new “Contribution Docs” structure looks like this:

Community guidelines

Writing guidelines


Comparing the old with the new

The list below will give you an idea of the new home for some of the content in the previous version:

  • “Contributing to MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs
  • “Get started on MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs > Getting started with MDN Web Docs
  • “Basic etiquette for open source projects”
    • New home: Community guidelines > Contributing to MDN Web Docs > Open source etiquette
  • “Where is everything on MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs > MDN Web Docs Repositories
  • “Localizing MDN”
    • New home: Community guidelines > Contributing to MDN Web Docs > Translated content
  • “Does this belong on MDN Web Docs”, “Editorial policies”, “Criteria for inclusion”, “Process for selection”, “Project guidelines”
    • New home: Writing guidelines > What we write
  • “Criteria for inclusion”, “Process for selection”, “Project guidelines”
    • New home: Writing guidelines > What we write > Criteria for inclusion on MDN Web Docs
  • “MDN conventions and definitions”
    • New home for definitions: Writing guidelines > Experimental, deprecated and obsolete
    • New home for conventions: Writing guidelines > What we write
  • “Video data on MDN”
    • New home: Writing guidelines > How-to guides > How to add images and media
  • “Structured data on MDN”
    • New home: Writing guidelines > How-to guides > How to use structured data
  • “Content structures”
    • New home: Writing guidelines > Page structures

Summary 

The Contribution Docs are working documents — they are reviewed and edited regularly to keep them up to date with editorial and community policies. Giving them a good spring clean allows easier maintenance for us and our partners.

The post Revamp of MDN Web Docs Contribution Docs appeared first on Mozilla Hacks - the Web developer blog.

Support.Mozilla.OrgIntroducing Lucas Siebert

Hey folks,

I’m super delighted to introduce you to our new Technical Writer, Lucas Siebert. Lucas is joining the content team alongside Abby and Fabi. Some of you may have already met him in our community call back in October. Here’s a bit more info about Lucas:

Hi, everyone! I’m Lucas, Mozilla’s newest Technical Writer. I’m super excited to work alongside you all to provide content for our knowledge base. You will find me authoring, proofreading, editing, and localizing articles. If you have suggestions for making our content more accurate and user-friendly, please get in touch!

Please join me in congratulating and welcoming Lucas!

Mike HommeyAnnouncing git-cinnabar 0.5.11 and 0.6.0rc2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull from, and push to remote Mercurial repositories using git.
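
For example, cloning a Mercurial repository is an ordinary git clone with an hg:: prefix on the URL (the repository shown is just an illustration):

git clone hg::https://hg.mozilla.org/mozilla-unified

From there, git pull and git push talk to the Mercurial remote as if it were a git one.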

Get version 0.5.11 on github. Or get version 0.6.0rc2 on github.

What’s new in 0.5.11?

  • Fixed compatibility with python 3.11.
  • Disabled inexact copy/rename detection, that was enabled by accident.
  • Updated git to 2.38.1 for the helper.

What’s new in 0.6.0rc2?

  • Improvements and bug fixes to git cinnabar self-update. Note: to upgrade from 0.6.0rc1, don’t use the self-update command except on Windows. Please use the download.py script instead, or install from the release artifacts on https://github.com/glandium/git-cinnabar/releases/tag/0.6.0rc2.
  • Disabled inexact copy/rename detection, that was enabled by accident.
  • Removed dependencies on msys DLLs on Windows.
  • Based on git 2.38.1.
  • Other minor fixes.

The Rust Programming Language BlogGeneric associated types to be stable in Rust 1.65

As of Rust 1.65, which is set to release on November 3rd, generic associated types (GATs) will be stable — over six and a half years after the original RFC was opened. This is truly a monumental achievement; however, as with a few of the other monumental features of Rust, like async or const generics, there are limitations in the initial stabilization that we plan to remove in the future.

The goal of this post is not to teach about GATs, but rather to briefly introduce them to any readers that might not know what they are and to enumerate a few of the limitations in initial stabilization that users are most likely to run into. More detailed information can be found in the RFC, in the GATs initiative repository, in the previous blog post during the start of the stabilization push, in the associated items section in the nightly reference, or in the open issues on Github for GATs.

What are GATs?

At its core, generic associated types allow you to have generics (type, lifetime, or const) on associated types. Note that this is really just rounding out the places where you can put generics: for example, you can already have generics on freestanding type aliases and on functions in traits. Now you can just have generics on type aliases in traits (which we just call associated types). Here's an example of what a trait with a GAT would look like:

trait LendingIterator {
    type Item<'a> where Self: 'a;

    fn next<'a>(&'a mut self) -> Self::Item<'a>;
}

Most of this should look familiar; this trait looks very similar to the Iterator trait from the standard library. Fundamentally, this version of the trait allows the next function to return an item that borrows from self. For more detail about the example, as well as some info on what that where Self: 'a is for, check out the push for stabilization post.

In general, GATs provide a foundational basis for a vast range of patterns and APIs. If you really want to get a feel for how many projects have been blocked on GATs being stable, go scroll through the tracking issue: you will find numerous issues from other projects linking to those threads over the years saying something along the lines of "we want the API to look like X, but for that we need GATs" (or see this comment that has some of these put together already). If you're interested in how GATs enable a library to do zero-copy parsing, resulting in nearly a ten-fold performance increase, you might be interested in checking out a blog post on it by Niko Matsakis.

All in all, even if you won't need to use GATs directly, it's very possible that the libraries you use will use GATs either internally or publicly for ergonomics, performance, or just because that's the only way the implementation works.

When GATs go wrong - a few current bugs and limitations

As alluded to before, this stabilization is not without its bugs and limitations. This is not atypical compared to prior large language features. We plan to fix these bugs and remove these limitations as part of ongoing efforts driven by the newly-formed types team. (Stay tuned for more details in an official announcement soon!)

Here, we'll go over just a couple of the limitations that we've identified that users might run into.

Implied 'static requirement from higher-ranked trait bounds

Consider the following code:

use std::fmt::Debug;

trait LendingIterator {
    type Item<'a> where Self: 'a;
}

pub struct WindowsMut<'x, T> {
    slice: &'x mut [T],
}

impl<'x, T> LendingIterator for WindowsMut<'x, T> {
    type Item<'a> = &'a mut [T] where Self: 'a;
}

fn print_items<I>(iter: I)
where
    I: LendingIterator,
    for<'a> I::Item<'a>: Debug,
{ ... }

fn main() {
    let mut array = [0; 16];
    let slice = &mut array;
    let windows = WindowsMut { slice };
    print_items::<WindowsMut<'_, usize>>(windows);
}

Here, imagine we wanted to have a LendingIterator where the items are overlapping slices of an array. We also have a function print_items that prints every item of a LendingIterator, as long as they implement Debug. This all seems innocent enough, but the above code doesn't compile — even though it should. Without going into details here, the for<'a> I::Item<'a>: Debug currently implies that I::Item<'a> must outlive 'static.

This is not really a nice bug. And of all the ones we'll mention today, this will likely be the one that is most limiting, annoying, and tough to figure out. This pops up much more often with GATs, but can be found with code that doesn't use GATs at all. Unfortunately, fixing this requires some refactoring in the compiler that isn't a short-term project. It is on the horizon though. The good news is that, in the meantime, we are working on improving the error message you get from this code. This is what it will look like in the upcoming stabilization:

error[E0597]: `array` does not live long enough
   |
   |     let slice = &mut array;
   |                 ^^^^^^^^^^ borrowed value does not live long enough
   |     let windows = WindowsMut { slice };
   |     print_items::<WindowsMut<'_, usize>>(windows);
   |     -------------------------------------------- argument requires that `array` is borrowed for `'static`
   | }
   | - `array` dropped here while still borrowed
   |
note: due to current limitations in the borrow checker, this implies a `'static` lifetime
   |
   |     for<'a> I::Item<'a>: Debug,
   |                          ^^^^

It's not perfect, but it's something. It might not cover all cases, but if you have a for<'a> I::Item<'a>: Trait bound somewhere and get an error that says something doesn't live long enough, you might be running into this bug. We're actively working to fix this. However, this error doesn't actually come up as often as you might expect while reading this (from our experience), so we feel the feature is still immensely useful even with it around.

Traits with GATs are not object safe

So, this one is a simple one. Making traits with GATs object safe is going to take a little bit of design work. To get an idea of the work left to do here, let's start with a bit of code that you could write on stable today:

fn takes_iter(_: &dyn Iterator) {}

Well, you can write this, but it doesn't compile:

error[E0191]: the value of the associated type `Item` (from trait `Iterator`) must be specified
 --> src/lib.rs:1:23
  |
1 | fn takes_iter(_: &dyn Iterator) {}
  |                       ^^^^^^^^ help: specify the associated type: `Iterator<Item = Type>`

For a trait object to be well-formed, it must specify a value for all associated types. For the same reason, we don't want to accept the following:

fn no_associated_type(_: &dyn LendingIterator) {}

However, GATs introduce an extra bit of complexity. Take this code:

fn not_fully_generic(_: &dyn LendingIterator<Item<'static> = &'static str>) {}

So, we've specified the value of the associated type for one value of the Item's lifetime ('static), but not for any value, like this:

fn fully_generic(_: &dyn for<'a> LendingIterator<Item<'a> = &'a str>) {}

While we have a solid idea of how to implement this requirement in some future iteration of the trait solver (one that uses more logical formulations), implementing it in the current trait solver is more difficult. Thus, we've chosen to hold off on this for now.

The borrow checker isn't perfect and it shows

Keeping with the LendingIterator example, let's start by looking at two methods on Iterator: for_each and filter:

trait Iterator {
    type Item;

    fn for_each<F>(self, f: F)
    where
        Self: Sized,
        F: FnMut(Self::Item);
    
    fn filter<P>(self, predicate: P) -> Filter<Self, P>
    where
        Self: Sized,
        P: FnMut(&Self::Item) -> bool;
}

Both of these take a function as an argument. Closures are often used for these. Now, let's look at the LendingIterator definitions:

trait LendingIterator {
    type Item<'a> where Self: 'a;

    fn for_each<F>(mut self, mut f: F)
    where
        Self: Sized,
        F: FnMut(Self::Item<'_>);

    fn filter<P>(self, predicate: P) -> Filter<Self, P>
    where
        Self: Sized,
        P: FnMut(&Self::Item<'_>) -> bool;
}

Looks simple enough, but if it really was, would it be here? Let's start by looking at what happens when we try to use for_each:

fn iterate<T, I: for<'a> LendingIterator<Item<'a> = &'a T>>(iter: I) {
    iter.for_each(|_: &T| {})
}

error: `I` does not live long enough
   |
   |     iter.for_each(|_: &T| {})
   |                   ^^^^^^^^^^

Well, that isn't great. Turns out, this is pretty closely related to the first limitation we talked about earlier, even though the borrow checker does play a role here.

On the other hand, let's look at something that's very clearly a borrow checker problem, by looking at an implementation of the Filter struct returned by the filter method:

impl<I: LendingIterator, P> LendingIterator for Filter<I, P>
where
    P: FnMut(&I::Item<'_>) -> bool, // <- the bound from above, a function
{
    type Item<'a> = I::Item<'a> where Self: 'a; // <- Use the underlying type

    fn next(&mut self) -> Option<I::Item<'_>> {
        // Loop through each item in the underlying `LendingIterator`...
        while let Some(item) = self.iter.next() {
            // ...check if the predicate holds for the item...
            if (self.predicate)(&item) {
                // ...and return it if it does
                return Some(item);
            }
        }
        // Return `None` when we're out of items
        return None;
    }
}

Again, the implementation here shouldn't seem surprising. We, of course, run into a borrow checker error:

error[E0499]: cannot borrow `self.iter` as mutable more than once at a time
  --> src/main.rs:28:32
   |
27 |     fn next(&mut self) -> Option<I::Item<'_>> {
   |             - let's call the lifetime of this reference `'1`
28 |         while let Some(item) = self.iter.next() {
   |                                ^^^^^^^^^^^^^^^^ `self.iter` was mutably borrowed here in the previous iteration of the loop
29 |             if (self.predicate)(&item) {
30 |                 return Some(item);
   |                        ---------- returning this value requires that `self.iter` is borrowed for `'1`

This is a known limitation in the current borrow checker and should be solved in some future iteration (like Polonius).

Non-local requirements for where clauses on GATs

The last limitation we'll talk about today is a bit different than the others; it's not a bug and it shouldn't prevent any programs from compiling. But it all comes back to that where Self: 'a clause you've seen in several parts of this post. As mentioned before, if you're interested in digging a bit into why that clause is required, see the push for stabilization post.

There is one not-so-ideal requirement about this clause: you must write it on the trait. Like with where clauses on functions, you cannot add clauses to associated types in impls that aren't there in the trait. However, if you didn't add this clause, a large set of potential impls of the trait would be disallowed.

To help users not fall into the pitfall of accidentally forgetting to add this (or similar clauses that end up with the same effect for a different set of generics), we've implemented a set of rules that must be followed for a trait with GATs to compile. Let's first look at the error without writing the clause:

trait LendingIterator {
    type Item<'a>;

    fn next<'a>(&'a mut self) -> Self::Item<'a>;
}

error: missing required bound on `Item`
 --> src/lib.rs:2:5
  |
2 |     type Item<'a>;
  |     ^^^^^^^^^^^^^-
  |                  |
  |                  help: add the required where clause: `where Self: 'a`
  |
  = note: this bound is currently required to ensure that impls have maximum flexibility
  = note: we are soliciting feedback, see issue #87479 <https://github.com/rust-lang/rust/issues/87479> for more information

This error should hopefully be helpful (you can even cargo fix it!). But, what exactly are these rules? Well, ultimately, they end up being somewhat simple: for methods that use the GAT, any bounds that can be proven must also be present on the GAT itself.

Okay, so how did we end up with the required Self: 'a bound? Well, let's take a look at the next method. It returns Self::Item<'a>, and we have an argument &'a mut self. We're getting a bit into the details of the Rust language here, but because of that argument, we know that Self: 'a must hold. So, we require that bound.

We're requiring these bounds now to leave room in the future to potentially imply these automatically (and of course because it should help users write traits with GATs). They shouldn't interfere with any real use-cases, but if you do encounter a problem, check out the issue mentioned in the error above. And if you want to see a fairly comprehensive testing of different scenarios on what bounds are required and when, check out the relevant test file.

Conclusion

Hopefully the limitations brought up here and the explanations thereof don't detract from the overall excitement of GATs stabilization. Sure, these limitations do, well, limit the number of things you can do with GATs. However, we would not be stabilizing GATs if we didn't feel that GATs are still immensely useful. And we wouldn't be stabilizing GATs if we didn't feel that the limitations are solvable (in a backwards-compatible manner).

To conclude things, all the various people involved in getting this stabilization to happen deserve the utmost thanks. As said before, it's been 6.5 years coming and it couldn't have happened without everyone's support and dedication. Thanks all!

Mozilla Localization (L10N)L10n Report: October 2022 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

First of all, thanks to all localizers who contributed to a successful MR release (106) for Firefox desktop. While the new content wasn’t as large as previous major releases, it was definitely challenging, with new feature names added for the first time in a long time.

What’s next? We expect a period of stabilization, with bug fixes that will require new strings, followed by a low volume of new content. We’ll make sure to keep an eye out for the next major release in 2023, and provide as much context as possible for both translation and testing.

Now more than ever it’s a good time to make sure you’re following the Bugzilla component for your locale, testing Nightly builds, and keeping an eye out for potential feedback on social media.

One other update is that we have made significant progress in removing legacy formats from Firefox:

  • All DTD strings have been removed and migrated to Fluent. Given the nature of our infrastructure — we need to support all shipping versions including ESR — the strings will remain available in Pontoon until late Summer 2023, when Firefox ESR 102 will become obsolete. In the meantime, all these files have been marked as low priority in Pontoon (see for example Italian, tag “Legacy DTD strings (ESR 102)”).
  • We have started migrating some plural strings from .properties to Fluent. We are aware that plural forms in .properties were confusing, using a semicolon as separator and only a comment to distinguish them from standard strings. For this reason, we’ll also try to prevent developers from adding new plural strings using this format.

What’s new or coming up in mobile

We have recently launched our Major Release on both Mobile and Desktop! This was the v106 release. Thank you to all localizers who have worked hard on this global launch. More than 274 folks worked on it, contributing approximately 67,094 translations!

Thank you!

Here are the main MR features on mobile:

  • New wallpapers
  • Recently synced tabs will now appear in the “Jump Back” section of your home page
  • Users will see CFR (UI popups) pointing to the new MR features. Existing users updating to 106 should also see new onboarding screens introducing the MR features

What’s new or coming up in web projects

Firefox Relay Website

A bunch of strings were added as the result of a new feature that’s only available in Canada and the US at the moment. Locale-specific files were created. This is the first time a product team has targeted both English and non-English users in these two countries with a new feature. Since we don’t have Canadian French and US Spanish communities, these pages were assigned to the French and Mexican Spanish communities respectively. Please give these pages higher priority as they are time sensitive and there is a promotion going on. The promotion encourages users to sign up for both Firefox Relay and Mozilla VPN as a bundle at a discounted price. Thanks to both communities for helping out.

There will be promotional strings added to the Firefox Relay Add-on project. The strings are available for all locales to localize but the promotion is only visible for users in the US and Canada.

What’s new or coming up in Pontoon

Pontoon profile pages have a brand new look: check out this blog post for more information about this change, and don’t forget to update your profile with relevant contact information, to help both project managers and fellow localizers get in touch if needed.

Events

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion


Image by Elio Qoshi


Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!


Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Mozilla Performance BlogWhat’s new with the Firefox Profiler? (Q3 2022)

As the Perf-Tools team, we are responsible for the Firefox Profiler. This newsletter gives an overview of the new features and improvements we’ve made in the third quarter of 2022.

You can find the previous newsletter here which was about the first half of 2022.

Here are some highlights.

Power tracks on Windows 11 and Apple Silicon

Thanks to new capabilities in Windows 11 and on Apple Silicon, we can now capture the power use of Firefox while profiling. We gather this information into useful power tracks, with tooltips showing both the instant power and the energy used in the current range:

3 tracks dedicated to the power usage are displayed, including a tooltip on mouse hover showing absolute values.

This can be enabled by using the preset Power from the profiler popup:

Select the "Power" preset using the profiler popup

In October, the work continued to support this feature on Linux and Intel Macs.

Thanks to Florian Quèze and Dan Robertson for working on this!

Improvements in the timeline

We’ve been busy improving the timeline (the top part of the profiler analysis UI) to make it easier to use when there are a lot of tracks.

Other changes and fixes

Improvements in the user interface

We also made some small usability changes in other parts of the interface, also with the help of a few contributors.

Importers and non-visible changes

Some features and fixes related to importers were done, especially thanks to a few contributors:

And finally we achieved a few “under the hood” changes:

Contributors in Q3 2022

Lots of awesome people contributed to our codebases, both on GitHub and mozilla-central. We are thankful to all of them! Here’s a list of people who contributed to Firefox Profiler code:

  • Amila Welihinda (amilajack)
  • Bhavya Joshi (Bhavya0304)
  • Florian Quèze (fqueze)
  • Johannes Bechberger (parttimenerd)
  • Julien Wajsberg (julienw)
  • Khairul Azhar Kasmiran (kazarmy)
  • Marc Leclair (MarcLeclair)
  • Markus Stange (mstange)
  • Nazım Can Altınova (canova)
  • Paul Adenot (padenot)

We are also thankful to our localization team:

  • Andreas Pettersson (sv-SE)
  • Artem Polivanchuk (uk)
  • Bund10z (es-CL)
  • Fjoerfoks (fy-NL)
  • Francesco Lodolo (it)
  • Ian Neal (en-GB)
  • Jim Spentzos (el)
  • Luna Jernberg (sv-SE)
  • Marcelo Ghelman (pt-BR)
  • Mark Heijl (nl)
  • Melo46 (ia)
  • Michael Köhler (de)
  • Pin-guang Chen (zh-TW)
  • Théo Chevalier (fr)
  • ZiriSut (kab)
  • ravmn (es-CL)
  • robovoice (de)
  • wxie (zh-CN)
  • Іhor Hordiichuk (uk)
  • 你我皆凡人 (zh-CN)

Our translations happen on Pontoon, and you’re very welcome to contribute, either for an existing or for a new language.

Thanks a lot!

Conclusion

Thanks for reading! If you have any questions or feedback, please feel free to reach out to me on Matrix (@julienw:mozilla.org). You can also reach out to our team on the Firefox Profiler channel on Matrix (#profiler:mozilla.org).

If you profiled something and are puzzled with the profile you captured, we also have the Joy of Profiling (#joy-of-profiling:mozilla.org) channel, where people share their profiles and get help from people who are more familiar with the Firefox Profiler. In addition, we have the Joy of Profiling Open Sessions, where some Firefox Profiler and Performance engineers gather on a Zoom call to answer questions or analyze the profiles you captured. These usually happen every Monday, and you can follow the “Performance Office Hours” calendar to learn more.

Chris H-CThis Week in Glean: Page Load Data, Three Ways (Or, How Expensive Are Events?)

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. All “This Week in Glean” blog posts are listed in the TWiG index).

At Mozilla we make, among other things, Web Browsers which we tend to call Firefox. The central activity in a Web Browser like Firefox is loading a web page. It gets done a lot by each and every one of our users, and so you can imagine that data about pageloads is of important business interest to us.

But exactly because this is done a lot and by every one of our users, this inspires concerns of scale and cost. How much does it cost us to learn more about pageloads?[0]

As with all things in Data, the answer is the same: “Well, it depends.”

In this case it depends on how you record the data. How you record the data depends on what questions you hope to answer with it. We’re going to stick to the simplest of questions to make this (highly-suspect) comparison even remotely comparable.

Option 1: Just the Counts, Ma’am

I say page loads are done a lot, but how much is “a lot”? If that’s our only question, maybe the data we need is simply a count of pageloads. Glean already has a metric type for counting things, so it should be fairly quick to implement.
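
As a rough sketch — the metric names here are hypothetical, and the exact call depends on the language binding — recording that count from Firefox Desktop JavaScript would look something like this:

// Assuming a counter metric declared in metrics.yaml under the
// category "page" with the name "loads": add one per page load.
Glean.page.loads.add(1);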

This should be cheap, right? Just a single number? Well, it depends.

Scale 1: Frequency

The count of pageloads is just a single number. One, maybe as many as eight, bytes to record, store, transmit, retain, and analyze. But Firefox has to report it more than once, so we need to first scale our cost of “one, maybe as many as eight, bytes” by the number of times we send this information.

When we first implemented Firefox’s pageload count in Glean, I wanted to send it on the builtin “metrics” ping which is sent once a day from anyone running Firefox that day[1]. In an effort to gain more complete and timely data, we ended up adding it to the builtin “baseline” ping which is sent (on average for Firefox Desktop) 8 or more times per day.

For our frequency scale we thus use 8/day.

Scale 2: Population

These 8 recordings per day are sent by about 200M users over a month. Days and months aren’t easy to scale between as not all users use Firefox every day, and our population gains new users and loses old users at variable rates… so I recalculated the Frequency scale to be in terms of months and found that we get 68 pings per month from these roughly 200M users.

So the cost is pretty easy to calculate then? Whatever the cost is of storing and transmitting 200M x 68/month x eight bytes ~= 109 GB?

Not entirely. But so long as those other costs are comparable between options, we can just treat them as noise. This cost, rendered as the size of the data, of about 109 GB? It’ll do.

Option 2: What an Event

Page loads are interesting not just in how many of them there are, but also about what type of load they are and how long the load took. The order of a page load in between other events might also be of interest: did it happen before or after some network trouble? Did a bunch of pageloads happen all at once, or spread across the day? We might wish to instrument page loads as Glean events.

Events are each more expensive than a count. They carry a timestamp (eight bytes) and repeat their names each time they’re recorded (some strings, say fifteen bytes).

(We are not counting the load type or how long the load took in our calculations of the size of an individual sample as we’re still trying to compare methods of answering the same “How many page loads are there?” question.)

Scale 3: Page Loads

“Each time they’re recorded”, huh. Guess that means we get to multiply by the number of page loads. Each Firefox Desktop user, over the course of a month, loads on average 1190 pages[2]. This means instead of sending 68 numbers a month, we’re sending 1190 batches of strings a month.

So the comparable cost is whatever the cost is of storing and transmitting 200M x (eight bytes + fifteen bytes) x 1190 ~= 5.47 TB.

We’ve jumped an order of magnitude here. And we’re not done.

Option 3: Custom Pings, and Custom Pings Only

What if the context we wish to record alongside the event of a page load cannot fit inside Glean’s prudent “event” metric type limits? What if the collected pageload data would benefit from a retention limit or access control list different from other counts or events? What if you want to submit this data to be uploaded as soon as it has been recorded? In that case, we could send a pageload as a Glean custom ping.

We’ve not (yet) done this in Firefox Desktop, at least partially because it complicates ordering amongst other events: the Glean SDK expends a lot of effort to ensure the timestamps between events are reliable, while ping times are client times which are subject to the whims of the user. So I’m going to get even hand-wavier than before as I try to determine how large each individual data sample will be.

A Glean custom ping without any metrics in it comes to around 500 bytes[3]. When our data platform ingests the ping and turns it into a row in a dataset, we add some metadata which adds another 300 bytes or so (which only affects storage inside the Data Platform and doesn’t add costs to client storage or client bandwidth).

We could go deeper and cost out the network headers, the costs of using TLS to ensure the integrity of the connection… but we’d be here all day. So I’m gonna call that 200 bytes to make it a nice round 1000 bytes per ping.

We’re sending these pings per pageload, so the cost is whatever the cost is of storing and transmitting 200M x 1190 x 1000 bytes = 238 TB.
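
For the curious, here’s the whole napkin as one runnable JavaScript sketch; all the inputs (200M users, 68 baseline pings a month, 1190 page loads a month, and the per-sample byte sizes) are the figures from this post:

// Back-of-napkin reproduction of the post's three estimates.
const USERS = 200e6;

const countsBytes = USERS * 68 * 8;            // Option 1: a counter
const eventsBytes = USERS * 1190 * (8 + 15);   // Option 2: events
const pingsBytes  = USERS * 1190 * 1000;       // Option 3: custom pings

const fmt = (bytes) => `${(bytes / 1e12).toFixed(2)} TB`;
console.log(fmt(countsBytes)); // "0.11 TB" (~109 GB)
console.log(fmt(eventsBytes)); // "5.47 TB"
console.log(fmt(pingsBytes));  // "238.00 TB"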

Rule of Thumb: 50x

There you have it: for each step up the cost ladder you’re adding an extra 50x multiplier to the cost of storing and transmitting the data. The reality’s actually much worse if it’s harder to analyze and reason about the data as it gets more complex (which it is in most cases) because, as you might remember from one of my previous explorations in costing out metrics: it’s the human costs of things (like analysis) that really getcha.

But you have to balance it out. If adding more context and information ensures your analyses only have to look in one place for their data instead of trying to tie together loosely-coupled concepts from multiple locations… if using a custom ping ensures you have everything you need and don’t have to form a committee to resource an engineer to add an implementation which needs to be deployed and individually validated… if you’re willing to bet 50x or 250x the cost on getting it right the first time, then that could be a good price to pay.

But is this the case for you and your data?

Well, it depends.

:chutten

[0]: Avid readers of this blog may notice that this isn’t the first time I’ve written on the costs of data. And it likely won’t be the last!

[1]: How often a “metrics” ping is sent is a little more complicated than “once a day”, but it averages out to about that much so I’m sticking with it for this napkin.

[2]: Yes there are some wild and wacky outliers included in the figure “an average of 1190 page loads” that I’m not bothering to clean up. You can Page Loads Georg to your heart’s content.

[3]: This is about how many characters the JSON-encoded ping payload comes to, uncompressed.

Karl Dubost"Thousand" Values of CSS

W3C TPAC 2022 in Vancouver is over. It was strange to meet again after three years apart. There would be a lot more to say about this. During the CSS WG meetings, participants talk about all kinds of CSS values. It quickly gets confusing.

The processing order is explained in detail in CSS Cascading and Inheritance Level 5.

night on the beach in Vancouver.

Actual Value

There is not really a definition for the actual value.

A used value is in principle ready to be used, but a user agent may not be able to make use of the value in a given environment. For example, a user agent may only be able to render borders with integer pixel widths and may therefore have to approximate the used width. Also, the font size of an element may need adjustment based on the availability of fonts or the value of the font-size-adjust property. The actual value is the used value after any such adjustments have been made.

in 4.6. Actual Values, CSS Cascading and Inheritance Level 5, Editor’s Draft, 21 October 2022

If I had to rewrite this, I would probably say:

The actual value is the used value after any adjustments made for the computing environment.

Think, for example, about rounding adjustments when drawing on a screen at a certain resolution.

Used Value

The used value is the result of taking the computed value and completing any remaining calculations to make it the absolute theoretical value used in the formatting of the document.

in 4.5. Used Values, CSS Cascading and Inheritance Level 5, Editor’s Draft, 21 October 2022

Let's reuse the example of the specification.

For example, a declaration of width: auto can’t be resolved into a length without knowing the layout of the element’s ancestors, so the computed value is auto, while the used value is an absolute length, such as 100px.

Computed Value

The computed value is the result of resolving the specified value as defined in the “Computed Value” line of the property definition table, generally absolutizing it in preparation for inheritance.

in 4.4. Computed Values, CSS Cascading and Inheritance Level 5, Editor’s Draft, 21 October 2022

For example, suppose we specify the font-size of a paragraph while the default font-size of the document is 16px.

p {font-size: 2em;}

The computed value will be 32px.

🚨 window.getComputedStyle(elt) doesn’t systematically return the computed value. It returns the resolved value (see below).

  <p style="width:auto;">confusing?</p>
</div>
const para = document.querySelector('p');
const usedValue = window.getComputedStyle(para).width;

The computed value will be auto, but the resolved value will be the current width of the parent element.

Specified Value

The specified value is the value of a given property that the style sheet authors intended for that element. It is the result of putting the cascaded value through the defaulting processes, guaranteeing that a specified value exists for every property on every element.

in 4.3. Specified Values, CSS Cascading and Inheritance Level 5, Editor’s Draft, 21 October 2022

<p class="a">falls</p>
<p>and streams</p>

with

p {color: green;}
p.a {color: red;}

To extract the specified value

document.styleSheets[0].cssRules[1].style.getPropertyValue('color')

will return red.

If you need access to the full declaration, use cssRule.cssText

document.styleSheets[0].cssRules[1].cssText

will return p.a { color: red; }

Not to be confused with

document.querySelector('.a').style.cssText

which will return an empty string, because there is no style attribute on the element.

Cascaded Value

The cascaded value represents the result of the cascade: it is the declared value that wins the cascade (is sorted first in the output of the cascade). If the output of the cascade is an empty list, there is no cascaded value.

in 4.2. Cascaded Values, CSS Cascading and Inheritance Level 5, Editor’s Draft, 21 October 2022

<p class="a">falls</p>
<p>and streams</p>

with

p {color: green;}
p.a {color: red;}

The cascaded value for <p class="a"> is red. Note that the used value will be rgb(255, 0, 0).

Declared Value

Each property declaration applied to an element contributes a declared value for that property associated with the element.

in 4.1. Declared Values, CSS Cascading and Inheritance Level 5, Editor’s Draft, 21 October 2022

Initial Value

Each property has an initial value defined in the property’s definition table. If the property is not an inherited property and the cascade does not result in a value then the specified value of the property is its initial value.

in 7.1 Initial Values, CSS Cascading and Inheritance Level 5, Editor’s Draft, 21 October 2022

For example, the initial value of background-color is transparent. This is the color that you can find in the default CSS.

Resolved Value

This time, we change specifications.

getComputedStyle() was historically defined to return the "computed value" of an element or pseudo-element. However, the concept of "computed value" changed between revisions of CSS while the implementation of getComputedStyle() had to remain the same for compatibility with deployed scripts. To address this issue this specification introduces the concept of a resolved value.

in 9. Resolved Values, CSS Object Model (CSSOM), Editor’s Draft, 18 October 2022

The resolved value is either the computed value or the used value. Which one depends on the property.
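
A minimal sketch of the difference, assuming a rendered page with at least one <p> element (the exact pixel value for width depends on your layout):

// For width, the resolved value is the used value: an absolute
// length from layout, even when the specified value is auto.
const el = document.querySelector('p');
el.style.width = 'auto';
console.log(getComputedStyle(el).width);    // e.g. "734px", not "auto"

// For font-size, the resolved value is the computed value:
// 2em of a 16px parent resolves to "32px".
el.style.fontSize = '2em';
console.log(getComputedStyle(el).fontSize); // "32px" if the parent is 16px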

Relative value

This is not really defined, but it is mentioned in the computed value explanations.

A specified value can be either absolute (i.e., not relative to another value, as in red or 2mm) or relative (i.e., relative to another value, as in auto, 2em). Computing a relative value generally absolutizes it.

in example 12 of the section about computed values.

For example: em values, percentages, relative URLs, etc.

Absolute Value

Not defined, but the term is used. The absolute value is a value which has no dependency on the environment, such as red or 3px.

If I have forgotten some, let me know.

Otsukare!

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 106-107)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 106 and 107 Nightly release cycles.

👷🏽‍♀️ New features

  • We’ve implemented support for module import.meta.resolve (disabled by default).
  • We’re working on implementing more Wasm GC instructions (disabled by default).

⚙️ Modernizing JS modules

We’re working on improving our implementation of modules. This includes supporting modules in Workers, adding support for Import Maps, and ESMification (replacing the JSM module system for Firefox internal JS code with standard ECMAScript modules).

  • See the AreWeESMifiedYet website for the status of ESMification.

💾 Robust Caching

We’re working on better (in-memory) caching of JS scripts based on the new Stencil format. This will let us integrate better with other resource caches used in Gecko and might also allow us to cache JIT-related hints.

The team is currently working on removing the dependency on JSContext for off-thread parsing. This will make it easier to integrate with browser background threads and will further simplify the JS engine.

  • We’re refactoring exception handling in the frontend to be less dependent on JSContext.
  • We’ve simplified the Stencil APIs for off-thread parsing.

🚀 Performance

  • We’ve added a separate GC Zone for shared permanent things. This allowed us to make some fields non-atomic.
  • We optimized the megamorphic cache by inlining certain cache lookups in JIT code.
  • We fixed a performance cliff in the frontend with concurrent delazification enabled.
  • We fixed scalar replacement of ‘call objects’ to work again.
  • We improved performance of lexical environment objects.
  • We improved bytecode generated for certain post-increment/decrement operations.
  • We implemented JIT inlining of String.prototype.toLowerCase based on a lookup table.
  • We simplified and optimized the object slots (re)allocation code.
  • We added a fast path for megamorphic property set/add for plain objects.
  • We optimized code in the JITs for adding new dense elements to objects.

📚 Miscellaneous

  • We changed GC allocations to avoid undefined behavior.
  • We added an API for the Firefox profiler to access JIT code.
  • We cleaned up Wasm GC objects.
  • We added more documentation for the register allocator.
  • We updated irregexp to the latest version.
  • We updated wast to the latest version.
  • We cleaned up a lot of code in the frontend.

Nick FitzgeraldHow Fuzzy are Your Fuzzers?

As long as a fuzzer is uncovering a steady stream of bugs, we can have confidence it’s serving its purpose. But a silent fuzzer is harder to interpret: is our program finally free of bugs, or is the fuzzer simply unable to reach the code in which they are hidden?

Code coverage reports can help here: we can manually check which functions and blocks of code the fuzzer has executed. We can see what coverage is missing that we want or expected to be covered, and then figure out ways to help the fuzzer explore that code. We implement those changes, run the fuzzer again, check the coverage reports again, and can verify our changes had the desired effect.

But how can we be sure that the fuzzer will continue exercising these code paths — especially in evolving code bases with many developers collaborating together? Imagine this scenario: we have a generator that creates test cases that are guaranteed to be syntactically correct, but aren’t guaranteed to type check even if they do in practice 99% of the time. Therefore, our try-and-compile-the-input fuzz target intentionally ignores type errors so it can skip to the next probably-well-typed input, hoping that compiling that next input will trigger an internal compiler assertion or find some other bug. However, some change in one of the generator’s dependencies perturbed the generator so that now it only generates ill-typed programs. After this change, the fuzzer will never exercise our compiler’s mid-end optimizations and backend code generation because it always bounces off the type checker. This is a huge reduction in code exercised by the fuzzer and nothing alerted us to this regression!1

Manually checking coverage reports every week, month, or whenever you happen to remember is tedious. Even worse, if we accidentally introduce a coverage regression, we won’t catch that until the next manual review. What if we unknowingly cut a release during one of these periods? We could ship bugs that we would otherwise have caught — not good!

This isn’t a hypothetical scenario. We’ve been bitten by it in the Wasmtime project, as detailed in the following quote from one of our security advisories:

This bug was discovered when we discovered that Wasmtime’s fuzz target for exercising GC and stack maps, table_ops, was mistakenly not performing any actual work, and hadn’t been for some time now. This meant that while the fuzzer was reporting success it wasn’t actually doing anything substantive. After the fuzz target was fixed to exercise what it was meant to, it quickly found this issue.

Catching bugs early, before you release them, is much preferable to the alternative! Exposing users to bugs isn’t good and writing security advisories and patches isn’t fun.

A more robust solution than periodic coverage reviews is to manually instrument your code with counters or other metrics and then write tests that run your fuzz target N times and assert that your metrics match your expectations within some margin of error. This technique is really low effort and high reward for fuzz targets that are specifically designed to exercise one corner of your system. Even just a single counter or boolean flag can provide lots of value!

For example, I wrote a Wasmtime fuzz target to test our backtrace capturing functionality. The fuzz target is composed of two parts:

  1. A generator that creates pseudo-random Wasm programs consisting of a bunch of functions that arbitrarily call each other, all while dynamically maintaining a shadow stack of function activations that always reflects actual execution.

  2. An oracle that takes these generated test cases, runs them in Wasmtime, and asserts that when we capture an actual backtrace in Wasmtime, it matches the generated program’s shadow stack of activations. Crucially, the oracle also returns the length of the deepest backtrace that it captured.

Now, we need to make sure that this fuzz target is actually exercising what we want it to, and isn’t going off the rails by, for example, returning early from the first function every time and therefore never actually exercising stack capture with many frames on the stack. To do this, I wrote a regular test that generates random buffers of data with an RNG, generates test cases from that random data, runs our oracle on those test cases, and asserts that we capture a stack trace of length ten in a reasonable amount of time. Easy!

// We should quickly capture a stack at least this deep. We
// consider this deep enough to be a "non-trivial" stack.
const TARGET_STACK_DEPTH: usize = 10;

#[test]
fn stacks_smoke_test() {
    // Use a fixed seed so the corpus of generated test cases
    // is deterministic.
    let mut rng = SmallRng::seed_from_u64(0);
    let mut buf = vec![0; 2048];

    for _ in 0..1024 {
        rng.fill_bytes(&mut buf);

        // Generate a new `Stacks` test case from the raw
        // data, using the `arbitrary` crate.
        let u = Unstructured::new(&buf);
        if let Ok(stacks) = Stacks::arbitrary_take_rest(u) {
            // Run the test case through our `check_stacks`
            // oracle.
            let max_stack_depth = check_stacks(stacks);

            // If we reached our target stack depth, then we
            // passed the test!
            if max_stack_depth >= TARGET_STACK_DEPTH {
                return;
            }
        }
    }

    panic!(
        "never generated a `Stacks` test case that reached \
        {TARGET_STACK_DEPTH} deep stack frames",
    );
}

Now we know that we won’t ever accidentally make a change that silently makes it so that we only test capturing stack traces of depth one in this fuzz target. If we tried to make that change, this test would fail, alerting us to the problem.

Of course, this technique isn’t a silver bullet. For more general fuzz targets that are testing basically the whole system, rather than a specific feature, there isn’t a single counter or metric to rely on. Some code paths might take a while to be discovered by the fuzzer, longer than you’d want to wait for in a unit test, even if it should be found eventually. But maybe there are a few counters you can implement as low-hanging fruit and get 80% of the benefits for 20% of the effort?

Finally, that earlier quote from one of our Wasmtime security advisories ends with the following:

Further testing has been added to this fuzz target to ensure that in the future we’ll detect if it’s failing to exercise GC.

We are confident we’ll detect if that fuzz target starts failing to exercise garbage collection because now we count how many garbage collections are triggered in each iteration of the fuzz target, and assert that we trigger at least one garbage collection within a small number of iterations. Simple and easy to implement, but we’ll never have that particular whoops-we-never-triggered-a-GC-in-this-fuzz-target-designed-to-exercise-the-GC egg on our faces again!

Many thanks to Alex Crichton, Chris Fallin, and Jim Blandy for reading drafts of this blog post and providing valuable feedback!


  1. As an aside, a super neat feature for OSS-Fuzz to grow would be automatically filing an issue whenever a fuzz target’s coverage dramatically drops after pulling the latest code from upstream or something like that. 

Mike TaylorHow to get the Chrome major version from the User-Agent or UA-CH headers

Over the weekend frontend-firebrand Alex Russell tweeted that it’s confusing these days to know how to get the major version of a Chromium browser, either via the User-Agent header (given User-Agent Reduction), or from User-Agent Client Hints.

(btw, I asked DALLE2 Alex’s question “Can you make hide or hare of this?”, and yeah, it mostly can.)

a photorealistic AI-generated image of a rabbit in a field. both of its ears appear to be on the same side of its head

Before I spend any energy improving the docs (thanks for the bug, Alex), let me try to give the simplest answer to the question: how do I find out what major version of Chromium a user is on in the year 2022+?

There are two ways to go about this.

User-Agent

In Chrome 101, we shipped the terribly named “Phase 4” (that’s on me) part of the User-Agent Reduction project. Prior to version 101, you would see something in the User-Agent (either via HTTP or reflected by navigator.userAgent) that looks like so:

"Chrome/100.0.4896.75"

In Phase 4, we reduced the MINOR.BUILD.PATCH portions of the version to the constant “0.0.0”:

"Chrome/101.0.0.0"

But in terms of browser version changes coming to the UA string, that’s it. So if your use case is simple and you just need the major version of the browser, you can continue to parse the UA string as before (hopefully using an actively maintained project like ua-parser)—it will continue to be updated for each new major version update (as I write this, my browser reports Chrome/106.0.0.0).
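
If you’d rather not pull in a dependency for this one narrow case, a bare-bones sketch could look like the following (a hand-rolled regex, so treat it as illustrative rather than a substitute for a maintained parser):

// Extract the Chromium major version from a UA string.
// Returns null when no Chrome/ token is present.
function chromeMajorVersion(ua) {
  const match = ua.match(/Chrome\/(\d+)\./);
  return match ? Number(match[1]) : null;
}

chromeMajorVersion(navigator.userAgent); // e.g. 106
chromeMajorVersion("Mozilla/5.0 ... Chrome/101.0.0.0 Safari/537.36"); // 101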


User-Agent Client Hints

If you want to get slightly fancier, and ditch your existing UA parsing library dependency (but maybe pick up a structured headers parsing library dependency), you can get the Chrome version from the new Sec-CH-UA header.

Sec-CH-UA: "Chromium";v="106", "Google Chrome";v="106", "Not;A=Brand";v="99"

Admittedly, it looks kinda weird. But the good news is you don’t have to care about Client Hints to receive the header—it’s sent by default. Feel free to consult https://web.dev/migrate-to-ua-ch/ if you do have to care about those.
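
If a full structured headers parser feels like overkill, here’s a hedged server-side sketch that handles only this one header shape — each list item looks like "Brand";v="106", so a targeted regex works, though a real structured field parser is the more robust choice:

// Naive extraction of the Chromium major version from a
// Sec-CH-UA header value. Brand order is not guaranteed, so we
// match the "Chromium" entry wherever it appears in the list.
function chromiumMajorFromSecChUa(headerValue) {
  const match = headerValue.match(/"Chromium";v="(\d+)"/);
  return match ? Number(match[1]) : null;
}

chromiumMajorFromSecChUa(
  '"Chromium";v="106", "Google Chrome";v="106", "Not;A=Brand";v="99"'
); // 106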

The other cool thing is you can use the new navigator.userAgentData.brands API to get the equivalent of the Sec-CH-UA header and do something like the following (maybe working around a known bug) and it will work in Chrome, Chromium, Edge, Opera, Brave, etc.

navigator.userAgentData.brands.some(
  item => item.brand == "Chromium" && item.version > 105
)

So that’s it. If you need the major version of a Chromium browser you can keep using User-Agent, or use Sec-CH-UA.

But if you need the full version of the browser, or the OS, that’s a different blog post I’ll get to once someone tweets their way onto my TODO list.

The Talospace ProjectFirefox 106 on POWER

Firefox 106 is out, with PDF editing, the "Firefox View" feature for finding previous content on both your own desktop and any Firefox Sync-connected devices, and a big update to WebRTC. Of course, that only happens if you build with WebRTC on, and if you do you'll still need Dan Horák's patch from bug 1775202 or the browser won't link on 64-bit Power ISA (alternatively put --disable-webrtc in your .mozconfig if you don't need WebRTC). Otherwise the build works with the .mozconfigs from Firefox 105 and the PGO-LTO patch from Firefox 101.

Firefox NightlyA new release, a new button and much more – These Weeks in Firefox: Issue 126

Highlights

  • Firefox 106 went out today! Here are some blog posts talking about what is new, as well as the release notes!
  • The Unified Extensions toolbar button (part of the Manifest v3 project) has been enabled by default in Nightly.
  • Raphaël Ferrand added a patch to format the value of grid-template-areas in the inspector (bug)
    • The `grid-template-areas` portion of the Rules pane of the DevTools Inspector displays the names of the grid sections in a grid to make it easier to map their names to their locations.

      We heard you like grids.

  • Emilio is burning away XUL layout! See his mailing list post here for details.
  • Power profiling now works on Linux and macOS with Intel CPUs (previously it only worked on macOS with Apple Silicon and on Windows 11). Thanks Dan Robertson for the work on the Linux patch!

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Angel Villalobos
  • Bryan Macoy
  • Itiel
  • Jonas Jenwald [:Snuffleupagus]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Thanks to Nika, ExtensionPolicyService and WebExtensionPolicy have been tweaked to provide a thread-safe subset of these internals – Bug 1793995 and Bug 1794657
    • Nika has also switched MatchPattern and MatchGlob to the rust regex engine – Bug 1693271
  • As part of the work related to the “Unified Extensions” button
    • Added origin controls information to the panel entries – Bug 1784218
    • Attention indicator on pending content scripts and ungranted host permission – Bug 1777343 / Bug 1795597
    • Tweaked wording of the host permissions notes’ localized strings – Bug 1794504
    • Disabled the extension entry in the panel when host permissions have already been granted – Bug 1794085
    • Run content scripts on action click for MV3 extension with activeTab permission – Bug 1793494
  • Fixed (in Firefox 107) regression related to the WebExtensions install panel disappearing when switching between tabs – Bug 1792971 (regressed in Firefox 106 by Bug 1565751)
WebExtension APIs
  • Fixed NativeMessaging WebExtensions API regression introduced in 106 (fixed in 107 and fix uplifted to 106) – Bug 1791788 (regressed by Bug 1772942)
  • Improved error reported when runtime.sendNativeMessage fails – Bug 1792094
  • Improved error handling on the scripting.executeScript API method – Bug 1740608

Developer Tools

DevTools
  • Wartmanm fixed a bug in the console autocomplete when the debugger is paused (bug)
  • Zachary Svoboda made it so the JSON Viewer gets automatically focused on load, making it easier to scroll the page via the keyboard (bug)
  • Fabien Casters fixed an issue in the Netmonitor Raw header where text selection wouldn’t persist (bug)
  • Alex made the iframe we use in the Netmonitor to render HTML responses more secure, ensuring it runs in a content process (bug)
  • We fixed an annoying bug in the console input where hitting the Fn key while having text selected would delete the selection (bug)
  • Honza fixed half a dozen bugs in our documentation (bug, bug, bug, bug, bug, bug)
  • The debugger source tree in Browser Toolbox is now sorted thanks to Alex. The main thread will always appear first, then all processes sorted by pid, and finally workers (bug)
    • The debugger source tree in the Browser Toolbox showing a list of threads, starting with the Main Thread, followed by threads in various subprocesses sorted low to high by their process ID.

      This should make it easier to find the thread and code you’re looking for in the Browser Toolbox.

  • Alex is also leading an effort to properly maintain our sourcemap package (https://github.com/mozilla/source-map), which a lot of people are using as it’s faster than the Chrome/Safari implementations. He has promising results making it even faster; hopefully we’ll share some numbers in the weeks to come.
WebDriver BiDi
  • James fixed handling of undefined in pointerMove and wheel actions (bug)
  • Sasha added Realm support to `script.evaluate`, `script.callFunction`, and `script.disown` so you can evaluate script in a given realm (bug)
  • And she also enabled `script.evaluate`, `script.callFunction` and `script.disown` on release (bug).
  • A new version (v0.32.0) of geckodriver was released, with better support for Firefox in Snap containers (issue)
  • Julian added support for `referenceContext` parameter for the `browsingContext.create` command (bug, spec), so the new browsing context would be physically inserted after the referenced one.
  • Henrik fixed a bug where local IP addresses would cause Remote Agent to start WebSocket on IPv6 instead of IPv4 (bug)
  • Sasha added support for serialization of complex objects (bug)
  • Finally, during the AllHands, Sasha prototyped a WebConsole UI using WebDriver BiDi (bug). This lives on https://firefox-dev.tools/bidi-webconsole-prototype/
    • A WebConsole that looks very similar to the Firefox DevTools console, but running within a web page and connected to a local Firefox for logging and script execution.

      Note that you have to launch Firefox in a special way in order to open the port for WebDriver BiDi clients.

Fluent

ESMification status

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

Performance Tools (aka Firefox Profiler)

  • The Source View is now focusable and therefore you can copy some text out of it.

Search and Navigation

Chris H-CSeven-Year Moziversary

Seven years ago today I began working at Mozilla.

What have I been up to this year? Not blogging, that’s for sure. I’m not sure if I can lay the entire blame of this at the feet of *gestures at everything*, but with the retirement of the This Week in Glean rotation, I’ve gone from infrequently blogging to never blogging.

Which is weird. I like doing it. It can be very fun. It isn’t usually too difficult. Seems like the intersection of all the things that would make it not only something I could do but something I want to do.

And yet. Here we are with barely a post to show for it. Alas.

If blogging is what I’ve not been doing, then what have I been not not doing? More Firefox on Glean stuff. Spent a lot of time and tears trying to get a proper Migration from Legacy Telemetry to Glean moving. Alas, it is not to be. However, we’ve crested over 100 Glean metrics being sent from Firefox Desktop, and the number isn’t going down… so 2022 has been the year of Glean on the Desktop, whether it was a flagship Platform Initiative or not.

In other news, we just got back from Hawaii where there was the first real All Hands in absolutely forever (since January 2020). It was stressful and weird and emotional and quite fun and necessary and I wanna have another one and I don’t want to have to fly so far for it and and and…

Predictions for the next year of Moz Work:

  • There’ll be another All Hands
  • Glean will continue to gain ground in Firefox Desktop
  • “FOG Migration” will not happen

:chutten

Mozilla ThunderbirdNeed Help With Thunderbird? Here’s How To Get Support

We understand that email and calendaring can be a vital part of your work day, and just as important to your personal life. We also realize that sometimes you’ll have questions about using Thunderbird. That’s where the amazing Thunderbird Community enters the picture. Whether you need tech support or just need a simple answer to a question, here’s how to find the help you need. And how to help the people who are helping you!


Why Community Support?

We celebrate the fact that our software is open-source and funded by donations. It’s this community-powered model that helped us thrive during the past few years.

The generous donations of our users have allowed us to build a solid foundation for Thunderbird’s future. Our core team of engineers and developers is devoting their time to improving Thunderbird from all angles, from visuals to features to infrastructure.

But because we’re open-source, a global community of contributors also helps improve Thunderbird by adding ideas, code, bug fixes, helpful documentation, translations… and user support!

So, our approach to support reflects our commitment to open-source and open development: we invite knowledgeable, friendly people to help their fellow Thunderbird users. This means fewer barriers to getting help, regardless of your native language, your time zone, or your skill level.

Thunderbird Trouble? Try This First!

Sometimes, a custom setting or Thunderbird Add-On might be causing your problem. And there’s an easy way to figure that out: try Troubleshoot Mode.

Troubleshoot Mode (previously called Safe Mode) is a special way of starting Thunderbird that can be used to find and fix problems with your installation. Troubleshoot Mode will run Thunderbird with some features (like Add-ons) and settings disabled. If the problem you’re experiencing does not happen in Troubleshoot Mode, you’ve already done a lot to narrow down what’s causing the issue!

Always try Troubleshoot Mode before reporting a problem. Just follow this link to learn how to turn Troubleshoot Mode on and off.

#1: Thunderbird Community Support

When you have a question about Thunderbird or need some help, this dedicated support page is the best place to visit:

➡ https://support.mozilla.org/products/thunderbird

The global Thunderbird Community has many experienced experts who volunteer their time and knowledge to help fellow Thunderbird users fix their issues.

You’ll find an extensive (and always growing) knowledge base of articles covering Thunderbird’s features, and helpful how-to guides on customization, privacy settings, exporting, and much more.

If your search doesn’t produce a satisfying result, you can ask the community a question from the same page. All you’ll need is a Firefox Account and an email address to receive notifications about responses to your question.

#2: Reddit (/r/Thunderbird)

With nearly half a billion monthly active users, Reddit is ranked as the 9th most popular website in the world. You might already have an account! We have our own “Subreddit” where Thunderbird volunteers and staff members answer user questions and share important updates: https://www.reddit.com/r/Thunderbird/

Reddit works well as a support forum. It has fast notifications, a threaded conversations view, and an easy-to-read interface. If you can’t find your answer elsewhere, ask us on Reddit!

Screenshots Are Helpful: Include Them!

Taking a screenshot of the problem you’re having is a great way to show the developers and volunteers your problem, especially if you’re having difficulty describing it with words.

Before posting on the Thunderbird Support Forum or Reddit, try to capture screenshots of the issue. Here are links explaining how to do that on Windows, Linux, and macOS:

Include Your OS and Thunderbird Versions

You want your problem solved as quickly as possible so you can get on with your work! One productive step toward doing that is to always include your operating system (e.g. Ubuntu 22.04, macOS Monterey 12.5, Windows 10) and exact version of Thunderbird (e.g. Thunderbird 102.3.1) in your initial question.

It really speeds up the process and helps the Thunderbird Community to better assist you.

To find out your version of Thunderbird, click the App menu, then “Help” and then “About Thunderbird.”


Obviously we hope you never have problems with Thunderbird, but if you ever need help, we hope the above resources and tips help you solve them!

💡 Do you want to request a new feature for Thunderbird? We’d love to see your ideas! Just browse to this page on Mozilla Connect and tell us.

Thunderbird is the leading open-source, cross-platform email and calendaring client, free for business and personal use. We want it to stay secure and become even better. Donations allow us to hire more developers and add exciting new features.

Click here to make a donation. 

The post Need Help With Thunderbird? Here’s How To Get Support appeared first on The Thunderbird Blog.

Firefox NightlyPiP subtitles, screenshots in ‘about’ pages & more – These Weeks in Firefox: Issue 125

Highlights

  • Highlights and updates are a little thin this week since a lot of the team is still recovering from All Hands.
  • cmkm added support for Picture-in-Picture subtitles for Frontend Masters! Check out an example here.
  • Shane made a change to allow accessing quick actions (via ‘> <SPACEBAR>’) even when the user has removed them from the suggestions.
  • Once bug 1790855 lands (it’s currently queued for landing), the screenshots toolbar button will be enabled on about: pages when `screenshots.browser.component.enabled` is set to true. Set that pref to true to see the ongoing screenshots work.
  • Across the tree, more than 25% of our JSMs have been converted to ESMs! Keep up the good work, everybody!

Friends of the Firefox team

Introductions/Shout-Outs

  • DJ Walker
  • Jonathan Sudiaman

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Brian Pham
  • Nolan Ishii

New contributors (🌟 = first patch)

Project Updates

Fluent

  • According to the L10n Development team, we are currently on track to have all DTD strings removed from the codebase by the time Firefox 107 uplifts!
  • eemeli is in the process of converting all of our PluralForm .properties string usages to Fluent
  • https://www.arewefluentyet.com/

ESMification status

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

Picture-in-Picture

Search and Navigation

  • Quick Actions:
  • Jonathan and Nolan migrated some more of our modules to ES modules.

Hacks.Mozilla.OrgImproving Firefox responsiveness on macOS

If you’re running Firefox on macOS you might have noticed that its responsiveness has improved significantly in version 103, especially if you’ve got a lot of tabs, or when your machine is busy running other applications at the same time. This improvement was achieved via a small change in how locking is implemented within Firefox’s memory allocator.

Firefox uses a highly customized version of the jemalloc memory allocator across all architectures. We’ve diverged significantly from upstream jemalloc in order to guarantee optimal performance and memory usage for Firefox.

Memory allocators have to be thread safe and – in order to be performant – need to be able to serve a large number of concurrent requests from different threads. To achieve this, jemalloc uses locks within its internal structures that are usually only held very briefly.

Locking within the allocator is implemented differently than in the rest of the codebase. Specifically, creating mutexes and using them must not issue new memory allocations because that would lead to infinite recursion within the allocator itself. To achieve this the allocator tends to use thin locks native to the underlying operating system. On macOS we relied for a long time on OSSpinLock locks.

As the name suggests these are not regular mutexes that put threads trying to acquire them to sleep if they’re already taken by another thread. A thread attempting to lock an already locked instance of OSSpinLock will busy-poll the lock instead of waiting for it to be released, which is commonly referred to as spinning on the lock.

This might seem counter-intuitive, as spinning consumes CPU cycles and power and is usually frowned upon in modern codebases. However, putting a thread to sleep has significant performance implications and thus is not always the best option.

In particular, putting a thread to sleep and then waking it up requires two context switches as well as saving and restoring the thread state to/from memory. Depending on the CPU and workload the thread state can range from several hundred bytes to a few kilobytes. Putting a thread to sleep also has indirect performance effects.

For example, the caches associated with the core the thread was running on were likely holding useful data. When a thread is put to sleep another thread from an unrelated workload might then be selected to run in its place, replacing the data in the caches with new data.

When the original thread is restored it might end up on a different core, or on the same core but with cold caches, filled with unrelated data. Either way, the thread will proceed execution more slowly than if it had kept running undisturbed.

Because of all the above, it might be advantageous to let a thread spin briefly if the lock it’s trying to acquire is only held for a brief period of time. It can result in both higher performance and lower power consumption as the cost of spinning is less than sleeping.

However spinning has a significant drawback: if it goes on for too long it can be detrimental, as it will just waste cycles. Worse still, if the machine is heavily loaded, spinning might put additional load on the system, potentially slowing down precisely the thread that owns the lock, increasing the chance of further threads needing the lock, spinning some more.

As you might have guessed by now, OSSpinLock offered very good performance on a lightly loaded system but behaved poorly as load ramped up. More importantly, it had two fundamental flaws: it spun in user-space and it never slept.

Spinning in user-space is a bad idea in general, as user-space doesn’t know how much load the system is currently experiencing. In kernel-space a lock might make an informed decision, for example not to spin at all if the load is high, but OSSpinLock had no such provision, nor did it adapt.

But more importantly, when it couldn’t really grab a lock it would yield instead of sleeping. This is particularly bad because the kernel has no clue that the yielding thread is waiting on a lock, so it might wake up another thread that is also fighting for the same lock instead of the one that owns it.

This will lead to more spinning and yielding and the resulting user experience will be terrible. On heavily loaded systems this could lead to a near live-lock and Firefox effectively hanging. This problem with OSSpinLock was known within Apple hence its deprecation.

Enter os_unfair_lock, Apple’s official replacement for OSSpinLock. If you still use OSSpinLock you’ll get explicit warnings to use it instead.

So I went ahead and used it, but the results were terrible. Performance in some of our automated tests degraded by as much as 30%. os_unfair_lock might be better behaved than OSSpinLock, but it sucked.

As it turns out os_unfair_lock doesn’t spin on contention, it makes the calling thread sleep right away when it finds a contended lock.

For the memory allocator this behavior was suboptimal and the performance regression unacceptable. In some ways, os_unfair_lock had the opposite problem of OSSpinLock: it was too willing to sleep when spinning would have been a better choice. At this point, it’s worth mentioning while we’re at it that pthread_mutex locks are even slower on macOS so those weren’t an option either.

However, as I dug into Apple’s libraries and kernel, I noticed that some spin locks were indeed available, and they did the spinning in kernel-space where they could make a more informed choice with regards to load and scheduling. Those would have been an excellent choice for our use-case.

So how do you use them? Well, it turns out they’re not documented. They rely on a non-public function and flags which I had to duplicate in Firefox.

The function is os_unfair_lock_with_options() and the options I used are OS_UNFAIR_LOCK_DATA_SYNCHRONIZATION and OS_UNFAIR_LOCK_ADAPTIVE_SPIN.

The latter asks the kernel to use kernel-space adaptive spinning, and the former prevents it from spawning additional threads in the thread pools used by Apple’s libraries.

Did they work? Yes! Performance on lightly loaded systems was about the same as OSSpinLock but on loaded ones, they provided massively better responsiveness. They also did something extremely useful for laptop users: they cut down power consumption as a lot less cycles were wasted having the CPUs spinning on locks that couldn’t be acquired.

Unfortunately, my woes weren’t over. The OS_UNFAIR_LOCK_ADAPTIVE_SPIN flag is supported only starting with macOS 10.15, but Firefox also runs on older versions (all the way to 10.12).

As an intermediate solution, I initially fell back to OSSpinLock on older systems. Later I managed to get rid of it for good by relying on os_unfair_lock plus manual spinning in user-space.

This isn’t ideal but it’s still better than relying on OSSpinLock, especially because it’s needed only on x86-64 processors, where I can use pause instructions in the loop which should reduce the performance and power impact when a lock can’t be acquired.

When two threads are running on the same physical core, one using pause instructions leaves almost all of the core’s resources available to the other thread. In the unfortunate case of two threads spinning on the same core they’ll still consume very little power.

At this point, you might wonder if os_unfair_lock – possibly coupled with the undocumented flags – would be a good fit for your codebase. My answer is likely yes but you’ll have to be careful when using it.

If you’re using the undocumented flags be sure to routinely test your software on new beta versions of macOS, as they might break in future versions. And even if you’re only using the os_unfair_lock public interface, beware that it doesn’t play well with fork(). That’s because the lock internally stores the mach thread IDs to ensure consistent acquisition and release.

These IDs change after a call to fork(), because the child process’ threads are new threads with new IDs. This can lead to potential crashes in the child process. If your application uses fork(), or your library needs to be fork()-safe, you’ll need to register at-fork handlers using pthread_atfork() to acquire all the locks in the parent before the fork, then release them after the fork (also in the parent), and reset them in the child.

Here’s how we do it in our code.

The post Improving Firefox responsiveness on macOS appeared first on Mozilla Hacks - the Web developer blog.

Wladimir PalantScirge: When your employer mandates spyware

I recently noticed Scirge advertising itself to corporations, promising to “solve” data leaks. Reason enough to take a look into how they do it. Turns out: by pushing a browser extension to all company employees which could be misused as spyware. Worse yet, it obfuscates data streams, making sure that employees cannot see what data is being collected. But of course we know that no employer would ever abuse functionality like that, right?

A pair of daemonic eyes on top of the Scirge logo. Image credits: Scirge, netalloy

How it works

There is no point searching for Scirge in any of the extension stores, you won’t find it there. Each company is provided with their individual build of the Scirge extension, configured with the company’s individual Scirge backend. The extension is then supposed to be deployed “automatically using central management tools such as Active Directory Group Policy” (see documentation).

This means that there are no independent user counts available, making it impossible to tell how widely this extension is deployed. But given any Scirge server, inspecting extension source code is still possible: documentation indicates that the Firefox extension is accessible under /extension/firefox/scirge.xpi and the Chrome one under /extension/chrome/scirge.crx.

The stated goal of the browser extension is to look over your shoulder, recording where you log in and what credentials you use. The idea is recognizing “Shadow IT,” essential parts of the company infrastructure which the management isn’t aware of. And you would never use your work computer for private stuff anyway, right?

What it can do

The browser extension downloads its policy rules from the (company-managed) Scirge server. One part of this policy is awareness messages. These are triggered by conditions like weak or autofilled passwords. Possible actions are an alert message, an HTML message injected into the page, or a redirect to some address. This part of the functionality is mostly unproblematic: there are only a few possible trigger conditions, HTML code is passed through DOMPurify, and redirects can only go to HTTP or HTTPS addresses.

The website policies are more of an issue. These policies can match single pages, entire domains or use regular expressions to cover the entire internet. And on matching websites all your login credentials can be sent to the Scirge server along with the full address of the page and additional metadata.

If server admins activate the “Collect password hashes for password hygiene checks (only secure hash is stored)” setting, the password itself and not merely password complexity data will be sent to the server. To quote Scirge documentation:

If enabled, passwords will be hashed on the endpoints and sent back to the Central Server (in double encrypted channel). … This is useful for private password reuse monitoring.

And the main product page chimes in:

only industry standard secure hashes are stored at the Central Server database, so password reuse, password sharing, or the use of already breached passwords can become visible to your security departments.

SHA-1 as “industry standard secure hash”

Yes, passwords are indeed hashed before being sent to the server. Yet what Scirge describes as “industry standard secure hashes” is actually the SHA-1 hashing algorithm. Let that sink in.

First of all, SHA-1 is considered cryptographically broken. But that doesn’t matter here, already because the SHA hashing algorithms were never meant to be used with passwords in the first place. They are way too easy to reverse, see for example this article:

Out of the roughly 320 million hashes, we were able to recover all but 116 of the SHA-1 hashes, a roughly 99.9999% success rate.

That was five years ago, and today’s hardware is even more capable. It can be assumed that the security level of storing SHA-1 hashes is barely above storing passwords as plain text.

For storing passwords on a server securely, the baseline are the PBKDF2 and bcrypt hashing algorithms. These also offer too little protection given modern hardware, which is why new applications should use memory-intensive algorithms like scrypt or Argon2. But any of these algorithms offers orders of magnitude more protection than SHA-1.

Never mind the fact that each password needs to be hashed with a unique salt. Otherwise the computational effort of reversing all passwords stored in the database will be the same as the effort of reversing merely one of them. But unique salts are incompatible with the goal of checking for password reuse. So at the very least Scirge could introduce per-user salts.
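
To make the difference concrete, here’s a hedged Node.js sketch (identifiers are illustrative, not Scirge’s actual code) contrasting an unsalted SHA-1 hash with a salted, memory-hard alternative:

const crypto = require('crypto');

// Unsalted SHA-1: identical passwords produce identical hashes,
// and reversing them is cheap on modern hardware.
const sha1 = (password) =>
  crypto.createHash('sha1').update(password).digest('hex');

// scrypt with a per-user salt: reuse checks still work within one
// user's history, while brute-forcing costs real memory and time.
const hashForUser = (password, userSalt) =>
  crypto.scryptSync(password, userSalt, 64).toString('hex');

const salt = crypto.randomBytes(16);
console.log(sha1('hunter2'));
console.log(hashForUser('hunter2', salt));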

“Double encrypted channel” as obfuscation

All of this would be less problematic if employees could inspect the policies or the data being sent to the server. But the system doesn’t allow for such transparency. To quote Scirge documentation once again:

The Endpoint Browser Extensions communicates via secure HTTPS protocol ensuring that the communication is encrypted.

To make this even more secure, Scirge uses authenticated encryption for the Endpoint Browser Extension and the Central Server communication. Using public-key authenticated encryption, the Central Server encrypts its message specifically for the endpoint, using the given endpoint’s public key.

So the communication with the Scirge server uses TLS and a second encryption system based on the same principles. This sounds pretty pointless. An attacker capable of breaking up TLS connections will be able to do the same with Scirge’s custom encryption scheme.

What this does achieve: looking at the extension communication with the browser’s built-in Developer Tools won’t give you anything. You will be able to see beyond the TLS encryption, getting the data before/after the custom encryption scheme is applied requires considerably more skill however. Most employees won’t be able to do it.

And so most people who got this browser extension forced onto them by their employer are out of luck: they won’t be able to verify what it is being used for and what data it actually collects. I’m certain many employers approve.

Andrew SutherlandAndrew’s Searchfox Vision 2022

Searchfox (source, config source) is Mozilla’s primary code searching tool for Firefox introduced by Bill McCloskey in 2016 which built upon prior work on DXR. This product vision post describes my personal vision for searchfox and the rationale that underpins it. I’m also writing an accompanying road map that describes specific potential enhancements in support of this vision which I will publish soon and goes into the concrete potential features that would be implemented in the spirit of this vision.

Note that the process of developing searchfox is iterative and done in consultation with its users and other contributors, primarily in the searchfox channel on chat.mozilla.org and in its bugzilla component. Accordingly, these documents should be viewed as a basis for discussion rather than a strict project plan.

The Whys Of Searchfox

Searchfox is a Tool For System Understanding

Searchfox enables exploration and understanding of the Firefox codebase as it exists now, as it existed in the past, and to support understanding of the ramifications of potential changes.

Searchfox Is A Tool For Shared System Understanding

Firefox is a complex piece of software which has more going on than any one person can understand at a time. Searchfox enables Firefox’s contributors to leverage the documentation artifacts of other teams and contributors when exploring in isolation, and to communicate more effectively when interacting.

Searchfox Is Not The Only Tool

Searchfox integrates relevant data from automation and other tools in the Firefox development ecosystem where they make sense and provides deep links into those tools or first steps to help you get started without having to start from nothing.

The Hows Of Searchfox

Searchfox is Immediate: Low Latency and Complete

Searchfox’s results should be available in their entirety when the page load completes, ideally in much less than a second. There should be no asynchronous lazy loading or spinners. Practically speaking, if you could potentially see something on a page, you should be able to ctrl-f for it.

In situations where results are too voluminous to be practically useful, Searchfox should offer targeted follow-on searches that can relax limits and optionally provide for additional constraints so that iterative progress is made.

Searchfox is Accessible

Searchfox should always present a usable accessibility tree. This includes ensuring that any dynamically generated graphical representations such as graphviz-style diagrams have a directly usable accessibility tree or an alternate representation that maximally captures any hierarchy or clustering present in the visual presentation.

Searchfox Favors Iterative Exploration and Low Activation Energy

Searchfox seeks to avoid UX patterns where you have to metaphorically start from a blank sheet of paper or face decision paralysis choosing among a long list of options. Instead, start from whatever needle you have (an identifier name, a source file location, a string you saw in the UI) and searchfox will help you iterate and refine from there.

Searchfox Provides Stable, Useful URLs When Possible and Markdown For More Complex Situations

If you’re looking at something in searchfox, you should be able to share it as a URL, although there may be a few URLs to choose from such as whether to use a permalink which includes a specific revision identifier. More complicated situations may merit searchfox providing you with markdown that you can paste in tools that understand markdown.

Mike HommeyAnnouncing git-cinnabar 0.6.0rc1

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.10?

  • Full rewrite of git-cinnabar in Rust.
  • Push performance is between twice and 10 times faster than 0.5.x, depending on scenarios.
  • Based on git 2.38.0.
  • git cinnabar fetch now accepts a --tags flag to fetch tags.
  • git cinnabar bundle now accepts a -t flag to give a specific bundlespec.
  • git cinnabar rollback now accepts a --candidates flag to list the metadata sha1s that can be used as the target of the rollback.
  • git cinnabar rollback now also accepts a --force flag to allow any commit sha1 as metadata.
  • git cinnabar now has a self-update subcommand that upgrades it when a new version is available. The subcommand is only available when building with the self-update feature (enabled on prebuilt versions of git-cinnabar).
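
If you haven’t used it before, a typical session looks something like this (the repository URL is just an example):

```
# Clone a Mercurial repository using the hg:: remote helper prefix
git clone hg::https://hg.mozilla.org/mozilla-unified firefox
cd firefox

# After that, pull and push work like any other git remote
git pull
git push
```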

William LachanceUsing Sphinx in a Monorepo

Just wanted to type up a couple of notes about working with Sphinx (the python documentation generator) inside a monorepo, an issue I’ve been struggling with (off and on) at Voltus since I started. I haven’t seen much written about this topic despite (I suspect) it being a reasonably frequent problem.

In general, there’s a lot to like about Sphinx: it’s great at handling deeply nested trees of detailed documentation with cross-references inside a version control system. It has local search that works pretty well, and some themes (like readthedocs) scale pretty nicely to hundreds of documents. The directives and roles system is pretty flexible and covers most of the common things one might want to express in technical documentation. And if the built-in set of functionality isn’t enough, there’s a wealth of third-party extension modules. My only major complaint is that it uses the somewhat obscure reStructuredText file format by default, but you can get around that by using the excellent MyST extension.

Unfortunately, it has a pretty deeply baked-in assumption that all documentation for your project lives inside a single subfolder. This is fine for a small repository representing a single python module, like this:

<root>
README.md
setup.cfg
pyproject.toml
mymodule/
docs/

However, this doesn’t work for a large monorepo, where you would typically see something like:

<root>/module-1/submodule-a
<root>/module-1/submodule-b
<root>/module-2/submodule-c
...

In a monorepo, you usually want to include a module’s documentation inside its own directory. This allows you to use your code ownership constraints for documentation, among other things.

The naive solution would be to create a sphinx site for every single one of these submodules. This is what happened at Voltus and I don’t recommend it. For a large monorepo you’ll end up with dozens, maybe hundreds of documentation “sites”. Under this scenario, discoverability becomes a huge problem: no longer can you rely on tables of contents and the built-in search to discover content: you just have to “know” where things live. I’m more than nine months in here and I’m still discovering new documentation.

It would be much better if we could somehow collect documentation from other parts of the repository into a single site. Is this possible? tl;dr: Yes. There are a few solutions, each with their pros and cons.

The obvious solution that doesn’t work

The most obvious solution here is to create a symbolic link inside your documentation directory, say, something like the following:

<root>/docs/
<root>/docs/module-1/submodule-a -> <root>/module-1/submodule-a/docs

Unfortunately, this doesn’t work. ☹️ Sphinx doesn’t follow symbolic links.

Solution 1: Just copy the files in

The most obvious solution is to just copy the files from various parts of the monorepo into place, as part of the build system. Mozilla did this for Firefox, with the moztreedocs system.

The results look pretty good, but this is a bespoke solution. Aside from general ideas, there’s no way I’m going to be able to apply anything in moztreedocs to Voltus’s monorepo (which is based on a completely different build system). And to be honest, I’m not sure if the 40+ hour (estimated) effort to reimplement it would be a good use of time compared to other things I could be doing.

Solution 2: Use the include directive with MyST

Later versions of MyST include support for directly importing a markdown file from another part of the repository.

This is a limited form of embedding: it won’t let you import an entire directory of markdown files. But if your submodules mostly just include content in the form of a README.md (or similar), it might just be enough. Just create a directory for these files to live in (say, services) and slot them in:

<root>/docs/services/module-1/submodule-a/index.md:

```{include} ../../../module-1/submodule-a/README.md
```

I’m currently in the process of implementing this solution inside Voltus. I’m optimistic that this will be a big (if incremental) step up over what we have right now. There are obviously limits, but you can cram a lot of useful information in a README. As a bonus, it’s a pretty nice marker for those spelunking through the source code (much more so than a forest of tiny documentation files).

Solution 3: Sphinx Collections

This one I just found out about today: Sphinx Collections is a small python module that lets you automatically import entire directories of files into your sphinx tree, under a _collections directory. You configure it in your top-level conf.py like this:

extensions = [
    ...
    "sphinxcontrib.collections"
]

collections = {
    "submodule-a": {
        "driver": "symlink",
        "source": "/monorepo/module-1/submodule-a/docs",
        "target": "submodule-a"
    },
    ...
}

After setting this up, submodule-a is now available under _collections and you can include it in your table of contents like this:

...

```{toctree}
:caption: submodule-a

_collections/submodule-a/index.md
```

...

At this point, submodule-a’s documentation should be available under http://<my doc domain>/_collections/submodule-a/index.html

Pretty nifty. The main downside I’ve found so far is that this doesn’t play nicely with the Edit on GitHub links that the readthedocs theme automatically inserts (it thinks the files exist under _collections), but there’s probably a way to work around that.

I plan on investigating this approach further in the coming months.

Tantek ÇelikW3C TPAC 2022 Sustainability Community Group Meeting

This year’s W3C TPAC Plenary Day was a combination of the first ever AC open session in the early morning, and breakout sessions in the late morning and afternoon. Nick Doty proposed a breakout session for Sustainability for the Web and W3C which he & I volunteered to co-chair, as co-chairs of the Sustainability (s12y) CG which we created on Earth Day earlier this year. Nick & I met during a break on Wednesday afternoon and made plans for how we would run the session as a Sustainability CG meeting, which topics to introduce, how to deal with unproductive participation if any, and how to focus the latter part of the session into follow-up actions.

We agreed that our primary role as chairs should be facilitation. We determined a few key meeting goals, in particular to help participants:

  • Avoid/minimize any trolling or fallacy arguments (based on experience from 2021)
  • Learn who is interested in which sustainability topics & work areas
  • Determine clusters of similar, related, and overlapping sustainability topics
  • Focus on prioritizing actual sustainability work rather than process mechanics
  • Encourage active collaboration in work areas (like a do-ocracy)

The session went better than I expected. The small meeting room was packed with ~20 participants, with a few more joining us on Zoom (which thankfully worked without any issues, thanks to the W3C staff for setting that up so all we had to do as chairs was push a button to start the meeting!).

I am grateful for everyone’s participation and more importantly the shared sense of collaboration, teamwork, and frank urgency. It was great to meet & connect in-person, and see everyone on video who took time out of their days across timezones to join us. There was a lot of eagerness in participation, and Nick & I did our best to give everyone who wanted to speak time to contribute (the IRC bot Zakim's two minute speaker timer feature helped).

It was one of the more hopeful meetings I participated in all week. Thanks to Yoav Weiss for scribing the minutes. Here are a few of the highlights.

Session Introduction

Nick introduced himself and proposed topics of discussion for our breakout session.

  • How we can apply sustainability to web standards
  • Goals we could work on as a community
  • Consider metrics to enable other measures to take effect
  • Measure the impact of the W3C meetings themselves
  • Working mode and how we talk about sustainability in W3C
  • Horizontal reviews

I introduced myself and my role at Mozilla as one of our Environmental Champions, and noted that it’s been three years since we had the chance to meet in person at TPAC. Since then many of us who participate at W3C have recognized the urgency of sustainability, especially as underscored by recent IPCC reports. From the past few years of publications & discussions:

For our TPAC 2022 session, I asked that we proceed with the assumption of sustainability as a principle, and that if folks came to argue with that, they should raise an issue with the TAG, not this meeting.

In the Call for Participation in the Sustainability Community Group, we highlighted both developing a W3C practice of Sustainability (s12y) Horizontal Review (similar to a11y, i18n, privacy, security) as proposed at TPAC 2021, and an overall venue for participants to discuss all aspects of sustainability with respect to web technologies present & future. For our limited meeting time, I asked participants to share how they want to have the biggest impact on sustainability at W3C, with the web in general, and actively prioritize our work accordingly.

Work Areas, Groups, Resources

Everyone took turns introducing themselves and expressing which aspects of sustainability were important to them, noting any particular background or applicable expertise, as well as which other W3C groups they are participating in, as opportunities for liaison and collaboration. Several clusters of interest emerged:

  • Technologies to reduce energy usage
  • W3C meetings and operations
  • Measurement
  • System Effects
  • Horizontal Review
  • Principles

The following W3C groups were noted as either already working on sustainability-related efforts or good candidates for collaboration, and, except for the TAG, each had a group co-chair in the meeting!

I proposed adding a liaisons section to our public Sustainability wiki page, explicitly listing these groups and specific items for collaboration. Participants also shared the following links to additional efforts & resources:

Sustainability Work In Public By Default

Since all our work on sustainability builds on a lot of public work by others, the best chance of our work having an impact is to do it publicly as well. I therefore proposed that the Sustainability CG work in public by default, as well as sustainability work at W3C in general, and that we send that request to the AB to advise W3C accordingly. The proposal was strongly supported with no opposition.

Active Interest From Organizations

There were a number of organizations whose representatives indicated that they are committed to making a positive impact on the environment, and would like to work on efforts accordingly in the Sustainability CG, or would at least see if they could contact experts at their organizations to see if any of them were interested in contributing.

  • Igalia
  • mesur.io
  • Mozilla
  • Lawrence Berkeley National Laboratory
  • Washington Post

Meeting Wrap-up And Next Steps

We finished up the meeting with participants signing up to work on each of the work areas (clusters of interest noted above) that they were personally interested in working on. This has been captured on our wiki: W3C Wiki: Sustainability Work Areas.

The weekend after the meeting I wrote up an email summary of the meeting & next steps and sent it directly to those who were present at the meeting, encouraging them to Join the Sustainability Community Group (requires a W3C account) for future emails and updates. Nick & I are also on the W3C Community Slack #sustainability channel which I recommended joining. Signup link: https://www.w3.org/slack-w3ccommunity-invite

Next Steps: we encouraged everyone who signed up for a Work Area to reach out to each other directly and determine their preferred work mode, including in which venue they’d like to do the work, whether in the Sustainability CG, another CG, or somewhere else. We noted that work on sustainable development & design of web sites in particular should be done directly with the Sustainable Web Design CG (sustyweb), “a community group dedicated to creating sustainable websites”.

Some possibilities for work modes that Work Area participants can use:

  • W3C Community Slack #sustainability channel
  • public-sustainability email list of the Sustainability CG
  • Our Sustainability wiki page, creating "/" subpages as needed

There is lots of work to do across many different areas for sustainability & the web, and for technology as a whole, which lends itself to small groups working in parallel. Nick & I want to help facilitate those that have the interest, energy, and initiative to do so. We are available to help Work Area participants pick a work mode & venue that will best meet their needs and help them get started on their projects.

The Talospace ProjectFirefox 105 on POWER

Firefox 105 is out. No, it's not your imagination: I ended up skipping a couple of versions. I wasn't able to build Firefox 103 because gcc 12 in Fedora 36 caused weird build failures until it was finally fixed; separately, building 104 and working more on the POWER9 JavaScript JIT got delayed because I'd finally had it with the performance issues and breakage in GNOME 42 and took a couple of weeks renovating my desktop around Plasma so I could be happy with my desktop environment again. With both of those concerns largely resolved, everything is hopefully maintainable again, my workflows are properly restored, and we're back to the grind.

Unfortunately, we have a couple of new ones. Debug builds broke in Fx103 using our standard .mozconfig when mfbt/lz4/xxhash.h was upgraded: we compile with -Og, and the header wants to compile its functions with static __inline__ __attribute__((always_inline, unused)). When gcc produces a deoptimized debugging build and fails to inline those functions, it throws a compilation error and the build screeches to a halt. (This doesn't affect Fedora's build because they always build at a sufficient optimization level that these functions do indeed get inlined.) After a little thinking, this is the new debug .mozconfig:


```
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # or as you likez
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-Og -mcpu=power9 -fpermissive -DXXH_NO_INLINE_HINTS=1"
ac_add_options --enable-debug
ac_add_options --enable-linker=bfd
ac_add_options --without-wasm-sandboxed-libraries

export GN=/home/censored/bin/gn # if you haz
```
This builds, or at least compiles, but fails at linkage because of the second problem. This time, it's libwebrtc ... again. To glue the Google build system onto Mozilla's, there is a fragile and system-dependent processing step that has broken again, and Mozilla would like a definitive fix. Until then, we're high and dry, because the request is for the build file to be generated correctly in the first place rather than just patching the generated build file. That's a much bigger knot to unravel, and building the gn tool it depends on used to be incredibly difficult (it's now much easier and I was able to upgrade, but all this has done is show me where the problem is, and it's not a straightforward fix). If this is not repaired, various screen-capture components used by libwebrtc are not compiled, and linking will fail. Right now it looks like we're the only platform affected, even though aarch64 has been busted by the same underlying issue in the past.

The easy choice, especially if you don't use WebRTC, is to just add ac_add_options --disable-webrtc to your .mozconfig. I don't use WebRTC much and I'm pretty lazy, so ordinarily I would go this route — except you, gentle reader, expect me to be able to tell you when Firefox compiles are breaking, which brings us to the second option: Dan Horák's patch. This also works and is the version I'm typing into now. Expect to have to carry this patch in your local tree for a couple of versions until this gets dealt with.

Fortunately, the PGO-LTO patch for Firefox 101 still applies to Fx105, so you can still use that. While the optimized .mozconfig is unchanged, here it is for reference:


```
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24" # or as you likez
ac_add_options --enable-application=browser
ac_add_options --enable-optimize="-O3 -mcpu=power9 -fpermissive"
ac_add_options --enable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options MOZ_PGO=1

export GN=/home/censored/bin/gn # if you haz
export RUSTC_OPT_LEVEL=2
```
I've got one other issue to settle, and then I hope to get back to porting the JavaScript and Wasm JIT to 102ESR. But my real life and my $DAYJOB interfere with my after-hours hacking, so contributors are still solicited so that our work can benefit the OpenPOWER community. When it's one person working on it, things go slower.

Niko MatsakisRust 2024…the year of everywhere?

I’ve been thinking about what “Rust 2024” will look like lately. I don’t really mean the edition itself — but more like, what will Rust feel like after we’ve finished up the next few years of work? I think the answer is that Rust 2024 is going to be the year of “everywhere”. Let me explain what I mean. Up until now, Rust has had a lot of nice features, but they only work sometimes. By the time 2024 rolls around, they’re going to work everywhere that you want to use them, and I think that’s going to make a big difference in how Rust feels.

Async everywhere

Let’s start with async. Right now, you can write async functions, but not in traits. You can’t write async closures. You can’t use async drop. This creates a real hurdle. You have to learn the workarounds (e.g., the async-trait crate), and in some cases, there are no proper workarounds (e.g., for async-drop).

Thanks to a recent PR by Michael Goulet, static async functions in traits almost work on nightly today! I’m confident we can work out the remaining kinks soon and start advancing the static subset (i.e., no support for dyn trait) towards stabilization.
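
For the curious, here is a minimal sketch of what the static subset looks like on nightly as of this writing; the feature gate name and details may well change before stabilization, and the trait and function names are invented for the example:

```rust
#![allow(incomplete_features)]
#![feature(async_fn_in_trait)]

trait HealthCheck {
    async fn check(&mut self) -> bool;
}

// Static dispatch: the compiler knows the concrete future type for `H`,
// so no boxing is needed.
async fn do_health_check<H: HealthCheck>(mut hc: H) -> bool {
    hc.check().await
}
```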

The plans for dyn, meanwhile, are advancing rapidly. At this point I think we have two good options on the table and I’m hopeful we can get that nailed down and start planning what’s needed to make the implementation work.

Once async functions in traits work, the next steps for core Rust will be figuring out how to support async closures and async drop. Both of them add some additional challenges — particularly async drop, which has some complex interactions with other parts of the language, as Sabrina Jewson elaborated in a great, if dense, blog post — but we’ve started to develop a crack team of people in the async working group and I’m confident we can overcome them.

There is also library work, most notably settling on some interop traits, and defining ways to write code that is portable across allocators. I would like to see more exploration of structured concurrency1, as well, or other alternatives to select! like the stream merging pattern Yosh has been advocating for.

Finally, for extra credit, I would love to see us integrate async/await keywords into other bits of the function body, permitting you to write common patterns more easily. Yoshua Wuyts has had a really interesting series of blog posts exploring these sorts of ideas. I think that being able to do for await x in y to iterate, or (a, b).await as a form of join, or async let x = … to create a future in a really lightweight way could be great.

Impl trait everywhere

The impl Trait notation is one of Rust’s most powerful conveniences, allowing you to omit specific types and instead talk about the interface you need. Like async, however, impl Trait can only be used in inherent functions and methods, and can’t be used for return types in traits, nor can it be used in type aliases, let bindings, or any number of other places it might be useful.

Thanks to Oli Scherer’s hard work over the last year, we are nearing stabilization for impl Trait in type aliases. Oli’s work has also laid the groundwork to support impl trait in let bindings, meaning that you will be able to do something like

let iter: impl Iterator<Item = i32> = (0..10);
//        ^^^^^^^^^^^^^ Declare type of `iter` to be “some iterator”.

Finally, the same PR that added support for async fns in traits also added initial support for return-position impl trait in traits. Put it all together, and we are getting very close to letting you use impl trait everywhere you might want to.

There is still at least one place where I think impl Trait should be accepted but is not: nested in other positions. I’d like you to be able to write impl Fn(impl Debug), for example, to refer to “some closure that takes an argument of type impl Debug” (i.e., one that can be invoked multiple times with different debug types).
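
For comparison, a sketch of the closest thing you can express today: since generic closures don’t exist yet, you fall back to dynamic dispatch with &dyn Debug:

```rust
use std::fmt::Debug;

// What `impl Fn(impl Debug)` would express: a closure invokable with
// *different* Debug types. Today's workaround is `&dyn Debug`:
fn call_with_many(f: impl Fn(&dyn Debug)) {
    f(&22);
    f(&"hello");
    f(&vec![1, 2, 3]);
}

fn main() {
    call_with_many(|x| println!("{x:?}"));
}
```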

Generics everywhere

Generic types are a big part of how Rust libraries are built, but Rust doesn’t allow people to write generic parameters in all the places they would be useful, and limitations in the compiler prevent us from making full use of the annotations we do have.

Not being able to use generic types everywhere might seem abstract, particularly if you’re not super familiar with Rust. And indeed, for a lot of code, it’s not a big deal. But if you’re trying to write libraries, or to write one common function that will be used all over your code base, then it can quickly become a huge blocker. Moreover, given that Rust supports generic types in many places, the fact that we don’t support them in some places can be really confusing — people don’t realize that the reason their idea doesn’t work is not because the idea is wrong, it’s because the language (or, often, the compiler) is limited.

The biggest example of generics everywhere is generic associated types. Thanks to hard work by Jack Huey, Matthew Jasper, and a number of others, this feature is very close to hitting stable Rust — in fact, it is in the current beta, and should be available in 1.65. One caveat, though: the upcoming support for GATs has a number of known limitations and shortcomings, and it gives some pretty confusing errors. It’s still really useful, and a lot of people are already using it on nightly, but it’s going to require more attention before it lives up to its full potential.

You may not wind up using GATs in your code, but they will definitely be used in some of the libraries you rely on. GATs directly enable common patterns like Iterable that have heretofore been inexpressible, but we’ve also seen a lot of examples where they are used internally to help libraries present a more unified, simpler interface to their users.
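
To make that concrete, here is a minimal sketch of the “lending iterator” pattern that GATs unlock; the names are mine, not from any particular library:

```rust
// A "lending" iterator: each item borrows from the iterator itself,
// which is inexpressible without generic associated types.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Overlapping *mutable* windows over a slice, a classic example.
struct WindowsMut<'t, T> {
    slice: &'t mut [T],
    start: usize,
    window_size: usize,
}

impl<'t, T> LendingIterator for WindowsMut<'t, T> {
    type Item<'a> = &'a mut [T] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let window = self.slice[self.start..].get_mut(..self.window_size)?;
        self.start += 1;
        Some(window)
    }
}

fn main() {
    let mut buf = [1, 2, 3, 4, 5];
    let mut windows = WindowsMut { slice: &mut buf[..], start: 0, window_size: 2 };
    while let Some(w) = windows.next() {
        w[0] += 1; // mutate through the lent window
    }
    println!("{buf:?}");
}
```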

Beyond GATs, there are a number of other places where we could support generics, but we don’t. In the previous section, for example, I talked about being able to have a function with a parameter like impl Fn(impl Debug) — this is actually an example of a “generic closure”. That is, a closure that itself has generic arguments. Rust doesn’t support this yet, but there’s no reason we can’t.

Oftentimes, though, the work to realize “generics everywhere” is not so much a matter of extending the language as it is a matter of improving the compiler’s implementation. Rust’s current traits implementation works pretty well, but as you start to push the bounds of it, you find that there are lots of places where it could be smarter. A lot of the ergonomic problems in GATs arise exactly out of these areas.

One of the developments I’m most excited about in Rust is not any particular feature, it’s the formation of the new types team. The goal of this team is to revamp the compiler’s trait system implementation into something efficient and extensible, as well as building up a core set of contributors.

Making Rust feel simpler by making it more uniform

The topics in this post, of course, only scratch the surface of what’s going on in Rust right now. For example, I’m really excited about “everyday niceties” like let/else-syntax and if-let-pattern guards, or the scoped threads API that we got in 1.63. There are exciting conversations about ways to improve error messages. Cargo, the compiler, and rust-analyzer are all generally getting faster and more capable. And so on, and so on.

The pattern of having a feature that starts working somewhere and then extending it so that it works everywhere seems, though, to be a key part of how Rust development works. It’s inspiring also because it becomes a win-win for users. Newer users find Rust easier to use and more consistent; they don’t have to learn the “edges” of where one thing works and where it doesn’t. Experienced users gain new expressiveness and unlock patterns that were either awkward or impossible before.

One challenge with this iterative development style is that sometimes it takes a long time. Async functions, impl Trait, and generic reasoning are three areas where progress has been stalled for years, for a variety of reasons. That’s all started to shift this year, though. A big part of it is the formation of new Rust teams at many companies, allowing a lot more people to have a lot more time. It’s also just the accumulation of the hard work of many people over a long time, slowly chipping away at hard problems (to get a sense for what I mean, read Jack’s blog post on NLL removal, and take a look at the full list of contributors he cited there — just assembling the list was impressive work, not to mention the actual work itself).

It may have been a long time coming, but I’m really excited about where Rust is going right now, as well as the new crop of contributors that have started to push the compiler faster and faster than it’s ever moved before. If things continue like this, Rust in 2024 is going to be pretty damn great.

  1. Oh, my beloved moro! I will return to thee! 

The Rust Programming Language BlogAnnouncing Rust 1.64.0

The Rust team is happy to announce a new version of Rust, 1.64.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.64.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.64.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.64.0 stable

Enhancing .await with IntoFuture

Rust 1.64 stabilizes the IntoFuture trait. IntoFuture is a trait similar to IntoIterator, but rather than supporting for ... in ... loops, IntoFuture changes how .await works. With IntoFuture, the .await keyword can await more than just futures; it can await anything which can be converted into a Future via IntoFuture - which can help make your APIs more user-friendly!

Take for example a builder which constructs requests to some storage provider over the network:

pub struct Error { ... }
pub struct StorageResponse { ... }
pub struct StorageRequest(bool);

impl StorageRequest {
    /// Create a new instance of `StorageRequest`.
    pub fn new() -> Self { ... }
    /// Decide whether debug mode should be enabled.
    pub fn set_debug(self, b: bool) -> Self { ... }
    /// Send the request and receive a response.
    pub async fn send(self) -> Result<StorageResponse, Error> { ... }
}

Typical usage would likely look something like this:

let response = StorageRequest::new()  // 1. create a new instance
    .set_debug(true)                  // 2. set some option
    .send()                           // 3. construct the future
    .await?;                          // 4. run the future + propagate errors

This is not bad, but we can do better here. Using IntoFuture we can combine "construct the future" (line 3) and "run the future" (line 4) into a single step:

let response = StorageRequest::new()  // 1. create a new instance
    .set_debug(true)                  // 2. set some option
    .await?;                          // 3. construct + run the future + propagate errors

We can do this by implementing IntoFuture for StorageRequest. IntoFuture requires us to have a named future we can return, which we can do by creating a "boxed future" and defining a type alias for it:

// First we must import some new types into the scope.
use std::pin::Pin;
use std::future::{Future, IntoFuture};

pub struct Error { ... }
pub struct StorageResponse { ... }
pub struct StorageRequest(bool);

impl StorageRequest {
    /// Create a new instance of `StorageRequest`.
    pub fn new() -> Self { ... }
    /// Decide whether debug mode should be enabled.
    pub fn set_debug(self, b: bool) -> Self { ... }
    /// Send the request and receive a response.
    pub async fn send(self) -> Result<StorageResponse, Error> { ... }
}

// The new implementations:
// 1. create a new named future type
// 2. implement `IntoFuture` for `StorageRequest`
pub type StorageRequestFuture = Pin<Box<dyn Future<Output = Result<StorageResponse, Error>> + Send + 'static>>;
impl IntoFuture for StorageRequest {
    type IntoFuture = StorageRequestFuture;
    type Output = <StorageRequestFuture as Future>::Output;
    fn into_future(self) -> Self::IntoFuture {
        Box::pin(self.send())
    }
}

This takes a bit more code to implement, but provides a simpler API for users.

In the future, the Rust Async WG hopes to simplify creating new named futures by supporting impl Trait in type aliases (Type Alias Impl Trait or TAIT). This should make implementing IntoFuture easier by simplifying the type alias’s signature, and make it more performant by removing the Box from the type alias.

C-compatible FFI types in core and alloc

When calling or being called by C ABIs, Rust code can use type aliases like c_uint or c_ulong to match the corresponding types from C on any target, without requiring target-specific code or conditionals.

Previously, these type aliases were only available in std, so code written for embedded targets and other scenarios that could only use core or alloc could not use these types.

Rust 1.64 now provides all of the c_* type aliases in core::ffi, as well as core::ffi::CStr for working with C strings. Rust 1.64 also provides alloc::ffi::CString for working with owned C strings using only the alloc crate, rather than the full std library.
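
As a small sketch of what this enables (puts is a standard libc function, bound here purely for illustration):

```rust
// These aliases and CStr now come from `core::ffi`, so the same code
// works in core-only crates, not just with `std`.
use core::ffi::{c_char, c_int, CStr};

extern "C" {
    fn puts(s: *const c_char) -> c_int;
}

fn greet(msg: &CStr) {
    unsafe {
        puts(msg.as_ptr());
    }
}

fn main() {
    // `from_bytes_with_nul` checks for the trailing NUL terminator.
    let msg = CStr::from_bytes_with_nul(b"hello from core::ffi\0").unwrap();
    greet(msg);
}
```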

rust-analyzer is now available via rustup

rust-analyzer is now included in the collection of tools shipped with Rust. This makes it easier to download and access rust-analyzer, and makes it available on more platforms. It is available as a rustup component, which can be installed with:

rustup component add rust-analyzer

At this time, to run the rustup-installed version, you need to invoke it this way:

rustup run stable rust-analyzer

The next release of rustup will provide a built-in proxy so that running the executable rust-analyzer will launch the appropriate version.

Most users should continue to use the releases provided by the rust-analyzer team (available on the rust-analyzer releases page), which are published more frequently. Users of the official VSCode extension are not affected since it automatically downloads and updates releases in the background.

Cargo improvements: workspace inheritance and multi-target builds

When working with collections of related libraries or binary crates in one Cargo workspace, you can now avoid duplication of common field values between crates, such as common version numbers, repository URLs, or rust-version. This also helps keep these values in sync between crates when updating them. For more details, see workspace.package, workspace.dependencies, and "inheriting a dependency from a workspace".
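
As a sketch (names and values here are invented for illustration), the shared fields live in the workspace root and members opt in per field:

```toml
# Workspace root Cargo.toml
[workspace]
members = ["crates/*"]

[workspace.package]
version = "1.2.3"
repository = "https://example.com/my-repo"

[workspace.dependencies]
serde = "1.0"
```

```toml
# A member crate's Cargo.toml, inheriting the shared values
[package]
name = "member-crate"
version.workspace = true
repository.workspace = true

[dependencies]
serde.workspace = true
```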

When building for multiple targets, you can now pass multiple --target options to cargo build, to build all of those targets at once. You can also set build.target to an array of multiple targets in .cargo/config.toml to build for multiple targets by default.
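
For example, you might run cargo build --target x86_64-unknown-linux-gnu --target wasm32-unknown-unknown, or set the default in configuration (target triples chosen arbitrarily):

```toml
# .cargo/config.toml
[build]
target = ["x86_64-unknown-linux-gnu", "wasm32-unknown-unknown"]
```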

Stabilized APIs

The following methods and trait implementations are now stabilized:

These types were previously stable in std::ffi, but are now also available in core and alloc:

These types were previously stable in std::os::raw, but are now also available in core::ffi and std::ffi:

We've stabilized some helpers for use with Poll, the low-level implementation underneath futures:

In the future, we hope to provide simpler APIs that require less use of low-level details like Poll and Pin, but in the meantime, these helpers make it easier to write such code.
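
One of the helpers stabilized in this release is future::poll_fn, which builds a future directly from a poll closure; a minimal sketch:

```rust
use std::future::poll_fn;
use std::task::{Context, Poll};

// `poll_fn` turns a closure over `Poll` into a future, which keeps
// hand-written polling code short:
async fn forty_two() -> i32 {
    poll_fn(|_cx: &mut Context<'_>| Poll::Ready(42)).await
}
```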

These APIs are now usable in const contexts:

Compatibility notes

  • As previously announced, linux targets now require at least Linux kernel 3.2 (except for targets which already required a newer kernel), and linux-gnu targets now require glibc 2.17 (except for targets which already required a newer glibc).

  • Rust 1.64.0 changes the memory layout of Ipv4Addr, Ipv6Addr, SocketAddrV4 and SocketAddrV6 to be more compact and memory efficient. This internal representation was never exposed, but some crates relied on it anyway by using std::mem::transmute, resulting in invalid memory accesses. Such internal implementation details of the standard library are never considered a stable interface. To limit the damage, we worked with the authors of all of the still-maintained crates doing so to release fixed versions, which have been out for more than a year. The vast majority of impacted users should be able to mitigate with a cargo update.

  • As part of the RLS deprecation, this is also the last release containing a copy of RLS. Starting from Rust 1.65.0, RLS will be replaced by a small LSP server showing the deprecation warning.

Other changes

There are other changes in the Rust 1.64 release, including:

  • Windows builds of the Rust compiler now use profile-guided optimization, providing performance improvements of 10-20% for compiling Rust code on Windows.

  • If you define a struct containing fields that are never used, rustc will warn about the unused fields. Now, in Rust 1.64, you can enable the unused_tuple_struct_fields lint to get the same warnings about unused fields in a tuple struct. In future versions, we plan to make this lint warn by default. Fields of type unit (()) do not produce this warning, to make it easier to migrate existing code without having to change tuple indices.

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.64.0

Many people came together to create Rust 1.64.0. We couldn't have done it without all of you. Thanks!

Niko MatsakisDyn async traits, part 9: call-site selection

After my last post on dyn async traits, some folks pointed out that I was overlooking a seemingly obvious possibility. Why not have the choice of how to manage the future be made at the call site? It’s true, I had largely dismissed that alternative, but it’s worth consideration. This post is going to explore what it would take to get call-site-based dispatch working, and what the ergonomics might look like. I think it’s actually fairly appealing, though it has some limitations.

If we added support for unsized return values…

The idea is to build on the mechanisms proposed in RFC 2884. With that RFC, you would be able to have functions that returned a dyn Future:

fn return_dyn() -> dyn Future<Output = ()> {
    async move { }
}

Normally, when you call a function, we can allocate space on the stack to store the return value. But when you call return_dyn, we don’t know how much space we need at compile time, so we can’t do that1. This means you can’t just write let x = return_dyn(). Instead, you have to choose how to allocate that memory. Using the APIs proposed in RFC 2884, the most common option would be to store it on the heap. A new method, Box::new_with, would be added to Box; it acts like new, but it takes a closure, and the closure can return values of any type, including dyn values:

let result = Box::new_with(|| return_dyn());
// result has type `Box<dyn Future<Output = ()>>`

Invoking new_with would be ergonomically unpleasant, so we could also add a .box operator. Rust has had an unstable box operator since forever; this might finally provide enough motivation to make it worth adding:

let result = return_dyn().box;
// result has type `Box<dyn Future<Output = ()>>`

Of course, you wouldn’t have to use Box. Assuming we have sufficient APIs available, people can write their own methods, such as something to do arena allocation…

let arena = Arena::new();
let result = arena.new_with(|| return_dyn());

…or perhaps a hypothetical maybe_box, which would use a buffer if that’s big enough, and use box otherwise:

let mut big_buf = [0; 1024];
let result = maybe_box(&mut big_buf, || return_dyn()).await;

If we add postfix macros, then we might even support something like return_dyn.maybe_box!(&mut big_buf), though I’m not sure if the current proposal would support that or not.

What are unsized return values?

This idea of returning dyn Future is sometimes called “unsized return values”, as functions can now return values of “unsized” type (i.e., types whose size is not statically known). They’ve been proposed in RFC 2884 by Olivier Faure, and I believe there were some earlier RFCs as well. The .box operator, meanwhile, has been a part of “nightly Rust” since approximately forever, though it’s currently written in prefix form, i.e., box foo2.

The primary motivation for both unsized-return-values and .box has historically been efficiency: they permit in-place initialization in cases where it is not possible today. For example, if I write Box::new([0; 1024]) today, I am technically allocating a [0; 1024] buffer on the stack and then copying it into the box:

// First evaluate the argument, creating the temporary:
let temp: [u8; 1024] = ...;

// Then invoke `Box::new`, which allocates a Box...
let ptr: *mut [u8; 1024] = allocate_memory();

// ...and copies the memory in.
std::ptr::write(ptr, temp);

The optimizer may be able to fix that, but it’s not trivial. If you look at the order of operations, it requires making the allocation happen before the arguments are evaluated. LLVM considers calls to known allocators to be “side-effect free”, but promoting them is still risky, since it means that more memory is allocated earlier, which can lead to memory exhaustion. The point isn’t to look at exactly what optimizations LLVM will do in practice, so much as to say that it is not trivial to optimize away the temporary: it requires some thoughtful heuristics.

How would unsized return values work?

This merits a blog post of its own, and I won’t dive into details. For our purposes here, the key point is that somehow when the callee goes to return its final value, it can use whatever strategy the caller prefers to get a return point, and write the return value directly in there. RFC 2884 proposes one solution based on generators, but I would want to spend time thinking through all the alternatives before we settled on something.

Using dynamic return types for async fn in traits

So, the question is, can we use dyn return types to help with async function in traits? Continuing with my example from my previous post, if you have an AsyncIterator trait…

trait AsyncIterator {
    type Item;
    
    async fn next(&mut self) -> Option<Self::Item>;
}

…the idea is that calling next on a dyn AsyncIterator type would yield dyn Future<Output = Option<Self::Item>>. Therefore, one could write code like this:

async fn use_dyn(di: &mut dyn AsyncIterator) {
    di.next().box.await;
    //       ^^^^
}

The expression di.next() by itself yields a dyn Future. This type is not sized and so it won’t compile on its own. Adding .box produces a Box<dyn Future<Output = Option<Self::Item>>>, which you can then await.3

Compared to the Boxing adapter I discussed before, this is relatively straightforward to explain. I’m not entirely sure which is more convenient to use in practice: it depends how many dyn values you create and how many methods you call on them. Certainly you can work around the problem of having to write .box at each call-site via wrapper types or helper methods that do it for you.

Complication: dyn AsyncIterator does not implement AsyncIterator

There is one complication. Today in Rust, every dyn Trait type also implements Trait. But can dyn AsyncIterator implement AsyncIterator? In fact, it cannot! The problem is that the AsyncIterator trait defines next as returning impl Future<..>, which is actually shorthand for impl Future<..> + Sized, but we said that next would return dyn Future<..>, which is ?Sized. So the dyn AsyncIterator type doesn’t meet the bounds the trait requires. Hmm.

But…does dyn AsyncIterator have to implement AsyncIterator?

There is no “hard and fixed” reason that dyn Trait types have to implement Trait, and there are a few good reasons not to do it. The alternative to dyn safety is a design like this: you can always create a dyn Trait value for any Trait, but you may not be able to use all of its members. For example, given a dyn Iterator, you could call next, but you couldn’t call generic methods like map. In fact, we’ve kind of got this design in practice, thanks to the where Self: Sized hack that lets us exclude methods from being used on dyn values.

Why did we adopt object safety in the first place? If you look back at RFC 255, the primary motivation for this rule was ergonomics: clearer rules and better error messages. Although I argued for RFC 255 at the time, I don’t think these motivations have aged so well. Right now, for example, if you have a trait with a generic method, you get an error when you try to create a dyn Trait value, telling you that you cannot create a dyn Trait from a trait with a generic method. But it may well be clearer to get an error at the point where you call that generic method, telling you that you cannot call generic methods through dyn Trait.

Another motivation for having dyn Trait implement Trait was that one could write a generic function with T: Trait and have it work equally well for object types. That capability is useful, but because you have to write T: ?Sized to take advantage of it, it only really works if you plan carefully. In practice, what I’ve found works much better is to implement Trait for &dyn Trait.
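
As a sketch of that pattern (the Greet trait is invented for illustration): implement the trait for references to the dyn type, and ordinary T: Trait generics will accept it without any ?Sized bound:

```rust
trait Greet {
    fn greet(&self);
}

// Implement the trait for `&dyn Greet` itself, forwarding through
// the reference:
impl<'a> Greet for &'a (dyn Greet + 'a) {
    fn greet(&self) {
        (**self).greet();
    }
}

// An ordinary generic function with the default `Sized` bound...
fn use_it<T: Greet>(t: T) {
    t.greet();
}

struct Hi;
impl Greet for Hi {
    fn greet(&self) {
        println!("hi");
    }
}

fn main() {
    let hi = Hi;
    let obj: &dyn Greet = &hi;
    use_it(obj); // ...works with T = &dyn Greet
}
```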

What would it mean to remove the rule that dyn AsyncIterator: AsyncIterator?

I think the new system would be something like this…

  • You can always4 create a dyn Foo value. The dyn Foo type would define inherent methods based on the trait Foo that use dynamic dispatch, but with some changes:
    • Async functions and other methods defined with -> impl Trait return -> dyn Trait instead.
    • Generic methods, methods referencing Self, and other such cases are excluded. These cannot be handled with virtual dispatch.
  • If Foo is object safe using today’s rules, dyn Foo: Foo holds. Otherwise, it does not.5
    • On a related but orthogonal note, I would like to make a dyn keyword required to declare dyn safety.

Implications of removing that rule

This implies that dyn AsyncIterator (or any trait with async functions/RPITIT6) will not implement AsyncIterator. So if I write this function…

async fn use_any<I>(x: &mut I)
where
    I: ?Sized + AsyncIterator,
{
    x.next().await;
}

…I cannot use it with I = dyn AsyncIterator. You can see why: it calls next and assumes the result is Sized (as promised by the trait), so it doesn’t add any kind of .box directive (and it shouldn’t have to).

What you can do is implement a wrapper type that encapsulates the boxing:

struct BoxingAsyncIterator<'i, I> {
    iter: &'i mut dyn AsyncIterator<Item = I>
}

impl<'i, I> AsyncIterator for BoxingAsyncIterator<'i, I> {
    type Item = I;
    
    async fn next(&mut self) -> Option<Self::Item> {
        self.iter.next().box.await
    }
}

…and then you can call use_any(BoxingAsyncIterator::new(ai)).7

Limitation: what if you wanted to do stack allocation?

One of the goals with the previous proposal was to allow you to write code that used dyn AsyncIterator which worked equally well in std and no-std environments. I would say that goal was partially achieved. The core idea was that the caller would choose the strategy by which the future got allocated, and so it could opt to use inline allocation (and thus be no-std compatible) or use boxing (and thus be simple).

In this proposal, the call-site has to choose. You might think, then, that you could just choose to use stack allocation at the call-site and thus be no-std compatible. But how does one choose stack allocation? It’s actually quite tricky! Part of the problem is that async stack frames are stored in structs, and thus we cannot support something like alloca (at least not for values that will be live across an await, which includes any future that is awaited8). In fact, even outside of async, using alloca is quite hard! The problem is that a stack is, well, a stack. Ideally, you would do the allocation just before your callee returns, because that’s when you know how much memory you need. But at that time, your callee is still using the stack, so your allocation is in the wrong spot.9 I personally think we should just rule out the idea of using alloca to do stack allocation.

If we can’t use alloca, what can we do? We have a few choices. In the very beginning, I talked about the idea of a maybe_box function that would take a buffer and use it only for really large values. That’s kind of nifty, but it still relies on a box fallback, so it doesn’t really work for no-std.10 Might be a nice alternative to stackfuture though!11

You can also achieve inlining by writing wrapper types (something tmandry and I prototyped some time back), but the challenge then is that your callee doesn’t accept a &mut dyn AsyncIterator, it accepts something like &mut DynAsyncIter, where DynAsyncIter is a struct that you defined to do the wrapping.

All told, I think the answer in reality would be: If you want to be used in a no-std environment, you don’t use dyn in your public interfaces. Just use impl AsyncIterator. You can use hacks like the wrapper types internally if you really want dynamic dispatch.

Question: How much room is there for the compiler to get clever?

One other concern I had in thinking about this proposal was that it seemed like it was overspecified. That is, the vast majority of call-sites in this proposal will be written with .box, which thus specifies that they should allocate a box to store the result. But what about ideas like caching the box across invocations, or “best effort” stack allocation? Where do they fit in? From what I can tell, those optimizations are still possible, so long as the Box which would be allocated doesn’t escape the function (which was the same condition we had before).

The way to think of it: by writing foo().box.await, the user told us to use the boxing allocator to box the return value of foo. But we can then see that this result is passed to await, which takes ownership and later frees it. We can thus decide to substitute a different allocator, perhaps one that reuses the box across invocations, or tries to use stack memory; this is fine so long as we modified the freeing code to match. Doing this relies on knowing that the allocated value is immediately returned to us and that it never leaves our control.

Conclusion

To sum up, I think for most users this design would work like so…

  • You can use dyn with traits that have async functions, but you have to write .box every time you call a method.
  • You get to use .box in other places too, and we gain at least some support for unsized return values.12
  • If you want to write code that is sometimes using dyn and sometimes using static dispatch, you’ll have to write some awkward wrapper types.13
  • If you are writing no-std code, use impl Trait, not dyn Trait; if you must use dyn, it’ll require wrapper types.

Initially, I dismissed call-site allocation because it violated dyn Trait: Trait and it didn’t allow code to be written with dyn that could work in both std and no-std. But I think that violating dyn Trait: Trait may actually be good, and I’m not sure how important that latter constraint truly is. Furthermore, I think that Boxing::new and the various “dyn adapters” are probably going to be pretty confusing for users, but writing .box on a call-site is relatively easy to explain (“we don’t know what future you need, so you have to box it”). So now it seems a lot more appealing to me, and I’m grateful to Olivier Faure for bringing it up again.

One possible extension would be to permit users to specify the type of each returned future in some way. As I was finishing up this post, I saw that matthieum posted an intriguing idea in this direction on the internals thread. In general, I do see a need for some kind of “trait adapters”, such that you can take a base trait like Iterator and “adapt” it in various ways, e.g. producing a version that uses async methods, or which is const-safe. This has some pretty heavy overlap with the whole keyword generics initiative too. I think it’s a good extension to think about, but it wouldn’t be part of the “MVP” that we ship first.

Thoughts?

Please leave comments in this internals thread, thanks!

Appendix A: the Output associated type

Here is an interesting thing! The FnOnce trait, implemented by all callable things, defines its associated type Output as Sized! We have to change this if we want to allow unsized return values.

In theory, this could be a big backwards compatibility hazard. Code that writes F::Output can assume, based on the trait, that the return value is sized – so if we remove that bound, the code will no longer build!

Fortunately, I think this is ok. We’ve deliberately restricted the fn types so you can only use them with the () notation, e.g., where F: FnOnce() or where F: FnOnce() -> (). Both of these forms expand to something which explicitly specifies Output, like F: FnOnce<(), Output = ()>. What this means is that even if you write really generic code…

fn foo<F, R>(f: F)
where
    F: FnOnce<Output = R>
{
    let value: F::Output = f();
    ...
}

…when you write F::Output, that is actually normalized to R, and the type R has its own (implicit) Sized bound.

(There was actually a recent unsoundness related to this bound, closed by this PR, and we discussed exactly this forwards-compatibility question on Zulip.)

Footnotes

  1. I can hear you now: “but what about alloca!” I’ll get there. 

  2. The box foo operator supported by the compiler has no current path to stabilization. There were earlier plans (see RFC 809 and RFC 1228), but we ultimately abandoned those efforts. Part of the problem, in fact, was that the precedence of box foo made for bad ergonomics: foo.box works much better. 

  3. If you try to await a Box<dyn Future> today, you get an error that it needs to be pinned. I think we can solve that by implementing IntoFuture for Box<dyn Future> and having that convert it to Pin<Box<dyn Future>>.

  4. Or almost always? I may be overlooking some edge cases. 

  5. Internally in the compiler, this would require modifying the definition of MIR to make “dyn dispatch” more first-class. 

  6. Don’t know what RPITIT stands for?! “Return position impl trait in traits!” Get with the program! 

  7. This is basically what the “magical” Boxing::new would have done for you in the older proposal. 

  8. Brief explanation of why async and alloca don’t mix here. 

  9. I was told Ada compilers will allocate the memory at the top of the stack, copy it over to the start of the function’s area, and then pop what’s left. Theoretically possible!

  10. You could imagine a version that aborted the code if the size is wrong, too, which would make it no-std safe, but not in a reliable way (aborts == yuck).

  11. Conceivably you could set the size to size_of(SomeOtherType) to automatically determine how much space is needed. 

  12. I say at least some because I suspect many details of the more general case would remain unstable until we gain more experience. 

  13. You have to write awkward wrapper types for now, anyway. I’m intrigued by ideas about how we could make that more automatic, but I think it’s way out of scope here. 

Firefox NightlyThese Weeks In Firefox: Issue 124

Highlights

Friends of the Firefox team

Introductions/Shout-Outs

  • Welcome Schalk! Schalk has been contributing for a while and is the community manager for MDN Web Docs, and is hanging out to hear about DevTools-y things and other interesting things going on in Firefox-land to help promote them to the wider community

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • axtinemvsn (one of our CalState students!)
  • Itiel

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed a regression on accessing static theme resources from other extensions (introduced in Firefox 105 by Bug 1711168, new restrictions on accessing extensions resources not explicitly set as web_accessible_resources) – Bug 1786564 (landed in Firefox 105) and Bug 1790115 (landed in Firefox 106, followup fix related to extension pages running in private browsing windows)
  • Small tweaks and fixes related to the unified extensions toolbar button – Bug 1790015 / Bug 1784223 / Bug 1789407
  • Cleanups related to the Manifest Version 3 CSP – Bug 1789751 (removed localhost from script-src directive) / Bug 1766881 (removed unnecessary object-src)
Addon Manager & about:addons
  • Emilio enabled modern flexbox use in the about:addons page (instead of XUL layout) – Bug 1790308
  • Itiel has updated the about:addons accent color var to use the Photon color and updated the “Available Updates” dot badge to use the expected Photon accent color – Bug 1787651

Developer Tools

DevTools
  • Eugene fixed a bug with the Network Monitor Websocket inspector, where messages would disappear when using filters in combination with the “keep messages” checkbox (bug)
  • Alex is updating the devtools codebase to prepare for ESM-ification:
  • The Network Monitor used to incorrectly show sizes in kibibytes (1024-based) instead of kilobytes (1000-based). Hubert fixed this issue and we now show the correct sizes and correct units everywhere in the Netmonitor (bug)
  • Alex keeps fixing bugs and UX issues around WebExtension debugging. Whenever you reloaded an extension, the Debugger would no longer show its sources. This was a recent regression, but it is now fixed and tested (bug)
  • Hubert fixed a bug with the new Edit and Resend panel, where we would crash if the request was too big. (bug)
  • Nicolas fixed a performance regression in the StyleEditor (bug), which was caused by performing too many cross-compartment property accesses.
WebDriver BiDi
  • We added basic support for the “script.getRealms” command which returns the information about available WindowRealms, including sandboxes. This information contains realm ids which will be used to run script evaluation commands. (bug)
  • We extended the Remote Agent implementation to allow Marionette and WebDriver BiDi to open and close tabs in GeckoView applications. As a result we were able to enable ~300 additional WebDriver tests on Android. (bug)

ESMification status

Lint, Docs and Workflow

  • https is now the default to use in tests.
    • Please only disable the rule if you explicitly need to test insecure connections – and add a comment if you do disable.
  • You can now specify a --rule parameter to ./mach eslint (not ./mach lint -l eslint), which allows you to test turning on an ESLint rule.
  • We now have two new rules, which are currently run manually.
    • The rules:
      • mozilla/valid-ci-uses checks that:
        • Ci.nsIFoo is a valid interface.
        • Ci.nsIFoo.CONSTANT is a valid constant available on the interface.
      • mozilla/valid-services-property checks that:
        • Services.foo.bar() is a valid property on the interface associated with Services.foo.
    • These will be added to run on CI as a tier-2 task in the next couple of months.
    • For now, they can be manually run via
      • MOZ_OBJDIR=objdir-ff-opt ./mach eslint --rule="mozilla/valid-services-property: error" --rule="mozilla/valid-ci-uses: error" *
      • There are a few non-critical existing failures which will be resolved before CI lands.

Migration Improvements (CalState LA Project)

  • Students had a Hack Weekend the weekend before last to get up to speed with our tooling
  • Quite a few Good First Bugs landed to support the ESMification process
  • We’re starting the students off on researching the best ways of importing favicons from other browsers into Firefox. Watch this space!

Picture-in-Picture

Search and Navigation

Storybook / Reusable components

  • The ./mach storybook commands have landed!
    • ./mach storybook install # Run this the first time
    • ./mach storybook
    • ./mach storybook launch # Run this in a separate shell

Niko MatsakisWhat I meant by the “soul of Rust”

Re-reading my previous post, I felt I should clarify why I called it the “soul of Rust”. The soul of Rust, to my mind, is definitely not being explicit about allocation. Rather, it’s about the struggle between a few key values — especially productivity and versatility1 in tension with transparency. Rust’s goal has always been to feel like a high-level language but with the performance and control of a low-level one. Oftentimes, we are able to find a “third way” that removes the tradeoff, solving both goals pretty well. But finding those “third ways” takes time — and sometimes we just have to accept a certain hit to one value or another for the time being to make progress. It’s exactly at these times, when we have to make a difficult call, that questions about the “soul of Rust” start to come into play. I’ve been thinking about this a lot, so I thought I would write a post that expands on the role of transparency in Rust, and some of the tensions that arise around it.

Why do we value transparency?

From the draft Rustacean Principles:

🔧 Transparent: “you can predict and control low-level details”

The C language, famously, maps quite closely to how machines typically operate. So much so that people have sometimes called it “portable assembly”.2 Both C++ and Rust are trying to carry on that tradition while adding higher levels of abstraction. Inevitably, this leads to tension. Operator overloading, for example, makes figuring out what a + b does more difficult.3
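
To make the cost concrete, here is a minimal illustration (the type is invented for this post) of how overloading hides a user-defined call behind familiar syntax:

use std::ops::Add;

#[derive(Copy, Clone, Debug, PartialEq)]
struct Meters(f64);

impl Add for Meters {
    type Output = Meters;
    // After this impl, `a + b` on two `Meters` values runs this
    // user-defined function rather than a primitive machine add.
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    let a = Meters(1.5);
    let b = Meters(2.5);
    assert_eq!(a + b, Meters(4.0)); // looks primitive, but calls Add::add
}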

Transparency gives you control

Transparency doesn’t automatically give high performance, but it does give control. This helps when crafting your system, since you can set it up to do what you want, but it also helps when analyzing its performance or debugging. There’s nothing more frustrating than staring at code for hours and hours only to realize that the source of your problem isn’t anywhere in the code you can see — it lies in some invisible interaction that wasn’t made explicit.

Transparency can cost performance

The flip-side of transparency is overspecification. The more directly your program maps to assembly, the less room the compiler and runtime have to do clever things, which can lead to lower performance. In Rust, we are always looking for places where we can be less transparent in order to gain performance — but only up to a point. One example is struct layout: the Rust compiler retains the freedom to reorder fields in a struct, enabling us to make more compact data structures. That’s less transparent than C, but usually not in a way that you care about. (And, of course, if you want to specify the order of your fields, we offer the #[repr] attribute.)
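
As a small illustration of that layout freedom (struct names are mine; exact sizes are target-dependent, though these are typical of 64-bit platforms):

use std::mem::size_of;

// With the default repr, the compiler may reorder fields to reduce padding…
struct Reordered {
    a: u8,
    b: u64,
    c: u16,
}

// …while #[repr(C)] pins the declared order (and C-style padding rules).
#[repr(C)]
struct Pinned {
    a: u8,
    b: u64,
    c: u16,
}

fn main() {
    // Typically prints "16 vs 24" on a 64-bit target: reordering packs
    // the fields, while the pinned layout pays for padding after `a`.
    println!("{} vs {}", size_of::<Reordered>(), size_of::<Pinned>());
}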

Transparency hurts versatility and productivity

The bigger price of transparency, though, is versatility. It forces everyone to care about low-level details that may not actually matter to the problem at hand4. This is relevant to dyn async traits: most async Rust systems, for example, perform allocations left and right. The fact that a particular call to an async function might invoke Box::new is unlikely to be a performance problem. For those users, selecting a Boxing adapter adds to the overall complexity they have to manage for very little gain. If you’re working on a project where you don’t need peak performance, that’s going to make Rust less appealing than other languages. I’m not saying that’s bad, but it’s a fact.

A zero-sum situation…

At this moment in the design of async traits, we are struggling with a core question: just how versatile can Rust be? Right now, it feels like a “zero-sum situation”. We can add in something like Boxing::new to preserve transparency, but it’s going to cost us some versatility — hopefully not too much.

…for now?

I do wonder, though, if there’s a “third way” waiting somewhere. I hinted at this a bit in the previous post. At the moment, I don’t know what that third way is, and I think that requiring an explicit adapter is the most practical way forward. But it seems to me that it’s not a perfect sweet spot yet, and I am hopeful we’ll be able to subsume it into something more general.

Some ingredients that might lead to a ‘third way’:

  • With-clauses or capabilities: I am intrigued by the idea of with-clauses and the general idea of scoped capabilities. We might be able to think about the “default adapter” as something that gets specified via a with-clause?
  • Const evaluation: One of the niftier uses for const evaluation is for “meta-programming” that customizes how Rust is compiled. For example, we could potentially let you write a const fn that creates the vtable data structure for a given trait (see the sketch just after this list).
  • Profiles and portability: Can we find a better way to identify the kinds of transparency that you want, perhaps via some kind of ‘profiles’? I feel we already have ‘de facto’ profiles right now, but we don’t recognize them. “No std” is a clear example, but another would be the set of operating systems or architectures that you try to support. Recognizing that different users have different needs, and giving people a way to choose which one fits them best, might allow us to be more supportive of all our users — but then again, it might make Rust “modal” and more confusing.
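
As a toy version of the const-evaluation ingredient above, here is a sketch of a const fn computing vtable-like data at compile time. It is entirely illustrative: the names are invented, and the real feature would need to be generic over trait definitions (and carry method pointers, not just layout):

use std::mem::{align_of, size_of};

// A stand-in for real vtable data; a genuine vtable would also hold
// function pointers for the trait's methods.
struct VtableInfo {
    size: usize,
    align: usize,
}

const fn make_vtable_info<T>() -> VtableInfo {
    VtableInfo {
        size: size_of::<T>(),
        align: align_of::<T>(),
    }
}

// Evaluated entirely at compile time.
const U64_VTABLE_INFO: VtableInfo = make_vtable_info::<u64>();

fn main() {
    println!("{} / {}", U64_VTABLE_INFO.size, U64_VTABLE_INFO.align);
}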

Comments?

Please leave comments in this internals thread. Thanks!

Footnotes

  1. I didn’t write about versatility in my original post: instead I focused on the hit to productivity. But as I think about it now, versatility is really what’s at play here — versatility really meant that Rust was useful for high-level things and low-level things, and I think that requiring an explicit dyn adaptor is unquestionably a hit against being high-level. Interestingly, I put versatility after transparency in the list, meaning that it was lower priority, and that seems to back up the decision to have some kind of explicit adaptor. 

  2. At this point, some folks point out all the myriad subtleties and details that are actually hidden in C code. Hush you. 

  3. I remember a colleague at a past job discovering that somebody had overloaded the -> operator in our codebase. They sent out an angry email, “When does it stop? Must I examine every dot and squiggle in the code?” (NB: Rust supports overloading the deref operator.) 

  4. Put another way, being transparent about one thing can make other things more obscure (“can’t see the forest for the trees”). 

Niko MatsakisDyn async traits, part 8: the soul of Rust

In the last few months, Tyler Mandry and I have been circulating a “User’s Guide from the Future” that describes our current proposed design for async functions in traits. In this blog post, I want to deep dive on one aspect of that proposal: how to handle dynamic dispatch. My goal here is to explore the space a bit and also to address one particularly tricky topic: how explicit do we have to be about the possibility of allocation? This is a tricky topic, and one that gets at that core question: what is the soul of Rust?

The running example trait

Throughout this blog post, I am going to focus exclusively on this example trait, AsyncIterator:

trait AsyncIterator {
    type Item;
    async fn next(&mut self) -> Option<Self::Item>;
}

And we’re particularly focused on the scenario where we are invoking next via dynamic dispatch:

async fn make_dyn<AI: AsyncIterator>(mut ai: AI) {
    use_dyn(&mut ai).await; // <— coercion from `&mut AI` to `&mut dyn AsyncIterator`
}

async fn use_dyn(di: &mut dyn AsyncIterator) {
    di.next().await; // <— this call right here!
}

Even though I’m focusing the blog post on this particular snippet of code, everything I’m talking about is applicable to any trait with methods that return impl Trait (async functions themselves being a shorthand for a function that returns impl Future).

The basic challenge that we have to face is this:

  • The caller function, use_dyn, doesn’t know what impl is behind the dyn, so it needs to allocate a fixed amount of space that works for everybody. It also needs some kind of vtable so it knows what poll method to call.
  • The callee, AI::next, needs to be able to package up the future for its next function in some way to fit the caller’s expectations.

The first blog post in this series1 explains the problem in more detail.

A brief tour through the options

One of the challenges here is that there are many, many ways to make this work, and none of them is “obviously best”. What follows is, I think, an exhaustive list of the various ways one might handle the situation. If anybody has an idea that doesn’t fit into this list, I’d love to hear it.

Box it. The most obvious strategy is to have the callee box the future type, effectively returning a Box<dyn Future>, and have the caller invoke the poll method via virtual dispatch. This is what the async-trait crate does (although it also boxes for static dispatch, which we don’t have to do).
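
For concreteness, here is a hand-written sketch of what the boxing strategy amounts to, in the style of the async-trait crate; the trait and type names here are mine, not part of the proposal:

use std::future::Future;
use std::pin::Pin;

// Every impl returns the same pointer-sized type, so the trait stays
// compatible with dynamic dispatch.
trait BoxedAsyncIterator {
    type Item;
    fn next(&mut self) -> Pin<Box<dyn Future<Output = Option<Self::Item>> + '_>>;
}

struct Counter(u32);

impl BoxedAsyncIterator for Counter {
    type Item = u32;
    fn next(&mut self) -> Pin<Box<dyn Future<Output = Option<u32>> + '_>> {
        // One heap allocation per call: this is the cost under discussion.
        Box::pin(async move {
            self.0 += 1;
            Some(self.0)
        })
    }
}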

Box it with some custom allocator. You might want to box the future with a custom allocator.

Box it and cache box in the caller. For most applications, boxing itself is not a performance problem, unless it occurs repeatedly in a tight loop. Mathias Einwag pointed out that if you have some code that is repeatedly calling next on the same object, you could have that caller cache the box in between calls, and have the callee reuse it. This way you only have to actually allocate once.

Inline it into the iterator. Another option is to store all the state needed by the function in the AsyncIterator type itself. This is actually what the existing Stream trait does, if you think about it: instead of returning a future, it offers a poll_next method, so that the implementor of Stream effectively is the future, and the caller doesn’t have to store any state. Tyler and I worked out a more general way to do inlining that doesn’t require user intervention, where you basically wrap the AsyncIterator type in another type W that has a field big enough to store the next future. When you call next, this wrapper W stores the future into that field and then returns a pointer to the field, so that the caller only has to poll that pointer. One problem with inlining things into the iterator is that it only works well for &mut self methods, since in that case there can be at most one active future at a time. With &self methods, you could have any number of active futures.
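
For reference, the poll-based shape that today’s Stream trait uses looks roughly like this (simplified; the real trait lives in the futures crate and has extra provided methods):

use std::pin::Pin;
use std::task::{Context, Poll};

// The implementor *is* the state machine: the caller supplies no storage
// for a returned future, it just polls the iterator itself.
trait PollNextAsyncIterator {
    type Item;
    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>>;
}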

Box it and cache box in the callee. Instead of inlining the entire future into the AsyncIterator type, you could inline just one pointer-word slot, so that you can cache and reuse the Box that next returns. The upside of this strategy is that the cached box moves with the iterator and can potentially be reused across callers. The downside is that once the caller has finished, the cached box lives on until the object itself is destroyed.

Have caller allocate maximal space. Another strategy is to have the caller allocate a big chunk of space on the stack, one that should be big enough for every callee. If you know the callees your code will have to handle, and the futures for those callees are close enough in size, this strategy works well. Eric Holk recently released the stackfuture crate, which can help automate it. One problem with this strategy is that the caller has to know the size of all its callees.

Have caller allocate some space, and fall back to boxing for large callees. If you don’t know the sizes of all your callees, or those sizes have a wide distribution, another strategy might be to have the caller allocate some amount of stack space (say, 128 bytes) and then have the callee fall back to Box::new if that space is not enough.
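
A toy model of that hybrid strategy, using plain values to stand in for futures (all names here are invented, and drop handling for the inline case is elided to keep the sketch short):

use std::mem::{align_of, size_of, MaybeUninit};

enum Stored<'buf, T> {
    Inline(&'buf mut T), // lives in the caller's scratch space
    Boxed(Box<T>),       // fallback heap allocation
}

fn store<T>(value: T, buf: &mut [MaybeUninit<u8>]) -> Stored<'_, T> {
    let fits = size_of::<T>() <= buf.len()
        && (buf.as_ptr() as usize) % align_of::<T>() == 0;
    if fits {
        // Enough suitably-aligned scratch space: no allocation needed.
        let slot = buf.as_mut_ptr() as *mut T;
        unsafe {
            slot.write(value);
            Stored::Inline(&mut *slot)
        }
    } else {
        // Buffer too small (or misaligned): fall back to boxing.
        Stored::Boxed(Box::new(value))
    }
}

fn main() {
    let mut scratch = [MaybeUninit::<u8>::uninit(); 128];
    match store([0u64; 4], &mut scratch) {
        Stored::Inline(_) => println!("fit in the caller's 128 bytes"),
        Stored::Boxed(_) => println!("had to box"),
    }
}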

Alloca on the caller side. You might think you can store the size of the future to be returned in the vtable and then have the caller “alloca” that space — i.e., bump the stack pointer by some dynamic amount. Interestingly, this doesn’t work with Rust’s async model. Async tasks require that the size of the stack frame is known up front.

Side stack. Similar to the previous suggestion, you could imagine having the async runtimes provide some kind of “dynamic side stack” for each task.2 We could then allocate the right amount of space on this stack. This is probably the most efficient option, but it assumes that the runtime is able to provide a dynamic stack. Runtimes like embassy wouldn’t be able to do this. Moreover, we don’t have any sort of protocol for this sort of thing right now. Introducing a side-stack also starts to “eat away” at some of the appeal of Rust’s async model, which is designed to allocate the “perfect size stack” up front and avoid the need to allocate a “big stack per task”.3

Can async functions used with dyn be “normal”?

One of my initial goals for async functions in traits was that they should feel “as natural as possible”. In particular, I wanted you to be able to use them with dynamic dispatch in just the same way as you would a synchronous function. In other words, I wanted this code to compile, and I would want it to work even if use_dyn were put into another crate (and therefore were compiled with no idea of who is calling it):

async fn make_dyn<AI: AsyncIterator>(mut ai: AI) {
    use_dyn(&mut ai).await;
}

async fn use_dyn(di: &mut dyn AsyncIterator) {
    di.next().await;
}

My hope was that we could make this code work just as it is by selecting some kind of default strategy that works most of the time, and then provide ways for you to pick other strategies for those cases where the default strategy is not a good fit. The problem though is that there is no single default strategy that seems “obvious and right almost all of the time”…

Each strategy and its main downside:

  • Box it (with default allocator): requires allocation, not especially efficient
  • Box it with cache on caller side: requires allocation
  • Inline it into the iterator: adds space to AI, doesn’t work for &self
  • Box it with cache on callee side: requires allocation, adds space to AI, doesn’t work for &self
  • Allocate maximal space: can’t necessarily use that across crates, requires extensive interprocedural analysis
  • Allocate some space, fallback: uses allocator, requires extensive interprocedural analysis or else random guesswork
  • Alloca on the caller side: incompatible with async Rust
  • Side-stack: requires cooperation from runtime and allocation

The soul of Rust

This is where we get to the “soul of Rust”. Looking at the above table, the strategy that seems the closest to “obviously correct” is “box it”. It works fine with separate compilation, fits great with Rust’s async model, and it matches what people are doing today in practice. I’ve spoken with a fair number of people who use async Rust in production, and virtually all of them agreed that “box by default, but let me control it” would work great in practice.

And yet, when we floated the idea of using this as the default, Josh Triplett objected strenuously, and I think for good reason. Josh’s core concern was that this would be crossing a line for Rust. Until now, there has been no way to allocate heap memory without some kind of explicit operation (though that operation could be a function call). But if we wanted to make “box it” the default strategy, then you’d be able to write “innocent looking” Rust code that nonetheless invokes Box::new. In particular, it would invoke Box::new each time that next is called, to box up the future. But that is very unclear from reading over make_dyn and use_dyn.

As an example of where this might matter, it might be that you are writing some sensitive systems code where allocation is something you always do with great care. That doesn’t mean the code is no-std; it may have access to an allocator, but you still would like to know exactly where you will be doing allocations. Today, you can audit the code by hand, scanning for “obvious” allocation points like Box::new or vec![]. Under this proposal, while that would still be possible, the presence of an allocation in the code is much less obvious. The allocation is “injected” as part of the vtable construction process. To figure out that this will happen, you have to know Rust’s rules quite well, and you also have to know the signature of the callee (because in this case, the vtable is built as part of an implicit coercion). In short, scanning for allocation went from being relatively obvious to requiring a PhD in Rustology. Hmm.

On the other hand, if scanning for allocations is what is important, we could address that in many ways. We could add an “allow by default” lint to flag the points where the “default vtable” is constructed, and you could enable it in your project. This way the compiler would warn you about the possible future allocation. In fact, even today, scanning for allocations is actually much harder than I made it out to be: you can easily see if your function allocates, but you can’t easily see what its callees do. You have to read deeply into all of your dependencies and, if there are function pointers or dyn Trait values, figure out what code is potentially being called. With compiler/language support, we could make that whole process much more first-class and better.

In a way, though, the technical arguments are beside the point. “Rust makes allocations explicit” is widely seen as a key attribute of Rust’s design. In making this change, we would be tweaking that rule to be something like “Rust makes allocations explicit most of the time”. This would be harder for users to understand, and it would introduce doubt as to whether Rust really intends to be the kind of language that can replace C and C++4.

Looking to the Rustacean design principles for guidance

Some time back, Josh and I drew up a draft set of design principles for Rust. It’s interesting to look back on them and see what they have to say about this question:

  • ⚙️ Reliable: “if it compiles, it works”
  • 🐎 Performant: “idiomatic code runs efficiently”
  • 🥰 Supportive: “the language, tools, and community are here to help”
  • 🧩 Productive: “a little effort does a lot of work”
  • 🔧 Transparent: “you can predict and control low-level details”
  • 🤸 Versatile: “you can do anything with Rust”

Boxing by default, to my mind, scores as follows:

  • 🐎 Performant: meh. The real goal with performant is that the cleanest code also runs the fastest. Boxing on every dynamic call doesn’t meet this goal, but something like “boxing with caller-side caching” or “have caller allocate space and fall back to boxing” very well might.
  • 🧩 Productive: yes! Virtually every production user of async Rust that I’ve talked to has agreed that having code box by default (but giving the option to do something else for tight loops) would be a great sweet spot for Rust.
  • 🔧 Transparent: no. As I wrote before, understanding when a call may box now requires a PhD in Rustology, so this definitely fails on transparency.

(The other principles are not affected in any notable way, I don’t think.)

What the “user’s guide from the future” suggests

These considerations led Tyler and me to a different design. In the “User’s Guide From the Future” document from before, you’ll see that it does not accept the running example just as is. Instead, if you were to compile the example code we’ve been using thus far, you’d get an error:

error[E0277]: the type `AI` cannot be converted to a
              `dyn AsyncIterator` without an adapter
 --> src/lib.rs:3:23
  |
3 |     use_dyn(&mut ai).await;
  |                  ^^ adapter required to convert to `dyn AsyncIterator`
  |
  = help: consider introducing the `Boxing` adapter,
    which will box the futures returned by each async fn
3 |     use_dyn(&mut Boxing::new(ai)).await;
                     ++++++++++++  +

As the error suggests, in order to get the boxing behavior, you have to opt in via a type that we called Boxing5:

async fn make_dyn<AI: AsyncIterator>(ai: AI) {
    use_dyn(&mut Boxing::new(ai)).await;
    //          ^^^^^^^^^^^
}

async fn use_dyn(di: &mut dyn AsyncIterator) {
    di.next().await;
}

Under this design, you can only create a &mut dyn AsyncIterator when the caller can verify that the next method returns a type from which a dyn* can be constructed. If that’s not the case, and it’s usually not, you can use the Boxing::new adapter to create a Boxing<AI>. Via some kind of compiler magic that ahem we haven’t fully worked out yet6, you could coerce a Boxing<AI> into a dyn AsyncIterator.

The details of the Boxing type need more work7, but the basic idea remains the same: require users to make some explicit opt-in to the default vtable strategy, which may indeed perform allocation.

How does Boxing rank on the design principles?

To my mind, adding the Boxing adapter ranks as follows…

  • 🐎 Performant: meh. This is roughly the same as before. We’ll come back to this.
  • 🥰 Supportive: yes! The error message guides you to exactly what you need to do, and hopefully links to a well-written explanation that can help you learn about why this is required.
  • 🧩 Productive: meh. Having to add a Boxing::new call each time you create a dyn AsyncIterator is not great, but it is also on par with other Rust papercuts.
  • 🔧 Transparent: yes! It is easy to see that boxing may occur in the future now.

This design is now transparent. It’s also less productive than before, but we’ve tried to make up for it with supportiveness. “Rust isn’t always easy, but it’s always helpful.”

Improving performance with a more complex ABI

One thing that bugs me about the “box by default” strategy is that the performance is only “meh”. I like stories like Iterator, where you write nice code and you get tight loops. It bothers me that writing “nice” async code yields a naive, middling efficiency story.

That said, I think this is something we could fix in the future, and I think we could fix it backwards compatibly. The idea would be to extend our ABI when doing virtual calls so that the caller has the option to provide some “scratch space” for the callee. For example, we could then do things like analyze the binary to get a good guess as to how much stack space is needed (either by doing dataflow or just by looking at all implementations of AsyncIterator). We could then have the caller reserve stack space for the future and pass a pointer into the callee — the callee would still have the option of allocating if, for example, there wasn’t enough stack space, but it could make use of the space in the common case.

Interestingly, I think that if we did this, we would also be putting some pressure on Rust’s “transparency” story again. While Rust leans heavily on optimizations to get performance, we’ve generally restricted ourselves to simple, local ones like inlining; we don’t require interprocedural dataflow in particular, although of course it helps (and LLVM does it). But getting a good estimate of how much stack space to reserve for potential callees would violate that rule (we’d also need some simple escape analysis, as I describe in Appendix A). All of this adds up to a bit of ‘performance unpredictability’. Still, I don’t see this as a big problem, particularly since the fallback is just to use Box::new, and as we’ve said, for most users that is perfectly adequate.

Picking another strategy, such as inlining

Of course, maybe you don’t want to use Boxing. It would also be possible to construct other kinds of adapters, and they would work in a similar fashion. For example, an inlining adapter might look like:

async fn make_dyn<AI: AsyncIterator>(ai: AI) {
    use_dyn(&mut InlineAsyncIterator::new(ai)).await;
    //           ^^^^^^^^^^^^^^^^^^^^^^^^
}

The InlineAsyncIterator<AI> type would add the extra space to store the future, so that when the next method is called, it writes the future into its own field and then hands the caller a pointer to that field. Similarly, a cached box adapter might be &mut CachedAsyncIterator::new(ai), only it would use a field to cache the resulting Box.

You may have noticed that the inline/cached adapters include the name of the trait. That’s because they aren’t relying on compiler magic like Boxing, but are instead intended to be authored by end-users, and we don’t yet have a way to be generic over any trait definition. (The proposal as we wrote it uses macros to generate an adapter type for any trait you wish to adapt.) This is something I’d love to address in the future. You can read more about how adapters work here.

Conclusion

OK, so let’s put it all together into a coherent design proposal:

  • You cannot coerce from an arbitrary type AI into a dyn AsyncIterator. Instead, you must select an adapter:
    • Typically you want Boxing, which has a decent performance profile and “just works”.
    • But users can write their own adapters to implement other strategies, such as InlineAsyncIterator or CachingAsyncIterator.
  • From an implementation perspective:
    • When invoked via dynamic dispatch, async functions return a dyn* Future. The caller can invoke poll via virtual dispatch and invoke the (virtual) drop function when it’s ready to dispose of the future.
    • The vtable created for Boxing<AI> will allocate a box to store the future AI::next() and use that to create the dyn* Future.
    • The vtable for other adapters can use whatever strategy they want. InlineAsyncIterator<AI>, for example, stores the AI::next() future into a field in the wrapper, takes a raw pointer to that field, and creates a dyn* Future from this raw pointer.
  • Possible future extension for better performance:8
    • We modify the ABI for async trait functions (or any trait function using return-position impl trait) to allow the caller to optionally provide stack space. The Boxing adapter, if such stack space is available, will use it to avoid boxing when it can. This would have to be coupled with some compiler analysis to figure out how much stack space to pre-allocate.

This lets us express virtually any pattern. It’s even possible to express side-stacks, if the runtime provides a suitable adapter (e.g., TokioSideStackAdapter::new(ai)), though if side-stacks become popular I would rather consider a more standard means to expose them.

The main downsides to this proposal are:

  • Users have to write Boxing::new, which is a productivity and learnability hit, but it avoids a big hit to transparency. Is that the right call? I’m still not entirely sure, though my heart increasingly says yes. It’s also something we could revisit in the future (e.g., by adding a default adapter).
  • If we opt to modify the ABI, we’re adding some complexity there, but in exchange for potentially quite a lot of performance. I would expect us not to do this initially, but to explore it as an extension in the future once we have more data about how important it is.

There is one pattern that we can’t express: “have caller allocate maximal space”. This pattern guarantees that heap allocation is not needed; the best we can do is a heuristic that tries to avoid heap allocation, since we have to consider public functions on crate boundaries and the like. To offer a guarantee, the argument type needs to change from &mut dyn AsyncIterator (which accepts any async iterator) to something narrower. This would also support futures that escape the stack frame (see Appendix A below). It seems likely that these details don’t matter, and that either inline futures or heuristics would suffice, but if not, a crate like stackfuture remains an option.

Comments?

Please leave comments in this internals thread. Thanks!

Appendix A: futures that escape the stack frame

In all of this discussion, I’ve been assuming that the async call was followed closely by an await. But what happens if the future is not awaited, but instead is moved into the heap or other locations?

fn foo(x: &mut dyn AsyncIterator<Item = u32>) -> impl Future<Output = Option<u32>> + '_ {
    x.next()
}

For boxing, this kind of code doesn’t pose any problem at all. But if we had allocated space on the stack to store the future, examples like this would be a problem. So long as the scratch space is optional, with a fallback to boxing, this is no problem. We can do an escape analysis and avoid the use of scratch space for examples like this.

Footnotes

  1. Written in Sep 2020, egads! 

  2. I was intrigued to learn that this is what Ada does, and that Ada features like returning dynamically sized types are built on this model. I’m not sure how SPARK and other Ada subsets that target embedded spaces manage that; I’d like to learn more about it. 

  3. Of course, without a side stack, we are left using mechanisms like Box::new to cover cases like dynamic dispatch or recursive functions. This becomes a kind of pessimistically sized segmented stack, where we allocate for each little piece of extra state that we need. A side stack might be an appealing middle ground, but because of cases like embassy, it can’t be the only option. 

  4. Ironically, C++ itself inserts implicit heap allocations to help with coroutines! 

  5. Suggestions for a better name very welcome. 

  6. Pay no attention to the compiler author behind the curtain. 🪄 🌈 Avert your eyes! 

  7. e.g., if you look closely at the User’s Guide from the Future, you’ll see that it writes Boxing::new(&mut ai), and not &mut Boxing::new(ai). I go back and forth on this one. 

  8. I should clarify that, while Tyler and I have discussed this, I don’t know how he feels about it. I wouldn’t call it ‘part of the proposal’ exactly, more like an extension I am interested in. 

Cameron KaiserSeptember patch set for TenFourFox

102 is now the next Firefox Extended Support Release, so it's time for spring cleaning — if you're a resident of the Southern Hemisphere — in the TenFourFox repository. Besides refreshing the maintenance scripts to pull certificate, timezone and HSTS updates from this new source, I also implemented all the relevant security and stability patches from the last gasp of 91ESR (none likely to be exploitable on Power Macs without a direct attack, but many likely to crash them), added an Fx102 user agent choice to the TenFourFox preference pane, updated the ATSUI font blacklist (thanks to Chris T for the report) and updated zlib to 1.2.12, picking up multiple bug fixes and some modest performance improvements. This touches a lot of low-level stuff so updating will require a complete rebuild from scratch (instructions). Sorry about that, it's necessary!

If you're new to building your own copy of TenFourFox, this article from last year is still current with the process and what's out there for alternatives and assistance.

Mozilla Performance BlogA different perspective

Usually, in our articles, we talk about performance from the performance engineer’s perspective, but in this one, I want to take a step back and look at it from another perspective. Earlier this year, I talked to an engineer about including more debugging information in the bugs we file for regressions. Reflecting on that discussion, I realized the performance sheriffing process is complex and that many of our engineers have limited knowledge of how we detect regressions, how we identify the patch that introduced them, and how to respond to a notification of a regression.

As a result, I decided to make a recording about how a Sheriff catches a regression and files the bug, and then how the engineer who wrote the source code causing the regression can get the information they need to resolve it. The video below shows a simplified version of how the Performance Sheriffs file a performance regression.

In short, if there’s no test gap between the last good and the first regressed revision, a regression bug will be filed against the bug that caused it and linked to the alert.

Filing a regression – Demo

I caused a regression! Now what?

If you caused a regression, then a sheriff will open a regression bug and set the regressor bug’s ID in the “Regressed by” field. In the regression description, you’ll find the tests that regressed, and you’ll be able to view a particular graph or the updated alert. Note: the alert almost always contains more items than the description. The video below will show you how to zoom in and find the regressing data point, see the job, trigger a profiler run, and see the Taskcluster task for it. There you’ll find the payload, dependencies, artifacts, and parsed log.

Investigating a regression – Demo

The full process of investigating an alert and finding the cause of a regression is much more complex than these examples. It has three phases before these demos and one after. The three before are triaging the alerts, investigating the graphs, and filing the regression bug; the one after is following up and offering support to the author of the regressing patch so they can understand and/or fix the issue. These phases are illustrated below.

Sheriffing Workflow

Improvements

We have made several small improvements to the regression bug template that are worth noting:

  • We added links to the ratio (magnitude) column that open the graph of each alert item
  • Previously the performance sheriffs set the severity of the regression, but we now allow the triage owners to determine severity based on the data provided
  • We added a paragraph that lets you know you can trigger profiling jobs for the regressed tests before and after the commit, or ask the sheriff to do this for you
  • We added a cron job that will trigger performance tests for patches that are most likely to change the performance numbers

Work in progress

There are also three impactful projects in terms of performance:

  1. Integrating the side-by-side script into CI, the ultimate goal being to have side-by-side video comparisons generated automatically for regressions. Currently, there’s a local perftest-tools command that does the comparison.
  2. Having the profiler automatically triggered for the same purpose: having more investigation data available when a regression happens.
  3. Developing a more user-friendly performance comparison tool, PerfCompare, to replace Perfherder Compare View.

Mozilla Open Policy & Advocacy BlogMozilla Responds to EU General Court’s Judgment on Google Android

This week, the EU’s General Court largely upheld the decision sanctioning Google for restricting competition on the Android mobile operating system. But, on their own, the judgment and the record fine do not help to unlock competition and choice online, especially when it comes to browsers.

In July 2018, when the European Commission announced its decision, we expressed hope that the result would help to level the playing field for independent browsers like Firefox and provide real choice for consumers. Sadly for billions of people around the world who use browsers every day, this hope has not been realized – yet.

The case may rumble on in appeals for several more years, but Mozilla will continue to advocate for an Internet which is open, accessible, private, and secure for all, and we will continue to build products which advance this vision. We hope that those with the power to improve browser choice for consumers will also work towards these tangible goals.


The Rust Programming Language BlogConst Eval (Un)Safety Rules

In a recent Rust issue (#99923), a developer noted that the upcoming 1.64-beta version of Rust had started signalling errors on their crate, icu4x. The icu4x crate uses unsafe code during const evaluation. Const evaluation, or just "const-eval", runs at compile-time but produces values that may end up embedded in the final object code that executes at runtime.

Rust's const-eval system supports both safe and unsafe Rust, but the rules for what unsafe code is allowed to do during const-eval are even more strict than what is allowed for unsafe code at runtime. This post is going to go into detail about one of those rules.

(Note: If your const code does not use any unsafe blocks or call any const fn with an unsafe block, then you do not need to worry about this!)

A new diagnostic to watch for

The problem, reduced over the course of the comment thread of #99923, is that certain static initialization expressions (see below) are defined as having undefined behavior (UB) at compile time (playground):

pub static FOO: () = unsafe {
    let illegal_ptr2int: usize = std::mem::transmute(&());
    let _copy = illegal_ptr2int;
};

(Many thanks to @eddyb for the minimal reproduction!)

The code above was accepted by Rust versions 1.63 and earlier, but in the Rust 1.64-beta, it now causes a compile time error with the following message:

error[E0080]: could not evaluate static initializer
 --> demo.rs:3:17
  |
3 |     let _copy = illegal_ptr2int;
  |                 ^^^^^^^^^^^^^^^ unable to turn pointer into raw bytes
  |
  = help: this code performed an operation that depends on the underlying bytes representing a pointer
  = help: the absolute address of a pointer is not known at compile-time, so such operations are not supported

As the message says, this operation is not supported: the transmute above is trying to reinterpret the memory address &() as an integer of type usize. The compiler cannot predict what memory address the () would be associated with at execution time, so it refuses to allow that reinterpretation.
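
For contrast, at runtime the address is concrete, so inspecting it is well-defined; a minimal sketch (mine, not from the issue):

fn main() {
    let x = 42u8;
    // At runtime the pointer has a concrete address, so a plain `as`
    // cast to usize is well-defined.
    let addr = &x as *const u8 as usize;
    println!("{addr:#x}");
}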

When you write safe Rust, then the compiler is responsible for preventing undefined behavior. When you write any unsafe code (be it const or non-const), you are responsible for preventing UB, and during const-eval, the rules about what unsafe code has defined behavior are even more strict than the analogous rules governing Rust's runtime semantics. (In other words, more code is classified as "UB" than you may have otherwise realized.)

If you hit undefined behavior during const-eval, the Rust compiler will protect itself from adverse effects such as the undefined behavior leaking into the type system, but there are few guarantees other than that. For example, compile-time UB could lead to runtime UB. Furthermore, if you have UB at const-eval time, there is no guarantee that your code will be accepted from one compiler version to another.

What is new here

You might be thinking: "it used to be accepted; therefore, there must be some value for the memory address that the previous version of the compiler was using here."

But such reasoning would be based on an imprecise view of what the Rust compiler was doing here.

The const-eval machinery of the Rust compiler (also known as "the CTFE engine") is built upon a MIR interpreter which uses an abstract model of a hypothetical machine as the foundation for evaluating such expressions. This abstract model doesn't have to represent memory addresses as mere integers; in fact, to support fine-grained checking for UB, it uses a much richer datatype for the values that are held in the abstract memory store.

(The aforementioned MIR interpreter is also the basis for Miri, a research tool that interprets non-const Rust code, with a focus on explicit detection of undefined behavior. The Miri developers are the primary contributors to the CTFE engine in the Rust compiler.)

The details of the CTFE engine's value representation do not matter too much for our discussion here. We merely note that earlier versions of the compiler silently accepted expressions that seemed to transmute memory addresses into integers, copied them around, and then transmuted them back into addresses; but that was not what was actually happening under the hood. Instead, what was happening was that the values were passed around blindly (after all, the whole point of transmute is that it does no transformation on its input value, so it is a no-op in terms of its operational semantics).

The fact that it was passing a memory address into a context where you would expect there to always be an integer value would only be caught, if at all, at some later point.

For example, the const-eval machinery rejects code that attempts to embed the transmuted pointer into a value that could be used by runtime code, like so (playground):

pub static FOO: usize = unsafe {
    let illegal_ptr2int: usize = std::mem::transmute(&());
    illegal_ptr2int
};

Likewise, it rejects code that attempts to perform arithmetic on that non-integer value, like so (playground):

pub static FOO: () = unsafe {
    let illegal_ptr2int: usize = std::mem::transmute(&());
    let _incremented = illegal_ptr2int + 1;
};

Both of the latter two variants are rejected in stable Rust, and have been for as long as Rust has accepted pointer-to-integer conversions in static initializers (see e.g. Rust 1.52).

More similar than different

In fact, all of the examples provided above are exhibiting undefined behavior according to the semantics of Rust's const-eval system.

The first example with _copy was accepted in Rust versions 1.46 through 1.63 because of CTFE implementation artifacts. The CTFE engine puts considerable effort into detecting UB, but does not catch all instances of it. Furthermore, by default, such detection can be delayed to a point far after where the actual problematic expression is found.

But with nightly Rust, we can opt into extra checks for UB that the engine provides, by passing the unstable flag -Z extra-const-ub-checks. If we do that, then for all of the above examples we get the same result:

error[E0080]: could not evaluate static initializer
 --> demo.rs:2:34
  |
2 |     let illegal_ptr2int: usize = std::mem::transmute(&());
  |                                  ^^^^^^^^^^^^^^^^^^^^^^^^ unable to turn pointer into raw bytes
  |
  = help: this code performed an operation that depends on the underlying bytes representing a pointer
  = help: the absolute address of a pointer is not known at compile-time, so such operations are not supported

The earlier examples had diagnostic output that put the blame in a misleading place. With the more precise checking -Z extra-const-ub-checks enabled, the compiler highlights the expression where we can first witness UB: the original transmute itself! (Which was stated at the outset of this post; here we are just pointing out that these tools can pinpoint the injection point more precisely.)

Why not have these extra const-ub checks on by default? Well, the checks introduce performance overhead upon Rust compilation time, and we do not know if that overhead can be made acceptable. (However, recent debate among Miri developers indicates that the inherent cost here might not be as bad as they had originally thought. Perhaps a future version of the compiler will have these extra checks on by default.)

Change is hard

You might well be wondering at this point: "Wait, when is it okay to transmute a pointer to a usize during const evaluation?" And the answer is simple: "Never."

Transmuting a pointer to a usize during const-eval has always been undefined behavior, ever since const-eval added support for transmute and union. You can read more about this in the const_fn_transmute / const_fn_union stabilization report, specifically the subsection entitled "Pointer-integer-transmutes". (It is also mentioned in the documentation for transmute.)

Thus, we can see that the classification of the above examples as UB during const evaluation is not a new thing at all. The only change here was that the CTFE engine had some internal changes that made it start detecting the UB rather than silently ignoring it.

This means the Rust compiler has a shifting notion of what UB it will explicitly catch. We anticipated this: RFC 3016, "const UB", explicitly says:

[...] there is no guarantee that UB is reliably detected during CTFE. This can change from compiler version to compiler version: CTFE code that causes UB could build fine with one compiler and fail to build with another. (This is in accordance with the general policy that unsound code is not subject to stability guarantees.)

Having said that: So much of Rust's success has been built around the trust that we have earned with our community. Yes, the project has always reserved the right to make breaking changes when resolving soundness bugs; but we have also strived to mitigate such breakage whenever feasible, via things like future-incompatible lints.

Today, with our current const-eval architecture, it is not feasible to ensure that changes such as the one that injected issue #99923 go through a future-incompat warning cycle. The compiler team plans to keep our eye on issues in this space. If we see evidence that these kinds of changes do cause breakage to a non-trivial number of crates, then we will investigate further how we might smooth the transition path between compiler releases. However, we need to balance any such goal against the fact that Miri has a very limited set of developers: the researchers determining how to define the semantics of unsafe languages like Rust. We do not want to slow their work down!

What you can do for safety's sake

If you observe the could not evaluate static initializer message on your crate atop Rust 1.64, and it was compiling with previous versions of Rust, we want you to let us know: file an issue!

We have performed a crater run for the 1.64-beta and that did not find any other instances of this particular problem. If you can test compiling your crate atop the 1.64-beta before the stable release goes out on September 22nd, all the better! One easy way to try the beta is to use rustup's override shorthand for it:

$ rustup update beta
$ cargo +beta build

As Rust's const-eval evolves, we may see another case like this arise again. If you want to defend against future instances of const-eval UB, we recommend that you set up a continuous integration service to invoke the nightly rustc with the unstable -Z extra-const-ub-checks flag on your code.

Want to help?

As you might imagine, a lot of us are pretty interested in questions such as "what should be undefined behavior?"

See for example Ralf Jung's excellent blog series on why pointers are complicated (parts I, II, III), which contain some of the details elided above about the representation of pointer values, and spell out reasons why you might want to be concerned about pointer-to-usize transmutes even outside of const-eval.

If you are interested in trying to help us figure out answers to those kinds of questions, please join us in the unsafe code guidelines zulip.

If you are interested in learning more about Miri, or contributing to it, you can say Hello in the miri zulip.

Conclusion

To sum it all up: When you write safe Rust, then the compiler is responsible for preventing undefined behavior. When you write any unsafe code, you are responsible for preventing undefined behavior. Rust's const-eval system has a stricter set of rules governing what unsafe code has defined behavior: specifically, reinterpreting (aka "transmuting") a pointer value as a usize is undefined behavior during const-eval. If you have undefined behavior at const-eval time, there is no guarantee that your code will be accepted from one compiler version to another.

The compiler team is hoping that issue #99923 is an exceptional fluke and that the 1.64 stable release will not encounter any other surprises related to the aforementioned change to the const-eval machinery.

But fluke or not, the issue provided excellent motivation to spend some time exploring facets of Rust's const-eval architecture and the interpreter that underlies it. We hope you enjoyed reading this as much as we did writing it.

The Rust Programming Language BlogSecurity advisories for Cargo (CVE-2022-36113, CVE-2022-36114)

This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG was notified that Cargo did not prevent extracting some malformed packages downloaded from alternate registries. An attacker able to upload packages to an alternate registry could fill the filesystem or corrupt arbitrary files when Cargo downloaded the package.

These issues have been assigned CVE-2022-36113 and CVE-2022-36114. The severity of these vulnerabilities is "low" for users of alternate registries. Users relying on crates.io are not affected.

Note that by design Cargo allows code execution at build time, due to build scripts and procedural macros. The vulnerabilities in this advisory allow performing a subset of the possible damage in a harder to track down way. Your dependencies must still be trusted if you want to be protected from attacks, as it's possible to perform the same attacks with build scripts and procedural macros.

Arbitrary file corruption (CVE-2022-36113)

After a package is downloaded, Cargo extracts its source code in the ~/.cargo folder on disk, making it available to the Rust projects it builds. To record that an extraction was successful, Cargo writes "ok" to the .cargo-ok file at the root of the extracted source code once it has extracted all the files.

It was discovered that Cargo allowed packages to contain a .cargo-ok symbolic link, which Cargo would extract. Then, when Cargo attempted to write "ok" into .cargo-ok, it would actually replace the first two bytes of the file the symlink pointed to with ok. This would allow an attacker to corrupt one file on the machine using Cargo to extract the package.

Disk space exhaustion (CVE-2022-36114)

It was discovered that Cargo did not limit the amount of data extracted from compressed archives. An attacker could upload to an alternate registry a specially crafted package that extracts way more data than its size (also known as a "zip bomb"), exhausting the disk space on the machine using Cargo to download the package.

Affected versions

Both vulnerabilities are present in all versions of Cargo. Rust 1.64, to be released on September 22nd, will include fixes for both of them.

Since these vulnerabilities are just a more limited way to accomplish what malicious build scripts or procedural macros can do, we decided not to publish Rust point releases backporting the security fix. Patch files for Rust 1.63.0 are available in the wg-security-response repository for people building their own toolchains.

Mitigations

We recommend that users of alternate registries exercise care in which packages they download, by only including trusted dependencies in their projects. Please note that even with these vulnerabilities fixed, by design Cargo allows arbitrary code execution at build time thanks to build scripts and procedural macros: a malicious dependency will be able to cause damage regardless of these vulnerabilities.

crates.io implemented server-side checks to reject these kinds of packages years ago, and there are no packages on crates.io exploiting these vulnerabilities. crates.io users still need to exercise care in choosing their dependencies though, as the same concerns about build scripts and procedural macros apply here.

Acknowledgements

We want to thank Ori Hollander from JFrog Security Research for responsibly disclosing this to us according to the Rust security policy.

We also want to thank Josh Triplett for developing the fixes, Weihang Lo for developing the tests, and Pietro Albini for writing this advisory. The disclosure was coordinated by Pietro Albini and Josh Stone.