Firefox Nightly – These Weeks in Firefox: Issue 100

Highlights

    • Firefox 92 was released today!
    • We’re 96% through M1 for Fluent migration! Great work from kpatenio and niklas!
      • [Screenshot]
        • Caption: A graph showing how Fluent strings have overtaken DTD strings over time as the dominant string mechanism in browser.xhtml. As of September 2nd, it shows that there are 732 Fluent strings and 32 DTD strings in browser.xhtml
      • Fluent is our new localization framework
    • We have improvements coming soon for our downloads panel! You can opt in by enabling browser.download.improvements_to_download_panel in about:config.
    • Nightly now has an about:unloads page to show some of the locally collected heuristics being used to decide which tabs to unload on memory pressure. You can also manually unload tabs from here.
    • As part of Fission-related changes, we’ve rearchitected some of the internals of the WebExtensions framework – see Bug 1708243
    • If you notice recent addons-related regressions in Nightly 94 and Beta 93 (e.g. Bug 1729395, affecting the multi-account-containers addon), please file a bug and needinfo us (rpl or zombie).

Friends of the Firefox team

For contributions from August 25th to September 7th 2021, inclusive.

Resolved bugs (excluding employees)

Fixed more than one bug

  • Ava Katushka
  • Itiel
  • Michael Kohler [:mkohler]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • :gregtatum landed a follow-up to Bug 1722087 in Firefox 93 to migrate users away from the old recommended themes that have been removed from the omni.jar – Bug 1723602.
WebExtension APIs
  • extension.getViews now also returns existing sidebar extension pages when called with a `windowId` filter – Bug 1612390 (closed by one of the changes landed as part of Bug 1708243)

Downloads Panel

Fluent

Form Autofill

  • Bug 1687684 – Fix credit card autofill when the site prefills fields.
  • Bug 1688209 – Prevent simple hidden fields from being eligible for autofill.

High-Contrast Mode (MSU Capstone project)

  • Molly and Micah have kicked off another semester working with MSU capstone students. They’ll be helping us make a number of improvements to high-contrast mode on Firefox Desktop. See this meta bug to follow along.
  • We’ll be doing a hack weekend on September 11 & 12 where students will get ramped up on their first bugs and the tools needed for Firefox development.

Lint, Docs and Workflow

Password Manager

  • Welcome Serg Galich, who will be working on credential management with Tim and Dimi.

Search and Navigation

  • Drew landed some early UI changes, part of Firefox Suggest, in Nightly. In particular, labels have been added to Address Bar groups. A goal of Firefox Suggest is to provide smarter and more useful results, and better grouping, while also improving our understanding of how the address bar results are perceived. More experiments are ongoing and planned for the near future.
  • Daisuke landed a performance improvement to the address bar tokenizer – Bug 1726837.

Mike Taylor – Testing Chrome version 100 for fun and profit (but mostly fun I guess)

Great news, readers: my self-imposed 6-month cooldown on writing amazing blog posts has expired.

My pal Ali just added a flag to Chromium that allows you to test sites while sending a User-Agent string that claims to be version 100 (it should be in version 96+, which is in the latest Canary if you download or update today):

[Screenshot: chrome://flags/#force-major-version-to-100]

I’ll be lazy and let Karl Dubost do the explaining of the why, in his post “Get Ready For Three Digits User Agent Strings”.
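
The short version: plenty of sites sniff the major version out of the UA string with patterns that quietly assume two digits. A hypothetical sketch of the failure mode (the regexes and UA string here are illustrative, not from any real site):

import re

UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/100.0.4650.4 Safari/537.36")

# Buggy: assumes a two-digit major version, so Chrome 100 parses as "10"
# and may fall below a site's "minimum supported version" check.
print(re.search(r"Chrome/(\d\d)", UA).group(1))  # "10"

# Fixed: match every digit of the major version.
print(re.search(r"Chrome/(\d+)", UA).group(1))   # "100"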

So turn it on and report all kinds of bugs, either at crbug.com/new or webcompat.com/issues/new.

The Mozilla Blog – Did you hear about Apple’s security vulnerability? Here’s how to find and remove spyware.

Spyware has been in the news recently with stories like the Apple security vulnerability that allowed devices to be infected without the owner knowing it, and a former editor of The New York Observer being charged with a felony for unlawfully spying on his spouse with spyware. Spyware is a sub-category of malware that is aimed at surveilling the behavior of the human target(s) using a given device where the spyware is running. This surveillance could include, but is not limited to, logging keystrokes, capturing which websites you visit, looking at your locally stored files and passwords, and capturing audio or video near the device.

How does spyware work?

Spyware, much like any other malware, doesn’t just appear on a device. It often needs to first be installed or initiated. Depending on what type of device, this could manifest in a variety of ways, but here are a few specific examples:

  • You could visit a website with your web browser and a pop-up prompts you to install a browser extension or addon.
  • You could visit a website and be asked to download and install some software you weren’t there to get.
  • You could visit a website that prompts you to access your camera or audio devices, even though the website doesn’t legitimately have that need.
  • You could leave your laptop unlocked and unattended in a public place, and someone could install spyware on your computer.
  • You could share a computer or your password with someone, and they secretly install the spyware on your computer.
  • You could be prompted to install a new and unknown app on your phone.
  • You could install pirated software on your computer that additionally contains spyware functionality.

With all the above examples, the bottom line is that there could be software running with a surveillance intent on your device. Once installed, it’s often difficult for a layperson to have 100% confidence that their device can be trusted again, but for many the hard part is first detecting that surveillance software is running on their device at all.

How to detect spyware on your computer and phone

As mentioned above, spyware, like any malware, can be elusive and hard to spot, especially for a layperson. However, there are some ways by which you might be able to detect spyware on your computer or phone that aren’t overly complicated to check for.

Cameras

On many types of video camera devices, you get a visual indication that the video camera is recording. This is often a hardware-controlled light of some kind that indicates the device is active. If you are not actively using your camera and these camera indicator lights are on, this could be a signal that you have software on your device that is actively recording you, and it could be some form of spyware.

Here’s an example of what camera indicator lights look like on some Apple devices, but active camera indicators come in all kinds of colors and formats, so be sure to understand how your device works. A good way to test is to turn on your camera and find out exactly where these indicator lights are on your devices.

Additionally, you could make use of a webcam cover. These are small mechanical devices that allow users to manually open and shut cameras only when in use. They are generally a very cheap and low-tech way to protect against snooping via cameras.

Applications

One pretty basic means to detect malicious spyware on systems is simply to review installed applications and keep only the applications you actively use.

On Apple devices, you can review your Applications folder and the App Store to see what applications are installed. If you notice something is installed that you don’t recognize, you can attempt to uninstall it. For Windows computers, you’ll want to check the Apps section in your Settings.

Web extensions

Many browsers, like Firefox or Chrome, have extensive web extension ecosystems that allow users to customize their browsing experience. However, it’s not uncommon for malware authors to utilize web extensions as a medium for surveilling a user’s browsing activity.

On Firefox, you can visit about:addons and view all your installed web extensions. On Chrome, you can visit chrome://extensions and view all your installed web extensions. You are basically looking for any web extensions that you didn’t actively install on your own. If you don’t recognize a given extension, you can attempt to uninstall it or disable it.


How do you remove spyware from your device?

If you recall an odd link, attachment, download or website you interacted with around the time you started noticing issues, that could be a great place to start when trying to clean your system. There are various free online tools you can leverage to help get a signal on what caused the issues you are experiencing; VirusTotal, UrlVoid and HybridAnalysis are just a few examples. These tools can help you determine when the compromise of your system occurred. How they do this varies, but the general idea is that you give them the file or URL you are suspicious of, and they return a report showing what various computer security companies know about it. A point of infection combined with your browser’s search history would give you a starting point of various accounts you will need to double-check for signs of fraudulent or malicious activity after you have cleaned your system. This isn’t strictly necessary in order to clean your system, but it helps jumpstart your recovery from a compromise.

There are a couple of paths that can be followed in order to make sure any spyware is entirely removed from your system and give you peace of mind:

Install antivirus (AV) software from a well-known company and run scans on your system

  • If you have a Windows device, Windows Defender comes pre-installed, and you should double-check that you have it turned on.
  • If you currently have AV software installed, make sure it’s turned on and that it’s up to date. Should it fail to identify and remove the spyware from your system, then it’s on to one of the following options.

Run a fresh install of your system’s operating system

  • While it might be tempting to back up files you have on your system, be careful and remember that your device was compromised, and the file causing the issue could end up back on your system and compromise it again.
  • The best way to do this would be to wipe the hard drive of your system entirely, and then reinstall from an external device.

How can you protect yourself from getting spyware?

There are a lot of ways to help keep your devices safe from spyware, and in the end it can all be boiled down to employing a little healthy skepticism and practicing good basic digital hygiene. These tips will help you stay on the right track:

Be wary. Don’t click on links or open/download attachments from unknown senders. This applies to messaging apps as well as emails.

Stay updated. Take the time to install updates/patches. This helps make sure your devices and apps are protected against known issues.

Check legitimacy. If you aren’t sure whether a website or email is giving legitimate information, take the time to use your favorite search engine to find the legitimate website. This helps avoid issues with typos potentially leading you to a bad website.

Use strong passwords. Ensure all your devices have solid passwords that are not shared. It’s easier to break into a house that isn’t locked.

Delete extras. Remove applications you don’t use anymore. This reduces the total attack surface you are exposing, and has the added bonus of saving space for things you care about.

Use security settings. Enable built in browser security features. By default, Firefox is on the lookout for malware and will alert you to Deceptive Content and Dangerous Software.


Marco Castelluccio – bugbug infrastructure: continuous integration, multi-stage deployments, training and production services

bugbug started as a project to automatically assign a type to bugs (defect vs. enhancement vs. task; back when we introduced the “type” field, we needed a way to fill it in for already existing bugs), and then evolved into a platform to build ML models on bug reports: we now have many models, some of which are being used on Bugzilla, e.g. to assign a type, to assign a component, to close bugs detected as spam, to detect “regression” bugs, and so on.

Then, it evolved into a platform to build ML models for generic software engineering purposes: we no longer only have models that operate on bug reports, but also on test data, patches/commits (e.g. to choose which tests to run for a given patch and to evaluate the regression riskiness associated with a patch), and so on.

Its infrastructure also evolved over time and slowly became more complex. This post attempts to clarify its overall infrastructure, composed of multiple pipelines and multi-stage deployments.

The nice aspect of the continuous integration, deployment and production services of bugbug is that almost all of them are running completely on Taskcluster, with a common language to define tasks, resources, and so on.

In bugbug’s case, I consider a release as a code artifact (source code at a given tag in our repo) plus the ML models that were trained with that code artifact and the data that was used to train them. This is because the results of a given model are influenced by all these aspects, not just the code as in other kinds of software. Thus, in the remainder of this post, I will refer to “code artifact” or “code release” when talking about a new version of the source code, and to “release” when talking about a set of artifacts that were built with a specific snapshot (version) of the source code and with a specific snapshot of the data.
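
To make that concrete, here is a minimal sketch of what pins a release under that definition (the field names are hypothetical, not bugbug’s actual code):

from dataclasses import dataclass

@dataclass(frozen=True)
class Release:
    code_tag: str       # source code at a given tag in the repo
    data_snapshot: str  # snapshot of the data used for training
    model_version: str  # the ML models trained from the two above

# Two releases built from the same code tag but different data or model
# snapshots are different releases, because the models behave differently.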

The overall infrastructure can be seen in this flowchart, where the nodes represent artifacts and the subgraphs represent the set of operations performed on them. The following sections of the blog post will describe the components of the flowchart in more detail.

[Figure: Flowchart of the bugbug infrastructure]

Continuous Integration and First Stage (Training Pipeline) Deployment

Every pull request and push to the repository triggers a pipeline of Taskcluster tasks to:

  • run tests for the library and its linked HTTP service;
  • run static analysis and linting;
  • build Python packages;
  • build the frontend;
  • build Docker images.

Code releases are represented by tags. A push of a tag triggers additional tasks that perform:

  • integration tests;
  • push of Docker images to DockerHub;
  • release of a new version of the Python package on PyPI;
  • update of the training pipeline definition.

After a code release, the training pipeline which performs ML training is updated, but the HTTP service, the frontend and all the production pipelines that depend on the trained ML models (the actual release) are still on the previous version of the code (since they can’t be updated until the new models are trained).

Continuous Training and Second Stage (ML Model Services) Deployment

The training pipeline runs on Taskcluster as a hook that is either triggered manually or on a cron.

The training pipeline consists of many tasks that:

  • retrieve data from multiple sources (version control system, bug tracking systems, Firefox CI, etc.);
  • generate intermediate artifacts that are used by later stages of the pipeline, by other pipelines, or by other services;
  • train ML models using the above (there are also training tasks that depend on other models being trained and run first to generate intermediate artifacts);
  • check training metrics to ensure there are no short-term or long-term regressions;
  • run integration tests with the trained models;
  • build Docker images with the trained models;
  • push Docker images with the trained models;
  • update the production pipelines definition.

After a run of the training pipeline, the HTTP service and all the production pipelines are updated to the latest version of the code (if they weren’t already) and to the latest version of the trained models.

Production pipelines

There are multiple production pipelines (here’s an example) that serve different objectives, all running on Taskcluster and triggered either on cron or by pulse messages from other services.

Frontend

The bugbug UI lives at https://changes.moz.tools/, and it is simply a static frontend built in one of the production pipelines defined in Taskcluster.

The production pipeline performs a build and uploads the artifact to S3 via Taskcluster, which is then exposed at the URL mentioned earlier.

HTTP Service

The HTTP service is the only piece of the infrastructure that is not running on Taskcluster, but currently on Heroku.

The Docker images for the service are built as part of the training pipeline in Taskcluster; the trained ML models are included in the Docker images themselves. This way, it is possible to roll back to an earlier version of the code and models, should a new one present a regression.

There is one web worker that answers requests from users, and multiple background workers that perform ML model evaluations. These must run in the background for performance reasons (the web worker must answer quickly). The ML evaluations themselves are quick, and so could be done directly in the web worker, but the input data preparation can be slow, as it requires interaction with external services such as Bugzilla or a remote Mercurial server.
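
As a rough illustration of that split (the queue, names and values here are hypothetical, not bugbug’s actual code), the web worker only enqueues work and answers immediately, while a background worker does the slow data preparation and the quick model evaluation:

import queue
import threading

jobs = queue.Queue()

def prepare_input(bug_id):
    # Stand-in for the slow part: fetching data from Bugzilla or a
    # remote Mercurial server before the model can run.
    return {"bug_id": bug_id, "features": [0.1, 0.9]}

def evaluate(data):
    # Stand-in for the quick part: the ML model evaluation itself.
    return {"bug_id": data["bug_id"], "is_defect": True}

def background_worker():
    while True:
        bug_id = jobs.get()
        print(evaluate(prepare_input(bug_id)))
        jobs.task_done()

threading.Thread(target=background_worker, daemon=True).start()

def handle_request(bug_id):
    # The web worker answers right away; the evaluation happens later.
    jobs.put(bug_id)
    return {"status": "queued", "bug_id": bug_id}

print(handle_request(12345))
jobs.join()  # for this demo only: wait for the background worker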

Paul Bone – Running the AWSY benchmark in the Firefox profiler

The “are we slim yet” (AWSY) benchmark measures memory usage. Recently, when I made a simple change to Firefox that I expected might save a bit of memory, it actually increased memory usage on the AWSY benchmark.

We have lots of tools to hunt down memory usage problems. But to see an almost-"log" of when garbage collection and cycle collection occur, the Firefox profiler is amazing.

I wanted to profile the AWSY benchmark to try and understand what was happening with GC scheduling. But it didn’t work out-of-the-box. This is one of those blog posts I’m writing down for the next time this happens, to me or anyone else. Although I am selfish: when I web-search for "AWSY and Firefox Profiler", I want this to be the number 1 result and help me (or someone else) out.

The normal instructions

First you need a build with profiling enabled. Put this in your mozconfig:

ac_add_options --enable-debug
ac_add_options --enable-debug-symbols
ac_add_options --enable-optimize
ac_add_options --enable-profiling

The instructions to get the profiler to run came from Ted Campbell. Thanks Ted.

Ted’s instructions disabled stack sampling; we didn’t care about that, since the data we need comes from profile markers. I can also run a reduced AWSY test, because 10 entities are enough to create the problem.

export MOZ_PROFILER_STARTUP=1
export MOZ_PROFILER_SHUTDOWN=awsy-profile.json
export MOZ_PROFILER_STARTUP_FEATURES="nostacksampling"
./mach awsy-test --tp6 --headless --iterations 1 --entities 10

But it crashes due to Bug 1710408.

So I can’t use nostacksampling, which would have been nice to save some memory/disk space. Never mind.

So I removed that option, but then I got profiles that were too short. The profiler records into a circular buffer, so if that buffer is too small it’ll discard the earlier information. In this case I want the earlier information, because I think something at the beginning is the problem. So I need to add this to get a bigger buffer. The default is 4 million entries (32MB).

export MOZ_PROFILER_STARTUP_ENTRIES=$((200*1024*1024))
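
(For a rough sense of scale, assuming 8 bytes per entry, which is what “4 million entries (32MB)” implies, that bigger buffer works out to roughly 1.6 GiB:)

entries = 200 * 1024 * 1024      # MOZ_PROFILER_STARTUP_ENTRIES above
print(entries * 8 / 1024 ** 3)   # ~1.56, i.e. roughly 1.6 GiB of buffer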

But now the profiles are too big, and Firefox shutdown times out (over 70 seconds), so the marionette test driver kills Firefox before it can write out the profile.

The solution

So we hack testing/marionette/client/marionette_driver/marionette.py to replace shutdown_timeout with 300 in some places. Setting DEFAULT_SHUTDOWN_TIMEOUT and also self.shutdown_timeout to 300 will do. There’s probably a way to pass a parameter, but I haven’t found it yet. So after making that change and running ./mach build, the invocation is now:

export MOZ_PROFILER_STARTUP=1
export MOZ_PROFILER_SHUTDOWN=awsy-profile.json
export MOZ_PROFILER_STARTUP_FEATURES=""
export MOZ_PROFILER_STARTUP_ENTRIES=$((200*1024*1024))
./mach awsy-test --tp6 --headless --iterations 1 --entities 10

And it writes an awsy-profile.json into the root directory of the project.

Hurray!
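
For the record, the marionette.py hack above amounts to something like this minimal sketch (the real class takes many more parameters; only the two values we bump are shown):

# testing/marionette/client/marionette_driver/marionette.py (sketch)
class Marionette:
    # Raised from the default (70 seconds, per the timeout above) to 300.
    DEFAULT_SHUTDOWN_TIMEOUT = 300

    def __init__(self, shutdown_timeout=None):
        # Also force the instance value, ignoring whatever was passed in.
        self.shutdown_timeout = 300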

Jan-Erik Rediger – This Week in Glean: Glean & GeckoView

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.


This is a followup post to Shipping Glean with GeckoView.

It landed!

It took us several more weeks to put everything into place, but we're finally shipping the Rust parts of the Glean Android SDK with GeckoView and consuming them in Android Components and Fenix. And it still all works, collects data and sends pings! This also results in a slightly smaller APK.

This unblocks further work now. Currently, Gecko simply stubs out all calls to Glean when compiled for Android, but we will enable recording Glean metrics within Gecko and expose them in pings sent from Fenix. We will also start work on moving other Rust components into mozilla-central in order for them to use the Rust API of Glean directly. Changing how we deliver the Rust code also made testing Glean changes across these different components a bit more challenging, so I want to invest some time to make that easier again.

The Mozilla Blog – The Great Resignation: New gig? Here are 7 tips to ensure success

If recent surveys and polls ring true, over 46% of the global workforce is considering leaving their employer this year. While COVID-19 caused initial turnover due to the related economic downturn, the current phenomenon, coined “The Great Resignation,” is attributed to the many job seekers choosing to leave their current employment voluntarily. Mass vaccinations and mask mandates have allowed offices to re-open just as job seekers are reassessing work-life balance, making bold moves to take control of where they choose to live and work.

The “New Normal”

Millions of workers have adjusted to remote-flexible work arrangements, finding success and a greater sense of work-life balance. The question is whether or not employers will permanently allow this benefit post-pandemic.

Jerry Lee, COO/Founder of the career development consultancy Wonsulting, sees changes coming to the workplace power dynamic.

“In the future of work, employers will have to be much more employee-first beyond monetary compensation,” he said. “There is a shift of negotiating power moving from the employers to the employees, which calls for company benefits and work-life balance to improve.” 

Abbie Duckham, Talent Operations Program Manager at Mozilla, believes the days of companies choosing people are long over. 

“From a hiring lens, it’s no longer about companies choosing people, it’s about people choosing companies,” Duckham said. “People are choosing to work at companies that, yes, value productivity and revenue – but more-so companies that value mental health and understand that every single person on their staff has a different home life or work-life balance.”

Drop the mic and cue the job switch

So, how can recent job switchers or job seekers better prepare for their next big move? The following tips and advice from career and talent sourcing experts can help anyone perform their best while adapting to our current pandemic reality.

Take a vacation *seriously*

“When starting a new role, many are keen to jump into work right away; however, it’s always important to take a mental break between your different roles before you start another onboarding process,” advises Jonathan Javier, CEO/Founder at Wonsulting. “One way to do this is to plan your vacations ahead of your switch: that trip to Hawaii you always wanted? Plan it right after you end your job. That time you wanted to spend with your significant other? Enjoy that time off.”

It also never hurts to negotiate a start date that prioritizes your mental preparedness and well-being.

Out with the old and in with that new-new

When Duckham started at Mozilla, she made it her mission to absorb every bit of the manifesto to better understand Mozilla’s culture. “From there I looked into what we actually do as a company. Setting up a Firefox account was pretty crucial since we are all about dog-fooding here (or as we call it, foxfooding), and then downloading Firefox Nightly, the latest beta-snapshot of the browser as our developers are actively working on it.”

Duckham also implores job-switchers to rebrand themselves. 

“You have a chance to take everything you wanted your last company to know about you and restart,” she said. “Take everything you had imposter syndrome about and flip the switch.”

Network early

“When you join a new company, it’s important to identify the subject matter experts for different functions of your company so you know who you can reach out to if you have any questions or need insights,” Javier said.

Javier also recommends networking with people who have also switched jobs. 

“You can search for and find people who switched from non-tech roles to an in-tech role by simply searching for ‘Past Company’ at a non-tech company and then putting ‘Current Company’ at a tech company on LinkedIn,” he said.

Brain-breaks 

Duckham went as far as giving her digital workspace a refreshing overhaul when she started at Mozilla. 

“I cleaned off my desktop, made folders for storing files, and essentially crafted a blank working space to start fresh from my previous company – effectively tabula rasa-ing my digital workspace did the same for my mental state as I prepared to absorb tons of new processes and practices.”

In that same vein, when you need a bit of a brain-break throughout the work day and that break leads you to social media, Duckham advises downloading Facebook Container, a browser extension that makes it harder for Facebook to track you on the web outside of Facebook.

“Speaking of brain-breaks, if socials aren’t your thing and you’d rather catch up on written curated content from around the web, Pocket is an excellent way to let your mind wander and breathe during the work day so you’re able return to work a little more refreshed,” Duckham added.

Making remote friends and drawing boundary lines

56% of Mozilla employees signed in to work from remote locations all over the world, even before the pandemic. Working asynchronously across so many time zones can be unusual for new teammates. Duckham’s biggest tip for new Mozillians? 

“Be open and a little vulnerable. Do you need to take your kid to school every day, does your dog require a mid-day walk? Chances are your schedule is just as unique as the person in the Zoom window next to you. Be open about the personal time you need to take throughout the day and then build your work schedule around it.” 

But what about building camaraderie and remote friendships?

“In a traditional work environment, you might run into your colleagues in the break room and have a quick chat. As roles continue to become more remote or hybrid-first, it is important to create opportunities for you to mingle with your colleagues,” Jerry Lee of Wonsulting said. “These small interactions are what builds long-lasting friendships, which in turn allows you to feel more comfortable and productive at work.”

How to leverage pay, flexibility and other benefits even if you aren’t job searching

“The best leverage you can find in this job market – is clearly defining what is important for you and making sure you have that option in your role,” Lee said. 

He’s not wrong. Make sure to consider your current growth opportunities, autonomy, location, work-life flexibility and compensation, of course. For example, if you are looking for a flexible-remote arrangement, Lee suggests clearly articulating what it is you want to your manager using the following talk-track as a guide:

Hey Manager!

I’m looking for ways to better incorporate my work into my personal life, and I’ve realized one important factor for me is location flexibility. I’m looking to move around a bit in the next few years but would love to continue the work I have here.

What can we do to make this happen?

Once you make your request, you’ll need to work with your manager to ensure your productivity and impact improves or at least remains the same.

Finally, it’s always helpful to remind yourself that every ‘big’ career move is the result of several smaller moves. If you’re looking to make a switch or simply reassessing your current work-life balance, Javier recommends practicing vision boarding. “I do this by drawing my current state and what I want my future state to look like,” said Javier. “Even if your drawings are subpar, you’ll be able to visualize what you want to accomplish in the future and make it into reality.”

As the Great Resignation continues, it is important to keep in mind that getting a new job is just the start of the journey. There are important steps that you can take, and Firefox and Pocket can help, to make sure that you feel ready for your next career adventure.


About our experts

Jonathan Javier is the CEO/Founder of Wonsulting, whose mission is to “turn underdogs into winners”. He’s also worked in Operations at Snap, Google, and Cisco, coming from a non-target school/non-traditional background. He works on many initiatives, providing advice and words of wisdom on LinkedIn and through speaking engagements. In total, he has led 210+ workshops in 9 different countries, including the MENA ICT Forum in Jordan, Resume/Personal Branding at Cisco, LinkedIn Strategy & Operations Offsite, Great Place To Work, Talks at Google, TEDx, and more. He’s been featured on Forbes, Fox News, Business Insider, The Times, LinkedIn News, Yahoo! News, Jobscan, and Brainz Magazine as a top job search expert, and has amassed 1M+ followers on LinkedIn, Instagram, and TikTok, as well as 30+ million impressions monthly on his content.

Jerry Lee is the COO/Founder of Wonsulting, an ex-Senior Strategy & Operations Manager at Google, and he used to lead Product Strategy at Lucid. He is from Torrance, California and graduated summa cum laude from Babson College. After graduating, Jerry was hired as the youngest analyst in his organization and was promoted multiple times in 2 years. After he left Google, he was the youngest person to lead a strategy team at Lucid. Jerry partners with universities & organizations (220+ to date) to help others land their dream careers. He has 250K+ followers across LinkedIn, TikTok & Instagram and has reached 40M+ professionals. In addition, his work is featured on Forbes, Newsweek, Business Insider, Yahoo! News, and LinkedIn, and he was selected as a 2020 LinkedIn Top Voice for Tech.

Abbie Duckham is the current Talent Operations Program Manager at Mozilla. She has been with the company since 2016, working out of the San Francisco Office, and now her home office in Oakland.


Niko Matsakis – Rustacean Principles, continued

RustConf is always a good time for reflecting on the project. For me, the last week has been particularly “reflective”. Since announcing the Rustacean Principles, I’ve been having a number of conversations with members of the community about how they can be improved. I wanted to write a post summarizing some of the feedback I’ve gotten.

The principles are a work-in-progress

Sparking conversation about the principles was exactly what I was hoping for when I posted the previous blog post. The principles have mostly been the product of Josh and me iterating, and hence reflect our experiences. While the two of us have been involved in quite a few parts of the project, for the document to truly serve its purpose, it needs input from the community as a whole.

Unfortunately, for many people, the way I presented the principles made it seem like I was trying to unveil a fait accompli, rather than seeking input on a work-in-progress. I hope this post makes the intention more clear!

The principles as a continuation of Rust’s traditions

Rust has a long tradition of articulating its values. This is why we have a Code of Conduct. This is why we wrote blog posts like Fearless Concurrency, Stability as a Deliverable and Rust Once, Run Anywhere. Looking past the “engineering side” of Rust, aturon’s classic blog posts on listening and trust (part 1, part 2, part 3) did a great job of talking about what it is like to be on a Rust team. And who could forget the whole “fireflowers” debate?1

My goal with the Rustacean Principles is to help coalesce the existing wisdom found in those classic Rust blog posts into a more concise form. To that end, I took initial inspiration from how AWS uses tenets, although by this point the principles have evolved into a somewhat different form. I like the way tenets use short, crisp statements that identify important concepts, and I like the way assigning a priority ordering helps establish which should have priority. (That said, one of Rust’s oldest values is synthesis: we try to find ways to resolve constraints that are in tension by having our cake and eating it too.)

Given all of this backdrop, I was pretty enthused by a suggestion that I heard from Jacob Finkelman. He suggested adapting the principles to incorporate more of the “classic Rust catchphrases”, such as the “no new rationale” rule described in the first blog post from aturon’s series. A similar idea is to incorporate the lessons from RFCs, both successful and unsuccessful (this is what I was going for in the case studies section, but that clearly needs to be expanded).

The overall goal: Empowerment

My original intention was to structure the principles as a cascading series of ideas:

  • Rust’s top-level goal: Empowerment
    • Principles: Dissecting empowerment into its constituent pieces – reliable, performant, etc – and analyzing the importance of those pieces relative to one another.
      • Mechanisms: Specific rules that we use, like type safety, that engender the principles (reliability, performance, etc.). These mechanisms often work in favor of one principle, but can work against others.

wycats suggested that the site could do a better job of clarifying that empowerment is the top-level, overriding goal, and I agree. I’m going to try and tweak the site to make it clearer.

A goal, not a minimum bar

The principles in “How to Rustacean” were meant to be aspirational: a target to be reaching for. We’re all human: nobody does everything right all the time. But, as Matklad describes, the principles could be understood as setting up a kind of minimum bar – to be a team member, one has to show up, follow through, trust and delegate, all while bringing joy? This could be really stressful for people.

The goal for the “How to Rustacean” section is to be a way to lift people up by giving them clear guidance for how to succeed; it helps us to answer people when they ask “what should I do to get onto the lang/compiler/whatever team”. The internals thread had a number of good ideas for how to help it serve this intended purpose without stressing people out, such as cuviper’s suggestion to use fictional characters like Ferris in examples, passcod’s suggestion of discussing inclusion, or Matklad’s proposal to add something to the effect of “You don’t have to be perfect” to the list. Iteration needed!

Scope of the principles

Some people have wondered why the principles are framed in a rather general way, one that applies to all of Rust, instead of being specific to the lang team. It’s a fair question! In fact, they didn’t start this way. They started their life as a rather narrow set of “design tenets for async” that appeared in the async vision doc. But as those evolved, I found that they were starting to sound like design goals for Rust as a whole, not specifically for async.

Trying to describe Rust as a “coherent whole” makes a lot of sense to me. After all, the experience of using Rust is shaped by all of its facets: the language, the libraries, the tooling, the community, even its internal infrastructure (which contributes to that feeling of reliability by ensuring that the releases are available and high quality). Every part has its own role to play, but they are all working towards the same goal of empowering Rust’s users.2

There is an interesting question about the long-term trajectory for this work. In my mind, the principles remain something of an experiment. Presuming that they prove to be useful, I think that they would make a nice RFC.

What about “easy”?

One final bit of feedback I heard from Carl Lerche is surprise that the principles don’t include the word “easy”. This is not an accident. I felt that “easy to use” was too subjective to be actionable, and that the goals of productive and supportive were more precise. However, I do think that for people to feel empowered, it’s important for them not to feel mentally overloaded, and Rust can definitely have the problem of carrying a high mental load sometimes.

I’m not sure the best way to tweak the “Rust empowers by being…” section to reflect this, but the answer may lie with the Cognitive Dimensions of Notation. I was introduced to these by Felienne Hermans’s excellent book The Programmer’s Brain; I quite enjoyed this journal article as well.

The idea of the CDN is to try and elaborate on the ways that tools can be easier or harder to use for a particular task. For example, Rust would likely do well on the “error prone” dimension, in that when you make changes, the compiler generally helps ensure they are correct. But Rust does tend to have a high “viscosity”, because making local changes tends to be difficult: adding a lifetime, for example, can require updating data structures all over the code in an annoying cascade.

It’s important though to keep in mind that the CDN will vary from task to task. There are many kinds of changes one can make in Rust with very low viscosity, such as adding a new dependency. On the other hand, there are also cases where Rust can be error prone, such as mixing async runtimes.

Conclusion

In retrospect, I wish I had introduced the concept of the Rustacean Principles in a different way. But the subsequent conversations have been really great, and I’m pretty excited by all the ideas on how to improve them. I want to encourage folks again to come over to the internals thread with their thoughts and suggestions.

  1. Love that web page, brson

  2. One interesting question: I do think that some tools may vary the prioritization of different aspects of Rust. For example, a tool for formal verification is obviously aimed at users that particularly value reliability, but other tools may have different audiences. I’m not sure yet the best way to capture that, it may well be that each tool can have its own take on the way that it particularly empowers. 

The Mozilla Blog – Mozilla VPN adds advanced privacy features: Custom DNS servers and Multi-hop

Your online privacy remains our top priority, and we know that one of the first things to secure your privacy when you go online is to get on a Virtual Private Network (VPN), an encrypted connection that serves as a tunnel between your computer and the VPN server. Today, we’re launching the latest release of our Mozilla VPN, our fast and easy-to-use VPN service, with two new advanced privacy features that offer additional layers of privacy. The first is your choice of Domain Name System (DNS) servers: the default we’ve provided, one of our suggested ad blocking, tracker blocking, or ad-plus-tracker blocking DNS servers, or an alternative one of your own. The second is the Multi-hop feature, which allows you to add two different servers to give you twice the amount of encryption. Today’s Mozilla VPN release is available on Windows, Mac, Linux and Android platforms (it will be available on iOS later this week).

Here are today’s Mozilla VPN Features:

Uplevel your privacy with Mozilla VPN’s Custom DNS server feature

Traditionally, when you go online, your traffic is routed through your Internet Service Provider’s (ISP) DNS servers, which may be keeping records of your online activities. DNS, which stands for Domain Name System, is like a phone book for domains, the websites that you visit. One of the advantages of using a VPN is shielding your online activity from your ISP by using your trusted VPN service provider’s DNS servers. There are a variety of DNS servers, from ones that offer additional features like tracker blocking, ad blocking, or a combination of both, to local DNS servers that have those benefits along with speed.

Now, with today’s Custom DNS server feature, we put you in control of choosing the DNS server that fits your needs. You can find this feature in your Network Settings under Advanced DNS Settings. From there, you can choose the default DNS server, enter your local DNS server, or choose from the recommended list of DNS servers available to you.

[Figure: Choose from the recommended list of DNS servers available to you]

Double up your VPN service with Mozilla’s VPN Multi-hop feature

We’re introducing our Multi-hop feature, which is also known as doubling up your VPN, because instead of using one VPN server you can use two. Here’s how it works: first, your online activity is routed through one VPN server. Then, by selecting the Multi-hop feature, your online activity will get routed a second time through an extra VPN server, known as your exit server. Essentially, you will have two VPN servers: the entry VPN server and the exit VPN server. This powerful new privacy feature appeals to those who think twice about their privacy, like political activists, journalists writing on sensitive topics, or anyone using public wi-fi who wants that added peace of mind by doubling up their VPN servers.

To turn on this new feature, go to your Location, then choose Multi-hop. From there, you can choose your entry server location and your exit server location. The exit server location will be your main VPN server. We will also list your two recent Multi-hop connections so you can reuse them in the future. 

[Figure: Choose your entry server location and your exit server location]
[Figure: Your two recent Multi-hop connections will also be listed and available to reuse in the future]

How we innovate and build features for you with Mozilla VPN

Developed by Mozilla, a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet, we are committed to innovating and bringing new features to the Mozilla VPN. Mozilla periodically works with third-party organizations to complement our internal security programs and help improve the overall security of our products. Mozilla recently published an independent security audit of its Mozilla VPN from Cure53, an unbiased cybersecurity firm based in Berlin with more than 15 years of experience in software testing and code auditing. Here is a link to the blog post and the security audit for more details.

We know that it’s more important than ever for you to be safe, and for you to know that what you do online is your own business. By subscribing to Mozilla VPN, users support both Mozilla’s product development and our mission to build a better web for all. Check out the Mozilla VPN and subscribe today from our website.

For more on Mozilla VPN:

Mozilla VPN Completes Independent Security Audit by Cure53

Celebrating Mozilla VPN: How we’re keeping your data safe for you

Latest Mozilla VPN features keep your data safe

Mozilla Puts Its Trusted Stamp on VPN


The Mozilla Blog – Get where you’re going faster, with Firefox Suggest

Today, people have to work too hard to find what they want online, sifting through and steering clear of content, clutter and click-bait not worthy of their time. Over time, navigation on the internet has become increasingly centralized and optimized for clicks and scrolling, not for getting people to where they want to go or what they are looking for quickly. 

We’d like to help change this, and we think Firefox is a good place to start.

Today we’re announcing our first step towards doing that with a new feature called Firefox Suggest.

Firefox Suggest is a new discovery feature that is built directly into the browser. Firefox Suggest acts as a trustworthy guide to the better web, surfacing relevant information and sites to help people accomplish their goals. Check it out here:

Relevant, reliable answers: 

Firefox already helps people search their browsing history and tabs and use their preferred search engine directly from Firefox’s Awesome Bar. 

Firefox Suggest will enhance this by including other sources of information such as Wikipedia, Pocket articles, reviews and credible content from sponsored, vetted partners and trusted organizations. 

For instance, if someone types “Costa Rica” into the Awesome Bar, they might see a result from Wikipedia:

[Figure: Firefox users can find suggestions from Wikipedia]

Firefox Suggest also contains sponsored suggestions from vetted partners. For instance, if someone types in “vans”, we might show a sponsored result for Vans shoes on eBay:

[Figure: Firefox users can find sponsored suggestions from vetted partners]

We are also developing contextual suggestions. These aim to enhance and speed up your searching experience. To deliver contextual suggestions, Firefox will need to send Mozilla new data: specifically, what you type into the search bar, city-level location data to know what’s nearby and relevant, whether you click on a suggestion, and which suggestion you click on.

In your control:

As always, we believe people should be in control of their web experience, so Firefox Suggest will be a customizable feature. 

We’ll begin offering contextual suggestions to a percentage of people in the U.S. as an opt-in experience. 

[Figure: Opt-in prompt for smarter, contextual suggestions]

Find out more about the ways you can customize this experience here.

Unmatched privacy: 

We believe online ads can work without advertisers needing to know everything about you. So when people choose to enable smarter suggestions, we will collect only the data that we need to operate, update and improve the functionality of Firefox Suggest and the overall user experience based on our Lean Data and Data Privacy Principles. We will also continue to be transparent about our data and data collection practices as we develop this new feature.

A better web. 

The internet has so much to offer, and we want to help people get the best out of it faster and easier than ever before.

Firefox is the choice for people who want to experience the web as a purpose-driven and independent company envisions it. We create software for people that provides real privacy, transparency and valuable help with navigating today’s internet. This is another step in our journey to build a better internet.


Support.Mozilla.Org – What’s up with SUMO – September 2021

Hey SUMO folks,

September is going to be the last month of Q3, so let’s see what we’ve been up to for the past quarter.

Welcome on board!

  1. Welcome to the SUMO family, Bithiah, mokich1one, handisutrian, and Pomarańczarz! Bithiah has been pretty active in contributing to the support forum for a while now, while mokich1one, handisutrian, and Pomarańczarz are emerging localization contributors for Japanese, Bahasa Indonesia, and Polish respectively.

Community news

  • Read our post about the advanced customization in the forum and KB here and let us know if you still have any questions!
  • Please join me in welcoming Abby to the Customer Experience Team. Abby is our new Content Manager, who will be in charge of our Knowledge Base as well as our localization efforts. You can learn more about Abby soon.
  • Learn more about Firefox 92 here.
  • Can you imagine what’s gonna happen when we reach version 100? Learn more about the experiment we’re running in Firefox Nightly here and see how you can help!
  • Are you a fan of Firefox Focus? Join our upcoming foxfooding campaign for Focus. You can learn more about the campaign here.
  • No Kitsune update for this month. Check out SUMO Engineering Board instead to see what the team is currently doing.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in August!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

Month    | Page views | Vs previous month
Aug 2021 | 8,462,165  | +2.47%

* KB pageviews number is a total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Thomas8
  3. Michele Rodaro
  4. K_alex
  5. Pierre Mozinet

KB Localization

Top 10 locale based on total page views

Locale | Aug 2021 pageviews (*) | Localization progress (per Sep 7) (**)
de     | 8.57% | 99%
zh-CN  | 6.69% | 100%
pt-BR  | 6.62% | 63%
es     | 5.95% | 44%
fr     | 5.43% | 91%
ja     | 3.93% | 57%
ru     | 3.70% | 100%
pl     | 1.98% | 100%
it     | 1.81% | 86%
zh-TW  | 1.45% | 6%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Michele Rodaro
  3. Jim Spentzos
  4. Soucet
  5. Artist

Forum Support

Forum stats

Month    | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Aug 2021 | 3,523           | 75.59%                    | 17.40%                    | 66.67%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Jscher2000
  4. Seburo
  5. Sfhowes

Social Support

Channel         | Total conv (Aug 2021) | Conv interacted (Aug 2021)
@firefox        | 2,967                 | 341
@FirefoxSupport | 386                   | 270

Top contributors in Aug 2021

  1. Christophe Villeneuve
  2. Andrew Truong
  3. Pravin

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

Other products / Experiments

  • Mozilla VPN V2.5: expected to release 09/15
  • Fx Search experiment:
    • From Sept 6, 2021, 1% of the Desktop user base will be experimenting with Bing as the default search engine. The study will last into early 2022, likely wrapping up by the end of January.
    • Common response:
      • Forum: Search study – September 2021
      • Conversocial clipboard: “Mozilla – Search study sept 2021”
      • Twitter: Hi, we are currently running a study that may cause some users to notice that their default search engine has changed. To revert back to your search engine of choice, please follow the steps in the following article → https://mzl.la/3l5UCLr
  • Firefox Suggest + Data policy update (Sept 16 + Oct 5)
    • On September 16th, the Mozilla Privacy Policy will be updated to supplement the roll-out of FX Suggest online mode. Currently, FX Suggest utilizes offline mode, which limits the data collected. Online mode will collect additional anonymized information after users opt in to this feature. Users can opt out of this experience by following the instructions here.

Shout-outs!

  • Kudos to Julie for her work on the Knowledge Base lately. She’s definitely adding a new color to our KB world with her video and article improvements.
  • Thanks to those who contributed to the FX Desktop Topics Discussion
    • If you have input or questions, please post them to the thread above.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

Niko Matsakis – CTCFT 2021-09-20 Agenda

The next “Cross Team Collaboration Fun Times” (CTCFT) meeting will take place next Monday, on 2021-09-20 (in your time zone)! This post covers the agenda. You’ll find the full details (along with a calendar event, zoom details, etc) on the CTCFT website.

Agenda

  • Announcements
  • Interest group panel discussion

We’re going to try something a bit different this time! The agenda is going to focus on Rust interest groups and domain working groups, those brave explorers who are trying to put Rust to use on all kinds of interesting domains. Rather than having fixed presentations, we’re going to have a panel discussion with representatives from a number of Rust interest groups and domain groups, led by AngelOnFira. The idea is to open a channel for more active communication and feedback between the interest groups and the Rust teams (in both directions).

Afterwards: Social hour

After the CTCFT this week, we are going to try an experimental social hour. The hour will be coordinated in the #ctcft stream of the rust-lang Zulip. The idea is to create breakout rooms where people can gather to talk, hack together, or just chill.

Data@Mozilla – Data and Firefox Suggest

Introduction

Firefox Suggest is a new feature that displays direct links to content on the web based on what users type into the Firefox address bar. Some of the content that appears in these suggestions is provided by partners, and some of the content is sponsored.

In building Firefox Suggest, we have followed our long-standing Lean Data Practices and Data Privacy Principles. Practically, this means that we take care to limit what we collect, and to limit what we pass on to our partners. The behavior of the feature is straightforward: suggestions are shown as you type, and are directly relevant to what you type.

We take the security of the datasets needed to provide this feature very seriously. We pursue multi-layered security controls and practices, and strive to make as much of our work as possible publicly verifiable.

In this post, we wanted to give more detail about what data is needed to provide this feature, and about how we handle it.

Changes with Firefox Suggest

The address bar experience in Firefox has long been a blend of results provided by partners (such as the user’s default search provider) and information local to the client (such as recently visited pages). For the first time, Firefox Suggest augments these data sources with search completions from Mozilla.

Firefox Suggest data flow diagram

In its current form, Firefox Suggest compares searches against a list of allowed terms that is local to the client. When the search text matches a term on the allowed list, a completion suggestion may be shown alongside the local and default search engine suggestions.
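
To make that matching step concrete, here is a minimal sketch, in Rust, of what a client-local allowed-list lookup could look like. The structure and names here are assumptions for illustration, not Firefox’s actual implementation:

use std::collections::HashMap;

// Hypothetical local allowed-term list mapping a search term to a suggestion URL.
struct SuggestList {
    terms: HashMap<String, String>,
}

impl SuggestList {
    // Return a completion only when the typed text matches an allowed term.
    // The match itself is entirely local; no network request is involved.
    fn lookup(&self, typed: &str) -> Option<&str> {
        let key = typed.trim().to_lowercase();
        self.terms.get(&key).map(String::as_str)
    }
}

fn main() {
    let mut terms = HashMap::new();
    terms.insert(
        "running shoes".to_string(),
        "https://example.com/shoes".to_string(),
    );
    let list = SuggestList { terms };

    assert_eq!(list.lookup("Running Shoes "), Some("https://example.com/shoes"));
    assert_eq!(list.lookup("unrelated query"), None);
}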

Data Collected by Mozilla

Mozilla collects the following information to power Firefox Suggest when users have opted in to contextual suggestions.

  • Search queries and suggest impressions: Firefox Suggest sends Mozilla search terms and information about engagement with Firefox Suggest, some of which may be shared with partners to provide and improve the suggested content.
  • Clicks on suggestions: When a user clicks on a suggestion, Mozilla receives notice that suggested links were clicked.
  • Location: Mozilla collects city-level location data along with searches, in order to properly serve location-sensitive queries.

How Data is Handled and Shared

Mozilla approaches handling this data conservatively. We take care to remove data from our systems as soon as it’s no longer needed. When passing data on to our partners, we are careful to only provide the partner with the minimum information required to serve the feature.

A specific example of this principle in action is the search’s location. The location of a search is derived from the Firefox client’s IP address. However, the IP address can identify a person far more precisely than is necessary for our purposes. We therefore convert the IP address to a more general location immediately after we receive it, and we remove the IP address from all datasets and reports downstream. Access to machines and (temporary, short-lived) datasets that might include the IP address is highly restricted, and limited only to a small number of administrators. We don’t enable or allow analysis on data that includes IP addresses.
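
As a sketch of this “generalize immediately, then discard” pattern (the geolocation lookup below is a hypothetical stand-in, not Mozilla’s actual pipeline):

use std::net::IpAddr;

// City-level location: the only geographic detail kept downstream.
struct CityLocation {
    city: String,
    country: String,
}

// Hypothetical stand-in for a geolocation database lookup.
fn city_for_ip(_ip: IpAddr) -> CityLocation {
    CityLocation { city: "Portland".into(), country: "US".into() }
}

// The IP address is consumed here: it is converted to a coarse location
// right away and never stored or logged alongside the query.
fn ingest(ip: IpAddr, query: &str) -> (String, CityLocation) {
    let location = city_for_ip(ip);
    (query.to_string(), location)
}

fn main() {
    let (query, loc) = ingest("203.0.113.7".parse().unwrap(), "coffee near me");
    println!("{} -> {}, {}", query, loc.city, loc.country);
}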

We’re excited to be bringing Firefox Suggest to you. See the product announcement to learn more!

This Week In RustThis Week in Rust 408

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is qcell, with a type that works like a compile-time RefCell.

Thanks to Soni L. for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

278 pull requests were merged in the last week

Rust Compiler Performance Triage

Fairly busy week, with some large improvements on several benchmarks. Several larger rollups landed, in part due to recovery from a temporary CI outage, and continued CI trouble since then. This is likely the cause for the somewhat unusual presence of rollups in our results.

Triage done by @simulacrum. Revision range: 69c4aa290..9f85cd6

2 Regressions, 2 Improvements, 4 Mixed; 2 of them in rollups

31 comparisons made in total

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs

No RFCs are currently in the final comment period.

Tracking Issues & PRs
New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Gouach

Indeed

Enso

SmartThings

DEMV Systems

Kollider

Polar Sync

SecureDNA

Kraken

Parity Technologies

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Edition!

Niko and Daphne Matsakis on YouTube

Thanks to mark-i-m for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Talospace ProjectFirefox 92 on POWER

Firefox 92 is out. Alongside some solid DOM and CSS improvements, the most interesting bug fix I noticed was a patch for open alerts slowing down other tabs in the same process. In the absence of a JIT we rely heavily on Firefox's multiprocessor capabilities to make the most of our multicore beasts, and this apparently benefits (among others, but in particular) the Google sites we unfortunately have to use in these less-free times. I should note for the record that on this dual-8 Talos II (64 hardware threads) I have dom.ipc.processCount modestly increased to 12 from the default of 8 to take a little more advantage of the system when idle, which also takes down fewer tabs in the rare cases when a content process bombs out. The delay in posting this was waiting for the firefox-appmenu patches, but I decided to just build it now and add those in later. The .mozconfigs and LTO-PGO patches are unchanged from Firefox 90/91.

Meanwhile, in OpenPOWER JIT progress, I'm about halfway through getting the Wasm tests to pass, though I'm currently hung up on a memory corruption bug while testing Wasm garbage collection. It's our bug; it doesn't happen with the C++ interpreter, but unfortunately like most GC bugs it requires hitting it "just right" to find the faulty code. When it all passes, we'll pull everything up to 91ESR for the MVP, and you can try building it. If you want this to happen faster, please pitch in and help.

The Mozilla BlogMatrix 4, Blue’s Clues, #StarTrekDay and More — Everything That’s Old is New Again in This Week’s Top Shelf

At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close.

Each week in Top Shelf, we will be sharing the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution.

Here’s what made it to the Top Shelf for the week of September 6, 2021, in no particular order.

{Nostalgia has entered the chat}

This week saw people online reacting to pop-culture references that are making a comeback. As one person put it: “It’s the 90s again, baby!” And while 1990 was NOT, in fact, 10 years ago, it looks like our childhood is back in full force!

And now, for the Top Shelf Best of:

Best “Response to Big Tech” Tweet

Best “Keeping it Real About Journalism” Tweet

Best “Right in the Feels” Tweet

The post Matrix 4, Blue’s Clues, #StarTrekDay and More — Everything That’s Old is New Again in This Week’s Top Shelf appeared first on The Mozilla Blog.

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 92-93)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 92 and 93 Nightly release cycles.

👷🏽‍♀️ JS features

⚡ WebAssembly

  • We’ve done some work towards Memory64 support.
  • The final JS API for Wasm exceptions has been implemented.
  • We added support for WebAssembly.Function from the js-types proposal.
  • We changed unaligned floating point accesses on 32-bit ARM to not use signal handlers.
  • Wasm code is now much faster and uses less memory when the debugger is used.
  • memory.fill and memory.copy are now optimized with SIMD instructions.
  • We now print better error messages to the console for asm.js.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve web-browsing performance, simplify a lot of code and improve bytecode caching.

  • We’ve rewritten our implementation of self-hosted code (builtins implemented in JS) to be based on the stencil format instead of cloning from a special zone. This has resulted in significant memory and performance improvements.
  • We’re making changes to function delazification to later allow doing this off-thread.
  • We hardened XDR decoding more against memory/disk corruption.

🌍 Unified Intl implementation

Work is underway to unify the Intl (Internationalization) code in SpiderMonkey and the rest of Gecko as a shared mozilla::intl component. This results in less code duplication and will make it easier to migrate from the ICU library to ICU4X in the future.

Over the past weeks, Intl.Collator and Intl.RelativeTimeFormat have been ported to the new mozilla::intl code.

🗂 ReShape

ReShape is a project to optimize and simplify our object layout and property representation after removing TI. This will help us fix some long-standing issues related to performance, memory usage and code complexity.

  • We converted uses of object private slots to reserved slots and then removed private slots completely. This allowed us to optimize reserved slots.
  • We changed function objects to use reserved slots instead of a custom C++ layout.
  • We saved some memory by storing only the shape instead of an object for object literals.
  • We changed the shape teleporting optimization to avoid a performance cliff and to be simpler.
  • We changed global objects to use a C++ class instead of hundreds of reserved slots.
  • We optimized object allocation, especially for plain objects, array objects and functions because these are so common.

🧹 Garbage Collection

  • We now avoid marking and sweeping arenas for permanent atoms.
  • We simplified the GC threshold code. This resulted in a number of performance improvement alerts.
  • We simplified the GC allocation code for strings.
  • We made some changes to the way slice budgets are calculated to reduce jank caused by long GC pauses.
  • We fixed an issue with JIT code discarding heuristics that caused frequent OOMs in automation on 32-bit platforms.

📚 Miscellaneous

  • We tidied up our meta bugs in Bugzilla. We now have a tree of meta bugs.
  • We optimized Map and Set operations in the JITs.
  • We fixed a number of correctness issues with super, class return values, private methods and date parsing.
  • We now auto-generate more LIR boilerplate code.
  • A new contributor, sanketh, added an option to use fdlibm for more Math functions to get consistent results across platforms and to avoid fingerprinting.
  • We removed a lot of unnecessary includes.

Firefox NightlyThese Weeks in Firefox: Issue 99

Highlights

Friends of the Firefox team

Introductions/Shout-Outs

  • Huge welcome to new hires Katherine Patenio and Niklas Baumgardner
    • Katherine previously worked on Review Board through a student project
    • Niklas previously worked on Firefox’s Picture-in-Picture feature through a student project
    • Both will be working on driving the DTD -> Fluent migration to completion as their first project

Resolved bugs (excluding employees)

Fixed more than one bug

  • Antonin LOUBIERE
  • Ava Katushka
  • Kajal Sah

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Bug 1717760 (initKeyEvent on KeyboardEvent should return undefined) has regressed auto-filling of input fields for some extensions in Firefox 93; this has impacted password manager extensions in recent Nightly builds:
    • Bitwarden has fixed the issue on the extension side – Bug 1724925
    • 1password classic is also impacted – Bug 1725232
    • We may be putting off unshipping KeyboardEvent.initKeyEvent for extension content scripts as a short-term fix on the Firefox side – Bug 1727024. Thanks to Masayuki and :smaug for looking into that.
  • Fixed a couple of issues with custom prefs set for xpcshell tests (Bug 1723198, Bug 1723536). These weren’t specific to the extensions tests, but we identified them while investigating a backout due to an unexpected Android-only failure (Bug 1722966 comment 12).
WebExtension APIs
  • Fixed an issue with restoring private tabs discarded earlier during their creation (not specifically an addon issue, but it could be mainly triggered by addons using the browser.tabs.discard API) – Bug 1727024

Fission

  • Nightly users now are at 63% with Fission enabled
  • Beta users now are at 33% with Fission enabled

Form Autofill

Lint, Docs and Workflow

Nimbus / Experiments

Password Manager

Performance

Proton/MR1

  • All of the blockers for putting most of the strings (minus menubar strings) in sentence case for en-US appear to be fixed! We’re checking around to see if there are any leftovers that we somehow missed. If you find any, please mark them as blocking this bug.

Search and Navigation

  • Daisuke fixed a regression where jar: URLs were not visited from the Address Bar – Bug 1726305
  • Daisuke fixed a visual bug in the separate Search Bar where search engine buttons were sometimes cut – Bug 1722507
  • Gijs made the Address Bar “not secure” chiclet visible with most locales and added a tooltip – Bug 1724212
  • Harry fixed a regression where some results were not shown if history results were disabled in Address Bar settings – Bug 1725652
  • Mark switched the Search Service tests to use IOUtils – Bug 1726565

Screenshots

The Mozilla BlogDid internet friends fill the gaps left by social distance?

March 2020 brought to the world a scenario we only imagined possible in dystopian novels. Once-bustling cities and towns were desolate. In contrast, the highways and byways of the internet were completely congested with people grasping for human connection, and internet friends became more important than ever.

Since then, there have been countless discussions about how people have fared with keeping in touch with others during the COVID-19 pandemic — like how families have endured while being separated by continents without the option to travel, and how once-solid friendships have waxed and waned without brunches and cocktail hours.

However, the internet has served more like a proverbial town square than ever before, with many people using online spaces to create and cultivate internet friendships over the last year and a half. As the country hesitantly reopens, the looming question is: what will these online relationships look like when COVID-19 is no more?

For Will F. Coakley, a deputy constable from Austin, Texas, the highs of the online friend groups she made on Zoom and Marco Polo have already dissipated.

“My COVID circle is no more,” she said. “I’m 38, so people my age often have spouses and children.” 

Coakley found online platforms to be a refreshing reprieve from her demanding profession, which put her on the front lines of the pandemic. Just as she was getting accustomed to ‘the new normal,’ her routines once again changed, with many online friends falling out of touch as cities and towns began to experiment with opening.

Coakley has not met anyone from her COVID circle in person, and any further communication is uncertain, boding even worse than the potential dissolutions of real-life friendships reported on throughout the year. 

“In a perfect world, we would hope that things opening up would mean that you could start to meet up with your online friends in person. However, many people will experience a transition in their social circle as they start allocating more of their emotional resources to in-person interactions,” said Kyler Shumway, PsyD, a clinical psychologist and author of The Friendship Formula: How to Say Goodbye to Loneliness and Discover Deeper Connection. “You may spend less time with your online people and more time with coworkers, friends, and family that are in your immediate area.”

Looking back, many social apps similarly spiked during the lockdown and are now seeing use fall. The invite-only social networking app Clubhouse launched in March 2020. It quickly gained popularity amid the height of the pandemic, having amassed 600,000 registered users by December 2020 and 8.1 million downloads by mid-February 2021.

The original fervor over Clubhouse has waned as fewer people are cooped up indoors. While many people still use the platform for various professional purposes or niche hobbies, its day-to-day usership has dropped significantly. 

Olivia B. Othman, 38, a Project Assistant in Wuppertal, Germany, developed friendships online during the pandemic through Clubhouse as well as through a local app called Spontacts. She has been able to meet people in person as her area has begun to open and said she found the experience to be liberating.

While Othman already had experience with developing close personal relationships online, the pandemic prompted a unique perspective for her, encouraging her to invest in new devices for better communication.

Overall, she has fared well with sustaining her online friendships. 

“I have dropped some [people] but also found good people among them,” Othman said.

Othman wasn’t alone in turning to apps for friendship and human connection. Facebook and Instagram were among the most downloaded apps in 2020, according to the Business of Apps. Facebook had 2.85 billion monthly active users as of the first quarter of 2021, compared to 2.60 billion during the first quarter of 2020, while Instagram went from 1 billion monthly active users to 1.07 billion over the same period. Twitter grew from 186 million users to 199 million since the pandemic started, according to its first-quarter earnings report.

Apps such as TikTok and YouTube were popular outlets for creating online friendships in 2020; however, their potential for replicating the emotional fulfillment that comes from interpersonal relationships is limited. As Shumway said, “the unmet need remains unmet.”

“Many online spaces offer synthetic connections. Instead of spending time with a friend playing a game online or going out to grab coffee, a person might be tempted to watch one of their favorite Youtubers or scroll through TikTok for hours on end,” Shumway said. “These kinds of resources provide a felt sense of relationship – you feel like you’re part of something. But then when you turn off your screen, those feelings of loneliness will come right back.”

“Relationships that may have formed over the past year through online interactions, my sense is that those will continue to last even as people start to reconnect in person. Online friendships have their limits, but they are friendships nonetheless,” Shumway said. 

Zach Fox, 29, a software engineer, has maintained long-distance friendships thanks in large part to online gaming, an important social connection that carried on from before the pandemic.

“We would text chat with each other most of the time, and use voice chat when playing video games together,” he said.

While he is excited about seeing friends and family again as restrictions lift, Fox feels his online friendships are just as strong.

“I feel closer to some of my online friends than I do to some of my ‘offline’ friends,” he said. “With several exceptions, such as my relationship with my fiancée, I tend to favor online friendships because I have the opportunity to be present with and spend quality time with my online friends more often than I can spend time with IRL friends.”

Ultimately, many are finding less anxiety and more excitement in the prospect of returning to the real world, even if some interactions may be awkward. Both Coakley and Othman said they favored in-person meetings over online interactions, despite having enjoyed the times they had with many online friends.

“As you make these choices, make sure to communicate openly and honestly. Rather than letting a friendship slowly decay through ghosting, consider being real and explaining to them what is happening for you. You might share that you are spending more time with people in person and that you have less energy for online time with them,” Shumway said.  

As things change, Coakley has found other connections to keep herself grounded. Her mental health team includes a counselor she has sessions with online but has met in person only once. She admitted to being more enthusiastic about the counselor she meets with in person regularly.

“I hate having a screen between them and me. It’s more intimate and a shared experience in person,” she said.

Despite not keeping up with her pandemic friends, Coakley does have other online friends that she knew before the pandemic with whom she has close connections. 

“At the very least, my [online] connections with [folks and with] Black women have developed into actual friendships, and we have plans on meeting in person,” she said. “We’ve become an integral part of each other’s lives. Like family, almost.”

The post Did internet friends fill the gaps left by social distance? appeared first on The Mozilla Blog.

The Rust Programming Language BlogAnnouncing Rust 1.55.0

The Rust team is happy to announce a new version of Rust, 1.55.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.55.0 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.55.0 on GitHub.

What's in 1.55.0 stable

Cargo deduplicates compiler errors

In past releases, when running cargo test, cargo check --all-targets, or similar commands which built the same Rust crate in multiple configurations, errors and warnings could show up duplicated as the rustc invocations were run in parallel and each emitted the same warning.

For example, in 1.54.0, output like this was common:

$ cargo +1.54.0 check --all-targets
    Checking foo v0.1.0
warning: function is never used: `foo`
 --> src/lib.rs:9:4
  |
9 | fn foo() {}
  |    ^^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: 1 warning emitted

warning: function is never used: `foo`
 --> src/lib.rs:9:4
  |
9 | fn foo() {}
  |    ^^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: 1 warning emitted

    Finished dev [unoptimized + debuginfo] target(s) in 0.10s

In 1.55, this behavior has been adjusted to deduplicate and print a report at the end of compilation:

$ cargo +1.55.0 check --all-targets
    Checking foo v0.1.0
warning: function is never used: `foo`
 --> src/lib.rs:9:4
  |
9 | fn foo() {}
  |    ^^^
  |
  = note: `#[warn(dead_code)]` on by default

warning: `foo` (lib) generated 1 warning
warning: `foo` (lib test) generated 1 warning (1 duplicate)
    Finished dev [unoptimized + debuginfo] target(s) in 0.84s

Faster, more correct float parsing

The standard library's implementation of float parsing has been updated to use the Eisel-Lemire algorithm, which brings both speed improvements and improved correctness. In the past, certain edge cases failed to parse, and this has now been fixed.

You can read more details on the new implementation in the pull request description.
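
The specific inputs that previously failed are catalogued in the pull request. As a quick, hedged illustration of the territory involved, here is a check that a value in the subnormal range (a classic source of parser edge cases) parses correctly:

fn main() {
    // f64::MIN_POSITIVE is the smallest positive *normal* value; anything
    // smaller is subnormal, a region that has historically tripped up
    // float parsers.
    let s = "5e-324"; // the smallest positive subnormal f64
    let v: f64 = s.parse().expect("should parse as f64");
    assert!(v > 0.0 && v < f64::MIN_POSITIVE);
    println!("{} parses to {:e}", s, v);
}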

std::io::ErrorKind variants updated

std::io::ErrorKind is a #[non_exhaustive] enum that classifies errors into portable categories, such as NotFound or WouldBlock. Rust code that has a std::io::Error can call the kind method to obtain a std::io::ErrorKind and match on that to handle a specific error.

Not all errors are categorized into ErrorKind values; some are left uncategorized and placed in a catch-all variant. In previous versions of Rust, uncategorized errors used ErrorKind::Other; however, user-created std::io::Error values also commonly used ErrorKind::Other. In 1.55, uncategorized errors now use the internal variant ErrorKind::Uncategorized, which we intend to leave hidden and never available for stable Rust code to name explicitly; this leaves ErrorKind::Other exclusively for constructing std::io::Error values that don't come from the standard library. This enforces the #[non_exhaustive] nature of ErrorKind.

Rust code should never match ErrorKind::Other and expect any particular underlying error code; only match ErrorKind::Other if you're catching a constructed std::io::Error that uses that error kind. Rust code matching on std::io::Error should always use _ for any error kinds it doesn't know about, in which case it can match the underlying error code, or report the error, or bubble it up to calling code.
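
As a minimal sketch of that matching style (the describe helper is illustrative, not something from the release notes):

use std::io::{self, ErrorKind};

fn describe(err: &io::Error) -> String {
    match err.kind() {
        ErrorKind::NotFound => "resource not found".to_string(),
        ErrorKind::WouldBlock => "operation would block".to_string(),
        // ErrorKind is #[non_exhaustive], so a catch-all arm is required.
        // This arm also covers ErrorKind::Uncategorized and any future
        // variants; fall back to the error's own message instead of guessing.
        _ => format!("unrecognized I/O error: {}", err),
    }
}

fn main() {
    let app_err = io::Error::new(ErrorKind::Other, "something app-specific");
    println!("{}", describe(&app_err));
}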

We're making this change to smooth the way for introducing new ErrorKind variants in the future; those new variants will start out nightly-only, and only become stable later. This change ensures that code matching variants it doesn't know about must use a catch-all _ pattern, which will work both with ErrorKind::Uncategorized and with future nightly-only variants.

Open range patterns added

Rust 1.55 stabilized using open ranges in patterns:

match x as u32 {
    0 => println!("zero!"),
    1.. => println!("positive number!"),
}

Read more details here.

Stabilized APIs

The following methods and trait implementations were stabilized.

The following previously stable functions are now const.

Other changes

There are other changes in the Rust 1.55.0 release: check out what changed in Rust, Cargo, and Clippy.

Contributors to 1.55.0

Many people came together to create Rust 1.55.0. We couldn't have done it without all of you. Thanks!

Dedication

Anna Harren was a member of the community and contributor to Rust known for coining the term "Turbofish" to describe ::<> syntax. Anna recently passed away after living with cancer. Her contribution will forever be remembered and be part of the language, and we dedicate this release to her memory.

Hacks.Mozilla.OrgTime for a review of Firefox 92

Release time comes around so quickly! This month we have quite a few CSS updates, along with the new Object.hasOwn() static method for JavaScript.

This blog post provides merely a set of highlights; for all the details, check out the following:

CSS Updates

A couple of CSS features have moved from behind a preference and are now available by default: accent-color and size-adjust.

accent-color

The accent-color CSS property sets the color of an element’s accent. Accents appear in elements such as a checkbox or radio input. Its default value is auto, which represents a UA-chosen color that should match the accent color of the platform. You can also specify a color value. Read more about the accent-color property here.

size-adjust

The size-adjust descriptor for @font-face takes a percentage value which acts as a multiplier for glyph outlines and metrics. Another tool in the CSS box for controlling fonts, it can help to harmonize the designs of various fonts when rendered at the same font size. Check out some examples on the size-adjust descriptor page on MDN.

And more…

Along with both of those, the break-inside property now supports the values avoid-page and avoid-column, the font-size-adjust property accepts two values, and, if that wasn’t enough, system-ui is now supported as a generic font family name for the font-family property.

break-inside property on MDN

font-size-adjust property on MDN

font-family property on MDN

Object.hasOwn arrives

A nice addition to JavaScript is the Object.hasOwn() static method. This returns true if the specified property is a direct property of the object (even if that property’s value is null or undefined). false is returned if the specified property is inherited or not declared. Unlike the in operator, this method does not check for the specified property in the object’s prototype chain.

Object.hasOwn() is recommended over Object.hasOwnProperty() as it works for objects created using Object.create(null) and with objects that have overridden the inherited hasOwnProperty() method.

Read more about Object.hasOwn() on MDN

The post Time for a review of Firefox 92 appeared first on Mozilla Hacks - the Web developer blog.

Will Kahn-GreeneMozilla: 10 years

It's been a long while since I wrote Mozilla: 1 year review. I hit my 10-year "Moziversary" as an employee on September 6th. I was hired in a "doubling" period of Mozilla, so there are a fair number of people who are hitting 10 year anniversaries right now. It's interesting to see that even though we're all at the same company, we had different journeys here.

I started out as a Software Engineer or something like that. Then I was promoted to Senior Software Engineer and then Staff Software Engineer. Then last week, I was promoted to Senior Staff Software Engineer. My role at work over time has changed significantly. It was a weird path to get to where I am now, but that's probably a topic for another post.

I've worked on dozens of projects in a variety of capacities. Here's a handful of the ones that were interesting experiences in one way or another:

  • SUMO (support.mozilla.org): Mozilla's support site

  • Input: Mozilla's feedback site, user sentiment analysis, and Mozilla's initial experiments with Heartbeat and experiments platforms

  • MDN Web Docs: documentation, tutorials, and such for web standards

  • Mozilla Location Service: Mozilla's device location query system

  • Buildhub and Buildhub2: index for build information

  • Socorro: Mozilla's crash ingestion pipeline for collecting, processing, and analyzing crash reports for Mozilla products

  • Tecken: Mozilla's symbols server for uploading and downloading symbols and also symbolicating stacks

  • Standup: system for reporting and viewing status

  • FirefoxOS: Mozilla's mobile operating system

I also worked on a bunch of libraries and tools:

  • siggen: library for generating crash signatures using the same algorithm that Socorro uses (Python)

  • Everett: configuration library (Python)

  • Markus: metrics client library (Python)

  • Bleach: sanitizer for user-provided text for use in an HTML context (Python)

  • ElasticUtils: Elasticsearch query DSL library (Python)

  • mozilla-django-oidc: OIDC authentication for Django (Python)

  • Puente: convenience library for using gettext strings in Django (Python)

  • crashstats-tools: command line tools for accessing Socorro APIs (Python)

  • rob-bugson: Firefox addon that adds Bugzilla links to GitHub PR pages (JS)

  • paul-mclendahand: tool for combining GitHub PRs into a single branch (Python)

  • Dennis: gettext translated strings linter (Python)

I was a part of things:

I've given a few presentations 1:

1. I thought there were more, but I can't recall what they might have been.

I've left lots of FIXME notes everywhere.

I made some stickers:

/images/soloist_2017_handdrawn.thumbnail.png

"Soloists" sticker (2017)

/images/ted_sticker.thumbnail.png

"Ted maintained this" sticker (2019)

I've worked with a lot of people and created some really warm, wonderful friendships. Some have left Mozilla, but we keep in touch.

I've been to many work weeks, conferences, summits, and all hands trips.

I've gone through a few profile pictures:

/images/profile_2011.thumbnail.jpg

Me in 2011

/images/profile_2013.thumbnail.jpg

Me in 2013

/images/profile_2016.thumbnail.jpg

Me in 2016 (taken by Erik Rose in London)

/images/profile_2021.thumbnail.jpg

Me in 2021

I've built a few desks, though my pictures are pretty meagre:

/images/standing_desk_rough_sketch.thumbnail.jpg

Rough sketch of a standing desk

/images/standing_desk_1.thumbnail.jpg

Standing desk and a stool I built

/images/desk_2021.thumbnail.jpg

My current chaos of a desk

I've written lots of blog posts on status, project retrospectives, releases, initiatives, and such. Some of them are fun reads still.

It's been a long 10 years. I wonder if I'll be here for 10 more. It's possible!

This Week In RustThis Week in Rust 407

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

Sadly, we had no nominations this week. Still, in the spirit of not leaving you without some neat rust code, I give you gradient, a command line tool to extract gradients from SVG, display and manipulate them.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

300 pull requests were merged in the last week

Rust Compiler Performance Triage

A busy week, with lots of mixed changes, though in the end only a few were deemed significant enough to report here.

Triage done by @pnkfelix. Revision range: fe379..69c4a

3 Regressions, 1 Improvement, 3 Mixed; 0 of them in rollups. 57 comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Formlogic

OCR Labs

ChainSafe

Subspace

dcSpark

Kraken

Kollider

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

In Rust, soundness is never just a convention.

@H2CO3 on rust-users

Thanks to Riccardo D'Ambrosio for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Niko MatsakisRustacean Principles

As the web site says, Rust is a language empowering everyone to build reliable and efficient software. I think it’s precisely this feeling of empowerment that people love about Rust. As wycats put it recently to me, Rust makes it “feel like things are possible that otherwise feel out of reach”. But what exactly makes Rust feel that way? If we can describe it, then we can use that description to help us improve Rust, and to guide us as we design extensions to Rust.

Besides the language itself, Rust is also an open-source community, one that prides itself on its ability to do collaborative design. But what do we do that makes us able to work well together? If we can describe that, then we can use those descriptions to help ourselves improve, and to instruct new people on how to better work within the community.

This blog post describes a project I and others have been working on called the Rustacean principles. This project is an attempt to enumerate the (heretofore implicit) principles that govern both Rust’s design and the way our community operates. The principles are still in draft form; for the time being, they live in the nikomatsakis/rustacean-principles repository.

How the principles got started

The Rustacean Principles were suggested by Shane during a discussion about how we can grow the Rust organization while keeping it true to itself. Shane pointed out that, at AWS, mechanisms like tenets and the leadership principles are used to communicate and preserve shared values.1 The goal at AWS, as in the Rust org, is to have teams that operate independently but which still wind up “itching in the same direction”, as aturon so memorably put it.

Since that initial conversation, the principles have undergone quite some iteration. The initial effort, which I presented at the CTCFT on 2021-06-21, was quite closely modeled on AWS tenets. After a number of in-depth conversations with both joshtriplett and aturon, though, I wound up evolving the structure quite a bit to what you see today. I expect them to continue evolving, particularly the section on what it means to be a team member, which has received less attention.

Rust empowers by being…

The principles are broken into two main sections. The first describes Rust’s particular way of empowering people. This description comes in the form of a list of properties that we are shooting for:

These properties are frequently in tension with one another. Our challenge as designers is to find ways to satisfy all of these properties at once. In some cases, though, we may be forced to decide between slightly penalizing one goal or another. In that case, we tend to give the edge to those goals that come earlier in the list over those that come later. Still, while the ordering is important, it’s important to emphasize that for Rust to be successful we need to achieve all of these feelings at once.

Each of the properties has a page that describes it in more detail. The page also describes some specific mechanisms that we use to achieve this property. These mechanisms take the form of more concrete rules that we apply to Rust’s design. For example, the page for reliability discusses type safety, consider all cases, and several other mechanisms. The discussion gives concrete examples of the tradeoffs at play and some of the techniques we have used to mitigate them.

One thing: these principles are meant to describe more than just the language. For example, Rust’s great error messages are part of what makes it feel supportive, and Cargo’s lock files and dependency system are geared towards making Rust feel reliable.

How to Rustacean

Rust has been an open source project since its inception, and over time we have evolved and refined the way that we operate. One key concept for Rust is its governance teams, whose members are responsible for decisions regarding Rust’s design and maintenance. We definitely have a notion of what it means “to Rustacean” – there are specific behaviors that we are looking for. But it has historically been really challenging to define them, and in turn to help people to achieve them (or to recognize when we ourselves are falling short!). The next section of this site, How to Rustacean, is a first attempt at drafting just such a list. You can think of it like a companion to the Code of Conduct: whereas the CoC describes the bare minimum expected of any Rust participant, the How to Rustacean section describes what it means to excel.

This section of the site has undergone less iteration than the “Rust empowerment” section. The idea is that each of these principles has a dedicated page that elaborates on the principle and gives examples of it in action. The example of Raising an objection about a design (from Show up) is the most developed and a good one to look at to get the idea. One interesting bit is the “goldilocks” structure2, which indicates what it means to “show up” too little but also what it means to “show up” too much.

How the principles can be used

For the principles to be a success, they need to be more than words on a website. I would like to see them become something that we actively reference all the time as we go about our work in the Rust org.

As an example, we were recently wrestling with a minor point about the semantics of closures in Rust 2021. The details aren’t that important (you can read them here, if you like), but the decision ultimately came down to a question of whether to adapt the rules so that they are smarter, but more complex. I think it would have been quite useful to refer to these principles in that discussion: ultimately, I think we chose to (slightly) favor productivity at the expense of transparency, which aligns well with the ordering on the site. Further, as I noted in my conclusion, I would personally like to see some form of explicit capture clause for closures, which would give users a way to ensure total transparency in those cases where it is most important.

The How to Rustacean section can be used in a number of ways. One thing would be cheering on examples of where someone is doing a great job: Mara’s issue celebrating all the contributions to the 2021 Edition is a great instance of paying it forward, for example, and I would love it if we had a precise vocabulary for calling that out.

Another time these principles can be used is when looking for new candidates for team membership. When considering a candidate, we can look to see whether we can give concrete examples of times they have exhibited these qualities. We can also use the principles to give feedback to people about where they need to improve. I’d like to be able to tell people who are interested in joining a Rust team, “Well, I’ve noticed you do a great job of showing up, but your designs tend to get mired in complexity. I think you should work on ‘start somewhere’.”

“Hard conversations” where you tell someone what they can do better are something that managers do (or try to do…) in companies, but which often get sidestepped or avoided in an open source context. I don’t claim to be an expert, but I’ve found that having structure can help to take away the “sting” and make it easier for people to hear and learn from the feedback.3

What comes next

I think at this point the principles have evolved enough that it makes sense to get more widespread feedback. I’m interested in hearing from people who are active in the Rust community about whether they reflect what you love about Rust (and, if not, what might be changed). I also plan to try and use them to guide both design discussions and questions of team membership, and I encourage others in the Rust teams to do the same. If we find that they are useful, then I’d like to see them turned into an RFC and ultimately living on forge or somewhere more central.

Questions?

I’ve opened an internals thread for discussion.

Footnotes

  1. One of the first things that our team did at Amazon was to draft its own tenets; the discussion helped us to clarify what we were setting out to do and how we planned to do it. 

  2. Hat tip to Marc Brooker, who suggested the “Goldilocks” structure, based on how the Leadership Principles are presented in the AWS wiki. 

  3. Speaking of which, one glance at my queue of assigned PRs makes it clear that I need to work on my follow-through.

Chris H-CThis Week in Glean: Data Reviews are Important, Glean Parser makes them Easy

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index.

At Mozilla we put a lot of stock in Openness. Source? Open. Bug tracker? Open. Discussion Forums (Fora?)? Open (synchronous and asynchronous).

We also have an open process for determining if a new or expanded data collection in a Mozilla project is in line with our Privacy Principles and Policies: Data Review.

Basically, when a new piece of instrumentation is put up for code review (or before, or after), the instrumentor fills out a form and asks a volunteer Data Steward to review it. If the instrumentation (as explained in the filled-in form) is obviously in line with our privacy commitments to our users, the Data Steward gives it the go-ahead to ship.

(If it isn’t _obviously_ okay then we kick it up to our Trust Team to make the decision. They sit next to Legal, in case you need to find them.)

The Data Review Process and its forms are very generic. They’re designed to work for any instrumentation (tab count, bytes transferred, theme colour) being added to any project (Firefox Desktop, mozilla.org, Focus) and being collected by any data collection system (Firefox Telemetry, Crash Reporter, Glean). This is great for the process as it means we can use it and rely on it anywhere.

It isn’t so great for users _of_ the process. If you only ever write Data Reviews for one system, you’ll find yourself answering the same questions with the same answers every time.

And Glean makes this worse (better?) by including in its metrics definitions almost every piece of information you need in order to answer the review. So now you get to write the answers first in YAML and then in English during Data Review.

But no more! Introducing glean_parser data-review and mach data-review: command-line tools that will generate for you a Data Review Request skeleton with all the easy parts filled in. It works like this:

  1. Write your instrumentation, providing full information in the metrics definition.
  2. Call python -m glean_parser data-review <bug_number> <list of metrics.yaml files> (or mach data-review <bug_number> if you’re adding the instrumentation to Firefox Desktop).
  3. glean_parser will parse the metrics definitions files, pull out only the definitions that were added or changed in <bug_number>, and then output a partially-filled-out form for you.

Here’s an example. Say I’m working on bug 1664461 and add a new piece of instrumentation to Firefox Desktop:

fog.ipc:
  replay_failures:
    type: counter
    description: |
      The number of times the ipc buffer failed to be replayed in the
      parent process.
    bugs:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1664461
    data_reviews:
      - https://bugzilla.mozilla.org/show_bug.cgi?id=1664461
    data_sensitivity:
      - technical
    notification_emails:
      - chutten@mozilla.com
      - glean-team@mozilla.com
    expires: never

I’m sure to fill in the `bugs` field correctly (because that’s important on its own _and_ it’s what glean_parser data-review uses to find which data I added), and have categorized the data_sensitivity. I also included a helpful description. (The data_reviews field currently points at the bug I’ll attach the Data Review Request for. I’d better remember to come back before I land this code and update it to point at the specific comment…)

Then I can simply use mach data-review 1664461 and it spits out:

!! Reminder: it is your responsibility to complete and check the correctness of
!! this automatically-generated request skeleton before requesting Data
!! Collection Review. See https://wiki.mozilla.org/Data_Collection for details.

DATA REVIEW REQUEST
1. What questions will you answer with this data?

TODO: Fill this in.

2. Why does Mozilla need to answer these questions? Are there benefits for users?
   Do we need this information to address product or business requirements?

TODO: Fill this in.

3. What alternative methods did you consider to answer these questions?
   Why were they not sufficient?

TODO: Fill this in.

4. Can current instrumentation answer these questions?

TODO: Fill this in.

5. List all proposed measurements and indicate the category of data collection for each
   measurement, using the Firefox data collection categories found on the Mozilla wiki.

Measurement Name | Measurement Description | Data Collection Category | Tracking Bug
---------------- | ----------------------- | ------------------------ | ------------
fog_ipc.replay_failures | The number of times the ipc buffer failed to be replayed in the parent process.  | technical | https://bugzilla.mozilla.org/show_bug.cgi?id=1664461


6. Please provide a link to the documentation for this data collection which
   describes the ultimate data set in a public, complete, and accurate way.

This collection is Glean so is documented
[in the Glean Dictionary](https://dictionary.telemetry.mozilla.org).

7. How long will this data be collected?

This collection will be collected permanently.
**TODO: identify at least one individual here** will be responsible for the permanent collections.

8. What populations will you measure?

All channels, countries, and locales. No filters.

9. If this data collection is default on, what is the opt-out mechanism for users?

These collections are Glean. The opt-out can be found in the product's preferences.

10. Please provide a general description of how you will analyze this data.

TODO: Fill this in.

11. Where do you intend to share the results of your analysis?

TODO: Fill this in.

12. Is there a third-party tool (i.e. not Telemetry) that you
    are proposing to use for this data collection?

No.

As you can see, this Data Review Request skeleton comes partially filled out. Everything you previously had to mechanically fill out has been done for you, leaving you more time to focus on only the interesting questions like “Why do we need this?” and “How are you going to use it?”.

Also, this saves you from having to remember the URL to the Data Review Request Form Template each time you need it. We’ve got you covered.

And since this is part of Glean, this means this is already available to every project you can see here. This isn’t just a Firefox Desktop thing. 

Hope this saves you some time! If you can think of other time-saving improvements we could add to Glean once so that every Mozilla project can take advantage of them, please tell us on Matrix.

If you’re interested in how this is implemented, glean_parser’s part of this is over here, while the mach command part is here.

:chutten

Cameron KaiserTenFourFox FPR32 SPR4 available

TenFourFox Feature Parity Release 32 Security Parity Release 4 "32.4" is available for testing (downloads, hashes). There are, as before, no changes to the release notes nor anything notable about the security patches in this release. Assuming no major problems, FPR32.4 will go live Monday evening Pacific time as usual. The final official build FPR32.5 remains scheduled for October 5, so later this month we'll take a little look at your options should you wish to continue building from source after that point.

The Mozilla BlogDoomscrolling, ads in texting, Theranos NOT Thanos, and more are the tweets on this week’s #TopShelf

At Mozilla, we believe part of making the internet we want is celebrating the best of the internet, and that can be as simple as sharing a tweet that made us pause in our feed. Twitter isn’t perfect, but there are individual tweets that come pretty close.

Each week in Top Shelf, we share the tweets that made us laugh, think, Pocket them for later, text our friends, and want to continue the internet revolution.

Here’s what made it to the Top Shelf for the week of August 30, 2021, in no particular order.

Pocket Joy List Project

The Pocket Joy List Project

The stories, podcasts, poems and songs we always come back to

The post Doomscrolling, ads in texting, Theranos NOT Thanos, and more are the tweets on this week’s #TopShelf appeared first on The Mozilla Blog.

Firefox Add-on ReviewsuBlock Origin—everything you need to know about the ad blocker

Rare is the browser extension that can satisfy both passive and power users. But that’s an essential part of uBlock Origin’s brilliance—it is an ad blocker you could recommend to your most tech forward friend as easily as you could to someone who’s just emerged from the jungle lost for the past 20 years. 

If you install uBlock Origin and do nothing else, right out of the box it will block nearly all types of internet advertising—everything from big blinking banners to search ads and video pre-rolls and all the rest. However if you want extremely granular levels of content control, uBlock Origin can accommodate via advanced settings. 

We’ll try to split the middle here and walk through a few of the extension’s most intriguing features and options…

Does using uBlock Origin actually speed up my web experience? 

Yes. Not only do web pages load faster because the extension blocks unwanted ads from loading, but uBlock Origin utilizes a uniquely lightweight approach to content filtering, so it has minimal impact on memory consumption. It is generally accepted that uBlock Origin offers the largest speed boost among top ad blockers. 

But don’t ad blockers also break pages? 

Occasionally that can occur: a page may break if certain content is blocked, and some websites will even detect the presence of an ad blocker and refuse entry until it is disabled. 

Fortunately this doesn't happen as frequently with uBlock Origin as it might with other ad blockers, and the extension is also extremely effective at bypassing anti-ad blockers (yes, an ongoing battle rages between ad tech and content blocking software). But if uBlock Origin does happen to break a page you want to access, it's easy to turn off content blocking for specific pages you trust, or whose ads you perhaps even want to see.

<figcaption>Hit the blue on/off button if you want to suspend content blocking on any page.</figcaption>

Show us a few tips & tricks

Let’s take a look at some high level settings and what you can do with them. 

  • Lightning bolt button enables Element Zapper, which lets you temporarily remove page elements by simply mousing over them and clicking. For example, this is convenient for removing embedded gifs or for hiding disturbing images you may encounter in some news articles.
  • Eye dropper button enables Element Picker, which lets you permanently remove page elements. For example, if you find Facebook Stories a complete waste of time, just activate Element Picker, mouse over/click the Stories section of the page, select “Create” and presto—The End of Facebook Stories.    

The five buttons on this row will only affect the page you’re on.

  • Pop-up button blocks—you guessed it—pop-ups
  • Film button blocks large media elements like embedded video, audio, or images
  • Eye slash button disables cosmetic filtering, which is on by default and elegantly reformats your pages when ads are removed, but if you’d prefer to see pages laid out as they were intended (with just empty spaces instead of ads) then you have that option
  • “Aa” button blocks remote fonts from loading on the page
  • “</>” button disables JavaScript on the page

Does uBlock Origin protect against malware? 

In addition to using various advertising block lists, uBlock Origin also leverages potent lists of known malware sources, so it automatically blocks those for you as well. To be clear, there is no software that can offer 100% malware protection, but it doesn’t hurt to give yourself enhanced protections like this. 

All of the content block lists are actively maintained by volunteers who believe in the mission of providing users with more choice and control over the content they see online. “uBlock Origin stands uncompromisingly for all users’ best interests, it’s not monetized, and its development and maintenance is driven only by volunteers who share the same view,” says uBlock Origin founder and developer Raymond Hill. “As long as I am the maintainer of [uBlock Origin], this will not change.”

We could go into a lot more detail about uBlock Origin—how you can create your own custom filter lists, how you can set it to block only media of a certain size, cloud storage sync, and so on—but power users will discover these delights on their own. Hopefully we’ve provided enough insight here to help you make an informed choice about exploring uBlock Origin, whether it be your first ad blocker or just the latest. 

If you’d like to check out other amazing ad blocker options, please see What’s the best ad blocker for you?

Mark MayoCelebrating 10k KryptoSign users with an on-chain lottery feature!

TL;DR: we’re adding 3 new features to KryptoSign today!

  • CSV downloads of a document’s signers
  • Document Locking (prevent further signing)
  • Document Lotteries (pick a winner from list of signers)

Why? Well, you folks keep abusing this simple Ethereum-native document signing tool to run contests for airdrops and pre-sales, so we thought we’d make your lives a bit easier! :)

up and to the right graph showing exponential growth of KS

We launched KryptoSign in May this year as a tool for Kai, Bart, and me to do the lightest possible “contract signing” using our MetaMask wallets. Write down a simple scope of work with someone; both parties sign with their wallets to signal they agree. When the job is complete, their Ethereum address is right there to copy-n-paste into a wallet to send payment. Quick, easy, delightful. :)

But as often happens, users started showing up and using it for other things. Like guestbooks. And then guestbooks became a way to sign up users for NFT drops as part of contests and pre-sales, and so on. The organizer has everyone sign a KS doc, maybe link their Discord or Twitter, and then picks a winner and sends an NFT/token/etc. to their address in the signature block. Cool.

As these NFT drops started getting really hot the feature you all wanted was pretty obvious: have folks sign a KS document as part of a pre-sales window, and have KS pick the winner automatically. Because the stakes on things like hot NFT pre-sales are high, we decided to implement the random winner using Chainlink’s VRF — verifiable random functions — which means everyone involved in a KryptoSign lottery can independently confirm how the random winner was picked. Transparency is nice!

The UI for doing this is quite simple, as you’d hope and expect from KryptoSign. There’s an action icon on the document now:

screenshot of menu option to pick a winner from the signers of a document

When you’re ready to pick a winner, it’s pretty easy. Lock the document, and hit the button:

Of note, to pick a winner we collect 0.05 ETH from you to cover the cost of the 2 LINK required to invoke the VRF on mainnet. You don't need your own LINK and all the gas-incurring swapping that would imply. Phew! The user approves a single transaction with their wallet (including gas to interact with the smart contract) and they're done.

Our initial users really wanted the on-chain trust of a VRF, and are willing to pay for it so their communities can trust the draw, but for other use cases you have in mind, maybe it’s overkill? Let us know! We’ll continue to build upon KryptoSign as long as people find useful things to do with it.

Finally, big props to our team who worked through some rough patches with calling the Chainlink VRF contract. Blockchain is weird, yo! This release saw engineering contributions from Neo Cho, Ryan Ouyang, and Josh Peters. Thanks!

— Mark


Celebrating 10k KryptoSign users with an on-chain lottery feature! was originally published in Block::Block on Medium, where people are continuing the conversation by highlighting and responding to this story.

The Mozilla BlogMozilla VPN Completes Independent Security Audit by Cure53

Today, Mozilla published an independent security audit of its Mozilla VPN, which provides encryption and device-level protection of your connection and information when you are on the web. The audit was conducted by Cure53, an unbiased cybersecurity firm based in Berlin with more than 15 years of experience in software testing and code auditing. Mozilla periodically works with third-party organizations to complement our internal security programs and help improve the overall security of our products. The independent audit uncovered two medium-severity issues and one high-severity issue. We have addressed these in this blog post and published the security audit report.

Since our launch last year, Mozilla VPN, our fast and easy-to-use Virtual Private Network service, has expanded to seven more countries (Austria, Belgium, France, Germany, Italy, Spain and Switzerland), bringing the total to 13 countries where Mozilla VPN is available. We also expanded our VPN service offerings, and it's now available on Windows, Mac, Linux, Android and iOS. Lastly, our list of supported languages continues to grow, and to date we support 28 languages. 

Mozilla VPN is developed by Mozilla, a mission-driven company with a 20-year track record of fighting for online privacy and a healthier internet. We are committed to innovating and bringing new features to the Mozilla VPN based on feedback from our community. This year, the team has been working on additional security and customization features which will soon be available to our users. 

We know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business. Check out the Mozilla VPN and subscribe today from our website.

For more on Mozilla VPN:

Celebrating Mozilla VPN: How we’re keeping your data safe for you

Latest Mozilla VPN features keep your data safe

Mozilla Puts Its Trusted Stamp on VPN

The post Mozilla VPN Completes Independent Security Audit by Cure53 appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 406

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is cargo-llvm-cov, a cargo subcommand for LLVM-based code coverage.

Thanks to Jacob Pratt for the suggestion.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

296 pull requests were merged in the last week

Rust Compiler Performance Triage

A very busy week with relatively even amounts of regressions and improvements (albeit with improvements outweighing regressions). The largest win was the use of profile-guided optimization (PGO) for x86_64 Linux builds, which brings fairly large improvements in real-world crates. There were 2 changes that caused fairly large (~3.5%) regressions in real-world crates and need to be investigated.

Triage done by @rylev. Revision range: 33fdb..fe379

5 Regressions, 4 Improvements, 5 Mixed; 0 of them in rollups. 56 comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

No new RFCs were proposed this week.

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

NZXT

Polar Sync

Subspace Labs

Kollider

Kraken

TrueLayer

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Anyway: the standard library docs say "check the nomicon"
then the nomicon says "here is some advice and ultimately we don't know, maybe check UCG"
then UCG says "ultimately we don't know it's probably like this but there's no RFC yet"
then Ralf says "probably it should be allowed if the layout matches".

Lokathor on the Rust Zulip

Thanks to Riccardo D'Ambrosio for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Mozilla Security BlogMozilla VPN Security Audit

To provide transparency into our ongoing efforts to protect your privacy and security on the Internet, we are releasing a security audit of Mozilla VPN that Cure53 conducted earlier this year.

The scope of this security audit included the following products:

  • Mozilla VPN Qt5 App for macOS
  • Mozilla VPN Qt5 App for Linux
  • Mozilla VPN Qt5 App for Windows
  • Mozilla VPN Qt5 App for iOS
  • Mozilla VPN Qt5 App for Android

Here’s a summary of the items discovered within this security audit that were medium or higher severity:

  • FVP-02-014: Cross-site WebSocket hijacking (High)
    • The Mozilla VPN client, when put in debug mode, exposes a WebSocket interface on localhost to trigger events and retrieve logs (most of the functional tests are written on top of this interface). As the WebSocket interface was only used in pre-release test builds, no customers were affected. Cure53 has verified that this item has been properly fixed and the security risk no longer exists.
  • FVP-02-001: VPN leak via captive portal detection (Medium)
    • The Mozilla VPN client allows unencrypted HTTP requests to be sent outside of the tunnel to specific IP addresses if the captive portal detection mechanism has been activated through settings. The captive portal detection algorithm requires a trusted plain-text HTTP endpoint to operate; Firefox, Chrome, the macOS network manager, and many other applications have a similar solution enabled by default, and Mozilla VPN utilizes the Firefox endpoint. Ultimately, we have accepted this finding, as the user benefits of captive portal detection outweigh the security risk.
  • FVP-02-016: Auth code could be leaked by injecting port (Medium)
    • When a user wants to log into Mozilla VPN, the VPN client will make a request to https://vpn.mozilla.org/api/v2/vpn/login/windows to obtain an authorization URL. The endpoint takes a port parameter that is reflected in an <img> element after the user signs in on the web page. It was found that the port parameter could take an arbitrary value, and that it was possible to inject the @ sign, so that the request would go to an arbitrary host instead of localhost (though the site's strict Content Security Policy prevented such requests from being sent). We fixed this issue by improving the port number parsing in the REST API component. The fix includes several tests to prevent similar errors in the future.

If you’d like to read the detailed report from Cure53, including all low and informational items, you can find it here.

More information on the issues identified in this report can be found in our MFSA2021-31 Security Advisory published on July 14th, 2021.

The post Mozilla VPN Security Audit appeared first on Mozilla Security Blog.

Mozilla Privacy BlogMozilla Mornings on the Digital Markets Act: Key questions for Parliament

On 13 September, Mozilla will host the next installment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

For this installment, we're checking in on the Digital Markets Act. Our panel of experts will discuss the key outstanding questions as the debate in Parliament reaches fever pitch.

Speakers

Andreas Schwab MEP
IMCO Rapporteur on the Digital Markets Act
Group of the European People’s Party

Mika Shah
Co-Acting General Counsel
Mozilla

Vanessa Turner
Senior Advisor
BEUC

With opening remarks by Raegan MacDonald, Director of Global Public Policy, Mozilla

Moderated by Jennifer Baker
EU technology journalist

 

Logistical details

Monday 13 September, 17:00 – 18:00 CEST

Zoom Webinar

Register here

Webinar login details to be shared on day of event

The post Mozilla Mornings on the Digital Markets Act: Key questions for Parliament appeared first on Open Policy & Advocacy.

Niko MatsakisNext CTCFT Meeting: 2021-09-20

Hold the date! The next Cross Team Collaboration Fun Times meeting will be 2021-09-20. We’ll be using the “Asia-friendly” time slot of 21:00 EST.

What will the talks be about?

A detailed agenda will be announced in a few weeks. Current thinking, however, is to center the agenda on Rust interest groups and domain working groups, those brave explorers who are trying to put Rust to use in all kinds of interesting domains, such as game development, cryptography, machine learning, formal verification, and embedded development. If you run an interest group and I didn't list your group here, perhaps you want to get in touch! We'll be talking about how these groups operate and how we can do a better job of connecting interest groups with the Rust org.

Will there be a social hour?

Absolutely! The social hour has been an increasingly popular feature of the CTCFT meeting. It will take place after the meeting (22:00 EST).

How can I get this on my calendar?

The CTCFT meetings are announced on this google calendar.

Wait, what about August?

Perceptive readers will note that there was no CTCFT meeting in August. That’s because I and many others were on vacation. =)

Firefox Add-on ReviewsBoost your writing skills with a browser extension

Whatever kind of writing you do—technical documentation, corporate communications, Harry Potter-vampire crossover fan fiction—it likely happens online. Here are some great browser extensions that will benefit anyone who writes on the web. Get grammar help, productivity tools, and other strong writing aids… 

LanguageTool

It’s like having your own copy editor with you wherever you write on the web. Language Tool – Grammar and Spell Checker will make you a better writer in 25+ languages. 

More than just a spell checker, LanguageTool also…

  • Recognizes common misuses of similar sounding words (e.g. there/their or your/you’re)
  • Works on social media sites and email
  • Offers alternate phrasing and style suggestions for brevity and clarity

Dictionary Anywhere

Need a quick word definition? With Dictionary Anywhere just double-click any word you find on the web and get an instant pop-up definition. 

You can even save and download words and their definitions for later offline reference. 

<figcaption>Dictionary Anywhere — no more navigating away from a page just to get a word check.</figcaption>

Dark Background and Light Text

Give your eyes a break, writers. Dark Background and Light Text makes staring at blinking words all day a whole lot easier on your lookers. 

Really simple to use out of the box. Once installed, the extension's default settings automatically flip the colors of every web page you visit. But if you'd like more granular control of color settings, just click the extension's toolbar button to access a pop-up menu that lets you customize color schemes, set page exceptions for sites where you don't want colors inverted, and access other simple controls. 

<figcaption>Dark Background and Light Text goes easy on the eyes.</figcaption>

Clippings

If your online writing requires the repeated use of certain phrases (for example, work email templates or customer support responses), Clippings can be a huge time saver. 

Key features…

  • Create a practically limitless library of saved phrases
  • Paste your clippings anywhere via context menu
  • Organize batches of clippings with folders and color coded labels
  • Shortcut keys for power users
  • Extension supported in English, Dutch, French, German, and Portuguese (Brazil)
<figcaption>Clippings handles bulk cutting/pasting. </figcaption>

We hope one of these extensions helps your words pop off the screen. Some writers may also be interested in this collection of great productivity extensions for optimizing written project plans. Feel free to explore thousands of other potentially useful extensions on addons.mozilla.org.

The Mozilla BlogWhy are hyperlinks blue?

The internet has ingrained itself into every aspect of our lives, but there's one aspect of the digital world that I bet you take for granted. Did you ever notice that many links, specifically hyperlinks, are blue? When a co-worker casually asked me why links are blue, I was stumped. As a user experience designer who has created websites since 2001, I've always made my links blue. I have advocated for the specific shade of blue, and for the consistent application of blue, yes, but I've never stopped and wondered, why are links blue? It was just a fact of life. Grass is green and hyperlinks are blue. Culturally, we associate links with the color blue so much that in 2016, when Google changed its links to black, it created quite a disruption.

But now, I find myself all consumed by the question, WHY are links blue? WHO decided to make them blue? WHEN was this decision made, and HOW has this decision made such a lasting impact? 

I turned to my co-workers to help me research, and we started to find the answer. Mosaic, an early browser released by Marc Andreessen and Eric Bina on January 23, 1993, had blue hyperlinks. To truly understand the origin and evolution of hyperlinks though, I took a journey through technology history and interfaces to explore how links were handled before color monitors, and how interfaces and hyperlinks rapidly evolved once color became an option.

The ancestors of the blue hyperlink

By looking at these pre-color hyperlink solutions, we can see how hyperlinks evolved over time and how these early innovations impact usability on the web today.

1964 – Project Xanadu

Project Xanadu connected two pages of information for the first time in history. Links were visual lines between pages.

1983 – HyperTIES system

This system introduced color, using cyan hyperlinks on a black background. HyperTIES was used as an “electric journal.” This may be an ancestor of the blue hyperlink we know and love today, but I do not believe it is the first instance of the blue hyperlink, since this color is cyan and not dark blue.

1985 – Windows 1.0

Windows 1.0 brought a full color graphic interface. The links and buttons are still black, similar to Apple’s interface at the time. What I do find interesting, however, is that this is the first instance of our dark blue used in a layout. The dark blue is heavily used in the headings and on borders around modals.

Another interesting thing about Windows 1.0 that still appears in modern websites is the underlined hyperlink. This is the first example of an underline being used to indicate a hyperlink that I have been able to find.

To make Windows 1.0 even more interesting, we see the introduction of a hover state. The hallmarks of modern interaction design were alive and well in 1985.

1987 – HyperCard

Released by Apple for the Macintosh, this program used hyperlinks between pages and apps. While aesthetically beautiful, this version did not use color in its hyperlinks.

1987 – WorldWideWeb (WWW)

WWW was the first browser created by Tim Berners-Lee while working at CERN. It started out as black and white, with underlines under hyperlinks, which are still used today on modern websites, and are a great solution for colorblindness.

The hunt for who made it blue

We’ve now been able to narrow down the time frame for the blue hyperlink’s origin. WWW, the first browser, was created in 1987 and was black and white. We know that Mosaic was released on January 23, 1993 and was credited as being the first browser with blue hyperlinks. So far, we have been unable to find blue being used for hyperlinks in any interface before 1987, but as color monitors become more available and interfaces start to support color, things change quickly. The next few years will see massive innovation and experimentation in color and hyperlink management.

1990 – Windows 3.0 

Windows 3.0 included support for 16 colors; however, the text links were still black on a white background, turning to white text on a black background when selected.

1991 – Gopher Protocol

Gopher Protocol was created at the University of Minnesota for searching and retrieving documents.  Its original design featured green text on a black background.

1991 – HyperCard (Color)

Apple brought color to its HyperCards, but notably, the text links were still black and not blue. However, some UI elements did have blue accents when interacted with, which is incredibly important, as it shows the slow shift toward blue being used as an interaction color.

October 5, 1991 – Linux Kernel

Linux used white text on a black background.

1992 – ViolaWWW

In the ViolaWWW browser, the text links are underlined, and the background color is gray, like we would see in Mosaic's initial release. However, the text links are black.

April 6, 1992 – Windows 3.1

Microsoft had been using dark blue for interfaces since 1985, but starting in 1990 they also began using it for interaction. Here Microsoft uses the “hyperlink blue” for active states when a user clicks on different drives, folders and icons. This is incredibly important because it shows the slow evolution of this blue from being a layout color to being an interactive color, preceding blue's addition to Mosaic by almost exactly a year.

January 16, 1992 – June 21, 1992 – Linux Kernel

In 1992, the Linux kernel gained support for color in its console.

Who did it first?

January, 1993 – Mosaic

The first beta version of Mosaic was created for the X Window System at the University of Illinois. The original interface was black and white and did not have blue hyperlinks, but had black hyperlinks with a bordered outline. According to the X System user guide, the hyperlinks were underlined or highlighted.

April 12, 1993 – Mosaic Version 0.13

In the changelog for Mosaic for version 0.13, there is one bullet that is of great importance to us:

Changed default anchor representations: blue and single solid underline for unvisited, dark purple and single dashed underline for visited.

Release Notes

In the immortal words of Jeff Goldblum's Ian Malcolm character in Jurassic Park, “Well, there it is.” 

April 21, 1993 – Mosaic Version 1

Mosaic launched for the X Window System. I was unable to find screenshots of what the interface looked like for this release, but according to the release notes, the visited color was changed to be a “Better visited anchor color for non-SGI's”.

June 8, 1993 – Cello Beta

Cello was created at Cornell Law School so that lawyers could access their legal website from Windows computers. My teammate Molly was able to download the 0.1 beta for me, and we were shocked by what we found:

There it was! Our hyperlink style, except it wasn't a hyperlink, it was the heading. Our “link blue” had never shown up in user interfaces before 1993, and suddenly it appeared twice within two short months, in two separate browsers being built at the same time at two different universities.

September, 1993  – Mosaic Ports

By September, a port of Mosaic was released for the Macintosh 7.1 operating system. I was able to locate a screenshot of this version, which included a blue hyperlink, making it the first visual evidence of the color blue being used to denote a hyperlink.

What came after the blue link?

June 1993 – Unix GUI – Common Desktop Environment

Common Desktop Environment is a GUI for the UNIX operating system, the same operating system used to build Mosaic. This interface featured black text with an underline for hyperlinks.

1994 – Cello Version 1

Cello is out of beta; it now features a yellow background, keeps its link-blue underlined headers, and still has black hyperlinks with a border.

October 13, 1994 – Netscape Navigator

Created by Marc Andreessen and James H. Clark, Netscape used the same visual language as Mosaic: blue hyperlinks and a gray background.

July 1995 – Internet Explorer 1.0

In 1995, Microsoft produced Internet Explorer, and no surprise, it also featured blue hyperlinks and a gray background. Internet Explorer was packaged with Windows 95, which was the first time that a browser came with an operating system. Around this time, the browser wars began, but the look and feel of hyperlinks had been firmly established.

November 9, 2004 – Firefox 1.0

Mozilla Firefox was released, and also featured blue hyperlinks, which are in use to this day. These images are from Netscape 1.22 and Firefox Nightly today.

So why blue hyperlinks?

What happened in 1993 to suddenly make hyperlinks blue? No one knows, but I have some theories.

I often hear that blue was chosen as the hyperlink color for color contrast. Well, even though the W3C wasn't created until 1994, and so the standards by which we judge web accessibility weren't yet defined, if we look at the contrast between black as a text color and blue as a link color, there is a contrast ratio of 2.3:1, which would not pass as sufficient color contrast between the blue hyperlink and the black text. 
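
For reference, that 2.3:1 figure comes from the standard WCAG contrast formula, which compares the relative luminance (L) of the two colors:

contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05)

Black has a relative luminance of 0, so it contributes only the 0.05 offset in the denominator. WCAG technique G183 suggests at least 3:1 between a link color and its surrounding text when no cue other than color, such as an underline, is present.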

Instead, I like to imagine that Cello and Mosaic were both inspired by the same trends happening in user interface design at the time. My theory is that Windows 3.1 had just come out a few months before the beginning of both projects, and this interface was the first to use blue prominently as a selection color, paving the way for blue to be used as a hyperlink color. 

Additionally, we know that Mosaic was inspired by ViolaWWW, and kept the same gray background and black text that ViolaWWW used for its interface. Reviewing Mosaic's release notes, we see in release 0.7 black text with underlines appearing as the preferred way of conveying hyperlinks, and we can infer that was still the case until something happened around mid-April, right before blue hyperlinks made their appearance in release 0.13. In fact, conveying links as black text with underlines had been the standard since 1985 with Windows 1.0, whose look and feel some once claimed Microsoft had stolen from Apple's Lisa.

I think the real reason why we have blue hyperlinks is simply because color monitors were becoming more popular around this time. Mosaic as a product also became popular, and blue hyperlinks went along for the ride. Mosaic came out during an important time when support for color monitors was shifting; the standard was for hyperlinks to use black text with some sort of underline, hover state or border. Mosaic chose to use blue, and they chose to port their browser for multiple operating systems. This helped Mosaic become the standard browser for internet use, and helped solidify its user interface as the default language for interacting with the web.

When Netscape and Internet Explorer were created, the blue hyperlink was already synonymous with the web and interaction. The blue link was now browser-agnostic and well on its way to becoming a symbol of what it means to use the internet.

Rhapsody in #0000FF  

It has been almost 30 years since Mosaic put the now ubiquitous blue in its release notes, but it is no longer the early 1990s. While it is quite fun to discover the secrets of how browsers are made, here in the present, we have accepted it as gospel truth that links can and should only be blue because these early pioneers said it should be so. 

When the hyperlink was created, limited colors were available. Today we have almost every color option, so what should be the default color and state of links on the internet? When given every opportunity to deviate from tradition, do we do so for the sake of progress, or should we keep the blue because it’s an established visual pattern?

If you are going to change the link color, here is my list of requirements for choosing the perfect link color:

  • Optimal text accessibility with the background color and surrounding text. Your design decisions shouldn’t be the reason a user can’t access content on a page.
  • Interactive states should always be styled in your stylesheets. Examples include: touch, visited, hover, active and focus (see the sketch after this list). 
  • Links and buttons should be large enough to tap or click. You can't be sure whether people are interacting with your content using styluses, fingers, mice or trackpads. It's your job to make sure your links are easy to navigate and have enough space around them.
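
To make the second point concrete, here is a minimal sketch of styling those states. The color values are just the traditional browser defaults, used here as placeholders; substitute values that meet your contrast requirements:

a {
  color: #0000ee;             /* the classic link blue */
  text-decoration: underline; /* a second cue besides color */
}

a:visited {
  color: #551a8b;             /* the traditional visited purple */
}

a:hover,
a:focus {
  outline: 2px solid currentColor; /* a visible hover/focus indicator */
}

a:active {
  color: #ee0000;             /* a common default active red */
}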

In closing, should all links be blue? Maybe so, or maybe not. There has been a long path of visual elements used to denote hyperlinks, and the color blue is just one of many elements that have come to represent a hyperlink. Links are about connecting information together. Always make sure that a hyperlink stands out from the rest of the surrounding content. Sometimes that means you need an underline, or a background color, or maybe just maybe, you need the color blue.

Major thanks and credit to my colleagues Asa Dotzler, Molly Howell, M.J. Kelly, Michael Hoye, and Damiano DeMonte for help with research and inspiration for this article.

The post Why are hyperlinks blue? appeared first on The Mozilla Blog.

Eitan IsaacsonHTML AQI Gauge

I needed a meter to tell me what the air quality is like outside. Now I know!

If you need one as well, or if you are looking for an accessible gauge for anything else, here you go.

You can also mess with it on Codepen.


Dennis SchubertWebCompat Tale: Touching Clickable Things

Did you know your finger is larger than one pixel? I mean, sure, your physical finger should always be larger than one pixel, unless your screen has a really low resolution. But did you know that when using Firefox for Android, your finger is actually 6x7 millimeters large? Now you do!

Unlike a pixel-perfect input device like a mouse or even a laptop's trackpad, your finger is weird. Not only is it all soft and squishy, it also actively obstructs your view when touching things on the screen. When you use a web browser and want to click on a link, it is surprisingly difficult to hit it accurately with the center of your fingertip, which is what your touchscreen driver sends to the browser. To help you out, your friendly Firefox for Android slightly enlarges the “touch point”.

Usually, this works fine and is completely transparent to users. Sometimes, however, it breaks things.

Here is an example of a clever CSS-only implementation of a menu with collapsible sub-navigation that I extracted from an actual Web Compatibility bug report I looked at earlier. Please do not actually use this, this is broken by design to make a point. :) Purely visual CSS declarations have been omitted for brevity.

Source:

<style>
  /* Hide all sub-menus by default. */
  #menu-demo li ul {
    display: none;
  }

  /* Reveal a sub-menu while its parent list item is hovered. */
  #menu-demo li:hover ul {
    display: block;
  }
</style>
<section id="menu-demo">
  <ul>
    <li><a href="#menu-demo">One</a></li>
    <li>
      <span>Two with Subnav</span>
      <ul>
        <li><a href="#menu-demo">Two &gt; One</a></li>
        <li><a href="#menu-demo">Two &gt; Two</a></li>
      </ul>
    </li>
    <li><a href="#menu-demo">Three</a></li>
  </ul>
</section>

Result:

Now, just imagine that on Desktop, this is a horizontal menu and not a vertical list, but I’m too lazy to write media queries right now. It works fine on Desktop. However, if you try this in Firefox for Android, you will find that it’s pretty much impossible to select the second entry, and you will just hit “One” or “Three” most of the time.

To understand what’s going on here, we have to talk about two things: the larger “finger radius” I explained earlier, and the rules by which Firefox detects the element the user probably wanted to click on.

Touch Point expansion

The current touch point expansion settings, as set by the ui.mouse.radius.* preferences in about:config, are: 5mm to the top; 3mm to the left; 3mm to the right; 2mm to the bottom. There probably is a good reason why the top/bottom expansion is asymmetric, and I assume this has something to do with viewing angles or how your finger is shaped, but I actually don’t know.
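
In about:config terms, that corresponds to something like the following (a sketch: the pref names are assumed from the ui.mouse.radius.* family mentioned above, with the values described in this post):

ui.mouse.radius.enabled  = true
ui.mouse.radius.topmm    = 5
ui.mouse.radius.leftmm   = 3
ui.mouse.radius.rightmm  = 3
ui.mouse.radius.bottommm = 2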

To visualize this, I prepared a little annotated screenshot of how this “looks like” on my testing Android device:

A screenshot of the live menu demo from above. A red dot in the middle of "Two with Subnav" marks the position where the user placed the middle of their finger, a blue border marks the outline of the area Firefox considers "touched". The blue outline spans well into the "One" menu item.

The red dot marks the center of the touch point, the blue outline marks the area as expanded by Firefox for Android. As you can see, the expanded touch area covers part of the previous menu item, “One”. If you’d try to touch lower on the item, then the bottom expansion will start to cover parts of the “Three” item. In this example, you have a 9px window to actually hit “Two with Subnav”. On my device, that’s roughly 0.9mm. Good luck with that!

With this expansion in mind, you might wonder why you’re not hitting the wrong items all the time. Fair question.

“Clickable” elements

Firefox doesn’t just click on every element inside this expanded area. Instead, Firefox tries to find the “clickable element closest to the original touch point”. If all three <li>s contained links, then this wouldn’t be an issue: links are clickable elements, and “Two with Subnav” would, without a doubt, be the closest. However, in this example, it’s not a link, and then the rules are a little bit more complicated.

Things Firefox considers “clickable” for the purpose of finding the right element:

  • <a>s.
  • <button>, <input>, <select>, <textarea>, <label>, and <iframe>.
  • Elements with JavaScript listeners for:
    • click, mousedown, mouseup
    • touchstart, touchend
    • pointerdown, pointerup
  • Elements with contenteditable="true".
  • Elements with role="button".
  • Elements with cursor: pointer assigned via CSS.

Unfortunately, none of the rules above are true for the “Two with Subnav” element in the example above. And this means that the “closest clickable element” to the touch point here is, well, “One”. And so, Firefox dispatches the click event to that one.

Matching any of the conditions, even simply changing the cursor via CSS, would provide the browser with enough context to do “the right thing” here.
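
For the demo above, a minimal sketch of such a fix could be a single extra rule (adding role="button" or a click listener to the <span> would work just as well):

#menu-demo li span {
  /* A pointer cursor is one of the signals that marks this element as
     "clickable" for Firefox's touch-point expansion. */
  cursor: pointer;
}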

Conclusion

This issue, once again, is one of those cases where I do not yet have a satisfying outcome. I wrote a message to the site’s authors, but given the site is based on a Joomla template from 2013, I do not have high hopes here. As for changes inside Firefox, we could treat elements with :hover styling and mouseover listeners as “clickable”, and I filed a bug to suggest as much, but I’m not yet convinced this is the right thing to do. From what I can tell, neither Chrome nor Safari do a similar expansion, so just dropping it from Firefox is another idea. But I kinda like the way it makes things better 99.9% of the time.

In any case, this serves as yet another reminder of why having semantically correct markup is important. Not only do attributes like role="button" on clickable elements help out anyone relying on accessibility features and tooling, browsers also depend on these kinds of hints. Use the tools you have, there’s a reason why the role attribute is part of the web. :)

Update from 2021-08-25

Good news! The developer responsible for the site has responded, and they did fix the issue by adding role="button" to the links. Huge success!

This Week In RustThis Week in Rust 405

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is kube-leader-election, a crate to implement leader election for Kubernetes workloads.

Thanks to hendrikmaus for the self-suggestion.

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

293 pull requests were merged in the last week

Rust Compiler Performance Triage

A few regressions but largely an improvement this week, mostly due to the upgrade to LLVM 13.

Triage done by @simulacrum. Revision range: aa8f27b..33fdb79

2 Regressions, 1 Improvement, 2 Mixed; 0 of them in rollups

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New RFCs

Upcoming Events

Online
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Apple

Wingback

PolarFox Network

Stealth Startup

Dusk Network

ChainSafe

Bitfury

Kollider

NZXT

Parity Technologies

Subspace Labs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Code doesn't deal with resources until it does. Similarly with everything else that forces you to reason about control flow - you don't care about thread management until you do, you don't care about action logs until you do, you don't care about performance until you do... and from the other side, code doesn't need to be exception-safe until it does. The trouble with this kind of "magic" language feature is that correctness becomes non-compositional: you can take two working pieces of code and put them together and get something that doesn't work.

Mickey Donaghy on Hacker News

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Mozilla Performance BlogPerformance Sheriff Newsletter (July 2021)

In July there were 105 alerts generated, resulting in 20 regression bugs being filed on average 6.6 days after the regressing change landed.

Welcome to the July 2021 edition of the performance sheriffing newsletter. Here you’ll find the usual summary of our sheriffing efficiency metrics, followed by some details on how we’re growing the test engineering team. If you’re interested (and if you have access) you can view the full dashboard.

Sheriffing efficiency

  • All alerts were triaged in an average of 1.4 days
  • 88% of alerts were triaged within 3 days
  • Valid regressions were associated with bugs in an average of 2.7 days
  • 89% of valid regressions were associated with bugs within 5 days

Sheriffing Efficiency (July 2021)

It's initially disappointing to see the alert-to-bug time increase last month; however, after some investigation it appears that a single alert has thrown this out. With fewer alerts (as we've seen over the last two months), any that exceed our targets have an increased impact on our average response times. In this case, it was this alert for a 2% glvideo regression. The backfill bot did trigger backfills for this job; however, the culprit commit still wasn't clear. Even after over 250 retriggers, the sheriff was unable to determine a culprit. Perhaps a better way to measure the effectiveness of the auto backfills would be to look at the average time from alert to bug only where we meet the threshold, filtering out alerts that are particularly challenging for our sheriffs.

Join the team!

I'm excited to share that the performance test engineering team is currently hiring! We have ambitious plans to modernise and unify our performance test frameworks, automate more of our regression sheriffing workflows, increase the accuracy and sensitivity of our regression detection, and support the culture of performance at Mozilla. By growing the team we hope to accelerate these efforts and to ensure every Firefox release performs better than the last.

We’re looking for candidates with 2-5 years software development experience. Whilst not a requirement, these roles would suit individuals with experience or interest in performance, profiling, data analysis, machine learning, TCP/IP, and web technologies. Experience with Python and JavaScript would also be beneficial as these will be used extensively in the role.

If you’re interested in helping us to build the future of performance testing at Mozilla, and to have a direct impact on the performance of Firefox, then please take a look over the following job descriptions:

Note that the first role is based in Toronto as we have a number of team members in this location, and currently feel that hiring in this location would provide a better opportunity for the successful candidate. The senior role is remote US and Canada.

You can learn more about the team on our wiki page, and if you’re interested in joining us, you can apply directly from the job descriptions above. If these positions don’t interest you, but you like the idea of working to keep the internet open and accessible to all, take a look over our other open positions.

Summary of alerts

Each month I’ll highlight the regressions and improvements found.

Note that whilst I usually allow one week to pass before generating the report, there are still alerts under investigation for the period covered in this article. This means that whilst I believe these metrics to be accurate at the time of writing, some of them may change over time.

I would love to hear your feedback on this article, the queries, the dashboard, or anything else related to performance sheriffing or performance testing. You can comment here, or find the team on Matrix in #perftest or #perfsheriffs.

The dashboard for July can be found here (for those with access).

Henri SivonenThe Text Encoding Submenu Is Gone

For context, please see Character Encoding Menu in 2014, Text Encoding Menu in 2021, and A Look at Encoding Detection and Encoding Menu Telemetry from Firefox 86.

Firefox 91 was released two weeks ago. This is the first release that does not have a Text Encoding submenu. Instead, the submenu has been replaced with a single menu item called Repair Text Encoding. It performs the action that was previously performed by the item Automatic in the Text Encoding submenu: it runs chardetng with UTF-8 as a permitted outcome, ignoring the top-level domain.

The Repair Text Encoding menu item is in the View menu, which is hidden by default on Windows and Linux. The action is also available as an optional toolbar button (invoke the context menu on empty space in the toolbar and choose Customize Toolbar…). On Windows and Linux, you can invoke the menu item from the keyboard by pressing the v key while holding the alt key and then pressing the c key. (The keys may vary with the localization.)

What Problem Does “Repair Text Encoding” Solve?

  1. Sometimes the declared encoding is wrong, and the Web Platform would become more brittle if we started second-guessing the declared encoding automatically without user action.

    The typical case is that university faculty have created content over the years that is still worthwhile to read, and the old content is in a legacy encoding. However, independently of the faculty, the administrator has, either explicitly or as a side effect of a server software update, caused the server configuration to claim UTF-8 server-wide even though this is wrong for old content. When the content is in the Latin script, the result is still readable. When the content is in a non-Latin script, the result is completely unreadable (without this feature).

  2. For non-Latin scripts, unlabeled UTF-8 is completely unreadable. Fixing this problem without requiring user action and without making the Web Platform more brittle is a hard problem. There is a separate write-up on that topic alone. This problem might get solved one day in a way that does not involve user action, but not today.

Why Remove the Other Submenu Items?

  • Supporting the specific manually-selectable encodings caused significant complexity in the HTML parser when trying to support the feature securely (i.e. not allowing certain encodings to be overridden). With the current approach, the parser only needs to know of one flag that forces chardetng (which the parser has to be able to run in other situations anyway) to run. Previously, the parser needed to keep track of a specific manually-specified encoding alongside the encoding information for the Web sources.

    Indeed, when implementing support for declaring the encoding via the bogo XML declaration, the above-mentioned complexity got in the way, and I wish I had replaced the menu with a single item before implementing the bogo XML declaration support. Now, I wanted to get rid of the complexity before aligning meta charset handling with WebKit and Blink.

  • Elaborate UI surface for a niche feature risks the whole feature getting removed, which is bad if the feature is still relevant to (a minority of) users. (Case in point: The Proton UI refresh removed the Text Encoding menu entry point from the hamburger menu.)

  • Telemetry showed users making a selection from the menu when the encoding of the page being overridden had come from a previous selection from the menu. This suggested that users aren’t that good at choosing correctly manually.

Why Not Remove the Whole Thing?

Chrome removed their menu altogether as part of what they called Project Eraser. (Amusingly, this led to a different department of Google publishing a support article about using other browsers to access this functionality.) Mobile versions of Chrome, Safari, and Firefox don't have the menu, either. So why not just follow Chrome?

Every time something in this area breaks intentionally or accidentally, feedback from Japan shows up relatively quickly. That’s the main reason why I believe users in Japan still care about having the ability to override the encoding of misconfigured pages. (That’s without articulating any particular numeric telemetry threshold for keeping the feature. However, telemetry confirms that the feature is relevant to the largest number of distinct telemetry submitters, both in absolute numbers and in region-total-relative numbers, in Japan.)

If we removed the feature, we'd remove a reason for these users to stay with Firefox. Safari and GNOME Web still have more elaborate encoding override UI built in (the list of encodings in both is questionably curated, but the lists satisfy the Japanese use cases), and there are extensions for Chrome.

Shouldn’t This Be an Extension?

The built-in UI action in Firefox is more discoverable, more usable, and safer against the user getting baited into self-XSS than the Chrome extensions. Retaining the safety properties but moving the UI to an extension would increase implementation complexity while reducing discoverability—i.e. would help fewer users at a higher cost.

Removing the engine feature and leaving to an extension to rewrite headers of the HTTP responses (as in Chrome) would:

  • Give Chrome an advantage on day one by the extension(s) for Chrome actually already existing.
  • Fail to help the users who don’t discover an extension.
  • Regress usability by about a decade due to the extension UI being unaware of what’s going on inside the engine.
  • Remove self-XSS protections.

The Talospace ProjectOpenPOWER Firefox JIT update

As of this afternoon, the Baseline Interpreter-only form of the OpenPOWER JIT (64-bit little-endian) now passes all of the JIT tests except for the Wasm ones, which are being actively worked on. Remember, this is just the first of the three phases and we need all three for the full benefit, but it already yields a noticeable boost in my internal tests over the C++ interpreter. The MVP is Baseline Interpreter and Wasm, so once it passes the Wasm tests as well, it's time to pull it current with 91ESR. You can help.

Data@MozillaThis Week in Glean: Why choosing the right data type for your metric matters

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

One of my favorite tasks that comes up in my day to day adventure at Mozilla is a chance to work with the data collected by this amazing Glean thing my team has developed. This chance often arises when an engineer needs to verify something, or a product manager needs a quick question answered. I am not a data scientist (and I always include that caveat when I provide a peek into the data), but I do understand how the data is collected, ingested, and organized and I can often guide people to the correct tools and techniques to find what they are looking for.

In this regard, I often encounter challenges in trying to read or analyze data, challenges that relate to another common task I find myself doing: advising engineering teams on how we intend Glean to be used and which metric types would best suit their needs. A recent example of this was a quick Q&A for a group of mobile engineers who all had similar questions. My teammate chutten and I were asked to explain the differences between Counter Metrics and Event Metrics, and to help them understand the situations where each is the most appropriate to use. It was a great session and I felt like the group came away with a deeper understanding of the Glean principles. But, after thinking about it afterwards, I realized that we do a lot of hand-wavy things when explaining why not to do things. Even in our documentation, we aren't very specific about the overhead of things like Event Metrics. For example, from the Glean documentation section regarding “Choosing a Metric Type”, in a warning about events:

“Important: events are the most expensive metric type to record, transmit, store and analyze, so they should be used sparingly, and only when none of the other metric types are sufficient for answering your question.”

This is sufficiently scary to make me think twice about using events! But what exactly do we mean by “they are the most expensive”? What about recording, transmitting, storing, and analyzing makes them “expensive”? Well, that’s what I hope to dive into a little deeper with some real numbers and examples, rather than using scary hand-wavy words like “expensive” and “should be used sparingly”. I’ll mostly be focusing on events here, since they contain the “scariest” warning. So, without further ado, let’s take a look at some real comparisons between metric types, and what challenges someone looking at that data may encounter when trying to answer questions about it or with it.

Our claim is that events are expensive to record, store and transmit, so let’s start by examining that a little closer. The primary API surface for the Event Metric Type in Glean is the record() function. This function also takes an optional collection of “extra” information in a key-value shape, which is supposed to be used to record additional state that is important to the event. The “extras”, along with the category, name, and (relative) timestamp, make up the data that gets recorded, stored, and eventually transmitted to the ingestion pipeline for storage in the data warehouse.
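
For illustration, here is a minimal sketch of what recording such an event looks like through a Glean SDK, in this case JavaScript (the generated events module and the exact extras shape are assumptions for illustration, not Fenix’s actual code):

// Hypothetical module generated by glean_parser from metrics.yaml.
import * as events from "./generated/events";

// Each record() call appends one full event object – category, name,
// relative timestamp, and any extras – to the pending Events Ping.
events.enteredUrl.record({ autocomplete: "false" });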

Since Glean is built with Rust and then provides SDKs in various target languages, one of the first things we have to do is serialize the data from the shiny target language object that Glean generates into something we can pass into the Rust that is at the heart of Glean. It is worth noting that the Glean JavaScript SDK does this a little differently, but the same ideas should apply for events. A similar structure is used to store the data and then transmit it to the telemetry endpoint when the Events Ping is assembled. A real-world example of this serialized event, coming from Fenix’s “Entered URL” action, looks like this JSON:

{
  "category": "events",
  "extra": {
    "autocomplete": "false"
  },
  "name": "entered_url",
  "timestamp": 33191
}

A similar amount of data would be generated every time the metric was recorded, stored and transmitted. So, if the user entered 10 URLs, then we would record this same thing 10 times, each with a different relative timestamp. To take a quick look at how this affects using this data for analysis: if I only needed to know how many users interacted with this feature and how often, I would have to count each event with this category and name for every user. To complicate the analysis a bit further, Glean doesn’t transmit events one at a time; it collects all events during a “session” (or until it hits 500 recorded events) and transmits them as an array within an Event Ping. This Event Ping then becomes a single row in the data, and nested in a column we find the array of events. In order to even count the events, I would need to “unnest” them and flatten out the data. This involves cross joining each event in the array back to the parent ping record just to get at the category, name, timestamp and extras. We end up with some SQL that looks like this (WARNING: this is just an example. Don’t try this – it could be expensive, and it shouldn’t even run because I left out the filter on the submission date):

SELECT *
FROM fenix
CROSS JOIN UNNEST (events) AS event

For an average day in Fenix we see 75-80 million Event Pings from clients on our release version, with an average of a little over 8 events per ping. That adds up to over 600 million events per day, and just for Fenix! So when we do this little bit of SQL flattening of the data structure, we end up manipulating over a half a billion records for a single day, and that adds up really quickly if you start looking at more than one day at a time. This can take a lot of computer horsepower, both in processing the query and in trying to display the results in some visual representation. Now that I have the events flattened out, I can finally filter for the category and name of the event I am looking for and count how many of that specific event is present. Using the Fenix event “entered_url” from above, I end up with something like this to count the number of clients and events:

SELECT
  COUNT(DISTINCT client_info.client_id) AS client_count,
  COUNT(*) AS event_count,
  DATE(submission_timestamp) AS event_date
FROM
  fenix.events
CROSS JOIN
  UNNEST(events.events) AS event -- Yup, event.events, naming=hard
WHERE
  submission_timestamp >= '2021-08-12'
  AND event.category = 'events'
  AND event.name = 'entered_url'
GROUP BY
  event_date
ORDER BY
  event_date

Our query engine is pretty good – this only takes about 8 seconds to process, and it has narrowed down the data it needs to scan to a paltry 150 GB – but this is a very simple analysis of the data involved. I didn’t even dig into the “extra” information, which would require yet another level of flattening, UNNESTing the “extras” array stored in each individual event.

As you can see, this explodes pretty quickly into some big datasets for just counting things. Don’t get me wrong, this is all very useful if you need to know the sequence of events that led the client to entering a URL, that’s what events are for after all. To be fair, our lovely Data Engineering folks have taken the time and trouble to create views where these events are already unnested, and so I could have avoided doing it manually and instead use the automatically flattened dataset. I wanted to better illustrate the additional complexity that goes on downstream from events and working with the “raw” data seemed the best way to do this.

If we really just need to know how many clients interact with a feature and how often, then a much lighter weight alternative recommended by the Glean team would be a Counter Metric. To return to what the data representation of this looks like, we can look at an internal Glean metric that counts the number of times Fenix enters the foreground per day (since the metrics ping is sent once per day). It looks like this:

"counter": {
"glean.validation.foreground_count": 1
}
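
On the recording side this is a single add() call per occurrence; a minimal sketch, again assuming a hypothetical generated module:

// However many times this runs, storage keeps one integer for the
// current ping period rather than one record per call.
import * as validation from "./generated/gleanValidation";

validation.foregroundCount.add(1);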

No matter how many times we add() to this metric, it will always take up the same amount of space in the payload shown above; only the value changes. So, we don’t end up with one record per event, but a single value that represents the count of the interactions. When I go to query this to find out how many clients were involved and how many times the app moved to the foreground of the device, I can do something like this in SQL (without all the UNNESTing):

SELECT
  COUNT(DISTINCT client_info.client_id) AS client_count,
  SUM(m.metrics.counter.glean_validation_foreground_count) AS foreground_count,
  DATE(submission_timestamp) AS event_date
FROM
  org_mozilla_firefox.metrics AS m
WHERE
  submission_timestamp >= '2021-08-12'
GROUP BY
  event_date
ORDER BY
  event_date

This runs in just under 7 seconds, but the query only has to scan about 5 GB of data instead of the 150 GB we saw with the event. And, for comparison, there were only about 8 million of those entered_url events per day compared to 80 million foreground occurrences per day. Even with ten times as many occurrences, the query that used the Counter Metric Type scanned 1/30th the amount of data. It is also fairly obvious which query is easier to understand. The foreground count is just a numeric counter value stored in a single row in the database along with all of the other metrics that are collected and sent on the daily metrics ping, and it ultimately results in selecting a single column value. Rather than having to unnest arrays and then count them, I can simply SUM the values stored in the counter’s column to get my result.

Events do serve a beautiful purpose, like building an onboarding funnel to determine how well we retain users and which onboarding paths drive that retention. We can’t do that with counters, because they don’t have the richness needed to show the flow of interactions through the app. Counters also serve a purpose, and can answer questions about the usage of a feature with very little overhead. I just hope that as you read this, you will consider what questions you need to answer, and remember that there is probably a well-suited Glean Metric Type just for your purpose – and if there isn’t, you can always request a new metric type! The Glean Team wants you to get the most out of your data while being true to our lean data practices, and we are always available to discuss which metric type is right for your situation if you have any questions.

Support.Mozilla.OrgWhat’s up with SUMO – August 2021

Hey SUMO folks,

Summer is here. Despite the current situation of the world, I hope you can still enjoy a bit of sunshine and breezy air wherever you are. And while vacations are being planned, SUMO is still busy with lots of projects and releases. So let’s get the recap started!

Welcome on board!

  1. Welcome Julie and Rowan! Thank you for diving into the KB world.

Community news

  • One of our goals for Q3 this year is to revamp the onboarding experience for contributors, focused on the /get-involved page. To support this work, we’re currently conducting a survey to understand how effective the current onboarding information we provide is. Please fill out the survey if you haven’t, and share it with your community and fellow contributors!
  • No Kitsune update for this month. Check out SUMO Engineering Board instead to see what the team is currently doing.

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in July!
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats

KB

KB pageviews (*)

Month     Page views   Vs previous month
Jul 2021  8,237,410    -10.81%
* The KB pageviews number is the total of KB pageviews for /en-US/ only

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Michele Rodaro
  3. Pierre Mozinet
  4. Romado33
  5. K_alex

KB Localization

Top 10 locale based on total page views

Locale  Apr 2021 pageviews (*)  Localization progress (per Jul 9) (**)
de      8.62%                   99%
zh-CN   6.92%                   100%
pt-BR   6.32%                   64%
es      6.22%                   45%
fr      5.70%                   91%
ja      4.13%                   55%
ru      3.61%                   99%
it      2.08%                   100%
pl      2.00%                   84%
zh-TW   1.44%                   6%
* Locale pageviews is the overall share of pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale

Top 5 localization contributors in the last 90 days: 

  1. Milupo
  2. Soucet
  3. Jim Spentzos
  4. Michele Rodaro
  5. Mark Heijl

Forum Support

Forum stats

Month     Total questions   Answer rate within 72 hrs   Solved rate within 72 hrs   Forum helpfulness
Jul 2021  3175              72.13%                      15.02%                      81.82%

Top 5 forum contributors in the last 90 days: 

  1. FredMcD
  2. Cor-el
  3. Jscher2000
  4. Seburo
  5. Sfhowes

Social Support

Jul 2021
Channel          Total conv   Conv interacted
@firefox         2967         341
@FirefoxSupport  386          270

Top 5 contributors in Q1 2021

  1. Christophe Villeneuve
  2. Andrew Truong
  3. Pravin

Play Store Support

We don’t have enough data for the Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

  • FX for Android V91 (August 10)
  • FX for iOS V36 (August 10)
    • Fixes: Tab preview not showing in tab tray

Other products / Experiments

  • Mozilla VPN V2.5 (September 8)
    • Multi-hop: routes your traffic through multiple VPN servers. This VPN server chaining method gives extra security and privacy.
    • Support for local DNS: if needed, you can set a custom DNS server to use while the Mozilla VPN is on.
    • Getting help if you cannot sign in: ‘get support’ improvements.

Upcoming Releases

  • FX Desktop 92, FX Android 92, FX iOS V37 (September 7)
  • Updates to FX Focus (October)

Shout-outs!

  • Thanks to Felipe Koji for his great work on Social Support.
  • Thanks to Seburo for constantly championing support for Firefox mobile.

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

Hacks.Mozilla.OrgSpring cleaning MDN: Part 2

An illustration of a blue coloured dinosaur sweeping with a broom

Illustration by Daryl Alexsy

The bags have been filled up with all the things we’re ready to let go of and it’s time to take them to the charity shop.

Archiving content

Last month we removed a bunch of content from MDN. MDN is 16 years old (and yes, it can drink in some countries); all that time ago, it was a great place for all of Mozilla to document all of their things. As MDN evolved and the web reference became our core content, other areas became less relevant to the overall site. We have ~11k active pages on MDN, so keeping them up to date is a big task, and we feel our focus should be there.

This was a big decision and had been in the works for over a year. It actually started before we moved MDN content to GitHub. You may have noticed a banner every now and again, saying certain pages weren’t maintained. Various topics were removed including all Firefox (inc. Gecko) docs, which you can now find here. Mercurial, Spidermonkey, Thunderbird, Rhino and XUL were also included in the archive.

So where is the content now?

It’s saved – it’s in this repo. We haven’t actually deleted it completely. Some of it is being re-hosted by various teams, and we have the ability to redirect to those new places. It’s saved in both its rendered state and the raw wiki form. Just. In. Case.

The post Spring cleaning MDN: Part 2 appeared first on Mozilla Hacks - the Web developer blog.

Cameron KaiserUnplanned Floodgap downtime

Floodgap is down due to an upstream circuit cut and TenFourFox users may get timeouts when checking versions. All Floodgap services including web, gopher and E-mail are affected. The telco is on it, but I have no ETA for repair. If the downtime will be prolonged, I may host some services temporarily on a VPS.

William LachancePython dependency gotchas: always go to the source

Getting back into the swing of things at Mozilla after my extended break. I’m currently working on enhancing and extending Looker support for Glean-based applications, which eventually led me back to working on bigquery-etl, our framework for creating derived datasets in our data lake.

I spent some time working on improving the initial developer experience of bigquery-etl early this year, so I figured it would be no problem to get going again despite an extended hiatus from it (I think it’s probably been ~2–3 months since I last touched it). Unfortunately the first thing I got after creating a fresh virtual environment (to pick up the new dependency updates) was this exciting looking error:

wlach@antwerp bigquery-etl % ./bqetl --help
Traceback (most recent call last):
  ...
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/google/cloud/bigquery_v2/types/__init__.py", line 16, in <module>
    from .encryption_config import EncryptionConfiguration
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/google/cloud/bigquery_v2/types/encryption_config.py", line 26, in <module>
    class EncryptionConfiguration(proto.Message):
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/proto/message.py", line 200, in __new__
    file_info = _file_info._FileInfo.maybe_add_descriptor(filename, package)
  File "/Users/wlach/src/bigquery-etl/venv/lib/python3.9/site-packages/proto/_file_info.py", line 42, in maybe_add_descriptor
    descriptor=descriptor_pb2.FileDescriptorProto(
TypeError: descriptor to field 'google.protobuf.FileDescriptorProto.name' doesn't apply to 'FileDescriptorProto' object

What I did

Since we have pretty decent continuous integration at Mozilla, when I see an error like this I am usually pretty sure it’s some kind of strange interaction between my local development environment and whatever dependencies we’ve specified for the repository in question. Usually these problems are pretty easy to solve.

The first thing I tried was to type the error into Google, to see if this had come up for anyone else before. I tried several variations of TypeError: descriptor to field and FileDescriptorProto and nothing really turned up. This strategy almost always turns up something; when it doesn’t, it usually indicates that something pretty strange is happening.

To see if this was a strange problem particular to us, I asked on our internal channel but no one had offhand seen or heard of this error either. One of my colleagues (who had a working setup on a Mac, the same environment I was using) suggested I set up pyenv to isolate my development environment, which was a good idea but did not seem to solve the problem: both Python 3.8 and 3.9 installed via pyenv ran into the exact same issue.

After flailing around trying a number of other failed approaches (maybe I need to upgrade the version of virtualenv that we’re using?), I broke down and looked harder at the error itself. It seemed to be some kind of typing error in Google’s protobuf library, which google-cloud-bigquery is calling. If this sort of thing was happening to everyone, we probably would have seen it happening more broadly. So my guess, again, was that it was happening due to an obscure interaction between some variable on my machine and this particular combination of dependencies.

At this point, I systematically went through our set of python dependencies to see what might be the matter. For the most part, I found nothing surprising or suspicious. google-api-core was at the latest version, as was google-cloud-bigquery. However, I did notice that the version of protobuf we were using was a little older (3.15.8 when the latest “official” version on pypi was 3.17.3).

It seemed like a longshot that the problem was there, but it seemed like upgrading the dependency was worth a try just in case. So I bumped the version of protobuf to the latest version in my local checkout (pip install protobuf==3.17.3)…

… and sure enough, after doing so, the problem was fixed and ./bqetl --help started working again:

wlach@antwerp bigquery-etl % ./bqetl --help
Usage: bqetl [OPTIONS] COMMAND [ARGS]...

  CLI tools for working with bigquery-etl.

...

After doing so, I did up a quick pull request and the problem is now fixed, at least for me.

It’s a bit unfortunate that dependabot (which we have configured for this repository) didn’t send an update for protobuf, which would have fixed this problem earlier.1 It seems like it’s not completely reliable for python packages, for whatever reason: I have also noticed this problem with mozregression.

I suspect (though can’t confirm) that the problem here is a backwards-incompatible change made to either protobuf or one of the packages that uses it. However, the nature of the incompatibility seems subtle: bigquery-etl works fine with the old set of dependencies we run in continuous integration, and it appears to only come up in specific circumstances (i.e. mine). Unfortunately, I need to get back to what I was actually planning to work on and don’t have time to unwind the rather complex set of interactions going on here. Maybe later!

What I would have done differently

This kind of illustrates (again) to me that while some shortcuts and heuristics can save a bunch of time and mental effort (Googling things all the time is basically standard practice in the industry at this point), sometimes you really just need to start a little closer at the problem to find a solution. I was hesitant to do this in this case because I’m never sure where those kinds of rabbit holes are going to take me (e.g. I spent several days debugging a bad interaction between Kubernetes and our airflow cluster in late 2019 with not much to show for the effort), but often all it takes is understanding the general shape of the problem to move you to a quick solution.

Other lessons

Here’s a couple of other things this experience reinforced for me (these are more subjective, take them or leave them):

  • Local development environments are kind of a waste of time. The above work took me several hours and it’s going to result in ~zero user-visible improvements for anyone outside of Mozilla Data Engineering. I’m excited about the potential productivity improvements that might come from using tools like GitHub Codespaces.
  • While I can’t confirm this was the source of the problem in this particular case, in general backwards compatibility on every level is super important when your software has broad reach and doubly so if it’s a widely-used dependency of other software (and is thus hard to reason about in isolation). In these cases, what seems like a trivial change (e.g. improving the type signatures inside a Python library) can squander many hours of people’s time if you’re not careful. Backwards-incompatible changes, however innocuous they may seem, should always invoke a major version bump.
  • Likewise, bugs in software that have broad usage (like dependabot) can have big downstream impacts. If dependabot’s version bumping for python was more reliable, we likely wouldn’t have had this problem. The glass-half-full interpretation of this is that fixing these types of issues would have an outsized benefit for the commons.
  1. As an aside, the main reason we use dependabot and aggressively update packages like google-api-core is due to a bug in pip

Mozilla Localization (L10N)L10n Report: August 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

New content and projects

What’s new or coming up in Firefox desktop

In terms of new content, it’s been a pretty calm period for Firefox after the MR1 release, with less than 50 strings added over the last 6 weeks. We expect that to change in the coming weeks, starting with a few clean-ups that didn’t land in time for MR1, and brand new features.

These are the relevant deadlines for the next month:

  • Firefox 91 shipped last Tuesday (August 10), and we welcomed a new locale with it: Scots.
  • The deadline to localize Firefox 92 is August 29 (release will happen on September 7), while Firefox 93 just started its life cycle in Nightly.

A reminder that Firefox 91 is also the new ESR, and will be supported for about 1 year. We plan to update localizations for 91 ESR in a few weeks, to improve coverage and pick up some bug fixes.

What’s new or coming up in mobile

We have exciting news coming up on the mobile front. In case you haven’t heard yet, we just brought back Focus for iOS and Focus for Android to Pontoon for localization. We are eager to bring back these products to a global audience with updated translations!

Both Focus for Android and Focus for iOS should have all strings in by August 17th. The l10n deadline for both localizing and testing your work is September 6th. One difference you will notice is that Focus for iOS strings will be trickling in regularly – unlike what we usually do for Firefox for iOS, where you get all strings in one bulk.

Concerning Firefox for Android and Firefox for iOS: both projects are going to start landing strings for the next release, which promises to be a very interesting one. More info to come soon, please stay tuned on Matrix and Discourse for this!

What’s new or coming up in web projects

mozilla.org

A set of VPN pages landed recently. As the Mozilla VPN product expands to more markets, it would be great to get these pages localized. Do plan to take some time and work as a team to complete the 4000+ words of new content. The pages contain some basic information on what distinguishes Mozilla’s VPN from others on the market, and you will find them useful for spreading the word and promoting the product in your language.

There will be a couple of new projects on the horizon. Announcements will be made through  Discourse and Matrix.

Newly published localizer facing documentation

Events

Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Opportunities

International Translation Day

Call for community translator or manager as a panelist to represent the Mozilla l10n community:

As part of Translation Day 2021, the WordPress Polyglots team is organizing a handful of global events (in English) from Sept. 17 – 30, 2021. The planning team is still deciding on the format and dates for these events, but they will be virtual/online and accessible to anyone who’s interested. One of the events the team is putting together is a panel discussion between contributors from multiple open source or community-led translation projects. If you or anyone in your community would be interested in talking about your experience as a community translator and how translations work in your community or project, you would be a great fit!

Check out what the organizer and the communities were able to accomplish last year and what they are planning for this year. The panel discussion would involve localization contributors like you from other open source communities, sharing their experiences on the tools, process and creative ways to collaborate during the pandemic. We hope some of you can take the opportunity to share and learn.

Even if you are not able to participate in the event, maybe you can organize a virtual meeting within the community, meet and greet and celebrate this special day together.

Friends of the Lion

  • Congratulations to Temitope Olajide from the Yoruba l10n community, for your excellent job completing the Terminology project! Image by Elio Qoshi

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

  • If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Aaron KlotzAll Good Things...

Today is my final day as an employee of Mozilla Corporation.

My first patch landed in Firefox 19, and my final patch as an employee has landed in Nightly for Firefox 93.

I’ll be moving on to something new in a few weeks’ time, but for now, I’d just like to say this:

My time at Mozilla has made me into a better software developer, a better leader, and more importantly, a better person.

I’d like to thank all the Mozillians whom I have interacted with over the years for their contributions to making that happen.

I will continue to update this blog with catch-up posts describing my Mozilla work, though I am unsure what content I will be able to contribute beyond that. Time will tell!

Until next time…

Firefox NightlyThese Weeks in Firefox: Issue 98

Highlights

Friends of the Firefox team

For contributions from July 28th to August 10th 2021, inclusive.

Introductions/Shout-Outs

Resolved bugs (excluding employees)

Fixed more than one bug

  • Antonin Loubiere
  • Ava Katushka
  • Kajal Sah
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Removed an old about:config pref (extensions.webextPermissionPrompts) that, if changed, made AMO fall back to a behavior that hasn’t been officially supported for a long time. We introduced the pref while the extensions permission prompt was being implemented, and by now it was only used in a few tests (without serving any other purpose) – Bug 1720845
WebExtensions Framework
  • Fixed an issue where an extension’s devtools panel was disabled when the same extension requested an unrelated optional permission. (Affected users could still re-enable the devtools panel from the DevTools Settings panel UI without reinstalling the extension, but not all users are aware of that, so we also uplifted the fix to Firefox 91.) – Bug 1722145
  • Fixed a regression related to stricter validation of Manifest Version 3 manifest keys used in Manifest Version 2 extensions. Some extension developers had submitted extensions to AMO whose manifest.json files included Manifest Version 3 keys (which are unsupported in practice) alongside the Manifest Version 2 keys, leveraging the fact that we only warn on unexpected top-level manifest keys. The fix (landed in Firefox 92 and uplifted to Firefox 91) ensures that we still only warn on those unexpected manifest keys, even when they are defined for Manifest Version 3 extensions – Bug 1722966
  • Thanks to Florian, starting from Firefox 92 the Firefox Profiler will show markers for the extensions API calls and API events. This is going to be really helpful to more easily investigate perf issues related to the WebExtensions APIs (and also for non-perf issues)! – Bug 1716343

Screenshot example of the new Extension API profiler markers (bugzilla comment with the profile link)

Downloads Panel

  • Lots of updates from Outreachy intern, Ava (available behind the browser.download.improvements_to_download_panel pref):
    • The “Save file as” dialog shows instead of the “UnknownContentType” dialog when the user has Firefox configured to always ask where to save files (bug 1719901)
    • Fixed an issue where saving files directly to disk would immediately open the file when it’s finished downloading (bug 1719900)
    • Updated telemetry for opening a file via clicking an in-progress download from the Downloads Panel (bug 1718782)
    • Files opened by an application are saved to the user configured directory (bug 1714107)
    • Continuing work on UX for preventing download spam (bug 1711049)

Form Autofill

Password Manager

Performance

Performance Tools

  • Firefox Profiler now supports importing dhat profiles. Example profile: https://share.firefox.dev/2VLegU9
  • Firefox Profiler creates more compact URLs for profiles with lots of tracks now.
  • You can use the “*” key on your keyboard to expand all call nodes in the call tree.
  • Now screenshots will let you know when a window is being destroyed. Example profile: https://share.firefox.dev/3iGwFJl

Screenshot example of a second and third screenshot window being destroyed at specific times.

  • An issue related to the activity graph that makes it blurry after resize has been fixed.
  • Enabled 4 more locales: el, es-CL, ia and zh-CN. We now have 10 locales in total enabled in production on profiler.firefox.com! Thanks to everyone who contributed to the localization work!

Proton/MR1

Search and Navigation

  • Drew worked on a patch to experimentally allow the heuristic result to be hidden (Bug 1723158). This enables future experimental work with surfacing relevant “Top Hit” results.

Below the fold

  • [gijs] Referencing chrome: or resource: URIs that aren’t packaged will now cause crashes in tests, which will get you backed out if landing new instances of those problems. This should help automation catch dangling references when images/CSS/JS files get removed, typos when adding new stuff, etc.
    • Obviously this only helps if you have automated tests exercising the code in question, which you should!
  • [mconley] Check out the Firefox Engineering Show & Tell that jaws and AnnyG organized! Lots of good stuff there.

Mozilla ThunderbirdThunderbird 91 Available Now

The newest stable release of Thunderbird, version 91, is available for download on our website now. Existing Thunderbird users will be updated to the newest version in the coming weeks.

Thunderbird 91 is our biggest release in years with a ton of new features, bug fixes and polish across the app. This past year had its challenges for the Thunderbird team, our community and our users. But in the midst of a global pandemic, the important role that email plays in our lives became even more obvious. Our team was blown away by the support we received in terms of donations and open source contributions and we extend a big thanks to everyone who helped out Thunderbird in the lead up to this release.

There are a ton of changes in the new Thunderbird, you can see them all in the release notes. In this post we’ll focus on the most notable and visible ones.

Multi-Process Support (Faster Thunderbird)

Thunderbird has gotten faster with multi-process support. The new multi-process Thunderbird takes better advantage of the processor in your computer by splitting up the application into multiple smaller processes instead of running as one large one. That’s a lot of geekspeak to say that Thunderbird 91 will feel like it got a speed boost.

New Account Setup

One of the most noticeable changes for Thunderbird 91 is the new account setup wizard. The new wizard not only features a better look, but does auto-discovery of calendars and address books and allows most users to set them up with just a click. After setting up an account, the wizard also points users at additional (optional) things to do – such as adding a signature or setting up end-to-end encryption.

The New Account Setup Wizard

Attachments Pane + Drag-and-Drop Overlay

The attachments pane has been moved to the bottom of the compose window for better visibility of filenames as well as being able to see many at once. We’ve also added an overlay that appears when you drag-and-drop a file into the compose window asking how you would like to handle the file in that email (such as putting a picture in-line in your message or simply attaching it to the email).

The Thunderbird compose window with the attachment pane at the bottom.

The new attachment drag-and-drop overlay.

PDF Viewer

Thunderbird now has a built-in PDF viewer, which means you can read and even do some editing on PDFs sent to you as attachments. You can do all this without ever leaving Thunderbird, allowing you to return to your inbox without missing a beat.

The PDF Viewer in Thunderbird 91

UI Density Control

Depending on how you use Thunderbird and whether you are using it on a large desktop monitor or a small laptop touchscreen, you may want the icons and text of the interface to be larger and more spread out, or very compact. Under View -> Density in the menu, Thunderbird 91 lets you select the UI density for the entire application. Three options are available: compact, which puts everything closer together; normal, the experience you are accustomed to in Thunderbird; and touch, which makes icons bigger and adds separation between elements.

Play around with this new level of control and find what works best for you!

UI density control options

Calendar Sidebar Improvements

Managing multiple calendars has been made easier with the calendar sidebar improvements in this release. There is a quick enable button for disabled calendars, as well as a show/hide icon for easily toggling what calendars are visible. There is also a lock indicator for read-only calendars. Additionally, although not a sidebar improvement, there are now better color accents to highlight the current day in the calendar.

The improved calendar sidebar.

Better Dark Theme

Thunderbird’s Dark Theme got even better in this release. In the past some windows and dialogues looked a bit out of place if you had Thunderbird’s dark theme selected. Now almost every dialogue and window in Thunderbird is fully styled to respect the user’s color scheme preferences.

Dark Theme

Other Notable Mentions

You really have to scroll through the release notes, as there are a lot of little changes that make Thunderbird 91 feel really polished.

The Talospace ProjectFirefox 91 on POWER fur the fowk

Firefox 91 is out. Yes, it further improves cookie isolation and cleanup, has faster paint scheduling (noticeably, in some cases), and new JavaScript and DOM support. But for my money, the biggest news is the Scots support: aye, laddie, noo ye kin stravaig the wab lik Robert Burns did. We've waited tae lang fur this.

Anyway, Firefox 91 builds oot o the kist oa, er, Firefox 91 builds out of the box on OpenPOWER using the same .mozconfigs for Firefox 90; I made a wee change to the PGO-LTO patch since I messed up the diff the last time and didn't notice. The crypto issues in Fx90 are fixed in this release.

Meanwhile, the OpenPOWER JIT is now passing all but a handful of the basic tests in Baseline Interpreter mode, and some amount of Wasm, though this isn't nearly as far along. Ye kin hulp.

Firefox NightlyThese Weeks in Firefox: Issue 97

Highlights

  • Background updater has started rolling out on Release!
    • This means users on Windows with Firefox installed that haven’t opened it in a while will get the latest Firefox when they start it!
  • The WebExtensions team has landed more work related to supporting Manifest Version 3:
    • The team landed some more changes related to the Extensions background service workers – Bug 1638097, Bug 1638099 (part of the ongoing work related to the Manifest Version 3 WebExtensions)
    • Introduced support for the host_permissions manifest.json key – Bug 1693385
  • Some great accessibility fixes have recently landed:
    • Morgan (from the Accessibility Team) fixed a problem with macOS VoiceOver not properly announcing switch-to-tab results – Bug 1716828
    • Harry fixed the appearance of the address and search bars in High Contrast mode on macOS –  Bug 1711261

Friends of the Firefox team

For contributions from July 13th to July 27th 2021, inclusive.

Introductions/Shout-Outs

Resolved bugs (excluding employees)
Fixed more than one bug
  • Jonas Jenwald [:Snuffleupagus]
  • Kajal Sah
  • karim
New contributors (🌟 = first patch)


Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed an issue that was preventing extension content scripts from successfully creating websockets on webpages using an upgrade-insecure-requests CSP directive – Bug 1676024
  • Prevented QuotaManager from selecting extension origins as the least-active origins whose data gets evicted when free disk space is below certain thresholds – Bug 1720487 (landed in Nightly 92 and uplifted to Beta 91)


WebExtension APIs
  • Starting from Firefox 92, an extension will be able to access a tab’s url, title and favIconUrl when the extension has the required host permissions (and without requesting the broader “tabs” permission) – Bug 1690613. Thanks Karim for contributing this enhancement.
  • Also in Firefox >= 92, the downloads API supports downloading with cookies from a given container (as well as searching and erasing downloads by cookieStoreId) – Bug 1669566. Thanks again to Karim for contributing this additional enhancement!

Downloads Panel

  • Ava is exploring how we can detect potential download spam triggered by websites and providing some sort of UX for users to either “allow” or “block” these download requests (Bug 1711049)

Fission

  • The Fission team is working on a “process count experiment” to try increasing the number of content processes per site
  • The Fission team is also working on launching a “tab unloader experiment” to try unloading background tabs that haven’t been used in a while (and aren’t running audio, video or WebRTC, among other heuristics) to reclaim memory and processing power

Form Autofill

Installer & Updater

macOS Spotlight

  • Improved dark mode support landed in Release 91!
  • A fix will land soon for an issue where the window stoplight buttons were positioned incorrectly in RTL locales (bug 1419375).
  • Work continues on reducing video power usage and improving memory pressure notifications.

Search and Navigation

  • Harry changed the new tab page search bar hand-off to enter Search Mode when the address bar would not usually execute a search, or return search suggestions – Bug 1713827

Screenshots

  • Kajal has landed patches that create the foundation for the browser component switch
  • Kajal also migrated the existing icons into the new folder
  • Sfoster and Kajal have been working on creating a tab dialog overlay and actors to push the project into the next stage, and prototyping the overlay.

Hacks.Mozilla.OrgHopping on Firefox 91

August is already here, which means so is Firefox 91! This release adds a Scots locale and, if the ‘increased contrast’ setting is checked, automatically enables High Contrast mode on macOS.

Private browsing windows have an HTTPS-first policy and will automatically attempt to make all connections to websites secure. Connections will fall back to HTTP if the website does not support HTTPS.

For developers, Firefox 91 supports the Visual Viewport API and brings some additions to the Intl.DateTimeFormat object.

This blog post provides merely a set of highlights; for all the details, check out the following:

Visual Viewport API

Implemented back in Firefox 63, the Visual Viewport API was kept behind the pref dom.visualviewport.enabled in the desktop release. It is now enabled by default, meaning the API is supported in all major browsers.

There are two viewports on the mobile web, the layout viewport and the visual viewport. The layout viewport covers all the elements on a page and the visual viewport represents what is actually visible on screen. If a keyboard appears on screen, the visual viewport dimensions will shrink, but the layout viewport will remain the same.

This API gives you information about the size, offset and scale of the visual viewport and allows you to listen for resize and scroll events. You access it via the visualViewport property of the window interface.

This simple example listens for the resize event and, when a user zooms in, hides an element in the layout so as not to clutter the interface.

const elToHide = document.getElementById('to-hide');
const viewport = window.visualViewport;

function resizeHandler() {
  if (viewport.scale > 1.3) {
    elToHide.style.display = "none";
  } else {
    elToHide.style.display = "block";
  }
}

viewport.addEventListener('resize', resizeHandler);

New formats for Intl.DateTimeFormat

A couple of updates to the Intl.DateTimeFormat object include new timeZoneName options for formatting how a timezone is displayed. These include the localized GMT formats shortOffset and longOffset, and generic non-location formats shortGeneric and longGeneric. The below code shows all the different options for the timeZoneName and their format.

const date = Date.UTC(2021, 11, 17, 3, 0, 42);
const timezoneNames = ['short', 'long', 'shortOffset', 'longOffset', 'shortGeneric', 'longGeneric'];

for (const zoneName of timezoneNames) {
  // Build a formatter for each timeZoneName option and print the result.
  const formatter = new Intl.DateTimeFormat('en-US', {
    timeZone: 'America/Los_Angeles',
    timeZoneName: zoneName,
  });

  console.log(zoneName + ": " + formatter.format(date));
}

// expected output:
// > "short: 12/16/2021, PST"
// > "long: 12/16/2021, Pacific Standard Time"
// > "shortOffset: 12/16/2021, GMT-8"
// > "longOffset: 12/16/2021, GMT-08:00"
// > "shortGeneric: 12/16/2021, PT"
// > "longGeneric: 12/16/2021, Pacific Time"

You can now format date ranges as well with the new formatRange() and formatRangeToParts() methods. The former returns a localized and formatted string for the range between two Date objects:

const options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };

const startDate = new Date(Date.UTC(2007, 0, 10, 10, 0, 0));
const endDate = new Date(Date.UTC(2008, 0, 10, 11, 0, 0));

const dateTimeFormat = new Intl.DateTimeFormat('en', options);
console.log(dateTimeFormat.formatRange(startDate, endDate));

// expected output: Wednesday, January 10, 2007 – Thursday, January 10, 2008

And the latter returns an array containing the locale-specific parts of a date range:

const startDate = new Date(Date.UTC(2007, 0, 10, 10, 0, 0)); // > 'Wed, 10 Jan 2007 10:00:00 GMT'
const endDate = new Date(Date.UTC(2007, 0, 10, 11, 0, 0));   // > 'Wed, 10 Jan 2007 11:00:00 GMT'

const dateTimeFormat = new Intl.DateTimeFormat('en', {
  hour: 'numeric',
  minute: 'numeric'
});
const parts = dateTimeFormat.formatRangeToParts(startDate, endDate);

for (const part of parts) {

  console.log(part);

}

// expected output (for a runtime in a UTC-8 timezone such as America/Los_Angeles):
// Object { type: "hour", value: "2", source: "startRange" }
// Object { type: "literal", value: ":", source: "startRange" }
// Object { type: "minute", value: "00", source: "startRange" }
// Object { type: "literal", value: " – ", source: "shared" }
// Object { type: "hour", value: "3", source: "endRange" }
// Object { type: "literal", value: ":", source: "endRange" }
// Object { type: "minute", value: "00", source: "endRange" }
// Object { type: "literal", value: " ", source: "shared" }
// Object { type: "dayPeriod", value: "AM", source: "shared" }

Securing the Gamepad API

There have been a few updates to the Gamepad API to fall in line with the spec. It is now only available in secure contexts (HTTPS) and is protected by Feature Policy: gamepad. If access to gamepads is disallowed, calls to Navigator.getGamepads() will throw an error and the gamepadconnected and gamepaddisconnected events will not fire.
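
Here is a short, illustrative sketch of defensively accessing gamepads under the new rules; the try/catch covers the case where the feature policy disallows access:

// In insecure contexts, or when the "gamepad" feature policy disallows
// access, getGamepads() throws and the connection events never fire.
window.addEventListener('gamepadconnected', (event) => {
  console.log('Gamepad connected:', event.gamepad.id);
});

try {
  const pads = navigator.getGamepads();
  // getGamepads() returns an array that may contain null slots.
  console.log(`${pads.filter(Boolean).length} gamepad(s) visible`);
} catch (err) {
  console.warn('Gamepad access is disallowed here:', err);
}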


The post Hopping on Firefox 91 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Security BlogFirefox 91 Introduces Enhanced Cookie Clearing

We are pleased to announce a new, major privacy enhancement to Firefox’s cookie handling that lets you fully erase your browser history for any website. Today’s new version of Firefox Strict Mode lets you easily delete all cookies and supercookies that were stored on your computer by a website or by any trackers embedded in it.

Building on Total Cookie Protection, Firefox 91’s new approach to deleting cookies prevents hidden privacy violations and makes it easy for you to see which websites are storing information on your computer.

When you decide to tell Firefox to forget about a website, Firefox will automatically throw away all cookies, supercookies and other data stored in that website’s “cookie jar”. This “Enhanced Cookie Clearing” makes it easy to delete all traces of a website in your browser without the possibility of sneaky third-party cookies sticking around.

What data websites are storing in your browser

Browsing the web leaves data behind in your browser. A site may set cookies to keep you logged in, or store preferences in your browser. There are also less obvious kinds of site data, such as caches that improve performance, or offline data which allows web applications to work without an internet connection. Firefox itself also stores data safely on your computer about sites you have visited, including your browsing history or site-specific settings and permissions.

Firefox allows you to clear all cookies and other site data for individual websites. Data clearing can be used to hide your identity from a site by deleting all data that is accessible to the site. In addition, it can be used to wipe any trace of having visited the site from your browsing history.

Why clearing this data can be difficult

To make matters more complicated, the websites that you visit can embed content, such as images, videos and scripts, from other websites. This “cross-site” content can also read and write cookies and other site data.

Let’s say you have visited facebook.com, comfypants.com and mealkit.com. All of these sites store data in Firefox and leave traces on your computer. This data includes typical storage like cookies and localStorage, but also site settings and cached data, such as the HTTP cache. Additionally, comfypants.com and mealkit.com embed a like button from facebook.com.

Firefox Strict Mode includes Total Cookie Protection, where the cookies and data stored by each website on your computer are confined to a separate cookie jar. In Firefox 91, Enhanced Cookie Clearing lets you delete all the cookies and data for any website by emptying that cookie jar. Illustration: Megan Newell and Michael Ham.

Embedded third-party resources complicate data clearing. Before Enhanced Cookie Clearing, Firefox cleared data only for the domain that was specified by the user. That meant that if you were to clear storage for comfypants.com, Firefox deleted the storage of comfypants.com and left the storage of any sites embedded on it (facebook.com) behind. Keeping the embedded storage of facebook.com meant that it could identify and track you again the next time you visited comfypants.com.

How Enhanced Cookie Clearing solves this problem

Total Cookie Protection, built into Firefox, makes sure that facebook.com can’t use cookies to track you across websites. It does this by partitioning data storage into one cookie jar per website, rather than using one big jar for all of facebook.com’s storage. With Enhanced Cookie Clearing, if you clear site data for comfypants.com, the entire cookie jar is emptied, including any data facebook.com set while embedded in comfypants.com.

Now, if you click on Settings > Privacy and Security > Cookies and Site Data > Manage Data, Firefox no longer shows individual domains that store data. Instead, Firefox lists a cookie jar for each website you have visited. That means you can easily recognize and remove all data a website has stored on your computer, without having to worry about leftover data from third parties embedded in that website. Here is how it looks:

In Firefox’s Privacy and Security Settings, you can manage cookies and other site data stored on your computer. In Firefox 91 ETP Strict Mode, Enhanced Cookie Clearing ensures that all data for any site you choose has been completely removed.

How to Enable Enhanced Cookie Clearing

In order for Enhanced Cookie Clearing to work, you need to have Strict Tracking Protection enabled. Once enabled, Enhanced Cookie Clearing will be used whenever you clear data for specific websites. For example, when using “Clear cookies and site data” in the identity panel (lock icon) or in the Firefox preferences. Find out how to clear site data in Firefox.

If you not only want to remove a site’s cookies and caches, but want to delete it from history along with any data Firefox has stored about it, you can use the “Forget About This Site” option in the History menu:

Firefox’s History menu lets you clear all history from your computer of any site you have visited. Starting in Firefox 91 in ETP Strict Mode, Enhanced Cookie Clearing ensures that third-party cookies that were stored when you visited that site are deleted as well.

Thank you

We would like to thank the many people at Mozilla who helped and supported the development and deployment of Enhanced Cookie Clearing, including Steven Englehardt, Stefan Zabka, Tim Huang, Prangya Basu, Michael Ham, Mei Loo, Alice Fleischmann, Tanvi Vyas, Ethan Tseng, Mikal Lewis, and Selena Deckelmann.


The post Firefox 91 Introduces Enhanced Cookie Clearing appeared first on Mozilla Security Blog.

Mozilla Security BlogFirefox 91 introduces HTTPS by Default in Private Browsing


We are excited to announce that, starting in Firefox 91, Private Browsing Windows will favor secure connections to the web by default. For every website you visit, Firefox will automatically establish a secure, encrypted connection over HTTPS whenever possible.

What is the difference between HTTP and HTTPS?

The Hypertext Transfer Protocol (HTTP) is a key protocol through which web browsers and websites communicate. However, data transferred by the traditional HTTP protocol is unprotected and transferred in clear text, such that attackers are able to view, steal, or even tamper with the transmitted data. The introduction of HTTP over TLS (HTTPS) fixed this privacy and security shortcoming by allowing the creation of secure, encrypted connections between your browser and the websites that support it.

In the early days of the web, the use of HTTP was dominant. But, since the introduction of its secure successor HTTPS, and further with the availability of free, simple website certificates, the large majority of websites now support HTTPS. While there remain many websites that don’t use HTTPS by default, a large fraction of those sites do support the optional use of HTTPS. In such cases, Firefox Private Browsing Windows now automatically opt into HTTPS for the best available security and privacy.

How HTTPS by Default works

Firefox’s new HTTPS by Default policy in Private Browsing Windows represents a major improvement in the way the browser handles insecure web page addresses. As illustrated in the Figure below, whenever you enter an insecure (HTTP) URL in Firefox’s address bar, or you click on an insecure link on a web page, Firefox will now first try to establish a secure, encrypted HTTPS connection to the website. In the cases where the website does not support HTTPS, Firefox will automatically fall back and establish a connection using the legacy HTTP protocol instead:

If you enter an insecure URL in the Firefox address bar, or if you click an insecure link on a web page, Firefox Private Browsing Windows checks if the destination website supports HTTPS. If YES: Firefox upgrades the connection and establishes a secure, encrypted HTTPS connection. If NO: Firefox falls back to using an insecure HTTP connection.
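
In code terms, the policy behaves roughly like the following illustrative sketch – this is not Firefox’s actual implementation, and fetch() here merely stands in for the browser’s internal top-level page load:

// Illustrative only: upgrade top-level HTTP loads to HTTPS first,
// falling back to HTTP when the site doesn't support HTTPS.
async function loadWithHttpsFirst(urlString) {
  const url = new URL(urlString);
  if (url.protocol === 'http:') {
    const upgraded = new URL(urlString);
    upgraded.protocol = 'https:';
    try {
      return await fetch(upgraded); // try the secure connection first
    } catch (_err) {
      return fetch(url);            // no HTTPS support: fall back
    }
  }
  return fetch(url); // already https: (or another scheme)
}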

(Note that this new HTTPS by Default policy in Firefox Private Browsing Windows is not directly applied to the loading of in-page components like images, styles, or scripts in the website you are visiting; it only ensures that the page itself is loaded securely if possible. However, loading a page over HTTPS will, in the majority of cases, also cause those in-page components to load over HTTPS.)

We expect that HTTPS by Default will expand beyond Private Windows in the coming months. Stay tuned for more updates!

It’s Automatic!

As a Firefox user, you can benefit from the additionally provided security mechanism as soon as your Firefox auto-updates to version 91 and you start browsing in a Private Browsing Window. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect you when browsing the internet.

Thank you

We are thankful for the support of our colleagues at Mozilla including Neha Kochar, Andrew Overholt, Joe Walker, Selena Deckelmann, Mikal Lewis, Gijs Kruitbosch, Andrew Halberstadt and everyone who is passionate about building the web we want: free, independent and secure!

The post Firefox 91 introduces HTTPS by Default in Private Browsing appeared first on Mozilla Security Blog.

Firefox Add-on ReviewsFind that font! I must have that font!

You’re probably a digital designer or work in some publishing capacity (otherwise it would be pretty strange to have a fascination with fonts); and you appreciate the aesthetic power of exceptional typography. 

So what do you do when you encounter a wonderful font in the wild that you might want to use in your own design work? Well, if you have a font finder browser extension you can learn all about it within a couple mouse clicks. Here are some of our favorite font discovery extensions…

Font Finder (revived)

Striking a balance between simple functionality and nuanced features, Font Finder (revived) delivers about everything you’d want in a font inspector. 

The extension provides three main functions:

  • Typography analysis. Font Finder reveals all relevant typographical characteristics like color, spacing, alignment, and of course font name. 
  • Copy information. Any portion of the font analysis can be copied to a clipboard so you can easily paste it anywhere. 
  • Inline editing. Any font characteristic (e.g. color, size, type) on an active element can be changed right there on the page.

WhatFont

If you just want to know the name of any font you find and not much else, WhatFont is the ideal tool. 

See an interesting font? Just click the WhatFont toolbar button and mouseover any text on the page to see its font. If you want a bit more info, click the text and a pop-up will show font size, color, and family. 

Caption: Just mouseover a font and WhatFont will display the goods.

Fonts Ninja

With a few distinct features, Fonts Ninja is great if you’re doing a lot of font finding and organization. 

The extension really shines when you encounter a page loaded with a bunch of different fonts you want to learn about. Click the toolbar button and Fonts Ninja will analyze the entire page and display info for every single font found. Then, when you mouseover text on the page you’ll see which font it is and its CSS properties. 

Caption: Fonts Ninja has a unique Bookmarks feature that lets you save your favorite fonts in simple fashion.

We hope these extensions help in your search for amazing fonts! Explore more visual customization extensions on addons.mozilla.org.

Spidermonkey Development BlogTC39 meeting, July 13-16 2021

In this meeting, the Realms proposal finally moved forward to stage 3. The form it will take is what is now called “isolated realms”, which does not allow direct object access across the realm boundary (something you can do with iframes). To address this, a new proposal titled getOriginals is being put forward.

Beyond that, the ergonomic brand checks proposal moved to stage 4 and will be published in the next specification. Intl.Enumeration also finally moved to stage 3 and implementers have started working on it.

A feature that developers can look forward to experimenting with soon is Array find-from-last. This will enable programmers to easily search for an element from the end of a collection, rather than needing to first reverse the collection to do this search.

Keep an eye on…

  • Realms
  • Import assertions
  • Module fragments

Normative Spec Changes

Remove “Designed to be subclassable” note.

  • Notes
  • Proposal
  • Slides
  • Summary: Unrelated to the “remove subclassable proposal” – this PR seeks to remove confusing notes about the “subclassability” of classes such as Boolean, where such a note makes no sense.
  • Impact on SM: No change
  • Outcome: Consensus.

Restricting callables to only be able to return normal and throw completions

  • Notes
  • Proposal
  • Slides
  • Summary: This proposal tightens the specification language around the return value of callables. Prior to this change, it would have been possible for a spec-compliant implementation to have functions return with a completion type of “break”. This doesn’t make much sense and is fixed here.
  • Impact on SM: No change
  • Outcome: Consensus.

Proposals Seeking Advancement to Stage 4

Ergonomic Brand Checks

  • Notes
  • Proposal
  • PR
  • Spec
  • Summary: Provides an ergonomic way to check for the presence of a private field when one of its methods is called; see the example below.
  • Impact on SM: Already shipping.
  • Outcome: Advanced to stage 4.
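
For illustration, the new in syntax for private fields looks like the following sketch (the class and method names are illustrative, not from the proposal text):

class Scalar {
  #value = 0;

  static isScalar(obj) {
    // Brand check: true only for objects that actually carry the
    // #value private field, i.e. objects constructed as Scalar.
    return #value in obj;
  }
}

Scalar.isScalar(new Scalar()); // true
Scalar.isScalar({});           // false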

Proposals Seeking Advancement to Stage 3

Array Find From Last

  • Notes
  • Proposal Link
  • Slides
  • Summary: Proposal for .findLast() and .findLastIndex() methods on Array; see the example below.
  • Impact on SM: In progress
  • Outcome: Advanced to stage 3
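
A quick illustration of the two methods, with the behavior the proposal describes:

const nums = [1, 2, 3, 4];
nums.findLast(n => n % 2 === 0);      // 4 (the last matching element)
nums.findLastIndex(n => n % 2 === 0); // 3 (its index)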

Intl Enumeration API

  • Notes
  • Proposal Link
  • Slides
  • Summary: Intl enumeration allows inspecting which values are available on the Intl API; see the example below.
  • Impact on SM: In progress
  • Outcome: Advanced to stage 3.
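
Assuming the Intl.supportedValuesOf shape from the proposal, usage looks like this:

Intl.supportedValuesOf("calendar"); // e.g. ["buddhist", "chinese", ...]
Intl.supportedValuesOf("currency"); // e.g. ["AED", "AFN", ...]
Intl.supportedValuesOf("timeZone"); // e.g. ["Africa/Abidjan", ...]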

Realms for stage 3

  • Notes Day 1
  • Notes Day 3
  • Proposal Link
  • Slides
  • Summary: The Realms proposal exposes a new global without a document for use by JS programmers: think iframes without the document. The new API takes the “isolated realms” form, which does not allow passing bare objects between realms. This is an improvement from the browser architecture perspective, but it is less ergonomic, an issue that was called out in the previous meeting. In this meeting the issue was resolved by splitting the added functionality out into its own proposal, getOriginals.
  • Impact on SM: Needs implementation; must not ship until the “isolated realms” naming question has been resolved.
  • Outcome: Realms advanced to stage 3. GetOriginals advanced to stage 1.

Stage 3 Updates

Intl.NumberFormat v3

  • Notes
  • Proposal Link
  • Slides
  • Summary: A batch of internationalization features for number formatting. This update focused on changes to grouping enums, rounding and precision options, and sign display negative; see the examples below.
  • Impact on SM: In progress
  • Outcome: Advanced to stage 3.
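
Two of those features, with option values as described in the proposal:

// signDisplay: "negative" shows a sign on negative numbers only,
// excluding negative zero.
new Intl.NumberFormat("en", { signDisplay: "negative" }).format(-0); // "0"

// Grouping enums: "min2" only inserts separators once there are at
// least two digits in the leading group.
new Intl.NumberFormat("en", { useGrouping: "min2" }).format(1000);  // "1000"
new Intl.NumberFormat("en", { useGrouping: "min2" }).format(10000); // "10,000"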

Extend TimeZoneName Option Proposal

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further options for the timeZoneName option in Intl.DateTimeFormat, allowing for greater accuracy in representing different time zones. No major changes since the last presentation; see the examples below.
  • Impact on SM: Implemented
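
The proposal adds the values "shortOffset", "longOffset", "shortGeneric", and "longGeneric", used like this:

new Intl.DateTimeFormat("en", {
  timeZone: "America/Los_Angeles",
  timeZoneName: "shortGeneric",
}).format(new Date()); // e.g. "7/16/2021, PT"

new Intl.DateTimeFormat("en", {
  timeZone: "America/Los_Angeles",
  timeZoneName: "longOffset",
}).format(new Date()); // e.g. "7/16/2021, GMT-07:00"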

Intl Locale update

  • Notes
  • Proposal Link
  • Slides
  • Summary: An API to expose locale information, such as week data (first day of the week, weekend start, weekend end), hour cycle, measurement system, commonly used calendar, etc. There was a request to exclude standard and search from Intl.Locale.prototype.collations, which was retrospectively agreed to. See the example below.
  • Impact on SM: In progress
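
Assuming the getter shapes described in the proposal at the time, usage would look roughly like this:

const locale = new Intl.Locale("en-US");
locale.weekInfo;   // e.g. { firstDay: 7, weekend: [6, 7], minimalDays: 1 }
locale.hourCycles; // e.g. ["h12"]
locale.calendars;  // e.g. ["gregory"]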

Intl DisplayNames

  • Notes
  • Proposal Link
  • Slides
  • Summary: Adds further coverage to the existing Intl.DisplayNames API. No significant changes since the last presentation. There has been progress in implementation. See the examples below.
  • Impact on SM: In progress
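
Two of the additions, assuming the v2 option names from the proposal:

new Intl.DisplayNames("en", { type: "calendar" }).of("roc");
// e.g. "Minguo Calendar"

new Intl.DisplayNames("en", { type: "dateTimeField" }).of("month");
// e.g. "month"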

Import Assertions update

  • Notes
  • Proposal Link
  • Slides
  • Summary: The Import Assertions proposal adds an inline syntax for module import statements to pass on more information alongside the module specifier. The initial application for such assertions will be to support additional types of modules in a common way across JavaScript environments, starting with JSON modules. The syntax allows for the following.
      import json from "./foo.json" assert { type: "json" };
    

    The update focused on the question of “what do we do when we have an assertion that isn’t recognized?”. Currently, if a host sees a module type assertion that it doesn’t recognize, it can choose what to do. From our perspective it would be better to restrict this somehow – for now the champions will not change the specification.

  • Impact on SM: Implementation in Progress

Object.hasOwn (Accessible Object hasOwnProperty)

  • Notes
  • Proposal Link
  • Slides
  • Summary: Checking an object for a property is, at the moment, rather unintuitive and error-prone. This proposal introduces a more ergonomic wrapper around a common pattern involving Object.prototype.hasOwnProperty, which allows the following:
      let hasOwnProperty = Object.prototype.hasOwnProperty
    
      if (hasOwnProperty.call(object, "foo")) {
        console.log("has property foo")
      }
    

    to be written as:

      if (Object.hasOwn(object, "foo")) {
        console.log("has property foo")
      }
    

    No significant changes since the last update.

  • Impact on SM: Implemented

Proposals Seeking Advancement to Stage 2

Array filtering

  • Notes
  • Proposal Link
  • Slides
  • Summary: This proposal bundled two proposals. The first introduces a .filterReject method, an alias for a negated filter: [1, 2, 3].filterReject(x => x > 2) behaves like [1, 2, 3].filter(x => !(x > 2)), returning all of the elements less than or equal to 2. This did not move forward. The second, groupBy, groups elements by a condition: for example, [1, 2, 3].groupBy(x => x > 2) would return {false: [1, 2], true: [3]}. groupBy advanced to stage 1 as a separate proposal; see the examples below.
  • Impact on SM: No change yet.
  • Outcome: filterReject (formerly filterOut) did not advance. groupBy is treated as its own proposal and is now stage 1.
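
As presented at the meeting (method names were still under discussion, so treat these as sketches):

[1, 2, 3].filterReject(x => x > 2); // [1, 2]
[1, 2, 3].groupBy(x => x > 2);      // { false: [1, 2], true: [3] }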

Stage 2 Updates

Decorators update

  • Notes
  • Proposal Link
  • Slides
  • Summary: The decorators proposal had a champion switch, but the champions are now happy with the current semantics of the proposal and are seeking stage 3 reviewers. Decorators are functions called on classes, class elements, or other JavaScript syntax forms during definition. They have three capabilities: replacing the value being decorated, associating metadata with it, or providing access to it. Our concerns with the proposal related to possible performance issues; these were addressed in the last iteration, and we are looking forward to rereading the spec. A sketch follows below.
  • Impact on SM: Needs review.
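
A hypothetical decorator following the call-with-value-and-context shape the proposal describes (@logged and class C are illustrative names):

function logged(value, context) {
  if (context.kind === "method") {
    // Replace the decorated method with a wrapper that logs calls.
    return function (...args) {
      console.log(`calling ${context.name}`);
      return value.call(this, ...args);
    };
  }
}

class C {
  @logged
  m() {}
}

new C().m(); // logs "calling m"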

Proposals Seeking Advancement to Stage 1

ArrayBuffer to/from Base64

  • Notes
  • Proposal Link
  • Slides
  • Summary: Transforms an ArrayBuffer to and from Base64. Base64 is the de facto standard way to represent arbitrary binary data as ASCII. JavaScript has ArrayBuffers (and other wrapping types) to work with binary data, but no built-in mechanism to encode that data as Base64, nor to take Base64 data and produce a corresponding ArrayBuffer. Peter Hoddie from Moddable raised concerns about this being out of scope, but did not block stage 1.
  • Impact on SM: No change yet.
  • Outcome: Advanced to stage 1.

Stage 1 Updates

Module fragments current direction

  • Notes day 2
  • Notes day 3
  • Proposal Link
  • Slides
  • Summary: The Module fragments proposal allows multiple modules to be written in the same file. The issue was raised that this proposal should be closer in terms of syntax to module blocks, and this change achieved consensus. The primary changes are:

    • Module fragments are named by identifiers, not strings, so they are declared like module foo { export let x = 1 }
    • Import statements can load a module fragment with syntax like import { x } from foo;, similarly as an identifier.
    • Import statements which import from a module fragment can work on anything which was declared by a top-level module fragment declaration in the same module, or one which was imported from another module. There’s a link-time data structure representing the subset of the lexical scope which is the statically visible module fragments.
    • When a declared module fragment is referenced as a variable, in a normal expression context, it evaluates to a module block (one per evaluation, so the same one is reused for module fragments declared at the top level). It appears as a const declaration (so the link-time and run-time semantics always correspond).
    • Module fragments are only visible from outside the module by importing the containing module, and here, only if they are explicitly exported. They have no particular URL (note related issue: Portability concerns of non-string specifiers #10)
    • Module fragment declarations can appear anywhere a statement can, e.g., eval, nested blocks, etc (but they can only have a static import against them if they are at the top level of a module). In contexts which are not the top level of a module, module fragments are just useful for their runtime behavior, as a nice way of declaring a module block.

    This achieved consensus and the proposal had support overall; a sketch of the direction follows below.

  • Impact on SM: No change yet.
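
A hypothetical sketch of the direction described above; this is stage 1 syntax that cannot run in any engine today:

module counter {
  export let count = 0;
  export function increment() { count += 1; }
}

// A fragment is imported by identifier, not by string specifier.
import { increment } from counter;

// Referenced as a variable, a fragment evaluates to a module block.
const block = counter;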

Cameron KaiserTenFourFox FPR32 SPR3 available

TenFourFox Feature Parity Release 32 Security Parity Release 3 "32.3" is available for testing (downloads, hashes). There are, once again, no changes to the release notes and nothing notable regarding the security patches in this release. Assuming no major problems, FPR32.3 will go live Monday evening Pacific time as usual. FPR32.4 will appear on September 7 and the final official build FPR32.5 on October 5.

Firefox Add-on ReviewsHow to use a temp mail extension for spam and security protection

One of the most common methods malicious hackers use to break into their victims’ computer systems is tricking them into clicking dangerous links within an email. It’s been popular with cyber criminals for decades because it’s so simple yet consistently effective. Just make the email appear like it’s from a trusted source and include a compelling link that, once clicked, is like opening the front door of your home to a thief. 

Temp mail (i.e. temporary email) is a tremendous way to combat this classic cyber scam. Temp mail creates disposable email accounts for you to use for non-personal/business situations, like registering with websites or online services when you don’t want them knowing your actual email, because the more your actual email is in circulation, the greater its chances of falling into the hands of malicious actors. 

Beyond security protection, temp mail is also great for filtering spam. Consider how many daily emails you receive from social media sites, services, and the like, all trying to pull you back into their orbit. Surely your inbox has seen better days. 

So clear the inbox clutter and better protect yourself against cybercrime by using a temp mail browser extension…

Temp Mail – Disposable Temporary Mail

Just click the Temp Mail – Disposable Temporary Mail toolbar button to create a temp mail address and access other extension features. 

Temp Mail – Disposable Temporary Mail is free to use and, once installed, always available wherever you and your browser go on the web. Your Temp Mail email accounts will remain active until you delete them, so just how “temporary” they are is entirely up to you (also note that whenever you delete a Temp Mail account, other personal details like your IP address will be wiped away as well). 

To be clear, you can operate temp mail just like you would any other email account—you’re free to send and receive messages at will. 

Caption: The Temp Mail service will be right there whenever you need it.

Firefox Relay

Mozilla has developed a temp mail service designed for Firefox users called Firefox Relay. It lets you create anonymous email aliases that will forward messages on to your actual, personal email addresses. 

Relay will keep track of all the aliases you’ve created and they’ll remain active until you delete them. Do note, however, that Relay does not allow you to reply to messages anonymously, though that feature is in the works and will hopefully roll out soon. 

If you’re curious, here’s more information about Firefox Relay.

Caption: Just click the Firefox Relay button in the email form fields to automatically generate your new alias.

Ew, Mail!

There are no distinct features of Ew, Mail! that you won’t find in Temp Mail – Disposable Temporary Mail or Firefox Relay, but it’s worth including here because it may be the most lightweight of the three. 

Whenever you encounter a need for temp mail, just place your mouse cursor in the address field and right-click to pull up an option to create a temp mail address. Simple as that. 

We hope one of these handy temp mail extensions will give you more security—and less spam. Feel free to explore more great privacy extensions on addons.mozilla.org.

Cameron KaiserAnd now for something completely different: Australia needs to cut the crap with expats

I'm going to be very tightly focused in this post, because there are tons of politics swirling around COVID-19 (and anyone who knows my actual line of work will know my opinions about it); any comments about masks, vaccines, etc. will be swiftly removed. Normally I don't discuss non-technical topics here, but this is a situation that personally affects me and this is my blog, so there. I want to talk specifically about the newly announced policy that Australians normally resident overseas will now require an exemption to leave the country.

(via twitter)

I am an Australian-American dual citizen (via my mother, who is Australian, but is resident in the United States), and my wife of five years is Australian. She is legitimately a resident of Australia because she was completing her master's degree there and had to teach in the Australian system to get an unrestricted credential. All this happened when the borders closed. Anyone normally resident in Australia must obtain an exemption to leave the country and cite good cause, except to a handful of countries like New Zealand (which only makes the perfectly reasonable requirement that its residents have a spot in quarantine for when they return).

It was already difficult to exit Australia before, which is why, for the six weeks that I've gotten to see my wife since January 2020, it was me traveling to Australia. Here again, many thanks to Air New Zealand, who were very understanding about rescheduling (twice) and even let us keep our Star Alliance Gold status even though we weren't flying much. I did my two weeks of quarantine, got my two negative tests, and was released into the hinterlands of regional New South Wales to visit that side of the family. Upon return to Sydney Airport, it was a simple matter to leave the country, since it was already obvious in the immigration records that I don't normally reside in it.

(The nearly abandoned International Terminal in Sydney when I left.)

Now, there is the distinct possibility that even if I can land a ticket to visit my wife, and even if I can get space in hotel quarantine (at A$3000, plus greatly inflated airfares), despite being fully vaccinated, I may not be able to leave. Trying to get my credentials approved in Australia has been hung up for months, so I wouldn't be able to hold a job there in my current line of work, and with my father currently on chemo, if he were to take a turn for the worse, there are plenty of horror stories of Australians being unable to see terminally ill family members due to refused exemptions (or, adding insult to injury, exemptions approved only after the family member had actually died).

I realize that as (technically) an expat there isn't much of a constituency to join, but even given that we're in the middle of a pandemic, this crap has to stop. Restricting entries is heavy-handed, but understandable. Reminding those exiting that they're responsible for hotel or camp quarantine upon return is onerous (and should be reexamined at minimum for those who have indeed gotten the jab), but defensible. Preventing Australian citizens from leaving altogether, especially those with family, is unconscionable, and the arbitrary nature of the exemption process is a foul joke.

If Premier Palaszczuk can strike a pose at the International Olympic Committee and Prime Minister Morrison can go gallivanting with randos in English pubs, those of us who are vaccinated and following the law should have that same freedom. I should be able to visit my wife and she should be able to visit me.

Data@MozillaThis Week in Glean: Building a Mobile Acquisition Dashboard in Looker

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

As part of the DUET (Data User Engagement Team) working group, some of my day-to-day work involves building dashboards for visualizing user engagement aspects of the Firefox product. At Mozilla, we recently decided to use Looker to create dashboards and interactive views on our datasets. It’s a new system to learn but provides a flexible model for exploring data. In this post, I’ll walk through the development of several mobile acquisition funnels built in Looker. The most familiar form of engagement modeling is probably through funnel analysis — measuring engagement by capturing a cohort of users as they flow through various acquisition channels into the product. Typically, you’d visualize the flow as a Sankey or funnel plot, counting retained users at every step. The chart can help build intuition about bottlenecks or the performance of campaigns.

Mozilla owns a few mobile products; there is Firefox for Android, Firefox for iOS, and then Firefox Focus on both operating systems (also known as Klar in certain regions). We use Glean to instrument these products. The foremost benefit of Glean is that it encapsulates many best practices from years of instrumenting browsers; as such, all of the tables that capture anonymized behavioral activity are consistent across the products. One valuable idea from this setup is that writing a query for a single product should allow it to extend to others without too much extra work. In addition, we pull in data from both the Google Play Store and Apple App Store to analyze the acquisition numbers. Looker allows us to take advantage of similar schemas with the ability to templatize queries.

ETL Pipeline

The pipeline brings all of the data into BigQuery so it can be referenced in a derived table within Looker.

  1. App Store data is exported into a table in BigQuery.
  2. Glean data flows into the org_mozilla_firefox.baseline table.
  3. A derived org_mozilla_firefox.baseline_clients_first_seen table is created from the baseline table. An org_mozilla_firefox.baseline_clients_daily table is created that references the first seen table.
  4. A Looker explore references the baseline_clients_daily table in a parameterized SQL query, alongside data from the Google Play Store.
  5. A dashboard references the explore to communicate important statistics at first glance, alongside configurable parameters.

Peculiarities of Data Sources

Before jumping off into implementing a dashboard, it’s essential to discuss the quality of the data sources. For one, Mozilla and the app stores count users differently, which leads to subtle inconsistencies.

For example, there is no way for Mozilla to tie a Glean client back to the corresponding installation event in the Play Store. Each Glean client is assigned a new identifier for each device, whereas the Play Store only counts new installs by account (which may have several devices). We can’t track a single user across this boundary, and instead have to rely on the relative proportions over time. There are even more complications when trying to compare numbers between Android and iOS. Whereas the Play Store may show the number of accounts that have visited a page, the Apple App Store shows the total number of page visits instead. Apple also only reports users that have opted into data collection, which under-represents the total number of users.

These differences can be confusing to people who are not intimately familiar with the peculiarities of these different systems. Therefore, an essential part of putting together this view is documenting and educating the dashboard users to understand the data better.

Building a Looker Dashboard

There are three components to building a Looker dashboard: a view, an explore, and a dashboard. These files are written in a markup called LookML. In this project, we consider three files:

  • mobile_android_country.view.lkml
    • Contains the templated SQL query for preprocessing the data, parameters for the query, and a specification of available metrics and dimensions.
  • mobile_android_country.explore.lkml
    • Contains joins across views, and any aggregate tables suggested by Looker.
  • mobile_android_country.dashboard.lkml
    • Generated dashboard configuration, kept for version control purposes.

View

The view is the bulk of data modeling work. Here, there are a few fields that are particularly important to keep in mind. First, there is a derived table alongside parameters, dimensions, and measures.

The derived table section allows us to specify the shape of the data that is visible to Looker. We can either refer to a table or view directly from a supported database (e.g., BigQuery) or write a query against that database. Looker will automatically re-run the derived table as necessary. We can also template the query in the view for a dynamic view into the data.

derived_table: {
  sql: with period as (SELECT ...),
      play_store_retained as (
          SELECT
          Date AS submission_date,
          COALESCE(IF(country = "Other", null, country), "OTHER") as country,
          SUM(Store_Listing_visitors) AS first_time_visitor_count,
          SUM(Installers) AS first_time_installs
          FROM
            `moz-fx-data-marketing-prod.google_play_store.Retained_installers_country_v1`
          CROSS JOIN
            period
          WHERE
            Date between start_date and end_date
            AND Package_name IN ('org.mozilla.{% parameter.app_id %}')
          GROUP BY 1, 2
      ),
      ...
      ;;
}

Above is the derived table section for the Android query. Here, we’re looking at the play_store_retained statement inside the common table expression (CTE). Inside of this SQL block, we have access to everything available to BigQuery in addition to view parameters.

# Allow swapping between various applications in the dataset
parameter: app_id {
  description: "The name of the application in the `org.mozilla` namespace."
  type:  unquoted
  default_value: "fenix"
  allowed_value: {
    value: "firefox"
  }
  allowed_value: {
    value: "firefox_beta"
  }
  allowed_value: {
    value:  "fenix"
  }
  allowed_value: {
    value: "focus"
  }
  allowed_value: {
    value: "klar"
  }
}

View parameters trigger updates to the view when changed. These are referenced using the liquid templating syntax:

AND Package_name IN ('org.mozilla.{% parameter.app_id %}')

For Looker to be aware of the shape of the final query result, we must define dimensions and metrics corresponding to columns in the result. Here is the final statement in the CTE from above:

SELECT
    submission_date,
    country,
    max(play_store_updated) AS play_store_updated,
    max(latest_date) AS latest_date,
    sum(first_time_visitor_count) AS first_time_visitor_count,
    ...
    sum(activated) AS activated
FROM play_store_retained
FULL JOIN play_store_installs
USING (submission_date, country)
FULL JOIN last_seen
USING (submission_date, country)
CROSS JOIN period
WHERE submission_date BETWEEN start_date AND end_date
GROUP BY 1, 2
ORDER BY 1, 2


Generally, in an aggregate query like this, the grouping columns will become dimensions while the aggregate values become metrics. A dimension is a column that we can filter or drill down into to get a different slice of the data model:

dimension: country {
  description: "The country code of the aggregates. The set is limited by those reported in the play store."
  type: string
  sql: ${TABLE}.country ;;
}

Note that we can refer to the derived table using the ${TABLE} variable (not unlike interpolating a variable in a bash script).

A measure is a column that represents a metric. This value is typically dependent on the dimensions.

measure: first_time_visitor_count {
  description: "The number of first time visitors to the play store."
  type: sum
  sql: ${TABLE}.first_time_visitor_count ;;
}

We must ensure that all dimensions and measures are declared to make them available to explores. Looker provides a few ways to create these fields automatically. For example, if you create a view directly from a table, Looker can autogenerate them from the schema. Likewise, the SQL editor has options to generate a view file directly. Whatever the method may be, some manual modification will be necessary to build a clean data model for use.

Explore

One of the more compelling features of Looker is the ability for folks to drill down into data models without needing to write SQL. Looker provides an interface where the dimensions and measures can be manipulated and plotted in an easy-to-use graphical interface. To do this, we need to declare which view to use. Often, just declaring the explore is sufficient:

include: "../views/*.view.lkml"

explore: mobile_android_country {
}

We include the view from a location relative to the explore file. Then we declare an explore that shares the same name as the view. Once committed, the explore is available from a drop-down menu in the main UI.

The explore can join multiple views and provide default parameters. In this project, we utilize a country view that we can use to group countries into various buckets. For example, we may have a group for North American countries, another for European countries, and so forth.

explore: mobile_android_country {
  join: country_buckets {
    type: inner
    relationship: many_to_one
    sql_on:  ${country_buckets.code} = ${mobile_android_country.country} ;;
  }
  always_filter: {
    filters: [
      country_buckets.bucket: "Overall"
    ]
  }
}

Finally, the explore is also the place where Looker will materialize certain portions of the view. Materialization is only relevant when copying the materialized segments from the exported dashboard code. An example of what this looks like follows:

aggregate_table: rollup__submission_date__0 {
  query: {
    dimensions: [
      # "app_id" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # app_id,
      # "country_buckets.bucket" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # country_buckets.bucket,
      # "history_days" is filtered on in the dashboard.
      # Uncomment to allow all possible filters to work with aggregate awareness.
      # history_days,
      submission_date
    ]
    measures: [activated, event_installs, first_seen, first_time_visitor_count]
    filters: [
      # "country_buckets.bucket" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      country_buckets.bucket: "tier-1",
      # "mobile_android_country.app_id" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      mobile_android_country.app_id: "firefox",
      # "mobile_android_country.history_days" is filtered on by the dashboard. The filter
      # value below may not optimize with other filter values.
      mobile_android_country.history_days: "7"
    ]
  }  # Please specify a datagroup_trigger or sql_trigger_value
  # See https://looker.com/docs/r/lookml/types/aggregate_table/materialization
  materialization: {
    sql_trigger_value: SELECT CURRENT_DATE();;
  }
}

Dashboard

Looker provides the tooling to build interactive dashboards that are more than the sum of their parts. Often, the purpose is to present easily digestible information that has been vetted and reviewed by peers. To build a dashboard, you start by adding charts and tables from various explores. Looker provides widgets for filters and for markdown text used to annotate charts. It’s an intuitive process that can be somewhat tedious, depending on how complex the information you’re trying to present is.

Once you’ve built the dashboard, Looker provides a button to get a YAML representation to check into version control. The configuration file contains all the relevant information for constructing the dashboard and could even be written by hand with enough patience.

Strengths and Weaknesses of Looker

Now that I’ve gone through building a dashboard end-to-end, here are a few points summarizing my experience and the takeaways from putting together this dashboard.

Parameterized queries allow flexibility across similar tables

I worked with Glean-instrumented data in another project by parameterizing SQL queries using Jinja2 and running queries multiple times. Looker effectively brings this process closer to runtime and allows the ETL and visualization to live on the same platform. I’m impressed by how well it works in practice. The combination of consistent data models in bigquery-etl (e.g. clients_first_seen) and the ability to parameterize based on app-id was surprisingly straightforward. The dashboard can switch between Firefox for Android and Focus for Android without a hitch, even though they are two separate products with two separate datasets in BigQuery.

I can envision many places where we may not want to precompute all results ahead of time but instead compute just a subset of columns or dates on-demand. The costs of precomputing and materializing data are non-negligible, especially for large, expensive queries that are viewed once in a blue moon or dimensions that fall in the long tail. Templating and parameters provide a great way to build these into the data model without having to resort to manually written SQL.

LookML in version control allows room for software engineering best practices

While Looker appeals to the non-technical crowd, it also affords many conveniences for data practitioners who are familiar with software development practices.

Changes to LookML files are version controlled (e.g., git). Being able to create branches and work on multiple features in parallel has been handy at times. It’s a relief to be able to make changes in my own instance of the Looker files when trying out something new, without losing my place. In addition, the ability to configure LookML views, explores, and dashboards in code allows the process of creating new dashboards to incorporate many best practices like code review.

In addition, it’s nice to be able to use a real editor for mass revision. I was able to create a new dashboard for iOS data that paralleled the Android dashboard by copying over files, modifying the SQL in the view, and making a few edits to the dashboard code directly.

Workflow management is clunky for deploying new dashboards

While there are many upsides to having LookML explores and dashboards in code, there are several pain points while working with the Looker interface.

In particular, the workflow for editing a dashboard goes something like this. First, you copy the dashboard into a personal folder that you can edit. Next, you make whatever modifications to that dashboard using the UI. Afterward, you export the result and copy-paste it into the dashboard code. While not ideal, this prevents the dashboard from going out of sync with the one that you’re editing directly (since there won’t be conflicts between the UI and the code in version control). However, it would be nice if it were possible to edit the dashboard directly, with Looker performing any conflict resolution internally.

There have been moments where I’ve had to fight with the git interface built into Looker’s development mode. Reverting a commit on a particular branch or dealing with merge conflicts can be an absolute nightmare. Suppose you do happen to pull the project into a local environment. In that case, you aren’t able to validate your changes locally (you’ll need to push, pull into Looker, and then validate and fix anything). Finally, the formatting option is bound to a keyboard shortcut that the browser already uses.

Conclusion: Iterating on Feedback

Simply building a dashboard is not enough to demonstrate that it has value. It’s important to gather feedback from peers and stakeholders to determine the best path forward. Some things benefit from having a concrete implementation, though; there are differences between different platforms and inconsistencies in the data that may only appear after putting together an initial draft of a project.

While it hits the goals of making data across app stores and our user populations visible, the funnel dashboard has room for improvement. Having this dashboard located in Looker makes the process of iterating that much easier, though. The feedback cycle from changing the query to seeing the results is relatively short, and changes are easy to roll back. The tool is promising, and I look forward to seeing how it transforms the data landscape at Mozilla.

Resources

Mozilla Addons BlogThank you, Recommended Extensions Community Board!

Given the broad visibility of Recommended extensions across addons.mozilla.org (AMO), the Firefox Add-ons Manager, and other places we promote extensions, we believe our curatorial process should include a wide range of perspectives from our global community of contributors. That’s why we have the Recommended Extensions Advisory Board—an ongoing project that involves a rotating group of contributors to help identify and evaluate new extension candidates for the program.

Our most recent community board just completed their six-month project and I’d like to take a moment to thank Sylvain Giroux, Jyotsna Gupta, Chandan Baba, Juraj Mäsiar, and Pranjal Vyas for sharing their time, passion, and knowledge of extensions. Their insights helped usher a wave of new extensions into the Recommended program, including really compelling content like I Don’t Care About Cookies (A+ cookie manager), Tab Stash (highly original take on tab management), Custom Scrollbars (neon colored scrollbar? Yes please!), PocketTube (great way to organize a bunch of YouTube subscriptions), and many more. 

On behalf of the entire Add-ons staff, thank you, one and all!

Now we’ll turn our attention to forming the next community board for another six-month project dedicated to evaluating new Recommended candidates. If you have a passion for browser extensions and you think you could make an impact contributing your insights to our curatorial process, we’d love to hear from you by Monday, 30 August. Just drop us an email at amo-featured [at] mozilla.org along with a brief note letting us know a bit about your experience with extensions—whether as a developer, user, or both—and why you’d like to participate on the next Recommended Extensions Community Advisory Board.

The post Thank you, Recommended Extensions Community Board! appeared first on Mozilla Add-ons Community Blog.

Mozilla Performance BlogPerformance in progress

In the last six months the Firefox performance team has implemented changes to improve startup, responsiveness, security (Fission), and web standards.

Startup and perceived startup

Doug Thayer and Emma Malysz implemented work to improve the perceived startup of Firefox on Windows using a concept called the skeleton UI. Users on Windows may click the Firefox icon and, within the timeframe they expect, see no visual feedback that Firefox is starting. So they click the icon again. And again. And then their screen looks like this.

The reason that startup takes a long time is that many things need to happen before Firefox can show its first window.

As part of startup, we need to start the JS engine and load the profile to get the size and position of the window. We also need to load a large library called XUL.dll, which takes a lot of time to read from disk, especially if your computer is slow.

So what changes did the skeleton UI implement? Basically, after the icon is clicked, we immediately show a window to indicate that Firefox is starting.

The final version of the skeleton UI looks at the user’s past sessions and creates a window with the theme, window dimensions, toolbar content and positions. You can see what it looks like in this video where the right hand side starts up with the skeleton UI in place. These changes are now available on Firefox 92 beta and riding the trains to release!

Photo by Saffu on Unsplash

In other impactful work to address startup, last summer Keefer Rourke, an intern on the performance team, wrote a simplified API for file IO called IOUtils for use with privileged JavaScript. Emma Malysz and Barret Rennie, along with contributors, migrated the existing startup code to IOUtils to improve startup performance.

Responsiveness

Previously, when a Firefox user encountered a page that had a script that ran over a certain timing threshold, you would see a warning message that looked as follows:

For many people, this warning showed up too often, the cause was unclear and the options or next steps were confusing.

Doug Thayer and Emma Malysz embarked on work in early 2021 to reduce the proportion of users who experience the slow script warning. The implemented solution changed the user experience so the warning only shows if a user interacts with a hung page. They also added code to blame the page that’s causing the hang and removed the confusing “Wait” button.

The result is a 50% reduction in slow script notification submissions!

Vsync

Sean Feng implemented changes to align user interaction handling more strictly with when the next frame will be presented on the screen. This makes Firefox feel more responsive by making sure a frame always contains the result of all pending user interactions. Sean also implemented changes for better responsiveness on mobile devices, landing code that coalesces more touchmove events so they are generated more efficiently.

The impact of Sean’s work, plus Matt Woodrow’s vsync work, is reflected in the graph above. To read more about other responsiveness changes in Firefox, Bas Schouten’s blog post provides more details.

Security (Fission)

Fission is site isolation in Firefox. If you want to learn more, read this detailed and thorough blog post by Anny Gakhokidze and Neha Kochar to learn about the implementation and rollout of Fission in Firefox.

Sean Feng and Randell Jesup landed changes to improve process switches related to NSS initialization and HTTP accept setup in process preallocation for Fission. There are improvements on several pages on Windows (~9% for Google search, 5% for Bing, around 3-4% for Gmail, 2-3% for Microsoft). This should cut process-switch times by 6-8ms, perhaps as high as 10ms. Previously, we were seeing 20-40ms of time attributable to switching processes.

Web standards

The Performance Event Timing API was enabled in Firefox 89 by Sean Feng on all platforms. This API provides web page authors with insights into the latency of certain events triggered by user interactions, which is a prerequisite for Web Vitals. To learn more, read Bug 1667836 – Prototype PerformanceEventTiming, the announcement, and the specification.

Tooling

The performance team would like to thank everyone who contributed to this work:

Markus Jaritz, Eric Smyth, Adam Gashlin, Molly Howell, Chris Martin, Jim Mathies, Aaron Klotz, Florian Quèze, Gijs Kruitbosch, Mike Conley, Markus Stange, Emma Malysz, Doug Thayer, Denis Palmerio, Sean Feng, Andrew Creskey, Barret Rennie, Benjamin De Kosnik, Bas Schouten, Marc Leclair and Mike Comella. A special thanks to Doug Thayer for the artwork to display the changes in the skeleton UI and slow script work!

Firefox Add-on ReviewsRead EPUB e-books right in your browser

For many online readers you simply can’t beat the convenience and clarity of reading e-books in EPUB form (i.e. “electronic publication”). EPUB literature adjusts nicely to any screen size or device, but if you want to read EPUBs in your browser, you’ll need an extension to open their distinct file format. Here are a few extensions to help turn your browser into an awesome digital bookshelf. 

EPUBReader

Extremely popular and easy to use, EPUBReader can take care of all your e-reading needs in one extension. 

Whenever you encounter a website that offers an EPUB file, the extension automatically loads the ebook for you. 

Access features by clicking EPUBReader’s toolbar icon, which launches a hub for all your EPUB activity. Here you’ll find all of your saved EPUB files (plus a portal for discovering new, free ebooks), as well as manage your layout settings like text font, size, colors, backgrounds, and more. 

Caption: The Adventures of Captain Hatteras in EPUBReader.

EPUBReader also works very well in tandem with…

Read Aloud: text to speech voice reader

Think of Read Aloud: text to speech voice reader as an audio version of a traditional text-based e-reader. Sit back and let it read the web to you. 

Key features:

  • 40+ languages 
  • Male/female voice options
  • Adjust the pitch and reading speed of any voice
  • PDF support 
Caption: Story time with Read Aloud.

EpubPress – read the web offline

EpubPress lets you easily download and organize web pages into a “book” optimized for offline reading. Use it to compile an actual long-form book, or utilize it for saving news articles and other short-form reading lists.  

Very intuitive to operate. Once you have all the pages you want to collate opened in separate tabs, just order them how you want them to appear in your book. Ads and other distracting widgets are automatically removed from your saved pages. 

Caption: EpubPress conveniently turns individual web pages into easy-to-read offline e-books.

We hope these extensions bring you great browser reading joy! Explore more reading extensions on addons.mozilla.org.

Jeff KlukasDeduplication: Where Apache Beam Fits In

Summary of a talk delivered at Apache Beam Digital Summit on August 4, 2021.

Title slide

This session will start with a brief overview of the problem of duplicate records and the different options available for handling them. We’ll then explore two concrete approaches to deduplication within a Beam streaming pipeline implemented in Mozilla’s open source codebase for ingesting telemetry data from Firefox clients.

We’ll compare the robustness, performance, and operational experience of using the deduplication built into PubsubIO vs. storing IDs in an external Redis cluster, and why Mozilla switched from one approach to the other.

Finally, we’ll compare streaming deduplication to a much stronger end-to-end guarantee that Mozilla achieves via nightly scheduled queries to serve historical analysis use cases.

Links

Mozilla Privacy BlogAdvancing advertising transparency in the US Congress

At Mozilla we believe that greater transparency into the online advertising ecosystem can empower individuals, safeguard advertisers’ interests, and address systemic harms. Lawmakers around the world are stepping up to help realize that vision, and in this post we’re weighing in with some preliminary reflections on a newly-proposed ad transparency bill in the United States Congress: the Social Media DATA Act.

The bill – put forward by Congresswoman Lori Trahan of Massachusetts – mandates that very large platforms create and maintain online ‘ad libraries’ that would be accessible to academic researchers. The bill also seeks to advance the policy discourse around transparency of platform systems beyond advertising (e.g. content moderation practices, recommender systems, etc.), by directing the Federal Trade Commission to develop best-practice guidelines and policy recommendations on general data access.

We’re pleased to see that the bill has many welcome features that mirror Mozilla’s public policy approach to ad transparency:

  • Clarity: The bill spells out precisely what kind of data should be made available, and includes many overlaps with Mozilla’s best practice guidance for ad archive APIs. This approach provides clarity for companies that need to populate the ad archives, and a clear legal footing for researchers who wish to avail themselves of those archives.
  • Asymmetric rules: The ad transparency provisions would only be applicable to very large platforms with 100 million monthly active users. This narrow scoping ensures the measures only apply to the online services for whom they are most relevant and where the greatest public interest risks lie.
  • A big picture approach: The bill recognizes that questions of transparency in the platform ecosystem go beyond simply advertising, but that more work is required to define what meaningful transparency regimes should look like for things like recommender systems and automated content moderation systems. It provides the basis for that work to ramp up.

Yet while this bill has many positives, it is not without its shortcomings. Specifically:

  • Access: Only researchers with academic affiliations will be able to benefit from the transparency provisions. We believe that academic affiliation should not be the sole determinant of who gets to benefit from ad archive access. Data journalists, unaffiliated public interest researchers, and certain civil society organizations can also be crucial watchdogs.
  • Influencer ads: This bill does not specifically address risks associated with some of the novel forms of paid online influence. For instance, our recent research into influencer political advertising on TikTok has underscored that this emergent phenomenon needs to be given consideration in ad transparency and accountability discussions.
  • Privacy concerns: Under this bill, ad archives would include data related to the targeting and audience of specific advertisements. If targeting parameters for highly micro-targeted ads are disclosed, this data could be used to identify specific recipients and pose a significant data protection risk.

Fortunately, these shortcomings are not insurmountable, and we already have some ideas for how they could be addressed if and when the bill proceeds to mark-up. In that regard, we look forward to working with Congresswoman Trahan and the broader policy community to fine-tune the bill and improve it.

We’ve long believed that transparency is a crucial prerequisite for accountability in the online ecosystem. This bill signals an encouraging advancement in the policy discourse.


The post Advancing advertising transparency in the US Congress appeared first on Open Policy & Advocacy.

Hacks.Mozilla.OrgHow MDN’s autocomplete search works

Last month, Gregor Weber and I added an autocomplete search to MDN Web Docs that allows you to quickly jump straight to the document you’re looking for by typing parts of the document title. This is the story about how that’s implemented. If you stick around to the end, I’ll share an “easter egg” feature that, once you’ve learned it, will make you look really cool at dinner parties. Or, perhaps you just want to navigate MDN faster than mere mortals.

MDN's autocomplete search in action

In its simplest form, the input field has an onkeypress event listener that filters through a complete list of every single document title (per locale). At the time of writing, there are 11,690 different document titles (and their URLs) for English US. You can see a preview by opening https://developer.mozilla.org/en-US/search-index.json. Yes, it’s huge, but it’s not too huge to load all into memory. After all, together with the code that does the searching, it’s only loaded when the user has indicated intent to type something. And speaking of size, because it’s compressed with Brotli, the file is only 144KB over the network.

Implementation details

By default, the only JavaScript code that’s loaded is a small shim that watches for onmouseover and onfocus for the search <input> field. There’s also an event listener on the whole document that looks for a certain keystroke. Pressing / at any point acts the same as if you had used your mouse cursor to put focus into the <input> field. As soon as focus is triggered, the first thing it does is download two JavaScript bundles which turn the <input> field into something much more advanced. In its simplest (pseudo) form, here’s how it works:

<input 
 type="search" 
 name="q"
 onfocus="startAutocomplete()" 
 onmouseover="startAutocomplete()"
 placeholder="Site search..." 
 value="q">
let started = false;
function startAutocomplete() {
  if (started) {
    return false;
  }
  // Remember that we've already injected the bundle, so repeated
  // focus/mouseover events don't add the <script> tag again.
  started = true;
  const script = document.createElement("script");
  script.src = "/static/js/autocomplete.js";
  document.head.appendChild(script);
}

Then it loads /static/js/autocomplete.js which is where the real magic happens. Let’s dig deeper with the pseudo code:

(async function() {
  const response = await fetch('/en-US/search-index.json');
  const documents = await response.json();
  
  const inputValue = document.querySelector(
    'input[type="search"]'
  ).value;
  const flex = FlexSearch.create();
  documents.forEach(({ title }, i) => {
    flex.add(i, title);
  });

  const indexResults = flex.search(inputValue);
  const foundDocuments = indexResults.map((index) => documents[index]);
  displayFoundDocuments(foundDocuments.slice(0, 10));
})();

As you can probably see, this is an oversimplification of how it actually works, but it’s not yet time to dig into the details. The next step is to display the matches. We use (TypeScript) React to do this, but the following pseudo code is easier to follow:

function displayFoundDocuments(documents) {
  const container = document.createElement("ul");
  documents.forEach(({url, title}) => {
    const row = document.createElement("li");
    const link = document.createElement("a");
    link.href = url;
    link.textContent = title;
    row.appendChild(link);
    container.appendChild(row);
  });
  document.querySelector('#search').appendChild(container);
}

Then, with some CSS, we display this as an overlay just beneath the <input> field. We highlight each title according to the inputValue, and various keystroke event handlers take care of highlighting the relevant row when you navigate up and down.

Ok, let’s dig deeper into the implementation details

We create the FlexSearch index just once and re-use it for every new keystroke. Because the user might type more while waiting for the network, it’s actually reactive, so it executes the actual search once all the JavaScript and the JSON XHR have arrived.

Before we dig into what this FlexSearch is, let’s talk about how the display actually works. For that we use a React library called downshift which handles all the interactions, displays, and makes sure the displayed search results are accessible. downshift is a mature library that handles a myriad of challenges with building a widget like that, especially the aspects of making it accessible.

So, what is this FlexSearch library? It’s another third-party library that makes sure searching on titles is done with natural language in mind. It describes itself as the “Web’s fastest and most memory-flexible full-text search library with zero dependencies.” Searching with it is a lot more performant and accurate than attempting to simply look for one string in a long list of other strings.

Deciding which result to show first

In fairness, if the user types foreac, it’s not that hard to reduce a list of 10,000+ document titles down to only those that contain foreac in the title; the harder part is deciding which result to show first. The way we implement that is by relying on pageview stats. We record, for every single MDN URL, which one gets the most pageviews as a form of determining “popularity”. The documents that most people decide to arrive on are most probably what the user was searching for.

Our build process that generates the search-index.json file knows about each URL’s number of pageviews. We actually don’t care about absolute numbers, but what we do care about is the relative differences. For example, we know that Array.prototype.forEach() (that’s one of the document titles) is a more popular page than TypedArray.prototype.forEach(), so we leverage that and sort the entries in search-index.json accordingly. Now, with FlexSearch doing the reduction, we use the “natural order” of the array as the trick that tries to give users the document they were probably looking for. It’s actually the same technique we use for Elasticsearch in our full site-search. More about that in: How MDN’s site-search works.
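
Conceptually, the build step might do something like the following (a hypothetical sketch; the real logic lives in MDN’s build tooling, and the entry shapes here are made up):

// Hypothetical build-time step: sort by pageviews, descending, so the
// array's "natural order" doubles as a popularity ranking, then drop
// the counts from the published search-index.json file.
const entries = [
  { title: "TypedArray.prototype.forEach()", url: "/en-US/docs/...", pageviews: 10 },
  { title: "Array.prototype.forEach()", url: "/en-US/docs/...", pageviews: 90 },
];
entries.sort((a, b) => b.pageviews - a.pageviews);
const searchIndex = entries.map(({ title, url }) => ({ title, url }));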

The easter egg: How to search by URL

Actually, it’s not a whimsical easter egg, but a feature that came from the fact that this autocomplete needs to work for our content creators. You see, when you work on the content in MDN you start a local “preview server” which is a complete copy of all documents, all running locally, as a static site, under http://localhost:5000. There, you don’t want to rely on a server to do searches. Content authors need to quickly move between documents, and that’s much of the reason why the autocomplete search is done entirely in the client.

Commonly implemented in tools like the VSCode and Atom IDEs, “fuzzy search” lets you find and open files simply by typing portions of the file path. For example, searching for whmlemvo should find the file files/web/html/element/video. You can do that with MDN’s autocomplete search too. The way you do it is by typing / as the first input character; a sketch of this kind of matching follows below.

Activate "fuzzy search" on MDN

It makes it really quick to jump straight to a document if you know its URL but don’t want to spell it out exactly.
In fact, there’s another way to navigate and that is to first press / anywhere when browsing MDN, which activates the autocomplete search. Then you type / again, and you’re off to the races!
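
A fuzzy match like this can be thought of as an ordered-subsequence test. Here is a minimal sketch of the idea (not MDN’s actual matcher, which also scores and ranks results):

// Does the URL contain the typed characters, in order?
// "whmlemvo" becomes the regex /w.*h.*m.*l.*e.*m.*v.*o/i.
function fuzzyMatch(input, url) {
  const pattern = input
    .split("")
    .map((c) => c.replace(/[.*+?^${}()|[\]\\]/g, "\\$&")) // escape regex chars
    .join(".*");
  return new RegExp(pattern, "i").test(url);
}

fuzzyMatch("whmlemvo", "files/web/html/element/video"); // true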

How to get really deep into the implementation details

The code for all of this is in the Yari repo which is the project that builds and previews all of the MDN content. To find the exact code, click into the client/src/search.tsx source code and you’ll find all the code for lazy-loading, searching, preloading, and displaying autocomplete searches.

The post How MDN’s autocomplete search works appeared first on Mozilla Hacks - the Web developer blog.

The Rust Programming Language BlogThe push for GATs stabilization

Where to start, where to start...

Let's begin by saying: this is a very exciting post. Some people reading this will be overwhelmingly thrilled; some will have no idea what GATs (generic associated types) are; others might be in disbelief. The RFC for this feature did get opened in April of 2016 (and merged about a year and a half later). In fact, this RFC even predates const generics (an MVP of which was recently stabilized). Don't let this fool you though: it is a powerful feature, and the reactions to the tracking issue on GitHub should give you an idea of its popularity (it is the most upvoted issue on the Rust repository): [Screenshot: reactions on the GATs tracking issue]

If you're not familiar with GATs, they allow you to define type, lifetime, or const generics on associated types. Like so:

trait Foo {
    type Bar<'a>;
}

Now, this may seem underwhelming, but I'll go into more detail later as to why this really is a powerful feature.

But for now: what exactly is happening? Well, nearly four years after its RFC was merged, the generic_associated_types feature is no longer "incomplete."

crickets chirping

Wait...that's it?? Well, yes! I'll go into a bit of detail later in this blog post as to why this is a big deal. But, long story short, a good number of changes have had to be made to the compiler to get GATs to work. And, while there are still a few small remaining diagnostics issues, the feature is finally in a state where we feel comfortable making it no longer "incomplete".

So, what does that mean? Well, all it really means is that when you use this feature on nightly, you'll no longer get the "generic_associated_types is incomplete" warning. However, the real reason this is a big deal: we want to stabilize this feature. But we need your help. We need you to test this feature, to file issues for any bugs you find or for potential diagnostic improvements. Also, we'd love for you to just tell us about some interesting patterns that GATs enable over on Zulip!

Without making promises that we aren't 100% sure we can keep, we have high hopes we can stabilize this feature within the next couple months. But, we want to make sure we aren't missing glaringly obvious bugs or flaws. We want this to be a smooth stabilization.

Okay. Phew. That's the main point of this post and the most exciting news. But as I said before, I think it's also reasonable for me to explain what this feature is, what you can do with it, and some of the background and how we got here.

So what are GATs?

Note: this will only be a brief overview. The RFC contains many more details.

GATs (generic associated types) were originally proposed in RFC 1598. As said before, they allow you to define type, lifetime, or const generics on associated types. If you're familiar with languages that have "higher-kinded types", then you could call GATs type constructors on traits. Perhaps the easiest way for you to get a sense of how you might use GATs is to jump into an example.

Here is a popular use case: a LendingIterator (formerly known as a StreamingIterator):

trait LendingIterator {
    type Item<'a> where Self: 'a;

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>>;
}

Let's go through one implementation of this, a hypothetical <[T]>::windows_mut, which allows for iterating through overlapping mutable windows on a slice. If you were to try to implement this with Iterator today like

struct WindowsMut<'t, T> {
    slice: &'t mut [T],
    start: usize,
    window_size: usize,
}

impl<'t, T> Iterator for WindowsMut<'t, T> {
    type Item = &'t mut [T];

    fn next<'a>(&'a mut self) -> Option<Self::Item> {
        let retval = self.slice[self.start..].get_mut(..self.window_size)?;
        self.start += 1;
        Some(retval)
    }
}

then you would get an error.

error[E0495]: cannot infer an appropriate lifetime for lifetime parameter in function call due to conflicting requirements
  --> src/lib.rs:9:22
   |
9  |         let retval = self.slice[self.start..].get_mut(..self.window_size)?;
   |                      ^^^^^^^^^^^^^^^^^^^^^^^^
   |
note: first, the lifetime cannot outlive the lifetime `'a` as defined on the method body at 8:13...
  --> src/lib.rs:8:13
   |
8  |     fn next<'a>(&'a mut self) -> Option<Self::Item> {
   |             ^^
note: ...so that reference does not outlive borrowed content
  --> src/lib.rs:9:22
   |
9  |         let retval = self.slice[self.start..].get_mut(..self.window_size)?;
   |                      ^^^^^^^^^^
note: but, the lifetime must be valid for the lifetime `'t` as defined on the impl at 6:6...
  --> src/lib.rs:6:6
   |
6  | impl<'t, T: 't> Iterator for WindowsMut<'t, T> {
   |      ^^

Put succinctly, this error is essentially telling us that in order for us to be able to return a reference to self.slice, it must live as long as 'a, which would require an 'a: 't bound (which we can't provide). Without this, we could call next while already holding a reference to the slice, creating overlapping mutable references. However, it does compile fine if you were to implement this using the LendingIterator trait from before:

impl<'t, T> LendingIterator for WindowsMut<'t, T> {
    type Item<'a> where Self: 'a = &'a mut [T];

    fn next<'a>(&'a mut self) -> Option<Self::Item<'a>> {
        let retval = self.slice[self.start..].get_mut(..self.window_size)?;
        self.start += 1;
        Some(retval)
    }
}

As an aside, there's one thing to note about this trait and impl that you might be curious about: the where Self: 'a clause on Item. Briefly, this allows us to use &'a mut [T]; without this where clause, someone could try to return Self::Item<'static> and extend the lifetime of the slice. We understand that this is a point of confusion sometimes and are considering potential alternatives, such as always assuming this bound or implying it based on usage within the trait (see this issue). We definitely would love to hear about your use cases here, particularly when assuming this bound would be a hindrance.

As another example, imagine you wanted a struct to be generic over a pointer to a specific type. You might write the following code:

trait PointerFamily {
    type Pointer<T>: Deref<Target = T>;

    fn new<T>(value: T) -> Self::Pointer<T>;
}

struct ArcFamily;
struct RcFamily;

impl PointerFamily for ArcFamily {
    type Pointer<T> = Arc<T>;
    ...
}
impl PointerFamily for RcFamily {
    type Pointer<T> = Rc<T>;
    ...
}

struct MyStruct<P: PointerFamily> {
    pointer: P::Pointer<String>,
}

We won't go in-depth on the details here, but this example is nice in that it not only highlights the use of types in GATs, but also shows that you can still use the trait bounds that you already can use on associated types.

These two examples only scratch the surface of the patterns that GATs support. If you find any that seem particularly interesting or clever, we would love to hear about them over on Zulip!

Why has it taken so long to implement this?

So what has caused this to take nearly four years to get to the point we're at now? Well, it's hard to put into words how much the existing trait solver has had to change and adapt; but consider this: for a while, it was thought that to support GATs, we would have to transition rustc to use Chalk, a potential future trait solver that uses logical predicates to solve trait goals (though, while some progress has been made, it's still very experimental even now).

For reference, here are some various implementation additions and changes that have been made that have furthered GAT support in some way or another:

  • Parsing GATs in AST (#45904)
  • Resolving lifetimes in GATs (#46706)
  • Initial trait solver work to support lifetimes (#67160)
  • Validating projection bounds (and making changes that allow type and const GATs) (#72788)
  • Separating projection bounds and predicates (#73905)
  • Allowing GATs in trait paths (#79554)
  • Partially replace leak check with universes (#65232)
  • Move leak check to later in trait solving (#72493)
  • Replacing bound vars in GATs with placeholders when projecting (#86993)

And to further emphasize the work above: many of these PRs are large and have considerable design work behind them. There are also several smaller PRs along the way. But, we made it. And I just want to congratulate everyone who's put effort into this one way or another. You rock.

What limitations are there currently?

Ok, so now comes the part that nobody likes hearing about: the limitations. Fortunately, in this case, there's really only one GAT limitation: traits with GATs are not object safe. This means you won't be able to do something like

fn takes_iter(_: &mut dyn for<'a> LendingIterator<Item<'a> = &'a i32>) {}

The biggest reason for this decision is that there's still a bit of design and implementation work to actually make this usable. And while this is a nice feature, adding this in the future would be a backward-compatible change. We feel that it's better to get most of GATs stabilized and then come back and try to tackle this later than to block GATs for even longer. Also, GATs without object safety are still very powerful, so we don't lose much by deferring this.

As was mentioned earlier in this post, there are still a couple remaining diagnostics issues. If you do find bugs though, please file issues!

Wladimir PalantData exfiltration in Keepa Price Tracker

As readers of this blog might remember, shopping assistants aren’t exactly known for their respect of your privacy. They will typically use their privileged access to your browser in order to extract data. For them, this ability is a competitive advantage. You pay for a free product with a privacy hazard.

Usually, the vendor will claim to anonymize all data, a claim that can rarely be verified. Even if the anonymization actually happens, it’s really hard to do this right. If anonymization can be reversed and the data falls into the wrong hands, this can have severe consequences for a person’s life.

[Image: A meat grinder with the Keepa logo on its side working on the Amazon logo, producing lots of prices and stars. Image credits: Keepa, palomaironique, Nikon1803]

Today we will take a closer look at a browser extension called “Keepa – Amazon Price Tracker”, which is used by at least two million users across different browsers. The extension is published by a German company, and its privacy policy is refreshingly short and concise, suggesting that no unexpected data collection is going on. The reality however is: not only will this extension extract data from your Amazon sessions, it will even use your bandwidth to load various Amazon pages in the background.

The server communication

The Keepa extension keeps a persistent WebSocket connection open to its server dyn.keepa.com. The connection parameters include your unique user identifier, stored both in the extension and as a cookie on keepa.com. As a result, this identifier survives both clearing browsing data and reinstalling the extension; you’d have to do both for it to be cleared. If you choose to register on keepa.com, this identifier will also be tied to your user name and email address.

Looking at the messages being exchanged, you’ll see that they are binary data. But they aren’t encrypted; it’s merely deflate-compressed JSON data.

[Screenshot: Developer tools showing binary messages being exchanged]

You can see the original message contents by copying the message as a Base64 string, then running the following code in the context of the extension’s background page:

pako.inflate(atob("eAGrViouSSwpLVayMjSw0FFQylOyMjesBQBQGwZU"), {to: "string"});

This will display the initial message sent by the server:

{
  "status": 108,
  "n": 71
}

What does Keepa learn about your browsing?

Whenever I open an Amazon product page, a message like the following is sent to the Keepa server:

{
  "payload": [null],
  "scrapedData": {
    "tld": "de"
  },
  "ratings": [{
    "rating": "4,3",
    "ratingCount": "2.924",
    "asin": "B0719M4YZB"
  }],
  "key": "f1",
  "domainId": 3
}

This tells the server that I am using Amazon Germany (the value 3 in domainId stands for .de; 1 would have been .com). It also indicates the product I viewed (the asin field) and how it was rated by Amazon users. Depending on the product, additional data like the sales rank might be present here. Also, the page scraping rules are determined by the server and can change at any time to collect more sensitive data.

A similar message is sent when an Amazon search is performed. The only difference here is that the ratings array contains multiple entries, one for each article in your search results. While the search string itself isn’t transmitted (not with the current scraping rules at least), it’s trivial to deduce from the search results what you searched for.

Extension getting active on its own

That’s not the end of it however. The extension will also regularly receive instructions like the following from the server (shortened for clarity):

{
  "key": "o1",
  "url": "https://www.amazon.de/gp/aod/ajax/ref=aod_page_2?asin=B074DDJFTH&…",
  "isAjax": true,
  "httpMethod": 0,
  "domainId": 3,
  "timeout": 8000,
  "scrapeFilters": [{
    "sellerName": {
      "name": "sellerName",
      "selector": "#aod-offer-soldBy div.a-col-right > a:first-child",
      "altSelector": "#aod-offer-soldBy .a-col-right span:first-child",
      "attribute": "text",
      "reGroup": 0,
      "multiple": false,
      "optional": true,
      "isListSelector": false,
      "parentList": "offers",
      "keepBR": false
    },
    "rating": {
      "name": "rating",
      "selector": "#aod-offer-seller-rating",
      "attribute": "text",
      "regExp": "(\\d{1,3})\\s?%",
      "reGroup": 1,
      "multiple": false,
      "optional": true,
      "isListSelector": false,
      "parentList": "offers",
      "keepBR": false
    },
    
  }],
  "l": [{
    "path": ["chrome", "webRequest", "onBeforeSendHeaders", "addListener"],
    "index": 1,
    "a": {
      "urls": ["<all_urls>"],
      "types": ["main_frame", "sub_frame", "stylesheet", "script", ]
    },
    "b": ["requestHeaders", "blocking", "extraHeaders"]
  }, , null],
  "block": "(https?:)?\\/\\/.*?(\\.gif|\\.jpg|\\.png|\\.woff2?|\\.css|adsystem\\.)\\??"
}

The address https://www.amazon.de/gp/aod/ajax/ref=aod_page_2?asin=B074DDJFTH belongs to an air compressor, not a product I’ve ever looked at but one that Keepa is apparently interested in. The extension will now attempt to extract data from this page despite me not navigating to it. Because the isAjax flag is set here, this address is loaded via XMLHttpRequest, after which the response text is put into a frame of the extension’s background page. If the isAjax flag weren’t set, this page would be loaded directly into another frame.

The scrapeFilters key sets the rules to be used for analyzing the page. These will extract ratings, prices, availability and any other information via CSS selectors and regular expressions. Here Keepa is also interested in the seller’s name, elsewhere in the shipping information and security tokens. There is also functionality to read out the contents of the Amazon cart, but I didn’t look too closely at that.
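
To illustrate what such a rule amounts to, here is a hedged reconstruction of how a filter like sellerName or rating above could be evaluated against a loaded page. The field names come from the JSON message; everything else is my own sketch, not Keepa’s actual code:

interface ScrapeFilter {
  selector: string;
  altSelector?: string;
  attribute: string; // "text" means: take the element's text content
  regExp?: string;
  reGroup: number;
  optional: boolean;
}

// Evaluate one server-supplied rule against a page loaded in a frame.
function applyFilter(doc: Document, filter: ScrapeFilter): string | null {
  const element =
    doc.querySelector(filter.selector) ??
    (filter.altSelector ? doc.querySelector(filter.altSelector) : null);
  if (!element) return null; // acceptable when filter.optional is true

  const raw =
    filter.attribute === "text"
      ? element.textContent ?? ""
      : element.getAttribute(filter.attribute) ?? "";

  if (!filter.regExp) return raw.trim();
  const match = raw.match(new RegExp(filter.regExp));
  return match ? match[filter.reGroup] : null;
}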

The l key is also interesting. It tells the extension’s background page to call a particular method with the given parameters; here the chrome.webRequest.onBeforeSendHeaders.addListener method is being called. The index key determines which of the predefined listeners should be used. The purpose of the predefined listeners seems to be removing some security headers as well as making sure that headers like Cookie are set correctly.
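
Conceptually, the dispatch is simple: walk the given path from the global object and invoke whatever function is found there. The following is my own reconstruction based on the message format, not the extension’s actual source; the exact argument plumbing isn’t visible from the outside, but judging by the eval.call() and executeScript.call() examples below, the predefined listener is supplied via Function.prototype.call():

// My reconstruction of the dispatch, with invented names.
const predefinedListeners: Array<(...args: unknown[]) => unknown> = [
  /* listeners baked into the extension, e.g. for header rewriting */
];

function dispatch(instr: { path: string[]; index: number; a: unknown; b: unknown }) {
  // Resolve e.g. ["chrome", "webRequest", "onBeforeSendHeaders",
  // "addListener"] against the global object...
  const method = instr.path.reduce((obj: any, key) => obj?.[key], globalThis as any);
  // ...and invoke it, with the chosen predefined listener supplied
  // through Function.prototype.call() and the server-provided
  // parameters as arguments.
  method.call(predefinedListeners[instr.index], instr.a, instr.b);
}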

The server’s effective privileges

Let’s take a closer look at the privileges granted to the Keepa server here; they aren’t entirely obvious. Loading pages in the background isn’t meant to happen within the user’s usual session; there is some special cookie handling meant to produce a separate session for scraping only. This doesn’t appear to always work reliably, and I am fairly certain that the server can make pages load in the usual Amazon session, rendering it capable of impersonating the user towards Amazon. As the server can also extract arbitrary data, it is for example entirely possible to add a shipping address to the user’s Amazon account and to place an order that will be shipped there.

The l key is also worth a second look. At first the impact seems limited by the fact that the first parameter will always be a function, one out of a few possible functions. But the server could use that functionality to call eval.call(function(){}, "alert(1)") in the context of the extension’s background page and execute arbitrary JavaScript code. Luckily, this call doesn’t succeed thanks to the extension’s default Content Security Policy.

But there are more possible calls, and some of these succeed. For example, the server could tell the extension to call chrome.tabs.executeScript.call(function(){}, {code: "alert(1)"}). This will execute arbitrary JavaScript code in the current tab if the extension has access to it (meaning any Amazon website). It would also be possible to specify a tab identifier in order to inject JavaScript into background tabs: chrome.tabs.executeScript.call(function(){}, 12, {code: "alert(1)"}). For this the server doesn’t need to know which tabs are open: tab identifiers are sequential, so it’s possible to find valid tab identifiers simply by trying out potential candidates.
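
To see why sequential identifiers matter, here is an illustrative sketch of such a sweep (hypothetical code, written as the server-controlled caller would use it; chrome.tabs.executeScript is the Manifest V2 API the extension has access to):

// Illustrative only: tab ids are handed out sequentially, so a
// caller can sweep a range of candidates and inject into any tab
// the extension has host permissions for (here: Amazon pages).
function probeTabs(maxId: number): void {
  for (let tabId = 1; tabId <= maxId; tabId++) {
    chrome.tabs.executeScript(tabId, { code: "/* injected code */" }, () => {
      // Reading lastError swallows the errors raised for
      // candidate ids that don't correspond to an open tab.
      void chrome.runtime.lastError;
    });
  }
}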

Privacy policy

Certainly, a browser extension collecting all this data will have a privacy policy to explain how this data is used? Here is the privacy policy of the German-based Keepa GmbH in full:

You can use all of our services without providing any personal information. However, if you do so we will not sell or trade your personal information under any circumstance. Setting up a tracking request on our site implies that you’d like us to contact you via the contact information you provided us. We will do our best to only do so if useful and necessary - we hate spam as much as you do. If you login/register using Social-Login or OpenID we will only save the username and/or email address of the provided data. Should you choose to subscribe to one of our fee-based subscriptions we will share your email and billing address with the chosen payment provider - solely for the purpose of payment related communication and authentication. You can delete all your information by deleting your account through the settings.

This doesn’t sound right. Despite being linked under “Privacy practices” in the Chrome Web Store, it appears to apply only to the Keepa website, not to any of the extension functionality. The privacy policy on the Mozilla Add-ons site is more specific despite also being remarkably short (formatting of the original preserved):

You can use this add-on without providing any personal information. If you do opt to share contact information, we will only use it to provide you updates relevant to your tracking requests. Under no circumstances will your personal information be made available to a third party. This add-on does not collect any personal data beyond the contact information provided by you.

Whenever you visit an Amazon product page the ASIN (Amazon Standard Identification Number) of that product is used to load its price history graph from Keepa.com. We do not log such requests.

The extension creates required functional cookies containing a session and your settings on Keepa.com, which is required for session management (storing settings and accessing your Keepa.com account, if you create one). No other (tracking, advertising) cookies are created.

This refers to some pieces of the Keepa functionality, but it once again completely omits the data collection outlined here. It’s reassuring to know that they don’t log product identifiers when showing product history, but they don’t need to if their extension sends far more detailed data to the server on another channel. This makes the first sentence, formatted as bold text, a clear lie. Unless of course you don’t consider the information collected here personal. I’m not a lawyer; maybe in the legal sense it isn’t.

I’m fairly certain however that this privacy policy doesn’t meet the legal requirements of the GDPR. To be compliant it would need to mention the data being collected, explain the legal grounds for doing so, how it is being used, how long it is being kept and who it is shared with.

That said, this isn’t the only regulation violated by Keepa. As a German company, they are obliged to publish a legal note (in German: Impressum) on their website so that visitors can immediately recognize the party responsible. Keepa hides both this information and the privacy policy in a submenu (one has to click “Information” first) under the misleading name “Disclaimer.” The legal requirements are for both pages to be reachable with one click, and the link title needs to be unambiguous.

Conclusions

The Keepa extension is equipped to collect any information about your Amazon visits. Currently it collects information about the products you look at and the ones you search for, all of it tied to a unique and persistent user identifier. Even if you never choose to register on the Keepa website, there is considerable potential for the collected data to be deanonymized.

Some sloppy programming had the (likely unintended) consequence of making the server even more powerful, essentially granting it full control over any Amazon page you visit. Luckily, the extension’s privileges don’t give it access to any websites beyond Amazon.

The company behind the extension fails to comply with its legal obligations. The privacy policy is misleading in claiming that no personal data is being collected. It fails to explain how the data is being used and who it is shared with. There are certainly companies interested in buying detailed online shopping profiles, and a usable privacy policy needs to at least exclude the possibility of the data being sold.

Cameron KaiserAnd now for something completely different: "Upgrading" your Quad G5 LCS

One of the most consistently popular old posts on this blog is our discussion on long-life computing and how to extend the working, arguably even useful, life of your Power Mac. However, what I think gives it particular continued traction is it has a section on how to swap out the liquid cooling system of the Quad G5, obviously the most powerful Power Macintosh ever made and one of the only two G5 systems I believe worth using (the other being the dual-processor 2.3GHz, as it is aircooled). LCSes are finicky beasts under the best of conditions and certain liquid-cooled models of the G5 line have notoriously bad reputations for leakage. My parents' dual 2.5GHz, for example, succumbed to a leak and it ended up being a rather ugly postmortem.

The Quad G5 is one of the better ones in this regard and most of the ones that would have suffered early deaths already have, but it still requires service due to evaporative losses and sediment, and any Quad on its original processors is by now almost certainly a windtunnel under load. An ailing LCS, even an intact one, runs the real risk of an unexpected shutdown if the CPU it can no longer cool effectively ends up exceeding its internal thermal limits; you'll see a red OVERTEMP light illuminate on the logic board when this is imminent, followed by a CHECKSTOP. Like an automotive radiator it is possible to open the LCS up and flush the coolant (and potentially service the pumps), but this is not a trivial process. Additionally, those instructions are for the single-pump Delphi version 1 assembly, which is the more reliable of the two; the less reliable double-pump Cooligy version 2 assemblies are even harder to work on.

Unfortunately our current employment situation requires I downsize, so I've been starting on consolidating or finding homes for excess spare systems. I had several spare Quad G5 systems in storage in various states, all version 2 Cooligy LCSes, but the only LCS assemblies I have in stock (and the LCS in my original Quad G5) are version 1. These LCSes were bought Apple Certified Refurbished, so they were known to be in good condition and ready to go; as the spare Quads were all on their original marginal LCSes and processors, I figured I would simply "upgrade" the best-condition v2 G5 with a v1 assembly. The G5 service manual doesn't say anything about this, though it has nothing in it indicating that they aren't interchangeable, or that they need different logic boards or ROMs, and now having done it I can attest that it "just works." So here's a few things to watch out for.

Both the v1 and the v2 assemblies have multiple sets of screws: four "captive" (not really) float plate screws, six processor mount screws, four terminal assembly screws (all of which require a 3mm flathead hex driver), and four captive ballheads (4mm ballhead hex). Here's the v1, again:

And here's the v2. Compare and contrast.

The float plate screws differ between the two versions, and despite the manual calling them "captive" they can be inadvertently removed. If your replacement v1 doesn't have float plate screws in it, as mine didn't, the system will not boot unless they are installed (along with the terminal assembly screws, which are integral portions of the CPU power connections). I had to steal them from a dead G5 core module that I fortunately happen to have kept.

Once installed, the grey inlet frame used in the v2 doesn't grip the v1:

The frame is not a necessary part. You can leave it out as the front fan module and clear deflector are sufficient to direct airflow. However, if you have a spare v1 inlet frame, you can install that; the mounting is the same.

The fan and pump connector cable is also the same between v1 and v2, though you may need to move the cable around a bit to get the halves to connect if it was in a wacky location.

Now run thermal calibration, and enjoy your renewed Apple PowerPC tank.

Firefox Add-on ReviewsSupercharge your productivity with a browser extension

With more work and education happening online (and at home) you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right browser extension can give you an edge in the art of efficiency. 

I need help saving and organizing a lot of web content 

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research. 

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension’s toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams. 

On your Gyazo homepage you can easily browse and sort everything you’ve clipped; and organize everything into shareable topics or collections.

<figcaption>With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages. </figcaption>

Evernote Web Clipper

Similar to Gyazo, Evernote Web Clipper offers a kindred feature set—clip, save, and share web content—albeit with some nice user interface distinctions. 

Evernote places emphasis on making it easy to annotate images and articles for collaborative purposes. It also has a strong internal search feature, allowing you to search for specific words or phrases that might appear across scattered groupings of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages. 

Focus! Focus! Focus!

Anti-distraction extensions can be a major boon for online workers and students… 

Block Site 

Do you struggle to avoid certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits. 

Just list the websites you want to avoid for specified periods of time (certain hours of the day or some days entirely, etc.) and Block Site won’t let you access them until you’re out of the focus zone. There’s also a fun redirection feature where you’re automatically redirected to a more productive website anytime you try to visit a time waster. 

<figcaption>Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site</figcaption>

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features. 

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities—like blocking just portions of websites (e.g. you can’t access the YouTube homepage but you can see video pages), setting restrictions on predetermined days (e.g. no Twitter on weekends), or delaying access to certain websites by 60 seconds to give you time to reconsider that potentially productivity-killing decision. 

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals. 

The premise is simple: it assumes everyone’s productive attention span is limited, so you break up your work into manageable “tomato” chunks. Let’s say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it’s break time (break lengths are customizable, too). It’s a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Tranquility Reader

Imagine a world wide web where everything but the words is stripped away—no distracting images, ads, or tempting links to related stories—just the words you’re there to read. That’s Tranquility Reader. 

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later reading, customize font size and colors, add annotations to saved pages, and more. 

We hope some of these great extensions will give your productivity a serious boost! Fact is there are a vast number of extensions out there that could possibly help your productivity—everything from ways to organize tons of open tabs to translation tools to bookmark managers and more.