Firefox NightlyFantastic Firefox Fixes – These Weeks in Firefox: Issue 167

Highlights

  • Firefox 130 goes out today! Check out some interesting opt-in early features in Firefox Labs!
  • Puppeteer v23 released with official Firefox support, using Webdriver BiDi. Read our announcement on hacks, as well as the Chrome DevTools’ blog post.
  • Marco fixed a regression bug where the Mobile Bookmarks folder was no longer visible in the bookmarks menus – Bug 1913976
  • Amy, Maxx, Scott and Nathan have been working on some new layout variants for New Tab that we aim to experiment with in the next few releases. (Meta bug)
    • Try it in Nightly: (Set either of these prefs to True)
      • browser.newtabpage.activity-stream.newtabLayouts.variant-a
      • browser.newtabpage.activity-stream.newtabLayouts.variant-b
  • Mandy has implemented autofill for intuitive restrict keywords (e.g. typing @bookmarks instead of *) – Bug 1912045
    • You must set browser.urlbar.searchRestrictKeywords.featureGate to true in about:config for this for now.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]
  • Irene Ni
  • Nipun Shukla
  • Robert Holdsworth
  • Tim Williams

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • As part of follow ups to the Manifest V3 improvements, the extensions button setWhenClicked/setAlwaysOn context menu items have been fixed to account for the extension host permissions listed in the manifest and the ones already granted – Bug 1905146
  • We fixed a regression with the unlimitedStorage permission being revoked for extensions when users cleared recent history – Bug 1907732
  • Thanks to Gregory Pappas, the internals used by the tabs’s captureTab/captureVisibleTab API methods have been migrated to use OffscreenCanvas (and migrated away from using an hidden window) – Bug 1914102
WebExtension APIs
  • Fixed openedTabId for notified through tabs.onUpdated API event when changes through tabs.update API method – Bug 1409262
  • Fixed downloads.download API method throwing on folder names that contains a dot and a space – Bug 1903780
    • NOTE: this fix has been landed in Nightly 131, but it has been also uplifted to Firefox 130 and Firefox ESRs 128 and 115.
  • Fixed webRequest issues related to ChannelWrapper cached attributes missing to be invalidated on HTTP redirects (Bug 1909081, Bug 1909270)
  • Introduced quota enforcement to storage.session API – Bug 1908925
Addon Manager & about:addons
  • Fixed enable/disabled state of the new sidebar extension context menu items (adjusted based on the addon permissions and Firefox prefs) – Bug 1910581

DevTools

DevTools Toolbox
  • Gregory Pappas is reducing usage of hidden windows in the codebase, which we were using in a few places in DevTools (#1914107, #1546738, #1914101, #1915014)
  • Mathew Hodson added a link to MDN in Netmonitor for the Priority header (#1894758)
  • Emilio fixed an issue that was preventing users to modify CSS declarations in the Inspector for stylesheet imported into a layer (#1912996)
  • Nicolas tweaked the styling of focused element and inputs in the markup view so it’s less confusing (#1907803)
  • Nicolas made a few changes to improve custom properties in the Inspector
    • We’re now displaying the computed value of custom properties in the tooltip when it differs from the declaration value (#1626234), and made the different values displayed in the tooltip more colorful (#1912006)
    • And since we now have the computed values, it’s easy to show color swatches for CSS variables, even when the variable depends on other variables (#1630950)
    • We also display the computed value in the input autocomplete (#1911524)
      • Display empty CSS variable value as <empty> , in the variable tooltip and in the computed panel, so it stands out (#1912267, #1912268)
  • Nicolas fixed a crash in the Rules view that was happening when the page was using a particular declaration value (e.g. (max-width: 10px)) (#1915353)
  • Julian made it possible to change css values with mouse scroll when hovering a numeric value in the input (#1801545)
  • Julian fixed an annoying issue that forced users to disconnect and reconnect the device when remote debugging Android WebExtensions (#1856481)
  • Still in WebExtension land, Julian got rid of a bug where breakpoints could still be triggered after being deleted (#1908095)
  • Alex Thayer Implemented a native backend for the JS tracer which will make tracing much faster (#1906719)
  • Alexandre made it possible to show function arguments in tracer popup previews (#1909548)
  • Hubert is on the last stretch to migrate the Debugger to CodeMirror 6 (#1898204, #1897755, #1914654)
  • Julian fixed a couple issues in the Inspector node picker: picking a video would play/pause said video (#1913263), and also, the NodePicker randomly stopped working after cancelled navigation from about:newtab (#1914863)
WebDriver BiDi
  • External:
    • Gatlin Newhouse updated mozrunner to search for DevEdition when running on macos (#1909999)
    • Dan implemented 2 enhancements for our WebDriver BiDi codebase:
      • Introduced a base class RootBiDiModule (#1850682)
      • Added an emitEventForBrowsingContext method which is useful for most of our root BiDi modules (#1859328)
  • Updates:
    • Julian updated the vendored version of Puppeteer to v23.1.0, which is one of the first releases to officially support Firefox. This should also fix a nasty side effect which could wipe your files when running ./mach puppeteer-test (#1912239 and 1911968)
    • Geckodriver 0.35.0 was released with support for Permissions, a flag to enable the crash reporter, and improvements for the unhandledPromptBehavior capability. (#1871543, blog post)
    • James fixed a bug with input.KeyDownAction and input.keyUpAction which would unexpectedly accept multiple characters (#1910352)
    • Sasha updated the browsingContext.navigate command to properly fail with “unknown error” when the navigation failed (#1905083)
    • Sasha fixed a bug where WebDriver BiDi session.new would return an invalid value for the default unhandledPromptBehavior capability. (#1909455)
    • Julian added support to all the remaining arguments for network.continueResponse, which can now update cookies, headers, statusCode and reasonPhrase of a real network response intercepted in the responseStarted phase (which roughly corresponds to the http-on-examine-response notification) (#1913737 + #1853887)

Fluent

Lint, Docs and Workflow

  • Updated eslint-plugin-jsdoc, which has also enforced some extra formatting around jsdoc comments.
  • Document generation is getting some updates.
    • Errors and Critical issues are now being raised as errors (previously they weren’t being considered).
    • More warnings will now be “fatal”, all the existing instances of those warnings have been eliminated. They’ll now be listed in as a specific failure rather than being hidden in the list of general warnings.
    • Some of the warnings that were being output by the generate CI task have now been resolved, which should make it clearer when trying to understand the failures.

Migration Improvements

  • fchasen is working on a new messaging experiment to help encourage people to create accounts to help facilitate device migration / data transfer. QA has come back green, and we expect to begin enrollment soon!

New Tab Page

  • Scott (:thecount) is working on a plan to transition us off the two separate endpoints that provide firesponsored stories and top sites to New Tab to a single end-point.
  • A new mechanism to let users specify the kinds of stories they are interested in with “thumbs up” / “thumbs down” feedback is being experimented with. We’ll be studying this during the Firefox 130 cycle.
  • We’re (slowly) rolling out a new endpoint for recommended stories to New Tab, powered by Merino. The goal is to eventually allow us to better serve specific content topics that users will be able to choose. This is early days, and still being experimented with – but the new endpoint will make things much simpler for us.

Privacy & Security

Profile Management

  • (Note: to avoid potentially breaking the world for nightly users, this work is currently behind the MOZ_SELECTABLE_PROFILES build flag and the browser.profiles.enabled pref.)
  • Mossop removed the –no-remote command line argument and MOZ_NO_REMOTE environment variable, so that the remoting server will always be enabled in a running instance of Firefox (bug 1906260)
  • Mossop updated the remoting service to support sending command lines after startup (bug 1892400). We’ll use this to broadcast updates across concurrently running instances whenever one of them updates the profile group’s shared SQLite datastore.
  • Niklas landed a change to update the default Firefox profile to the last used (last app focused) profile if multiple profiles in a group are running at the same time (bug 1893710)
  • Jared added support for launching selectable profiles (or any unmanaged profiles not in profiles.ini) using the –profile command line option (bug 1910716). This enables launching selectable profiles from UI clicks.
  • Jared updated the startup sequence to allow starting into new the profile selector window (bug 1893667)

Search and Navigation

  • Scotch Bonnet redesign
    • James improved support for persisting search terms when the feature is enabled – Bug 1901871, Bug 1909301
    • Karandeep implemented updating the unified button icon when the default search engine changes – Bug 1906054
    • James fixed a bug causing 2 search engine chiclets to show in the address bar at the same time – Bug 1911777
    • Dale has restored Actions search mode (“> ”) – Bug 1907147
    • Daisuke fixed alignment of the dedicated search button with results – Bug 1908924 
    • Daisuke fixed search settings not opening in a foreground tab – Bug 1913197
  • Search
    • Moritz added support for SHIFT+Enter/Click on search engines in the legacy search bar to open the initial search engine page – Bug 1907034
  • Other relevant fixes
    • Henri Sivonen has restored functionality of the `network.IDN_show_punycode` pref that affects URLs shown in the address bar – Bug 1913022

Mozilla ThunderbirdThunderbird for Android/ K-9 Mail: July and August 2024 Progress Report

We’re back for an update on Thunderbird for Android/K-9 Mail, combining progress reports for July and August. Did you miss our June update? Check it out! The focus over these two months has been on quality over quantity—behind each improvement is significant groundwork that reduces our technical debt and makes future feature work easier to tackle.

Material 3 Update

As we head  towards the release of Thunderbird for Android, we want you to feel like you are using Thunderbird, and not just any email client. As part of that, we’ve made significant strides toward compatibility with Material 3 to better control coloring and give you a native feel. What do you think so far?

The final missing piece is the navigation drawer, which we believe will land in September. We’ve heard your feedback that the unread emails have been a bit hard to see, especially in dark mode, and have made a few other color tweaks to accompany it.

Feature Modules

If you’ve considered contributing as a developer to Thunderbird for Android, you may have  noticed many intertwined code modules that are hard to tackle without intricate knowledge of the application. To lower the barrier of entry, we’re continuing the move to a feature module system and have been refactoring code to use them. This shift improves maintainability and opens the door for unique features specific to Thunderbird for Android.

Ready to Play

Having a separate Thunderbird for Android app requires some setup in various app-stores, as well as changes to how apps are signed. While this isn’t the fun feature work you’d be excited to hear about, it is foundational to getting Thunderbird for Android out of the door. We’re almost ready to play, just a few legal checkboxes we need to tick.

Documentation

 K-9 Mail user documentation has become outdated, still referencing older versions like K-9 Mail 6.4. Given our current resources, we’ve paused updates to the guide, but if you’re passionate about improving documentation, we’d love your help to bring it back online! If you are interested in maintaining our user documentation, please reach out on the K-9 Forums.

Community Contributions

We’ve had a bunch of great contributions come in! Do you want to see your name here next time? Learn how to contribute.

The post Thunderbird for Android/ K-9 Mail: July and August 2024 Progress Report appeared first on The Thunderbird Blog.

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 130-131)

SpiderMonkey Newsletter 130-131

Hello everyone!

I’m Bryan Thrall, just passing two and a half years on the SpiderMonkey team, and taking a try at newsletter writing.

This is our opportunity to highlight what’s happened in the world of SpiderMonkey over Firefox releases 130 and 131.

I’d love to hear any feedback on the newsletter you have, positive or negative (you won’t hurt my feelings). Send it to my email!

🚀 Performance

Though Speedometer 3 has shipped, we cannot allow that to let us get lax with our performance. It’s important that SpiderMonkey be fast so Firefox can be fast!

  • Contributor Andre Bargull (@anba) added JIT support for Float16Array (bug 1835034)
⚡ Wasm
  • Ryan (@rhunt) implemented speculative inlining (bug 1910194)*. This allows us to inline calls based on profiling data in wasm
  • Julian (@jseward) added support for direct call inlining in Ion (bug 1868521)*
  • Ryan (@rhunt) landed initial support for lazy tiering (bug 1905716)*
  • Ryan (@rhunt) shipped exnref support (bug 1908375)
  • Yury (@yury) added JS Promise Integration support for x86-32 and ARM (bug 1896218, bug 1897153)*

* Disabled by default while they are tested and refined.

🕸️ Web Features Work
  • Andre Bargull (@anba), has dramatically improved our JIT support for BigInt operations (bug 1913947, bug 1913949, bug 1913950)
  • Andre Bargull (@anba) also implemented the RegExp.escape proposal (bug 1911097)
  • Contributor Kiril K (@kirill.kuts.dev) implemented the Regular Expression Pattern Modifiers proposal (bug 1899813)
  • Dan (@dminor) shipped synchronous Iterator Helpers (bug 1896390)
👷🏽‍♀️ SpiderMonkey Platform Improvements
  • Matt (@mgaudet) introduced JS_LOG, which connects to MOZ_LOG when building SpiderMonkey with Gecko (bug 1904429). This will eventually allow collecting SpiderMonkey logs from the profiler and about:logging.

Will Kahn-GreeneSwitching from pyenv to uv

Premise

The 0.4.0 release of uv does everything I currently do with pip, pyenv, pipx, pip-tools, and pipdeptree. Because of that, I'm in the process of switching to uv.

This blog post covers switching from pyenv to uv.

History

  • 2024-08-29: Initial writing.

  • 2024-09-12: Minor updates and publishing.

Start state

I'm running Ubuntu Linux 24.04. I have pyenv installed using the the automatic installer. pyenv is located in $HOME/.pyenv/bin/.

I have the following Pythons installed with pyenv:

I'm not sure why I have 3.7 still installed. I don't think I use that for anything.

My default version is 3.10.14 for some reason. I'm not sure why I haven't updated that to 3.12, yet.

In my 3.10.14, I have the following Python packages installed:

That probably means I installed the following in the Python 3.10.14 Python environment:

  • MozPhab

  • pipx

  • virtualenvwrapper

Maybe I installed some other things for some reason lost in the sands of time.

Then I had a whole bunch of things installed with pipx.

I have many open source projects all of which have a .python-version file listing the Python versions the project uses.

I think that covers the start state.

Steps

First, I made a list of things I had.

I uninstalled all the packages I installed with pipx.

Then I uninstalled pyenv and everything it uses. I followed the pyenv uninstall instructions:

Then I removed the bits in my shell that add to the PATH and set up pyenv and virtualenvwrapper.

Then I started a new shell that didn't have all the pyenv and virtualenvwrapper stuff in it.

Then I installed uv using the uv standalone installer.

Then I ran uv --version to make sure it was installed.

Then I installed the shell autocompletion.

Then I started a new shell to pick up those changes.

Then I installed Python versions:

When I type "python", I want it to be a Python managed by uv. Also, I like having "pythonX.Y" symlinks, so I created a uv-sync script which creates symlinks to uv-managed Python versions:

https://github.com/willkg/dotfiles/blob/main/dotfiles/bin/uv-sync

Then I installed all my tools using uv tool install.

For tox, I had to install the tox-uv package in the tox environment:

Now I've got everything I do mostly working.

So what does that give me?

I installed uv and I can upgrade uv using uv self update.

Python interpreters are managed using uv python. I can create symlinks to interpreters using uv-sync script. Adding new interpreters and removing old ones is pretty straight-forward.

When I type python, it opens up a Python shell with the latest uv-managed Python version. I can type pythonX.Y and get specific shells.

I can use tools written in Python and manage them with uv tool including ones where I want to install them in an "editable" mode.

I can write scripts that require dependencies and it's a lot easier to run them now.

I can create and manage virtual environments with uv venv.

Next steps

Delete all the .python-version files I've got.

Update documentation for my projects and add a uv tool install PACKAGE option to installation instructions.

Probably discover some additional things to add to this doc.

This Week In RustThis Week in Rust 564

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is cargo-override, a cargo plugin for quick overriding of dependencies.

Thanks to Ajith for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

399 pull requests were merged in the last week

Rust Compiler Performance Triage

A relatively quiet week with a majority of regressions coming in rollups which makes investigation more difficult. Luckily the regressions are relatively small and overall the week was a slight improvement in compiler performance.

Triage done by @rylev. Revision range: 6199b69c..263a3aee

Summary:

(instructions:u) mean range count
Regressions ❌
(primary)
0.6% [0.2%, 1.4%] 57
Regressions ❌
(secondary)
0.7% [0.2%, 1.5%] 23
Improvements ✅
(primary)
-2.2% [-4.0%, -0.4%] 23
Improvements ✅
(secondary)
-0.3% [-0.3%, -0.2%] 10
All ❌✅ (primary) -0.2% [-4.0%, 1.4%] 80

3 Regressions, 1 Improvement, 2 Mixed; 3 of them in rollups 26 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Tracking Issues or PRs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2024-09-11 - 2024-10-09 🦀

Virtual
Africa
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Alas! We are once more bereft
of a quote to elate or explain
so this editor merely has left
the option in rhyme to complain.

– llogiq

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Servo BlogBuilding a browser using Servo as a web engine!

As a web engine, Servo primarily handles everything around scripting and layout. For embedding use cases, the Tauri community experimented with adding a new Servo backend, but Servo can also be used to build a browser.

We have a reference browser in the form of servoshell, which has historically been used as a minimal example and as a test harness for the Web Platform Tests. Nevertheless, the Servo community has steadily worked towards making it a browser in its own right, starting with our new browser UI based on egui last year.

This year, @wusyong, a member of Servo TSC, created the Verso project as a way to explore the features Servo needs to power a robust web browser. In this post, we’ll explain what we tried to achieve, what we found, and what’s next for building a browser using Servo as a web engine.

Multi-view

Of course, the first major feature we want to achieve is multiple webviews. A webview is a term abstracted from the top-level browsing context. This is what people refer to as a web page. With multi-view support, we can create multiple web pages as tabs in a single window. Most importantly, we can draw our UI with additional webviews. The main reason we want to write UI using Servo itself is that we can dogfood our own stack and verify that it can meet practical requirements, such as prompt windows, context menus, file selectors, and more.

Basic multi-view support was reviewed and merged into Servo earlier this year thanks to @delan (#30840, #30841, #30842). Verso refined that into a specific type called WebView. From there, any function that owns webviews can decide how to present them depending on their IDs. In a Verso window, two webviews are created at the moment—one for handling regular web pages and the other for handling the UI, which is currently called the Panel. The result of the showcase in Verso’s README.md looks like this:

Verso displaying ASCII text in a CRT style <figcaption>Figure 1: Verso window displaying two different webviews. One for the UI, the other for the web page.</figcaption>

For now, the inter-process communication is done via Servo’s existing channel messages like EmbedderMsg and EmbedderEvent. We are looking to improve the IPC mechanism with more granular control over DOM elements. So, the panel UI can be updated based on the status of web pages. One example is when the page URL is changed and the navigation bar needs to be updated. There are some candidates for this, such as WebDriverCommandMsg. @webbeef also started a discussion about defining custom elements like <webview> for better ergonomics. Overall, improving IPC will be the next target to research after initial multi-view support. We will also define more specific webview types to satisfy different purposes in the future.

Multi-window

The other prominent feature after multi-view is the ability to support multiple windows. This one wasn’t planned at first, but because it affects too many components, we ended up resolving them together from the ground up.

Servo uses WebRender, based on OpenGL, to render its layout. To support multiple windows, we need to support multiple OpenGL surfaces. One approach would be to create separate OpenGL contexts for each window. But since our implementations of WebGL, WebGPU, and WebXR are all tied to a single WebRender instance, which in turn only supports a single OpenGL context for now, we chose to use a single context with multiple surfaces. This alternative approach could potentially use less memory and spawn fewer threads. For more details, see this series of blog posts by @wusyong.

Verso displaying two windows <figcaption>Figure 2: Verso creates two separate windows with the same OpenGL context.</figcaption>

There is still room for improvement. For example, WebRender currently only supports rendering a single “document”. Unless we create multiple WebRender instances, like Firefox does, we have one WebRender document that has to constantly update all of its display lists to show on all of our windows. This could potentially lead to race conditions where a webview may draw to the wrong window for a split second.

There are also different OpenGL versions across multiple platforms, which can be challenging to configure and link. Verso is experimenting with using Glutin for better configuration and attempting to get closer to the general Rust ecosystem.

What’s next?

With multi-view and multi-window support as the fundamental building blocks, we could create more UI elements to keep pushing the envelope of our browser and embedding research. At the same time, Servo is a huge project, with many potential improvements still to come, so we want to reflect on our progress and decide on our priorities. Here are some directions that are worth pursuing.

Benchmarking and metrics

We want to gather the strength of the community to help us track the statistics of supported CSS properties and web APIs in Servo by popularity order and benchmark results such as jetstream2 and speedometer3. @sagudev already started a subset of speedometer3 to experiment. We hope this will eventually give newcomers a better overview of Servo.

Script triage

There’s a Servo triage meeting every two weeks to triage any issues around the script crate and more. Once we get the statistics of supported web APIs, we can find the most popular ones that haven’t been implemented or fixed yet. We are already fixing some issues around loading the order and re-implementing ReadableStream in Rust. If you are interested in implementing web APIs in Servo, feel free to join the next meeting.

Multi-process and sandboxing

Some features are crucial to the browser but not visible to users. Multi-process architecture and sandboxing belong to this category. Both of these are implemented in Servo to some extent, but only on Linux and macOS right now, and neither of the features are enabled by default.

We would like to improve these features and validate them in CI workflows. In the meantime, we are looking for people who can extend our sandbox to Windows via Named Pipes and AppContainer Isolation.

Acknowledgments

This work was sponsored by NLNet and the Next Generation Internet initiative. We are grateful the European Commission shares the same vision for a better and more open browser ecosystem.

NLNet Logo NGI Logo

Mozilla ThunderbirdWhy Use a Mail Client vs Webmail

Many of us Thunderbird users often forget just how convenient using a mail client can be. But as webmail has become more popular over the last decade, some new users might not know the difference between the two, and why you would want to swap your browser for a dedicated app.

In today’s digital world, email remains a cornerstone of personal and professional communication. Managing emails, however, can be a daunting task especially when you have multiple email accounts with multiple service providers to check and keep track of. Thankfully, decades ago someone invented the email client application. While web-based solutions have taken off in recent years, they can’t quite replace the need for managing emails in one dedicated place.

Let’s go back to the basics: What is the difference between an email service provider and an email client application? And more importantly, can we make a compelling case for why an email client like Thunderbird is not just relevant in today’s world, but essential in maintaining productivity and sanity in our fast-paced lives?

An email service provider (ESP) is a company that offers services for sending, receiving, and storing emails. Popular examples include Gmail, Yahoo Mail, Hotmail and Proton Mail. These services offer web-based interfaces, allowing users to access their emails from any device with an internet connection.

On the other hand, an email client application is software installed on your device that allows you to manage any or all of those email accounts in one dedicated app. Examples include Thunderbird, Microsoft Outlook, and Apple Mail. Email clients offer a unified platform to access multiple email accounts, calendars, tasks, and contacts, all in one place. They retrieve emails from your ESP using protocols like IMAP or POP3 and provide advanced features for organizing, searching, and composing emails.

Despite the convenience of web-based email services, email client applications play a huge role in enhancing productivity and efficiency. Webmail is a juggling game of switching tabs, logins, and sometimes wildly different interfaces. This fragmented approach can steal your time and your focus.

So, how can an email client help with all of that?

One Inbox – All Your Accounts

As already mentioned, an email client eliminates the need to switch between different browser tabs or sign in and out of accounts. Combine your Gmail, Yahoo, and other accounts so you can read, reply to, and search through the emails using a single application. For even greater convenience, you can opt for a unified inbox view, where emails from all your different accounts are combined into a single inbox.

Work Offline – Anywhere

Email clients store your emails locally on your device, so you can access and compose emails even without an internet connection. This is really useful when you’re travelling or in areas with poor connectivity. You can draft responses, organize your inbox, and synchronize your changes once you’re back online.

Thunderbird email client

Enhanced Productivity

Email clients come packed with features designed to boost productivity. These include advanced search capabilities across multiple accounts, customizable filters and rules, as well as integration with calendar and task management tools. Features like email templates and delayed sending can streamline your workflow even more.

Care About Privacy?

Email clients offer enhanced security features, such as encryption and digital signatures, to protect your sensitive information. With local storage, you have more control over your data compared to relying solely on a web-based ESP.

No More Clutter and Distractions

Web-based email services often come with ads, sometimes disguised as emails, and other distractions. Email clients, on the other hand, provide a cleaner ad-free experience. It’s just easier to focus with a dedicated application just for email. Not having to reply on a browser for this purpose means less chance of getting sidetracked by latest news, social media, and random Google searches.

All Your Calendars in One Place

Last but not least, managing your calendar, or multiple calendars, is easier with an email client. You can sync calendars from various accounts, set reminders, and schedule meetings all in one place. This is particularly useful when handling calendar invites from different accounts, as it allows you to easily shift meetings between calendars or maintain one main calendar to avoid double booking.

Calendar view in Thunderbird

So, if you’re not already using an email client, perhaps this post has given you a few good reasons to at least try it out. An email client can help you organize your busy digital life, keep all your email and calendar accounts in one place, and even draft emails during your next transatlantic flight with non-existent or questionable Wi-Fi.

And just as email itself has evolved over the past decades, so have email client applications. They’ll adapt to modern trends and get enhanced with the latest features and integrations to keep everyone organized and productive – in 2024 and beyond.

The post Why Use a Mail Client vs Webmail appeared first on The Thunderbird Blog.

Don MartiAI legal links

part 1: copyright

Generative AI’s Illusory Case for Fair Use by Jacqueline Charlesworth :: SSRN The exploitation of copied works for their intrinsic expressive value sharply distinguishes AI copying from that at issue in the technological fair use cases relied upon by AI’s fair use advocates. In these earlier cases, the determination of fair use turned on the fact that the alleged infringer was not seeking to capitalize on expressive content-exactly the opposite of generative AI.

Urheberrecht und Training generativer KI-Modelle - technologische und juristische Grundlagen by Tim W. Dornis, Sebastian Stober :: SSRN Even if AI training occurs outside Europe, developers cannot fully avoid European copyright laws. If works are replicated inside an AI model, making the model available in Europe could infringe the right of making available under Article 3 of the InfoSoc Directive. (while the US tech industry plays with the IT equivalent of shoplifting comic books, the EU has grown-up problems to worry about.)

Case Tracker: Artificial Intelligence, Copyrights and Class Actions is a useful page maintained by attorneys at Baker & Hostetler LLP. Good for keeping track of what’s where in the court system.

Copyright lawsuits pose a serious threat to generative AI The core question in fair use analysis is whether a new product acts as a substitute for the product being copied, or whether it transforms the old product into something new and distinctive. In the Google Books case, for example, the courts had no trouble finding that a book search engine was a new, transformative product that didn’t in any way compete with the books it was indexing. Google wasn’t making new books. Stable Diffusion is creating new images. And while Google could guarantee that its search engine would never display more than three lines of text from any page in a book. Stability AI can’t make a similar promise. To the contrary, we know that Stable Diffusion occasionally generates near-perfect copies of images from its training data.

part 2: defamation

KI-Chat macht Tübinger Journalisten zum Kinderschänder - SWR Aktuell

OpenAI, ChatGPT facing defamation case in Gwinnett County Georgia | 11alive.com

part 3: antitrust

Hausfeld files globally significant antitrust class action against Google for abusive use of digital media content Publishers have no economically viable or practical way to stop [Google Search Generative Experience] SGE from plagiarizing their content and siphoning away referral traffic and ad revenue. SGE uses the same web crawler as Google’s general search service: GoogleBot. This means the only way to block SGE from plagiarizing content is to block GoogleBot completely—and disappear from Google Search.

The Case for Vigilance in AI Markets - ProMarket (competition regulators in the USA, EU, and UK are getting involved)

part 4: false advertising

(case in which a generative AI ad system outputs an ad that misrepresents product features TK)

part 3: misc

Meta AI Keeps Telling Strangers It Owns My Phone Number - Business Insider

Related

AI models are being blocked from fresh data — except the trash – Pivot to AI We knew LLMs were running out of data as they had indexed pretty much the entire public Web and they still sucked. But increasingly AI company crawlers are being blocked from collecting more — especially data of any quality

NaNoWriMo Shits The Bed On Artificial Intelligence (imho they’ll figure this out before November, either the old org will reform or a new one will launch. Recording artist POVs on Napster were varied, writer POVs on generative AI, not so much.)

Is AI a Silver Bullet? — Ian Cooper - Staccato Signals TDD becomes a powerful tool when you ask the AI to implement code for your tests (TDD is already a powerful tool, and LLMs could be a good force multiplier. Not just writing code that you can filter the bullshit out of by adding tests, but also by suggesting tests that your code should be able to pass. If the LLM outputs a test that obviously shouldn’t pass but does, then you can fix your code sooner. If I had to guess I would say that programming language advocacy scenes are going to figure out the licensing for training sets first. If the coding assistant in the IDE can train on zillions of lines of a certain language because of a programmer co-op agreement, that’s an advantage for the language.)

Why A.I. Isn’t Going to Make Art

Have we stopped to think about what LLMs actually model? Big corporations like Meta and Google tend to exaggerate and make misleading claims that do not stand up to scrutiny. Obviously, as a cognitive scientist who has the expertise and understanding of human language, it’s disheartening to see a lot of these claims made without proper evidence to back them up. But they also have downstream impacts in various domains. If you start treating these massive complex engineering systems as language understanding machines, it has implications in how policymakers and regulators think about them.

Slop is Good Search engines you can’t trust because they are cesspools of slop is hard to imagine. But that end feels inevitable at this point. We will need a new web. (I tend to agree with this. Search engine company management tends to be so ideologically committed to busting the search quality raters union, and other labor organizing by indirect employees, or TVCs, that they will destroy the value of the search engine to do it.)

The Rust Programming Language BlogAnnouncing Rust 1.81.0

The Rust team is happy to announce a new version of Rust, 1.81.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.81.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.81.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.81.0 stable

core::error::Error

1.81 stabilizes the Error trait in core, allowing usage of the trait in #![no_std] libraries. This primarily enables the wider Rust ecosystem to standardize on the same Error trait, regardless of what environments the library targets.

New sort implementations

Both the stable and unstable sort implementations in the standard library have been updated to new algorithms, improving their runtime performance and compilation time.

Additionally, both of the new sort algorithms try to detect incorrect implementations of Ord that prevent them from being able to produce a meaningfully sorted result, and will now panic on such cases rather than returning effectively randomly arranged data. Users encountering these panics should audit their ordering implementations to ensure they satisfy the requirements documented in PartialOrd and Ord.

#[expect(lint)]

1.81 stabilizes a new lint level, expect, which allows explicitly noting that a particular lint should occur, and warning if it doesn't. The intended use case for this is temporarily silencing a lint, whether due to lint implementation bugs or ongoing refactoring, while wanting to know when the lint is no longer required.

For example, if you're moving a code base to comply with a new restriction enforced via a Clippy lint like undocumented_unsafe_blocks, you can use #[expect(clippy::undocumented_unsafe_blocks)] as you transition, ensuring that once all unsafe blocks are documented you can opt into denying the lint to enforce it.

Clippy also has two lints to enforce the usage of this feature and help with migrating existing attributes:

Lint reasons

Changing the lint level is often done for some particular reason. For example, if code runs in an environment without floating point support, you could use Clippy to lint on such usage with #![deny(clippy::float_arithmetic)]. However, if a new developer to the project sees this lint fire, they need to look for (hopefully) a comment on the deny explaining why it was added. With Rust 1.81, they can be informed directly in the compiler message:

error: floating-point arithmetic detected
 --> src/lib.rs:4:5
  |
4 |     a + b
  |     ^^^^^
  |
  = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#float_arithmetic
  = note: no hardware float support
note: the lint level is defined here
 --> src/lib.rs:1:9
  |
1 | #![deny(clippy::float_arithmetic, reason = "no hardware float support")]
  |         ^^^^^^^^^^^^^^^^^^^^^^^^

Stabilized APIs

These APIs are now stable in const contexts:

Compatibility notes

Split panic hook and panic handler arguments

We have renamed std::panic::PanicInfo to std::panic::PanicHookInfo. The old name will continue to work as an alias, but will result in a deprecation warning starting in Rust 1.82.0.

core::panic::PanicInfo will remain unchanged, however, as this is now a different type.

The reason is that these types have different roles: std::panic::PanicHookInfo is the argument to the panic hook in std context (where panics can have an arbitrary payload), while core::panic::PanicInfo is the argument to the #[panic_handler] in #![no_std] context (where panics always carry a formatted message). Separating these types allows us to add more useful methods to these types, such as std::panic::PanicHookInfo::payload_as_str() and core::panic::PanicInfo::message().

Abort on uncaught panics in extern "C" functions

This completes the transition started in 1.71, which added dedicated "C-unwind" (amongst other -unwind variants) ABIs for when unwinding across the ABI boundary is expected. As of 1.81, the non-unwind ABIs (e.g., "C") will now abort on uncaught unwinds, closing the longstanding soundness problem.

Programs relying on unwinding should transition to using -unwind suffixed ABI variants.

WASI 0.1 target naming changed

Usage of the wasm32-wasi target (which targets WASI 0.1) will now issue a compiler warning and request users switch to the wasm32-wasip1 target instead. Both targets are the same, wasm32-wasi is only being renamed, and this change to the WASI target is being done to enable removing wasm32-wasi in January 2025.

Fixes CVE-2024-43402

std::process::Command now correctly escapes arguments when invoking batch files on Windows in the presence of trailing whitespace or periods (which are ignored and stripped by Windows).

See more details in the previous announcement of this change.

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.81.0

Many people came together to create Rust 1.81.0. We couldn't have done it without all of you. Thanks!

The Rust Programming Language BlogChanges to `impl Trait` in Rust 2024

The default way impl Trait works in return position is changing in Rust 2024. These changes are meant to simplify impl Trait to better match what people want most of the time. We're also adding a flexible syntax that gives you full control when you need it.

TL;DR

Starting in Rust 2024, we are changing the rules for when a generic parameter can be used in the hidden type of a return-position impl Trait:

  • a new default that the hidden types for a return-position impl Trait can use any generic parameter in scope, instead of only types (applicable only in Rust 2024);
  • a syntax to declare explicitly what types may be used (usable in any edition).

The new explicit syntax is called a "use bound": impl Trait + use<'x, T>, for example, would indicate that the hidden type is allowed to use 'x and T (but not any other generic parameters in scope).

Read on for the details!

Background: return-position impl Trait

This blog post concerns return-position impl Trait, such as the following example:

fn process_data(
    data: &[Datum]
) -> impl Iterator<Item = ProcessedDatum> {
    data
        .iter()
        .map(|datum| datum.process())
}

The use of -> impl Iterator in return position here means that the function returns "some kind of iterator". The actual type will be determined by the compiler based on the function body. It is called the "hidden type" because callers do not get to know exactly what it is; they have to code against the Iterator trait. However, at code generation time, the compiler will generate code based on the actual precise type, which ensures that callers are fully optimized.

Although callers don't know the exact type, they do need to know that it will continue to borrow the data argument so that they can ensure that the data reference remains valid while iteration occurs. Further, callers must be able to figure this out based solely on the type signature, without looking at the function body.

Rust's current rules are that a return-position impl Trait value can only use a reference if the lifetime of that reference appears in the impl Trait itself. In this example, impl Iterator<Item = ProcessedDatum> does not reference any lifetimes, and therefore capturing data is illegal. You can see this for yourself on the playground.

The error message ("hidden type captures lifetime") you get in this scenario is not the most intuitive, but it does come with a useful suggestion for how to fix it:

help: to declare that
      `impl Iterator<Item = ProcessedDatum>`
      captures `'_`, you can add an
      explicit `'_` lifetime bound
  |
5 | ) -> impl Iterator<Item = ProcessedDatum> + '_ {
  |                                           ++++

Following a slightly more explicit version of this advice, the function signature becomes:

fn process_data<'d>(
    data: &'d [Datum]
) -> impl Iterator<Item = ProcessedDatum> + 'd {
    data
        .iter()
        .map(|datum| datum.process())
}

In this version, the lifetime 'd of the data is explicitly referenced in the impl Trait type, and so it is allowed to be used. This is also a signal to the caller that the borrow for data must last as long as the iterator is in use, which means that it (correctly) flags an error in an example like this (try it on the playground):

let mut data: Vec<Datum> = vec![Datum::default()];
let iter = process_data(&data);
data.push(Datum::default()); // <-- Error!
iter.next();

Usability problems with this design

The rules for what generic parameters can be used in an impl Trait were decided early on based on a limited set of examples. Over time we have noticed a number of problems with them.

not the right default

Surveys of major codebases (both the compiler and crates on crates.io) found that the vast majority of return-position impl trait values need to use lifetimes, so the default behavior of not capturing is not helpful.

not sufficiently flexible

The current rule is that return-position impl trait always allows using type parameters and sometimes allows using lifetime parameters (if they appear in the bounds). As noted above, this default is wrong because most functions actually DO want their return type to be allowed to use lifetime parameters: that at least has a workaround (modulo some details we'll note below). But the default is also wrong because some functions want to explicitly state that they do NOT use type parameters in the return type, and there is no way to override that right now. The original intention was that type alias impl trait would solve this use case, but that would be a very non-ergonomic solution (and stabilizing type alias impl trait is taking longer than anticipated due to other complications).

hard to explain

Because the defaults are wrong, these errors are encountered by users fairly regularly, and yet they are also subtle and hard to explain (as evidenced by this post!). Adding the compiler hint to suggest + '_ helps, but it's not great that users have to follow a hint they don't fully understand.

incorrect suggestion

Adding a + '_ argument to impl Trait may be confusing, but it's not terribly difficult. Unfortunately, it's often the wrong annotation, leading to unnecessary compiler errors -- and the right fix is either complex or sometimes not even possible. Consider an example like this:

fn process<'c, T> {
    context: &'c Context,
    data: Vec<T>,
) -> impl Iterator<Item = ()> + 'c {
    data
        .into_iter()
        .map(|datum| context.process(datum))
}

Here the process function applies context.process to each of the elements in data (of type T). Because the return value uses context, it is declared as + 'c. Our real goal here is to allow the return type to use 'c; writing + 'c achieves that goal because 'c now appears in the bound listing. However, while writing + 'c is a convenient way to make 'c appear in the bounds, also means that the hidden type must outlive 'c. This requirement is not needed and will in fact lead to a compilation error in this example (try it on the playground).

The reason that this error occurs is a bit subtle. The hidden type is an iterator type based on the result of data.into_iter(), which will include the type T. Because of the + 'c bound, the hidden type must outlive 'c, which in turn means that T must outlive 'c. But T is a generic parameter, so the compiler requires a where-clause like where T: 'c. This where-clause means "it is safe to create a reference with lifetime 'c to the type T". But in fact we don't create any such reference, so the where-clause should not be needed. It is only needed because we used the convenient-but-sometimes-incorrect workaround of adding + 'c to the bounds of our impl Trait.

Just as before, this error is obscure, touching on the more complex aspects of Rust's type system. Unlike before, there is no easy fix! This problem in fact occurred frequently in the compiler, leading to an obscure workaround called the Captures trait. Gross!

We surveyed crates on crates.io and found that the vast majority of cases involving return-position impl trait and generics had bounds that were too strong and which could lead to unnecessary errors (though often they were used in simple ways that didn't trigger an error).

inconsistencies with other parts of Rust

The current design was also introducing inconsistencies with other parts of Rust.

async fn desugaring

Rust defines an async fn as desugaring to a normal fn that returns -> impl Future. You might therefore expect that a function like process:

async fn process(data: &Data) { .. }

...would be (roughly) desugared to:

fn process(
    data: &Data
) -> impl Future<Output = ()> {
    async move {
        ..
    }
}

In practice, because of the problems with the rules around which lifetimes can be used, this is not the actual desugaring. The actual desugaring is to a special kind of impl Trait that is allowed to use all lifetimes. But that form of impl Trait was not exposed to end-users.

impl trait in traits

As we pursued the design for impl trait in traits (RFC 3425), we encountered a number of challenges related to the capturing of lifetimes. In order to get the symmetries that we wanted to work (e.g., that one can write -> impl Future in a trait and impl with the expected effect), we had to change the rules to allow hidden types to use all generic parameters (type and lifetime) uniformly.

Rust 2024 design

The above problems motivated us to take a new approach in Rust 2024. The approach is a combination of two things:

  • a new default that the hidden types for a return-position impl Trait can use any generic parameter in scope, instead of only types (applicable only in Rust 2024);
  • a syntax to declare explicitly what types may be used (usable in any edition).

The new explicit syntax is called a "use bound": impl Trait + use<'x, T>, for example, would indicate that the hidden type is allowed to use 'x and T (but not any other generic parameters in scope).

Lifetimes can now be used by default

In Rust 2024, the default is that the hidden type for a return-position impl Trait values use any generic parameter that is in scope, whether it is a type or a lifetime. This means that the initial example of this blog post will compile just fine in Rust 2024 (try it yourself by setting the Edition in the Playground to 2024):

fn process_data(
    data: &[Datum]
) -> impl Iterator<Item = ProcessedDatum> {
    data
        .iter()
        .map(|datum| datum.process())
}

Yay!

Impl Traits can include a use<> bound to specify precisely which generic types and lifetimes they use

As a side-effect of this change, if you move code to Rust 2024 by hand (without cargo fix), you may start getting errors in the callers of functions with an impl Trait return type. This is because those impl Trait types are now assumed to potentially use input lifetimes and not only types. To control this, you can use the new use<> bound syntax that explicitly declares what generic parameters can be used by the hidden type. Our experience porting the compiler suggests that it is very rare to need changes -- most code actually works better with the new default.

The exception to the above is when the function takes in a reference parameter that is only used to read values and doesn't get included in the return value. One such example is the following function indices(): it takes in a slice of type &[T] but the only thing it does is read the length, which is used to create an iterator. The slice itself is not needed in the return value:

fn indices<'s, T>(
    slice: &'s [T],
) -> impl Iterator<Item = usize> {
    0 .. slice.len()
}

In Rust 2021, this declaration implicitly says that slice is not used in the return type. But in Rust 2024, the default is the opposite. That means that callers like this will stop compiling in Rust 2024, since they now assume that data is borrowed until iteration completes:

fn main() {
    let mut data = vec![1, 2, 3];
    let i = indices(&data);
    data.push(4); // <-- Error!
    i.next(); // <-- assumed to access `&data`
}

This may actually be what you want! It means you can modify the definition of indices() later so that it actually does include slice in the result. Put another way, the new default continues the impl Trait tradition of retaining flexibility for the function to change its implementation without breaking callers.

But what if it's not what you want? What if you want to guarantee that indices() will not retain a reference to its argument slice in its return value? You now do that by including a use<> bound in the return type to say explicitly which generic parameters may be included in the return type.

In the case of indices(), the return type actually uses none of the generics, so we would ideally write use<>:

fn indices<'s, T>(
    slice: &'s [T],
) -> impl Iterator<Item = usize> + use<> {
    //                             -----
    //             Return type does not use `'s` or `T`
    0 .. slice.len()
}

Implementation limitation. Unfortunately, if you actually try the above example on nightly today, you'll see that it doesn't compile (try it for yourself). That's because use<> bounds have only partially been implemented: currently, they must always include at least the type parameters. This corresponds to the limitations of impl Trait in earlier editions, which always must capture type parameters. In this case, that means we can write the following, which also avoids the compilation error, but is still more conservative than necessary (try it yourself):

fn indices<T>(
    slice: &[T],
) -> impl Iterator<Item = usize> + use<T> {
    0 .. slice.len()
}

This implementation limitation is only temporary and will hopefully be lifted soon! You can follow the current status at tracking issue #130031.

Alternative: 'static bounds. For the special case of capturing no references at all, it is also possible to use a 'static bound, like so (try it yourself):

fn indices<'s, T>(
    slice: &'s [T],
) -> impl Iterator<Item = usize> + 'static {
    //                             -------
    //             Return type does not capture references.
    0 .. slice.len()
}

'static bounds are convenient in this case, particularly given the current implementation limitations around use<> bounds, but use<> bound are more flexible overall, and so we expect them to be used more often. (As an example, the compiler has a variant of indices that returns newtype'd indices I instead of usize values, and it therefore includes a use<I> declaration.)

Conclusion

This example demonstrates the way that editions can help us to remove complexity from Rust. In Rust 2021, the default rules for when lifetime parameters can be used in impl Trait had not aged well. They frequently didn't express what users needed and led to obscure workarounds being required. They led to other inconsistencies, such as between -> impl Future and async fn, or between the semantics of return-position impl Trait in top-level functions and trait functions.

Thanks to editions, we are able to address that without breaking existing code. With the newer rules coming in Rust 2024,

  • most code will "just work" in Rust 2024, avoiding confusing errors;
  • for the code where annotations are required, we now have a more powerful annotation mechanism that can let you say exactly what you need to say.

Appendix: Relevant links

Frédéric WangMy recent contributions to Gecko (3/3)

Note: This blog post was written on June 2024. As of September 2024, final work to ship the feature is still in progress. Please follow bug 1797715 for the latest updates.

Introduction

This is the final blog post in a series about new web platform features implemented in Gecko, as part as an effort at Igalia to increase browser interoperability.

Let’s take a look at fetch priority attributes, which enable web developers to optimize resource loading by specifying the relative priority of resources to be fetched by the browser.

Fetch priority

The web.dev article on fetch priority explains in more detail how web developers can use fetch priority to optimize resource loading, but here’s a quick overview.

fetchpriority is a new attribute with the value auto (default behavior), high, or low. Setting the attribute on a script, link or img element indicates whether the corresponding resource should be loaded with normal, higher, or lower priority 1:

<head>
  <script src="high.js" fetchpriority="high"></script>
  <link rel="stylesheet" href="auto.css" fetchpriority="auto">
</head>
<body>
  <img src="low.png" alt="low" fetchpriority="low">
</body>

The priority can also be set in the RequestInit parameter of the fetch() method:

await fetch("high.txt", {priority: "high"});

The <link> element has some interesting features. One of them is combining rel=preload and as to fetch a resource with a particular destination 2:

<link rel="preload" as="font" href="high.woff2" fetchpriority="high">

You can even use Link in HTTP response headers and in particular early hints sent before the final response:

103 Early Hint
Link: <high.js>; rel=preload; as=script; fetchpriority=high

These are basically all the places where a fetch priority attribute can be used.

Note that other parameters are also taken into account when deciding the priority to use for resources, such as the position of the element in the page (e.g. blocking resources in <head>), other attributes on the element (<script async>, <script defer>, <link media>, <link rel>…) or the resource’s destination.

Finally, some browsers implement speculative HTML parsing, allowing them to continue fetching resources declared in the HTML markup while the parser is blocked. As far as I understand, Firefox has its own separate HTML parsing code for that purpose, which also has to take fetch priority attributes into account.

Implementation-defined prioritization

If you have not run away after reading the complexity described in the previous section, let’s talk a bit more about how fetch priority attributes are interpreted. The spec contains the following step when fetching a resource (emphasis mine):

If request’s internal priority is null, then use request’s priority, initiator, destination, and render-blocking in an implementation-defined manner to set request’s internal priority to an implementation-defined object.

So browsers would use the high/low/auto hints as well as the destination in order to calculate an internal priority value 3, but the details of this value are not provided in the specification, and it’s up to the browser to decide what to do. This is a bit unfortunate for our interoperability goal, but that’s probably the best we can do, given that each browser already has its own stategies to optimize resource loading. I think this also gives browsers some flexibility to experiment with optimizations… which can be hard to predict when you realize that web devs also try to adapt their content to the behavior of (the most popular) browsers!

In any case, the spec authors were kind enough to provide a note with more suggestions (emphasis mine):

The implementation-defined object could encompass stream weight and dependency for HTTP/2, priorities used in Extensible Prioritization Scheme for HTTP for transports where it applies (including HTTP/3), and equivalent information used to prioritize dispatch and processing of HTTP/1 fetches. [RFC9218]

OK, so what does that mean? I’m not a networking expert, but this is what I could gather after discussing with the Necko team and reading some HTTP specs:

  • HTTP/1 does not have a dedicated prioritization mechanism, but Firefox uses its internal priority to order requests.
  • HTTP/2 has a “stream priority” mechanism and Firefox uses its internal priority to implement that part of the spec. However, it was considered too complex and inefficient, and is likely poorly supported by existing web servers…
  • In upcoming releases, Firefox will use its internal priority to implement the Extensible Prioritization Scheme used by HTTP/2 and HTTP/3. See bug 1865040 and bug 1864392. Essentially, this means using its internal priority to adjust the urgency parameter.

Note that various parts of Firefox rely on NS_NewChannel to load resources, including the fetching algorithm above, which Firefox uses to implement the fetch() method. However, other cases mentioned in the first section have their own code paths with their own calls to NS_NewChannel, so these places must also be adjusted to take the fetch priority and destination into account.

Finishing the implementation work

Summarizing a bit, implementing fetch priority is a matter of:

  1. Adding fetchpriority to DOM objects for HTMLImageElement, HTMLLinkElement, HTMLScriptElement, and RequestInit.
  2. Parsing the fetch priority attribute into an auto/low/high enum.
  3. Passing the information to the callers of NS_NewChannel.
  4. Using that information to set the internal priority.
  5. Using that internal priority for HTTP requests.

Mirko Brodesser started this work in June 2023, and had already implemented almost all of the features discussed above. fetch(), <img>, and <link rel=preload as=image> were handled by Ziran Sun and I, while Valentin Gosu from Mozilla made HTTP requests use the internal priority.

The main blocker was due to that “implementation-defined” use of fetch priority. Mirko’s approach was to align Firefox with the behavior described in the web.dev article, which reflects Chromium’s implementation. But doing so would mean changing Firefox’s default behavior when fetchpriority is not specified (or explicitly set to auto), and it was not clear whether Chromium’s prioritization choices were the best fit for Firefox’s own implementation of resource loading.

After meeting with Mozilla, we agreed on a safer approach:

  1. Introduce runtime preferences to control how Firefox adjusts internal priorities when low, high, or auto is specified. By default, auto does not affect the internal priority so current behavior is preserved.
  2. Ask Mozilla’s performance team to run an experiment, so we can decide the best values for these preferences.
  3. Ship fetch priority with the chosen values, probably cleaning things up a bit. Any other ideas, including the ones described in the web.dev article, could be handled in future enhancements.

We recently entered phase 2 of this plan, so fingers crossed it works as expected!

Internal WPT tests

This project is part of the interoperability effort, but again, the “implementation-defined” part meant that we had very few WPT tests for that feature, really only those checking fetchpriority attributes for the DOM part.

Fortunately Mirko, who is a proponent of Test-driven development, had written quite a lot of internal WPT tests that use internal APIs to retrieve the internal priority. To test Link headers, he used the handy wptserve pipes. The only thing he missed was checking support in Early hints, but some WPT tests for early hints using WPT Python Handlers were available, so integrating them into Mirko’s tests was not too difficult.

It was also straightforward for Ziran and I to extend Mirko’s tests to cover fetch, img, and <link rel=preload as=image>, with one exception: when the fetch() method uses a non-default destination. In most of these code paths, we call NS_NewChannel to perform a fetch. But fetch() is tricky, because if the fetch event is intercepted, the event handler might call the fetch() method again using the same destination (e.g. image).

Handling this correctly involves multiple processes and IPC communication, which ended up not working well with the internal APIs used by Mirko’s tests. It took me a while to understand what was happening in bug 1881040, and in the end I came up with a new approach.

Upstreamable WPT tests

First, let’s pause for a moment: all the tests we have so far use an internal API to verify the internal priority, but they don’t actually check how that internal priority is used by Firefox when it sends HTTP requests. Valentin mentioned we should probably have some tests covering that, and not only would it solve the problem with fetch() calls in fetch event handlers, it would also remove the use of an internal API, making the tests potentially reusable by other browsers.

To make this kind of test possible, I added a WPT Python Handler that parses the urgency from a HTTP request and responds with an urgency-dependent resource, such as a stylesheet with different property values, an image of a different size, or an audio or video file of a different duration.

When a test uses resources with different fetch priorities, this influences the urgency values of their HTTP requests, which in turn influences the response in a way that the test can check for in JavaScript. This is a bit complicated, but it works!

Conclusion

Fetch priority has been enabled in Firefox Nightly for a while, and experiments started recently to determine the optimal priority adjustments. If everything goes well, we will be able to push this feature to the finish line after the (northern) summer.

Helping implement this feature also gave me the opportunity to work a bit on the Firefox networking code, which I had not touched since the collaboration with IPFS, and I learned a lot about resource loading and WPT features for HTTP requests.

To me, the “implementation-defined” part was still a bit awkward for the web platform. We had to write our own internal WPT tests and do extra effort to prepare the feature for shipping. But in the end, I believe things went relatively smoothly.

Acknowledgments

To conclude this series of blog posts, I’d also like to thank Alexander Surkov, Cathie Chen, Jihye Hong, Martin Robinson, Mirko Brodesser, Oriol Brufau, Ziran Sun, and others at Igalia who helped on implementing these features in Firefox. Thank you to Emilio Cobos, Olli Pettay, Valentin Gosu, Zach Hoffman, and others from the Mozilla community who helped with the implementation, reviews, tests and discussions. Finally, our spelling and grammar expert Delan Azabani deserves special thanks for reviewing this series of blog post and providing useful feedback.

  1. Other elements have been or are being considered (e.g. <iframe>, SVG <image> or SVG <script>), but these are the only ones listed in the HTML spec at the time of writing. 

  2. As mentioned below, the browser needs to know about the actual destination in order to properly calculate the priority. 

  3. As far as I know, Firefox does not take initiator into account, nor does it support render-blocking yet

Mozilla ThunderbirdThunderbird Monthly Development Digest: August 2024

Hello Thunderbird Community! It’s August, where did our summer go? (or winter for the folks on the other hemisphere).

Our August has been packed with ESR fixes, team conferences, and some personal time off, so this is gonna be a bit of a shorter update, tackling more upcoming efforts than what recently landed on daily. Miss our last update? Find it here.

More Rust

If you’ve been looking at our monthly metrics you might have noticed that the % of Rust code in our code base is slowly increasing.

We’re planning to push forward this effort in the near future with more protocol reworks and clean up of low level code.

Stay tuned for more updates on this matter and some dedicated posts from the engineers that are driving this effort.

Pushing forward with Exchange

Nothing new to report here, other than that we’re continuing with this implementation and we hope to be able to enable this feature by default in a not so far off Beta.

The general objective before next ESR is to have complete email support and start tapping into Calendar and Address Book integration to offer the full experience out of the box. 

Global database

This is also one of the most important pieces of work that we’ve been planning for a while. Bringing this to completion will drastically reduce our most common data loss problems as well as drastically speeding up the performance of Thunderbird when it comes to internal message search and archiving.

Calendar rebuild

Another very large initiative we’re kicking off during this new ESR cycle is a complete rebuild of our Calendar.

Not only are we  going to clean up and improve our back-end code handling protocols and synchronization, but we’re also taking a hard look at our UI and UX, in order to provide a more flexible and intuitive experience, reducing the amount of dialogs, and implementing those features that users have come to expect from any calendaring application.

As usual, if you want to see things as they land you can always check the pushlog and try running daily, which would be immensely helpful for catching bugs early.

See ya next month.

Alessandro Castellani (he, him)
Director, Desktop and Mobile Apps

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: August 2024 appeared first on The Thunderbird Blog.

This Week In RustThis Week in Rust 563

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is vimania-uri-rs, a VIM plugin for file and URI handling.

Thanks to sysid for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

416 pull requests were merged in the last week

Rust Compiler Performance Triage

This week we had some trouble with our performance bot, but luckily the issue has been resolved. In the end, we saw much more improvements than regressions.

Triage done by @kobzol. Revision range: acb4e8b6..6199b69c

Summary:

(instructions:u) mean range count
Regressions ❌
(primary)
0.3% [0.2%, 0.4%] 8
Regressions ❌
(secondary)
0.7% [0.2%, 1.5%] 9
Improvements ✅
(primary)
-0.8% [-3.4%, -0.2%] 158
Improvements ✅
(secondary)
-0.7% [-2.3%, -0.2%] 96
All ❌✅ (primary) -0.7% [-3.4%, 0.4%] 166

2 Regressions, 3 Improvements, 1 Mixed; 3 of them in rollups 19 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Tracking Issues or PRs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-09-04 - 2024-10-02 🦀

Virtual
Africa
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I'm pretty sure I'm the only person ever to single handedly write a complex GPU kernel driver that has never had a memory safety kernel panic bug (itself) in production, running on thousands of users' systems for 1.5 years now.

Because I wrote it in Rust.

Asahi Lina on vt.social

Thanks to Ludwig Stecher for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language BlogSecurity advisory for the standard library (CVE-2024-43402)

On April 9th, 2024, the Rust Security Response WG disclosed CVE-2024-24576, where std::process::Command incorrectly escaped arguments when invoking batch files on Windows. We were notified that our fix for the vulnerability was incomplete, and it was possible to bypass the fix when the batch file name had trailing whitespace or periods (which are ignored and stripped by Windows).

The severity of the incomplete fix is low, due to the niche conditions needed to trigger it. Note that calculating the CVSS score might assign a higher severity to this, but that doesn't take into account what is required to trigger the incomplete fix.

The incomplete fix is identified by CVE-2024-43402.

Overview

Refer to the advisory for CVE-2024-24576 for details on the original vulnerability.

To determine whether to apply the cmd.exe escaping rules, the original fix for the vulnerability checked whether the command name ended with .bat or .cmd. At the time that seemed enough, as we refuse to invoke batch scripts with no file extension.

Unfortunately, Windows removes trailing whitespace and periods when parsing file paths. For example, .bat. . is interpreted by Windows as .bat, but our original fix didn't check for that.

Mitigations

If you are affected by this, and you are using Rust 1.77.2 or greater, you can remove the trailing whitespace (ASCII 0x20) and trailing periods (ASCII 0x2E) from the batch file name to bypass the incomplete fix and enable the mitigations.

Rust 1.81.0, due to be released on September 5th 2024, will update the standard library to apply the CVE-2024-24576 mitigations to all batch files invocations, regardless of the trailing chars in the file name.

Affected versions

All Rust versions before 1.81.0 are affected, if your code or one of your dependencies invoke a batch script on Windows with trailing whitespace or trailing periods in the name, and pass untrusted arguments to it.

Acknowledgements

We want to thank Kainan Zhang (@4xpl0r3r) for responsibly disclosing this to us according to the Rust security policy.

We also want to thank the members of the Rust project who helped us disclose the incomplete fix: Chris Denton for developing the fix, Amanieu D'Antras for reviewing the fix; Pietro Albini for writing this advisory; Pietro Albini, Manish Goregaokar and Josh Stone for coordinating this disclosure.

Mozilla Addons BlogDeveloper Spotlight: AudD® Music Recognition

AudD identifies an obscure song in a DJ set.

We’ve all been there. You’re streaming music on Firefox and a great song plays but you have no idea what it’s called or who the artist is. If your phone is handy you could install a music recognition app, but that’s a clunky experience involving two devices. It would be a lot better to just click a button on Firefox and have the AudD® Music Recognition extension fetch you song details.

“And if you’re listening on headphones,” adds Mikhail Samin, CEO of AudD, “using a phone app is a nightmare. We tried to make learning what’s playing as uncomplicated as possible for users.” Furthermore, Samin claims browser based music recognition is more accurate than mobile apps because audio doesn’t get distorted by speakers or a microphone.

Of course, making things amazing and simple for users often requires complex engineering.

“It’s one thing for the browser to play audio from a source, such as an audio or video file on a webpage, to a destination connected to the device, like speakers,” explains Samin. “It’s another thing if a new and external part of the browser wants to add itself to the list of destinations. It isn’t straightforward to make an extension that successfully does that… Fortunately, we got some help from the awesome add-ons developer community. We went to the Matrix room.”

AudD is built to recognize any song from anywhere so long as it’s been properly published on digital streaming platforms. Samin says one of his team’s main motivations for developing AudD is simply the joy of connecting music fans with new artists, so install AudD to make sure you never miss another great musical discovery. If you’ve got any new ideas or feedback for the AudD team, they’re always eager to hear from users.


Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: AudD® Music Recognition appeared first on Mozilla Add-ons Community Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter 130

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 130 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you ever wanted to contribute to an open source project used by millions of users, or are interested in some experience in software development, jump in.

We are always grateful to receive external contributions, here are the ones which made it in Firefox 130:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette.

General

Bug fixes

WebDriver BiDi

New: Support for the “browsingContext.navigationFailed” event

When automating websites, navigation is a common scenario that requires careful handling, especially when it comes to notifying clients if the navigation fails. The new browsingContext.navigationFailed” event is designed to assist with this by allowing clients to register for and receive events when a navigation attempt is unsuccessful. The payload of the event is similar to all the other already available navigation specific events.

Bug fixes

Marionette (WebDriver classic)

Bug fixes

Don Martijournalist-owned news sites (Sunday Internet optimism, part 2)

Congratulations to 404 Media, which celebrated its successful first year on August 22. They link to other next-generation news sites, owned by the people who write for them. I checked for ads.txt files and advertiser pages to see which are participating in the conventional RTB ad system and which are doing something else. (404 Media does have an ads.txt file managed by BuySellAds.)

Defector: sports site that’s famous for not sticking to sports (and even has an Arts And Culture section and #AI coverage: Whatever AI Looks Like, It’s Not) (ads.txt not found, advertise with us link redirects to a page of contact info.)

Hell Gate: New York City news (not just for those who finally canceled their subscriptions to that other New York site) (ads.txt not found, advertise with Hell Gate is just a page with a contact email address.)

Racket - Your writer-owned, reader-funded source for news, arts, and culture in the Twin Cities such as What It’s Like to Eat Your Own 90-lb. Butter Head (ads.txt not found, but the Advertise with Racket link goes to a nice page including advertiser logos and testimonials.)

Remap: Video game site that also covers a variety of topics, including but not limited to games, rooting for sports teams that break your heart, inflatable hot tubs, hanging out on car auction websites, and more. Old News from the Latest Disasters: [T]he fact that these studio tell-all features have started to feel so same-y says less about the journalist reporting them and more about how mundane this kind of dysfunction is in AAA game development. (ads.txt not found, no ad contact or page)

Aftermath: a worker-owned, subscription-based website covering video games, the internet and everything that comes after. Short-Sighted AI Deals Aren’t The Future Of Journalism (ads.txt not found, no ad contact or page.)

Another good example, not on 404 Media’s list, is The Kyiv Independent — News from Ukraine, Eastern Europe. The Kyiv Independent was born out of a fight for freedom of speech. It was co-founded by a group of journalists who were fired from the Kyiv Post, then a prominent newspaper, as the owner attempted to take the newsroom under control and end its critical coverage of Ukrainian authorities. Instead of giving up, the fired team founded a new media outlet to carry on the torch — and be a truly independent voice of Ukraine. Opinion: AI complacency is compromising Western defense (ads.txt found, looks like they use an ad management service.)

What all these sites have in common is a focus on subscriber/member revenue and first-party data.

For quite a while, operating an independent site has meant getting into a frenemy relationship with Big Tech. Yes, they pay some ad money, and can be shamed into writing checks (CA News Funding Agreement Falls Short), but they also grab as much reader data as possible in order to target the same readers in cheaper contexts, including some of the worst places on the Internet. But the bargain is changing rapidly—Big Tech is taking site content in order to keep eyeballs, not send them to the source. And sometimes worse: Copilot AI calls journalist a child abuser, Microsoft tries to launder responsibility. So The Backlash Against AI Scraping Is Real and Measurable. At first this situation seems like a massive value extraction crisis. If the ads move to AI content, and surveillance ad money goes away, where will the money for new data journalism and investigative reporting come from?

As a privacy nerd, I’m an optimist about this apparent mess. Yes, part of success in running a modern news operation is figuring out how to get by without legacy management layers and investors (404 Media Shows Online Journalism Can Be Profitable When You Remove Overpaid, Fail-Upward Brunchlords From The Equation). But the other big set of trends is technical and regulatory improvements that—if kept up and not worked around—will lower the ROAS (return on ad spendnot rodents of average size) for surveillance advertising. So the Internet optimist version of the story is

  1. Big Tech value extraction drives independent journalists to business models other than surveillance advertising

  2. Users choose effective privacy tools and settings (If the sites you like don’t need surveillance ads, and the FBI and FTC say they’re crooked, you might as well join the ad blocking trend to be on the safe side. Especially the YouTube ads…yeech)

  3. People with better privacy protection buy better goods and services

  4. With the money saved in step 3, people can afford more subscriptions.

The big objection to that is: what about free riding problems? Won’t people choose not to subscribe, or choose infringing or AI-exfiltrated versions of content? But most people aren’t as likely to try to free ride as tech executives are. The rise of 404 Media and related sites is a good sign. More: Sunday Internet optimism

Related

Purple box claims another victim

privacy economics sources

Bonus links

Scoop: The Trade Desk is building its own smart TV OS On the web, the Trade Desk is on the high end as far as adtech companies go, less likely to put advertisers’ money into illegal stuff than some of the others. Could be a win for smart TV users who want the ads. And, nice timing for TTD, the California bill requiring Global Privacy Control only applies to browsers and smartphone platforms, not TVs.

Satori Threat Intelligence Alert: Camu cashes out ads on piracy content (This is why you don’t build an inclusion list by looking at the ad reports and adding what looks legit. Illegal sites can check Referer headers and hide their real content from advertisers who cut and paste the URL. Referer lists have to be built from known legit sources like customer surveys, press lists, support tickets, and employee chat logs.)

U.S. State Privacy Laws – A Lack of Imagination So far, the laws have been underwhelming. They use approaches and measures (sensitive data, rights, notice-and-choice, etc.) that are either unworkable (I argue elsewhere that sensitive data doesn’t work) or ineffective. (fwiw I say avoid all this stuff and set up a surveillance licensing system. This story backs up that point: Don’t Sleep On Maryland’s Strict New Data Privacy Law (if the way to comply is to hire more lawyers, not protect customers better, the law is suboptimal.)

Murky Consent: An Approach to the Fictions of Consent in Privacy Law – FINAL VERSION (I don’t know many people who know enough about surveillance advertising to actually give informed consent to it.)

Your use of AI is directly harming the environment I live in Instead of putting limits to “AI” and cryptocoin mining, the official plan is currently to destroy big parts of places like Þjórsárdalur valley, one of the most green and vibrant ecosystems in Iceland. That’s why I take it personally when people use “AI” models and cryptocoins. You are complicit in creating the demand that is directly threatening to destroy the environment I live in. None of this would be happening if there wasn’t demand so I absolutely do think the people using these tools and services are personally to blame, at least partially, for the harm done in their name.

Thinking About an Old Copyright Case and Generative AI The precedent in Wheaton has often been highlighted by anti-copyright scholars because it limits the notion that copyright rights are in any sense natural rights. This, in turn, supports the skeptical (I would say cynical) view that copyright is a devil’s bargain with authors, begrudgingly granting a temporary “monopoly” in exchange for production and distribution of their works. But aside from the fact that the Court of 1834 stated that the longstanding question remained “by no means free from doubt,” its textual interpretation of the word securing was simply unfounded. (Some good points here. IMHO neither the copyright maximalists nor the techbro my business model is always fair use crowd are right. Authors and artists have both natural rights and property-like commercial interests that are given to them by the government as a subsidy.)

Plain Vanilla – a tutorial website for vanilla web development The plain vanilla style of web development makes a different choice, trading off a few short term comforts for long term benefits like simplicity and being effectively zero-maintenance. This approach is made possible by today’s browser landscape, which offers excellent web standards support.

The Servo BlogThis month in Servo: tabbed browsing, Windows buffs, devtools, and more!

Servo nightly with a flexbox-based table of new features including textarea text, ‘border-image’, structuredClone(), crypto.randomUUID(), ‘clip-path’, and flexbox properties themselves <figcaption>A flexbox-based table showcasing some of Servo’s new features this month.</figcaption>

Servo has had several new features land in our nightly builds over the last month:

  • as of 2024-07-27, basic support for show() on HTMLDialogElement (@lukewarlow, #32681)
  • as of 2024-07-29, the type property on HTMLFieldSetElement (@shanehandley, #32869)
  • as of 2024-07-31, we now support rendering text typed in <textarea> (@mrobinson, #32886)
  • as of 2024-07-31, we now support the ‘border-image’ property (@mrobinson, #32874)
  • as of 2024-08-02, unsafe-eval and wasm-unsafe-eval CSP sources (@chocolate-pie, #32893)
  • as of 2024-08-04, we now support playback of WAV audio files (@Melchizedek6809, #32924)
  • as of 2024-08-09, we now support the structuredClone() API (@Taym95, #32960)
  • as of 2024-08-12, we now support IIRFilterNode in Web Audio (@msub2, #33001)
  • as of 2024-08-13, we now support navigating through cross-origin redirects (@jdm, #32996)
  • as of 2024-08-23, we now support the crypto.randomUUID() API (@webbeef, #33158)
  • as of 2024-08-29, the ‘clip-path’ property, except path(), polygon(), shape(), or url() values (@chocolate-pie, #33107)

We’ve upgraded Servo to SpiderMonkey 128 (@sagudev, @jschwe, #32769, #32882, #32951, #33048), WebRender 0.65 (@mrobinson, #32930, #33073), wgpu 22.0 (@sagudev, #32827, #32873, #32981, #33209), and Rust 1.80.1 (@Hmikihiro, @sagudev, #32896, #33008).

WebXR (@msub2, #33245) and flexbox (@mrobinson, #33186) are now enabled by default, and web APIs that return promises now correctly reject the promise on failure, rather than throwing an exception (@sagudev, #32923, #32950).

To get there, we revamped our WebXR API, landing support for Gamepad (@msub2, #32860), and updates to hand input (@msub2, #32958), XRBoundedReferenceSpace (@msub2, #33176), XRFrame (@msub2, #33102), XRInputSource (@msub2, #33155), XRPose (@msub2, #33146), XRSession (@msub2, #33007, #33059), XRTargetRayMode (#33155), XRView (@msub2, #33007, #33145), and XRWebGLLayer (@msub2, #33157).

And to top it all off, you can now call makeXRCompatible() on WebGL2RenderingContext (@msub2, #33097), not just on WebGLRenderingContext.

The biggest flexbox features that landed this month are the ‘gap’ property (@Loirooriol, #32891), ‘align-content: stretch’ (@mrobinson, @Loirooriol, #32906, #32913), and the ‘start’ and ‘end’ values on ‘align-items’ and ‘align-self’ (@mrobinson, @Loirooriol, #33032), as well as basic support for ‘flex-direction: column’ and ‘column-reverse’ (@mrobinson, @Loirooriol, #33031, #33068).

‘position: relative’ is now supported on flex items (@mrobinson, #33151), ‘z-index’ always creates stacking contexts for flex items (@mrobinson, #32961), and we now give flex items and flex containers their correct intrinsic sizes (@delan, @mrobinson, @mukilan, #32854).

We’re now working on support for bidirectional text, with architectural changes to the fragment tree (@mrobinson, #33030) and ‘writing-mode’ interfaces (@mrobinson, @atbrakhi, #33082), and now partial support for the ‘unicode-bidi’ property and the dir attribute (@mrobinson, @atbrakhi, #33148). Note that the dir=auto value is not yet supported.

Servo nightly showing a toolbar with icons on the buttons, one tab open with the title “Servo - New Tab”, and a location bar that reads “servo:newtab” <figcaption>servoshell now has a more elegant toolbar, tabbed browsing, and a clean but useful “new tab” page.</figcaption>

Beyond the engine

Servo-the-browser now has a redesigned toolbar (@Melchizedek6809, 33179) and tabbed browsing (@webbeef, @Wuelle, #33100, #33229)! This includes a slick new tab page, taking advantage of a new API that lets Servo embedders register custom protocol handlers (@webbeef, #33104).

Servo now runs better on Windows, with keyboard navigation now fixed (@crbrz, #33252), --output to PNG also fixed (@crbrz, #32914), and fixes for some font- and GPU-related bugs (@crbrz, #33045, #33177), which were causing misaligned glyphs with incorrect colors on servo.org (#32459) and duckduckgo.com (#33094), and corrupted images on wikipedia.org (#33170).

Our devtools support is becoming very capable after @eerii’s final month of work on their internship project, with Servo now supporting the HTML tree (@eerii, #32655, #32884, #32888) and the Styles and Computed panels (@eerii, #33025). Stay tuned for a more in-depth post about the Servo devtools!

Changes for Servo developers

Running servoshell immediately after building it is now several seconds faster on macOS (@mrobinson, #32928).

We now run clippy in CI (@sagudev, #33150), together with the existing tidy checks in a dedicated linting job.

Servo now has new CI runners for Windows builds (@delan, #33081), thanks to your donations, cutting Windows-only build times by 70%! We’re not stopping at Windows though, and with new runners for Linux builds just around the corner, your WPT try builds will soon be a lot faster.

We’ve been running some triage meetings to investigate GitHub issues and coordinate our work on them. The next Servo issue triage meeting is on 2 September at 10:00 UTC. For more details, see project#99.

Engine reliability

August has been a huge month for squashing crash bugs in Servo, including on real-world websites.

We’ve fixed crashes when rendering floats near tables in the HTML spec (@Wuelle, #33098), removed unnecessary explicit reflows that were causing crashes on w3schools.com (@jdm, #33067), and made the HTML parser re-entrant (@jdm, #32820, #33056, html5ever#548), fixing crashes on kilonova.ro (#32454), tweakers.net (#32744), and many other sites. Several other crashes have also been fixed:

  • crashes when resizing windows with WebGL on macOS (@jdm, #33124)
  • crashes when rendering text with extremely long grapheme clusters (@crbrz, #33074)
  • crashes when rendering text with tabs in certain fonts (@mrobinson, #32979)
  • crashes in the parser after calling window.stop() (@Taym95, #33173)
  • crashes when passing some values to console.log() (@jdm, #33085)
  • crashes when parsing some <img srcset> values (@NotnaKO, #32980)
  • crashes when parsing some HTTP header values (@ToBinio, #32973)
  • crashes when setting window.opener in certain situations (@Taym95, #33002, #33122)
  • crashes when removing iframes from documents (@newmoneybigbucks, #32782)
  • crashes when calling new AudioContext() with unsupported options (@Taym95, #33023)
  • intermittent crashes in WRSceneBuilder when exiting Servo (@Taym95, #32897)

We’ve fixed a bunch of BorrowError crashes under SpiderMonkey GC (@jdm, #33133, #24115, #32646), and we’re now working towards preventing this class of bugs with static analysis (@jdm, #33144).

Servo no longer leaks the DOM Window object when navigating (@ede1998, @rhetenor, #32773), and servoshell now terminates abnormally when panicking on Unix (@mrobinson, #32947), ensuring web tests correctly record their test results as “CRASH”.

Donations

Thanks again for your generous support! We are now receiving 3077 USD/month (+4.1% over July) in recurring donations. This includes donations from 12 people on LFX, but we will stop accepting donations there soon — please move your recurring donations to GitHub or Open Collective.

Servo is also on thanks.dev, and already three GitHub orgs that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

3077 USD/month
10000

As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Don MartiLinks for 31 August 2024

First, some good news: Sweden’s been stealthily using hydrogen to forge green steel. Now it’s ready to industrialise (the EU isn’t against technology, they’re against crooks and bullshitters. The DMA Version of iOS Is More Fun Than Vanilla iOS - MacStories, Silicon Valley’s Very Online Ideologues are in Model Collapse)

AI Has Created a Battle Over Web Crawling The report, Consent in Crisis: The Rapid Decline of the AI Data Commons, notes that a significant number of organizations that feel threatened by generative AI are taking measures to wall off their data. (IMHO this is not just a TOS or copyright issue. In the medium term the main problem for AI scrapers is going to be privacy and defamation law. Meta AI Keeps Telling Strangers It Owns My Phone Number - Business Insider)

From the United States Court of Appeals for the Third Circuit, more news from the circuit split between common sense (advertisers should not be paying the PRC to kill kids) and the epicycles of increasingly contrived Big Tech advocacy still in the law books: The Limits of the CDA Section 230: Accountability for Algorithmic Decisions, Judges Rule Big Tech’s Free Ride on Section 230 Is Over. Yes, the Big Tech defenders are big mad. They thought they won with the ISIS recruiting on Twitter case. And they’re probably right about how well the Third Circuit’s decision (PDF) will hold up on appeal. I don’t think this will hold up in court with today’s judges. At least for now we need to regulate Big Tech in a way that avoids free speech issues. The motivation to deal with the situation is just getting stronger: Here are 13 other explanations for the adolescent mental health crisis. None of them work.)

DOJ sues TikTok, alleging “massive-scale invasions of children’s privacy” (Throwing the book at creepy surveillance companies is a win. Meta to pay $1.4 billion settlement after Texas facial recognition complaint)

Opt Out of Clearview AI Giveaway Class actions are terminally disappointing, but this one is especially egregious and it is worthy of special attention. We think you should opt out. Not just as a protest, but to preserve your rights in the event of further litigation. Here is how to do it. The deadline is September 20th.

Google’s Real Googly. No Not The Anti-Trust! Google search is starting to look old, tired, and less and less useful. (True, but that’s not because of disruption or innovation, it’s mainly that Google management has put dogmatic union-busting of TVC (second-class, indirect) employees ahead of a quality experience for users. The biggest mistake that companies with a cash cow make isn’t under-investing in innovation, it’s making wasteful investments in non-core areas while pursuing false economies in the core business. Meanwhile, Google writes checks for legacy media: Will Google’s $250 million deal with California really help journalism? California tried to make Google pay news outlets. The company cut a deal that includes funding AI and a new generation of journalist-owned news sites become going concerns)

More news from the regular people side of the AI story arc: Excuse Me, Is There AI in That? - The Atlantic Businesses and creators see a new opportunity in the anti-AI movement. Why putting AI in your product description is actually hurting sales The Generative-AI Revolution May Be a Bubble Law firm page following copyright cases: Case Tracker: Artificial Intelligence, Copyrights and Class Actions | BakerHostetler The other shoe dropping on ‘AI’ and office work

Ethics and Rule Breaking Among Life Hackers (to defeat the techbro, think like a techbro? full text)

Point of order: I decided not to put some otherwise good links in here because the writers chose to stick a big obvious AI-generated image on them. That’s like Rolling Coal for the web. Unless your intent is to claim membership in evil oligarch fan club or artist hater club, cut it out. I can teach you to find perfectly good Creative Commons images if you don’t have an illustration budget.

Mozilla ThunderbirdPlan Less, Do More: Introducing Appointment By Thunderbird

We’re excited to share a new project we’ve been working on at Thunderbird called Appointment. Appointment makes it simple to schedule meetings with anyone, from friends and family to colleagues and strangers. Escape the endless email threads trying to find a suitable meeting time across multiple time zones and organizations.

With Appointment, you can easily share your customized availability and let others schedule time on your calendar. It’s simple and straightforward, without any clutter.


If you have tried similar tools, Appointment will feel familiar, while capturing what’s unique about Thunderbird: it’s open source and built on our fundamental values of privacy, openness, and transparency. In the future, we intend for Appointment to be part of a wider suite of helpful products enhancing the core Thunderbird experience. Our ambition is to provide you with not only a first-rate email application but a hub of productivity tools to make your days more efficient and stress-free.

We’ll be rolling out Appointment in phases, continuing to improve it as we open up access to more people. It’s currently in closed beta, so we encourage you to sign up for our waiting list. Let us know what features you find valuable and any improvements you’d like to see. Your feedback will be invaluable as we make this tool as useful and seamless as possible.

To that end, the development repository for Appointment is publicly available on Github, and we encourage any future testers or contributors to get involved and build this with us.


Free yourself from cluttered scheduling apps and never-ending email threads. The simplicity of Appointment lets you find that perfect meeting time, without wasting your precious time.

The post Plan Less, Do More: Introducing Appointment By Thunderbird appeared first on The Thunderbird Blog.

The Mozilla BlogHow Mozilla’s AI website creator, Solo, is shaking up a $2.1B industry

In the world of entrepreneurship, one business owner’s journey proves the power of simple technology.

And group chats.

When Richelle Samy founded Culture of Stamina, a coaching service, she set out to create an online presence that was elegant and professional. She found what she was looking for when a group chat led her to Solo, Mozilla’s AI website creator for solopreneurs. After a few clicks using Solo’s generative AI (GenAI) tools, Richelle had a website for her brand with bold, sharp text and colors that perfectly captured her vision.

For Richelle, Solo enables her to focus more on empowering and training her clients instead of spending hours on her website. Other website builders weren’t as easy.

“Those tools are really nice, but I feel like you need a little bit of knowledge of what you want to do and how you want to put things together,” she recalled. “Whereas with Solo, I knew I was looking for a window for my business for people to contact me, and I only wanted a couple of pages. It was very easy to use something that was already pre-made, versus something I had to do from scratch.”

When Mozilla launched Solo in December, we were curious to see how people like Richelle would receive GenAI with website creation. Six months into this journey, we’re happy with the progress, and it’s time to reflect on what we’ve accomplished and learned along the way. We talked with the head of Solo at Mozilla, Raj Singh, about the AI website creator, its journey since the early stages, how it’s disrupting itself in the $2.1 billion website builder software industry with free custom domains and much more. Below is a snippet of our conversation. For the entire interview, follow along at our Innovations Projects blog.

To start, let’s talk about the first few months of Solo since its Beta launch in December. How big is the team, and how many websites have been published?

Solo started in May of last year with just myself and a part-time designer. We built a lightweight, clickable prototype and technical implementation to test whether generative AI could really work for website authoring. We also spent significant time surveying the broader landscape to make sure we had something that could differentiate and compete in an entrenched market.

Two early design sketches for a website builder called "Solo." The left image shows a simple, initial setup screen asking, "What does your business do?" with "Chess tutoring" as an example and a "Start" button below. The right image shows a more detailed interface where users can customize their brand by choosing colors, fonts, and moods, such as "cute" and "playful." It also includes a preview of a chess coaching website with images, descriptions, and contact information.<figcaption class="wp-element-caption">The first sketch for Solo to assist solopreneurs with their website.</figcaption>

After initial validation, we added one engineer and started the development process in June. By September, we had our first iteration that could create a website for a solopreneur with just a few simple inputs, and from there, we continued to refine the user experience. In October, we released an internal beta, and then in December, we launched our beta publicly.

Since then, our team within the Mozilla Innovation Projects group grew during this journey from just an engineer, a part-time designer and me, to three engineers, a part-time designer, myself and other part-time resources to support us.

We launched Solo 1.0 this past month, and in that period since beta, we’ve seen over 7,000 published websites across industries, from pool cleaners, to coaches, to immigration consultants.

When we started, our goal was to make it simple for non-technical solopreneurs to build their web presence and grow their business, and we believe we have accomplished the first step.

Solo--Timeline<figcaption class="wp-element-caption">Solo’s 0 to 1 product journey from inception to launch visualized.</figcaption>

How do you compete with Solo in such a crowded market? How are you making Solo free?

When we initially conceptualized Solo, marrying GenAI with the service provider segment was an insertion point. Since then, and as expected, the incumbents have also built GenAI capabilities and improved their user experiences for the service provider audience.

In this situation, where we are the underdog, my approach is to look for maximum disruption, and we landed at the business model. Every competitor — that we are aware of, at least — charges for connecting and hosting your custom domain. This makes sense – 20 years ago, bandwidth wasn’t cheap and SSL (Secure Sockets Layer) certificates that enable an encrypted web connection cost money. Today, the former is near zero and the latter is zero.

We asked, “What if we just made this free?” This would be very disruptive, so this is exactly what we are doing. Not only is it disruptive, but democratizing the category is also in line with our mission to increase access to the web. We are making web hosting and connecting your custom domain free, similar to how Robinhood disrupted brokerages by eliminating trading fees. Many do not launch their website because they can’t afford or don’t yet have enough business to justify the cost. It also doesn’t help that many of these incumbents rely on hidden upsells and next thing you know, you’re spending $100 a month for your dog walking service. In this way, Mozilla continues to be a global public resource looking out for the interests of people.

A section of a website titled "Decorating Tips" displaying a YouTube video about adding plants to your living spaces. Below the video, there's a prompt for users to enter a video link, with an example URL highlighted. The page encourages viewers to check out more decorating tips on a linked YouTube page, emphasizing stylish plant decor for a modern home.<figcaption class="wp-element-caption">Screenshot of Solo’s video upload support.</figcaption>

How is it building Solo, a new product at Mozilla?

Building a new product at Mozilla, also known as zero to one, and probably any large organization, has challenges. I come from a startup background, so this is my comfort zone and I have some principles I generally abide by.

First, it’s important to be the top advocate for the product. This can be hard because things will pull you in different directions, whether other initiatives, shiny objects or your own self-doubt. Second, adopt the tools you need and optimize on speed — it’s easy to get stuck in administrative stuff. Third, resourcing can be slow, so optimize on generalists and make sure everyone is comfortable with grunt work. Fourth, make product decisions — many of them one-way — quickly. There’s just not enough time to get consensus or have everything be data-driven at the onset. The last thing is to take agency when you can. The five minutes here, the half day there, the follow-up meeting tomorrow cause delays and they compound.

Can you share details about how Solo fits into Mozilla’s overall mission?

In many ways, Mozilla has been at the intersection of the internet and the interests of the people, as opposed to big tech. With Solo, we are squarely within this vision. We are democratizing access to the web for solopreneurs, and we’re increasing equity by helping those that can’t afford to host their websites in emerging markets, or where English isn’t their first language, with writing, designing and curating their content.

For the entire interview, follow along at our Innovations Projects blog.

The logo features a stylized "S" in purple and red hues with a black oval shape in the center, next to the text "Solo" in bold black font.

Ready to start creating?

Launch your website

The post How Mozilla’s AI website creator, Solo, is shaking up a $2.1B industry appeared first on The Mozilla Blog.

Mozilla Localization (L10N)Engineering the Mozilla Way: My Internship Story

When I began my 16-month journey as a Software Engineer intern at Mozilla, I had no idea how enriching the experience would be. I had just finished my third-year as a computer science student at the University of Toronto, passionate about Artificial Intelligence (AI), Machine Learning (ML), and software engineering, with a thirst for hands-on experience. Mozilla, with its commitment to the open web and global community, was the perfect place for me to grow, learn, and contribute meaningfully.

First meeting

Starting off strong on day one at Mozilla—calling the shots from the big screen :)!

Integrating into a Global Team

Joining Mozilla felt like being welcomed into a global family. Mozilla’s worldwide presence meant that asynchronous communication was not just a convenience but a necessity. My team was scattered across various time zones around the world—from Berlin to Helsinki, Slovenia to Seattle, and everywhere in between. Meanwhile, I was located in Toronto, where morning standups became my lifeline. The early hours of the day were crucial; I had to ensure all my questions were answered before my teammates signed off for the day. Collaborating across continents with a diverse team honed my adaptability and proficiency in asynchronous communication, ensuring smooth project progress despite time zone differences. This taught me the art of clear, concise communication and the importance of being proactive in a globally distributed team.

Weekly team meeting

Our weekly team meeting, connecting from all corners of the globe!

Working on localization with such a diverse team gave me a unique perspective. I learned that while we all used the same technology, the challenges and solutions were as diverse as the locales we supported. This experience underscored the importance of creating technology that is not just globally accessible but also locally relevant.

Team photo

Who knew software engineering could be so… circus-y? Meeting the team in style at Mozilla’s All Hands event in Montréal!

Building Success Through Teamwork

During my internship, I was treated as a full-fledged engineer, entrusted with significant responsibilities that allowed me to lead projects. This experience honed my strategic thinking and built my confidence, but it also taught me the importance of collaboration. Working closely with a team of three engineers, I quickly learned that effective communication was essential to our success. I actively participated in code reviews, feature assessments, and bug resolutions, always keeping my team informed through regular updates in standups and Slack. This open communication not only fostered strong relationships but also made me an effective team player, ensuring that our collective efforts were aligned and that we could achieve our goals together.

Driving Innovation

One of the things I quickly realized at Mozilla was that innovation isn’t just about coming up with new ideas—it’s about identifying areas for improvement and enhancing them. My interest in AI led me to spot an opportunity to elevate the translation process in Pontoon, Mozilla’s localization platform. After thorough research and discussions with my mentor and team, I proposed integrating large language models to boost the platform’s capabilities. This proactive approach not only enhanced the platform but also showcased my ability to think critically and solve problems effectively.

Diving into the Tech Stack

Mozilla gave me the opportunity to dive deep into a tech stack that was both challenging and exciting. I worked extensively with Python using the Django framework, React, TypeScript, and JavaScript, along with HTML and CSS. But it wasn’t just about the tools—it was about applying them in ways that would have a lasting impact.

One of my most significant projects was leading the integration of GPT-4 into Pontoon. This wasn’t just about adding another tool to the platform; it was about enhancing the translation process in a way that captured the subtle nuances of language, something that traditional machine translation tools often missed. The result? A feature that allowed localizers to rephrase text, or make text more formal or informal as needed, ultimately ensuring that Mozilla’s products resonated with users worldwide.

This project was a full-stack adventure. From prompt engineering on the backend to crafting a seamless frontend interface, I was involved in every stage of the development process. The impact was immediate and widespread—by August 2024, the feature had been used over 2,000 times across 52 distinct locales. Seeing something I worked on make such a tangible difference was incredibly rewarding. You can read more about this feature in my blog post here.

Another project that stands out is the implementation of a light theme in Pontoon, aimed at promoting accessibility and enhancing user experience. Recognizing that a single dark theme could be straining for some users, I spearheaded the development of a light theme and system theme option that adhered to accessibility standards and catered to diverse user preferences. Within the first six months of its launch, the feature was adopted by over 14% of users who logged in within the last 12 months, significantly improving usability and demonstrating Mozilla’s commitment to inclusive design.

Building a Stronger Community

Mozilla’s commitment to community is one of the things that drew me to the organization, and I was thrilled to contribute to it in meaningful ways. One of my proudest achievements was initiating the introduction of gamification elements in Pontoon. The goal was to enhance community engagement by recognizing and rewarding contributions through badges. By analyzing user data and drawing inspiration from platforms like Duolingo and GitHub, I helped design a system that not only motivated contributors but also enhanced the trustworthiness of translations.

But my impact extended beyond that. I had the opportunity to interact with our global audience and participate in various virtual events focused on engaging with our localization community. For instance, I took part in the “Three Women in Localization” interview, where I shared my experiences as a female engineer in the tech industry. I also participated in a fireside chat with the localization tech team to discuss our work and the future of localization at Mozilla. More recently, I organized a live virtual interview featuring the Firefox Translations team, which turned out to be our most engaging online event to date. It was an incredible opportunity to connect with Mozilla’s global community, discuss important topics like privacy and AI, and facilitate real-time interaction. These experiences not only allowed me to share my insights but also deepened my understanding of the broader community that powers Mozilla’s mission.

Community event

Joining forces with the inspiring women of Mozilla’s localization team during the “Three Women in Localization” interview, where we shared our experiences and insights as females in the tech industry.

From Mentee to Mentor

During the last four months of my internship, I had the opportunity to mentor and onboard our new intern, Harmit Goswami, who would be taking over my role once I returned to my last semester of university. My team entrusted me with this responsibility, and I guided him through the onboarding process—helping him get everything set up, introducing him to the codebase, and supporting him as he tackled his first bugs.

Zoom meeting

Mentoring our new intern, Harmit, as he joins our weekly tech team call for the first time from the Toronto office—welcoming him to the Mozilla family, one Zoom call at a time!

This experience taught me the importance of clear communication, setting expectations, and creating a learning path for his growth and success. I was fortunate to have an amazing mentor, Matjaž Horvat, throughout my internship, and it was incredibly rewarding to take what I had learned from him and pass it on. In the process, I also gained a deeper understanding of my own skills and how to teach and guide others effectively.

Learning and Growing Every Day

The fast-paced, collaborative environment at Mozilla pushed me to learn new technologies and skills on a tight schedule. Whether it was diving into Django for backend development or mastering the intricacies of version control with Git and GitHub, I was constantly learning and growing. More importantly, I learned the value of adaptability and how to thrive in an open-source work culture that was vastly different from my previous experiences in the financial sector.

Reflecting on the Journey

As I wrap up my internship, I can’t help but reflect on how much I’ve grown—both as an engineer and as a person.

As a person, I was able to step out of my comfort zone and host virtual events that were open to both the company and the public, enhancing my confidence and public speaking skills. Engaging with a diverse audience and facilitating meaningful discussions taught me the importance of effective communication and community engagement.

As an engineer, I had the opportunity to lead my own projects from the initial idea to deployment, which allowed me to fully immerse myself in the software development lifecycle and project management. This experience sharpened my technical acumen and taught me how to provide constructive feedback during senior code reviews, ensuring code quality and adherence to best practices. Beyond technical development, I expanded my expertise by adopting a user-centric approach—writing proposal documents, conducting research, analyzing user data, and drafting detailed specification documents. This comprehensive approach required me to blend technical skills with strategic thinking and user-focused design, ultimately refining my problem-solving, research, and communication abilities. These experiences made me a more versatile and well-rounded engineer.

This journey has been about more than just writing code. It’s been about building something that matters, connecting with a global community, and growing into the kind of engineer who not only solves problems but also embraces challenges with creativity and resilience. As I look ahead to the future, I’m excited to continue this journey, armed with the knowledge, skills, and passion that Mozilla has helped me cultivate.

Acknowledgments

I want to extend my deepest gratitude to my manager, Francesco Lodolo, and my mentor, Matjaž Horvat, for their unwavering support and guidance throughout my internship. To my incredible team and the entire Mozilla community, thank you for fostering an environment of learning, collaboration, and innovation. This experience has been invaluable, and I will carry these lessons and memories with me throughout my career.

*Thank you for reading about my journey! If you have any questions or would like to discuss my experiences further, feel free to reach out via Linkedin.

This Week In RustThis Week in Rust 562

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is wtx, a batteries-included web application framework.

Thanks to Caio for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

429 pull requests were merged in the last week

Rust Compiler Performance Triage
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Tracking Issues or PRs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-08-28 - 2024-09-25 🦀

Virtual
Africa
Asia
Europe
North America

Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

... opaque number sequences (\<GitHub> "issue numbers") are not very informative about what is behind that pointer, and pretending they are is harmful. People could provide, instead, actual reasons for things, which do not require dereferencing random pointers, which thrashes cache.

Jubilee on rust-internals

Thanks to Anton Fetisov for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox NightlyStreamline your screen time with auto-open Picture-in-Picture and more – These Weeks in Firefox: Issue 166

Highlights

  • Special shout-out to Daniele (egglessness) who landed a new experimental Picture-in-Picture feature that can be enabled in Firefox 130! This feature automatically triggers Picture-in-Picture mode for any playing video when the associated tab is backgrounded. This can be enabled in about:settings#experimental
  • Olli Pettay fixed very long cycle collection times in workers which improved performance when Debugging large files in the DevTools Debugger (#1907794)
  • You can now hover over elements in the shadow DOM, allowing you to capture more snippets of a page for screenshots. Thanks to Niklas for this Screenshots improvement and making it work with openOrClosedShadowRoot.
    • Firefox Screenshots feature being used to hover over a JavaScript code block.

      Want to highlight sample code from your favorite dev site? Now it’s possible with the latest Nightly version.

  • Mandy has added support for showing search restriction keywords when users type @ in the address bar. If you want to check it out, be sure to set browser.urlbar.searchRestrictKeywords.featureGate to true.
    • Dropdown of available search keywords for the Firefox address bar, after typing an @ symbol. Options include “Search with History” and “Search with Bookmarks”.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Louis Mascari
  • Mathew Hodson

New contributors (🌟 = first patch)

General triage

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons

  • Fixed origin control messages for MV3 extensions requesting access to all urls through two separate host permissions (e.g. “http://*/*” and “https://*/*”, instead of a single “<all_urls>” host permission) – Bug 1856383

WebExtension APIs

  • Fixed webRequest.getSecurityInfo to make sure the options parameter is optional – Bug 1909474

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Thanks to Cauã Sene (cauasene00) for updating our tests to fully avoid requests related to system add-on updates. Previously they would just be redirected to a dummy URL and as a result were polluting our test logs. (#1904310)
  • Updates:
    • Sasha implemented a new event called browsingContext.navigationFailed, which is raised whenever a navigation fails (e.g. canceled, network error, etc.). In combination with other events such as browsingContext.load, this allows clients to monitor navigations from start to finish in all scenarios (#1846601)
    • Sasha fixed a bug in the browsingContext.navigate command. If the client used the parameter wait=none, we now resolve the command even if the navigation triggered a “beforeunload” prompt. (#1763134)
    • Sasha fixed a bug with the network.authRequired event, which was previously duplicated after each manual authentication attempt, leading to too many events. (#1899711)
    • Julian updated the data-channel-opened notification to also be emitted for data URL channels created in content processes. Thanks to this WebDriver BiDi will now raise network events for all data URL requests. (#1904343)
    • Julian updated the logic for the network.responseCompleted and network.fetchError events in order to raise them at the right time, and ensure a correct ordering of events. For instance, per spec for a successful navigation network.responseCompleted should be raised before browsingContext.load. (#1882803)

Migration Improvements

Picture-in-Picture

  • Some strings were updated to use capitalised “Picture-in-Picture” rather than “picture-in-picture” per our word list (bug)

Screenshots

Search and Navigation

  • Search
    • Mortiz has created a new test function, SearchTestUtils.setRemoteSettingsConfig for setting the search configuration in xpcshell-tests, and improved SearchTestUtils.updateRemoteSettingsConfig.
      • Both will take a partial search configuration & expand it into a full configuration. This simplifies test-setup, so that you only need to specify the bits that are important to the test.
      • Some tests are already using these, we’ll be rolling it out to more soon.
  • Address Bar

Storybook/Reusable Components

The Mozilla BlogFakespot’s guide to trending back-to-school products

Back-to-school season is here, and TikTok is teeming with viral product recommendations. Gone are the days of battling crowded aisles and long checkout lines at big box stores. Now, with just a few clicks, you can have almost anything you want delivered to your door before classes start, thanks to the convenience of two-day shipping from Amazon and other online retailers. But how can you be sure those rave reviews are reliable? That’s where Fakespot, your shopping sidekick, steps in.

Spotting unreliable reviews on TikTok’s trending products with Fakespot

Fakespot is a browser extension that is powered by AI to help millions of shoppers make better purchases. It analyzes product reviews in real-time across supported retailers like Amazon, Best Buy, and Walmart, giving you the lowdown on which product reviews seem credible and when you should proceed with caution. Whether you’re seeking product pros and cons or highlights, Fakespot has you covered. It even provides seller ratings for eBay and Shopify stores so you can shop with confidence.

How to read Fakespot’s Review Grades

Here’s how Fakespot’s grading system works:

  • A and B: Reviews you can trust.
  • C: Mixed bag of reliable and unreliable reviews — approach with caution.
  • D and F: Probably unreliable.
<figcaption class="wp-element-caption">Fakespot’s grading system </figcaption>

Top TikTok back-to-school product categories on Amazon and their reliability

We took a deep dive into top back-to-school categories and analyzed their Fakespot product review grades. Here’s what we found:

Laptops

Impressively, nearly 83% of laptop reviews on Amazon appear to be reliable. A safe bet, especially if you’re shopping with the Fakespot extension and sticking to verified retailers. The Lenovo Yoga 7i, a popular choice on TikTok, gets a Fakespot review grade of A. 

  • Trending Product: Lenovo Yoga 7i
  • Fakespot Review Grade: A 
  • Review highlight: “For the price, you can’t beat it.” 

Water bottles

About 30% of water bottle reviews on Amazon appear unreliable. Staying hydrated? Just make sure you’re buying from verified retailers. The Thermos Hydration Bottle has been trending on TikTok and gets a B grade from Fakespot, meaning the product listing has pretty reliable reviews. 

Keyboards and mice

While 34% of Amazon reviews in the keyboards and mice category appear to be unreliable, one popular gaming mouse still earns a solid B from Fakespot, so you can be more confident in the reviews. 

  • Trending Product: Razer DeathAdder V3 Pro Wireless Gaming Mouse
  • Fakespot Review Grade: B
  • Review highlight: “The razer deathadder v3 pro wireless gaming mouse ‘faker edition’ is a tribute to all gamers who seek excellence. It eliminates the shackles of wired gaming, allowing you the freedom to move and game as you wish without compromising on responsiveness or speed.”

Pillows

With 37% of Amazon reviews on pillows appearing unreliable, those trendy pillows still have potential. If you’re looking for a decorative dorm pillow, we found an option with reliable reviews for you to consider. 

  • Trending Product: Wedge Body Pillow 
  • Fakespot Review Grade: A
  • Review highlight: “The item is well-made, very soft and comfortable. Even has a place to hold his cell phone.”

Backpacks

Fakespot detected concerns with nearly half – 47% – of the reviews on backpacks on Amazon. Despite this, we found a durable option with an A review rating from Fakespot. Nonetheless, before you snag any bag, let Fakespot give you confidence that those raves are reliable.

  • Trending Product: Laptop Backpack 
  • Fakespot Review Grade: A
  • Review highlight: “It was light but sturdy and has many pockets for storage.”
<figcaption class="wp-element-caption">Fakespot detected concerns with nearly half – 47% – of the reviews on backpacks</figcaption>

Chargers

Those fast chargers blowing up on TikTok? They might not be so fast after all. About 53% of Amazon reviews for chargers seem unreliable. However, this listing for a popular wireless charging dock gets an A from Fakespot, so you can be more confident in the reviews.

Earbud headphones and computer accessories

In the earbud, headphones and computer accessories category on Amazon, more than half appear to be unreliable reviews (58%)  – so take a beat before you buy. But these noise-canceling headphones, a TikTok favorite, still get a B grade from Fakespot, meaning the reviews are more reliable. 

  • Trending Product: Sony WH-1000XM4
  • Fakespot Review Grade: B
  • Review highlight: “I love blasting my music. I enjoy barbershop music, and you can hear each individual voice so very well in great quality. I’m stunned.”

If you’re one of the millions of students gearing up for the new school year, don’t waste time scrolling through endless product reviews. Download Fakespot today, and spend your time soaking up the last bits of summer instead.

A check mark next to the text "Fakespot."
Shop confidently with Fakespot
Download the latest version today

The post Fakespot’s guide to trending back-to-school products  appeared first on The Mozilla Blog.

The Rust Programming Language Blog2024 Leadership Council Survey

One of the responsibilities of the leadership council, formed by RFC 3392, is to solicit feedback on a yearly basis from the Project on how we are performing our duties.

Each year, the Council must solicit feedback on whether the Council is serving its purpose effectively from all willing and able Project members and openly discuss this feedback in a forum that allows and encourages active participation from all Project members. To do so, the Council and other Project members consult the high-level duties, expectations, and constraints listed in this RFC and any subsequent revisions thereof to determine if the Council is meeting its duties and obligations.

This is the council's first year, so we are still figuring out the best way to do this. For this year, a short survey was sent out to all@ on June 24th, 2024, ran for two weeks, and we are now presenting aggregated results from the survey. Raw responses will not be shared beyond the leadership council, but the results below reflect sentiments shared in response to each question. We invite feedback and suggestions on actions to take on Zulip or through direct communication to council members.

We want to thank everyone for their feedback! It has been very valuable to hear what people are thinking. As always, if you have thoughts or concerns, please reach out to your council representative any time.

Survey results

We received 53 responses to the survey, representing roughly a 32% response rate (out of 163 current recipients of all@).

Do you feel that the Rust Leadership Council is serving its purpose effectively?

Option Response count
Strongly agree 1
Agree 18
Unsure 30
Disagree 4
Strongly disagree 0

I am aware of the role that the Leadership Council plays in the governance of the Rust Project.

Option Response count
Strongly agree 9
Agree 20
Unsure 14
Disagree 7
Strongly disagree 3

The Rust Project has a solid foundation of Project governance.

Option Response count
Strongly agree 3
Agree 16
Unsure 20
Disagree 11
Strongly disagree 3

Areas that are going well

For the rest of the questions we group responses into rough categories. The number of those responses is also provided; note that some responses may have fallen into more than one of these categories.

  • (5) Less drama
  • (5) More public operations
  • (5) Lack of clarity / knowledge about what it does
    • It's not obvious why this is a "going well" from the responses, but it was given in response to this question.
  • (4) General/inspecific positivity.
  • (2) Improved Foundation/project relations
  • (2) Funding travel/get-togethers of team members
  • (1) Clear representation of members of the Project
  • (1) Turnover while retaining members

Areas that are not going well

  • (15) Knowing what the council is doing
  • (3) Not enough delegation of decisions
  • (2) Finding people interested in being on the council / helping the council
  • (1) What is the role of the project directors? Are they redundant given the council?
  • (2) Too conservative in trying things / decisions/progress is made too slowly.
  • (1) Worry over Foundation not trusting Project

Suggestions for things to do in the responses:

  • (2) Addressing burnout
  • (2) More social time between teams
  • (2) More communication/accountability with/for the Foundation
  • (2) Hiring people, particularly for non-technical roles
  • (1) Helping expand the moderation team
  • (1) Resolving the launching pad issues, e.g., through "Rust Society" work
  • (1) Product management for language/compiler/libraries

Takeaways for future surveys

  • We should structure the survey to specifically ask about high-level duties and/or enumerate areas of interest (e.g., numeric responses on key questions like openness and effectiveness)
  • Consider linking published material/writing 1-year retrospective and that being linked from the survey as pre-reading.
  • We should disambiguate between neutral and "not enough information/knowledge to answer" responses in multiple choice response answers.

Proposed action items

We don't have any concrete proposed actions at this time, though are interested in finding ways to have more visilibity for council activities, as that seems to be one of the key problems called out across all of the questions asked. How exactly to achieve this remains unclear though.

As mentioned earlier, we welcome input from the community on suggestions for both improving this process and for actions to change how the council operates.

Don Martipile of money fail

Really good example of a market failure in software quality incentivization: ansuz / ऐरन: “there’s a wee story brewing in…” Read the whole thing. Good counterexample for money talks. With the wrong market design, money says little or nothing.

To summarize (you did read the whole thing, right?) in 2019, a software algorithm called a Variable Delay Function (VDF) was the subject of a $100,000 reward program. Daniel J. Bernstein asked, in a talk recorded on video if the VDF was vulnerable to a method that he had already published in a paper.

If Bernstein was right, then a developer who

  • read Bernstein’s paper on the subject

  • applied Bernstein’s work to attacking the VDF

  • and was first to claim the reward

could earn $100,000. But the money was left unclaimed—nobody got the bounty, and the attack on VDFs didn’t come out until now.

It would take some time to read and understand the paper, and to figure out if it really described a way to break the VDF—but that’s not the main problem. The catch with the bounty scheme is that as a contender for the bounty, you don’t know how many other contenders there are and how fast they work. If 64 people (the number of viewers on the video) are working on it, and Bernstein is 95% likely to be right about the paper, then the expected payout is $100,000 × 0.95 × 1/64 = $1,484.38.

In this case, the main purpose of the bounty was to collect information about the quality of the VDF algorithm, and it failed to achieve this purpose. A better way to achieve this information-gathering goal is to use a system that also incentivizes meta-work such as evaluating whether a particular approach is relevant to a particular problem. More: Some ways that bug futures markets differ from open source bounties

Related

How I Made $10k Predicting Which Studies Will Replicate A prediction market trader made profitable trades predicting if the results in scientific papers would be replicatd, without detailed investigations into the subject of each paper.

The Science Prediction Market Project

Bonus links

The sad compromise of “sponsored results” Not only are the ads a worse experience for the user, they are also creating a tax on all the advertisers, and thus, on us.

The AI Arms Race Isn’t Inevitable (But the bigger point for international AI competition is that we’re not contending with the PRC to better take money from content creators, or better union-bust the TVCs.)

Replace Twitter Embeds with Semantic HTML (Good reminder, I think I got this blog fixed up already but will double check.)

Google’s New Playbook: Ads Next to Nazis and Naughty Bits (See also The case for cutting off Google supply. If you’re putting ads where Google puts them by default, you’re sponsoring the worst people on the Internet, and you’ll be sponsoring more and more of them as other advertisers move to inclusion lists.)

What? PowerPoint 95 no longer supported? (LibreOffice will do it, so keep a copy around just in case.)

Google is killing uBlock Origin in Chrome, but this trick lets you keep it for another year (From the makers of the end of the third-party cookie, it’s the end of ad blocking)

MIT leaders describe the experience of not renewing Elsevier contract Since the cancellation, MIT Libraries estimates annual savings at more than 80% of its original spend. This move saves MIT approximately $2 million each year, and the Libraries provide alternative means of access that fulfills most article requests in minutes.

The End Of GARM Is A Reset, Not A Setback (if GARM was a traffic cone, Check My Ads is a bollard)

Former geography teacher Tim Walz is really into maps

Pluralistic: Private equity rips off its investors, too (08 Aug 2024)

How I Use “AI” [T]hese examples are real ways I’ve used LLMs to help me. They’re not designed to showcase some impressive capabiltiy; they come from my need to get actual work done. This means the examples aren’t glamorous, but a large fraction of the work I do every day isn’t, and the LLMs that are available to me today let me automate away almost all of that work.

China is slowly joining the economic war against Russia

Steve Ballmer’s got a plan to cut through the partisan divide with cold, hard facts

Inside the Swedish factory that could be the future of green steel

Navy Ad: Gig Work Is a Dystopian, Unregulated Hellscape, Build Submarines Instead

Mozilla Privacy BlogDatenschutzfreundliche Werbemessung: Testen für einen neuen Weg beim Datenschutz in der digitalen Werbung

Hinweis: Dies ist eine deutsche Übersetzung des englischen Original-Blogbeitrags. Der ursprüngliche Beitrag dient weiterhin als die ursprüngliche und maßgebliche Erklärung des Themas.

Im Internet hat sich ein dichtes Netz zur Überwachung entwickelt, wo Werbetreibende und Werbeplattformen detaillierte Informationen über die Online-Aktivitäten der Nutzenden sammeln. Bei Mozilla glauben wir, dass diese Informationen ausschließlich den einzelnen Personen gehören und dass ihre uneingeschränkte Sammlung eine nicht hinnehmbare Verletzung des Datenschutzes ist. Wir haben in Firefox immer fortschrittliche Anti-Tracking-Technologien bereitgestellt und werden dies auch weiter tun. Allerdings glauben wir, dass im Ökosystem auch weiterhin neuartige Techniken zur User-Nachverfolgung entwickeln werden, solange es einen starken wirtschaftlichen Anreiz dazu gibt.

Wir sind darüber hinaus sehr besorgt über Bestrebungen in einigen Ländern, Anti-Tracking-Funktionen in Browsern einzuschränken. In einer Welt, in der die Gesetzgebung widerstreitende Interessen ausgleichen muss, ist es gefährlich, wenn sich Werbung und Datenschutz in einem Nullsummen-Konflikt befinden.

Um diese technischen und regulatorischen Gefahren für den User-Datenschutz anzugehen und gleichzeitig Mozillas Mission voranzubringen, entwickeln wir eine neue Technologie namens Privacy Preserving Attribution (PPA, im Deutschen: datenschutzfreundliche Werbemessung). Mit dieser Technologie soll ein Weg für die Werbetreibenden aufgezeigt werden, die Werbewirksamkeit insgesamt zu messen, ohne Informationen über bestimmte Einzelpersonen zu sammeln.

Die Funktionsweise der PPA

Anstatt private Informationen zu sammeln, um zu bestimmen, wann bestimmte User mit einer Werbung interagieren, basiert PPA auf neuartigen kryptographischen Techniken, die darauf ausgelegt sind, die Daten der User zu schützen und gleichzeitig aggregierte Attribution zuzulassen. So können Werbetreibende aggregierte Statistiken bekommen, um zu prüfen, ob ihre Werbung funktioniert. Dabei wird jedoch keinerlei zielgerichtete Werbung (Ad Targeting) aktiviert. Im Kern wird bei PPA ein System zur Mehrparteien-Berechnung (Multi Party Computation, MPC) namens Distributed Aggregation Protocol (DAP) genutzt, das in Partnerschaft mit dem Divvi-Up-Projekt der Internet Security Research Group (ISRG), der Organisation hinter Let‘s Encrypt, verwendet wird.

So funktioniert es:

Anstatt individuelle Surfaktivitäten offenzulegen, um zu bestimmen, wer eine bestimmte Werbung ansieht, nutzt PPA mathematische Verfahren, mit denen die Konsumentendaten privat bleiben. Interagieren User mit einer Werbung oder einem Werbetreibenden, so wird die jeweilige Interaktion auf deren Geräten in zwei unkenntlich gemachte Segmente aufgeteilt – jedes dieser Segmente ist verschlüsselt und wird dann an zwei unabhängig voneinander arbeitende Dienste gesendet. Ähnliche Segmente von vielen Usern werden dann von diesen Diensten zusammengeführt um eine aggregierte Zahl zu generieren. Diese Zahl bezeichnet, wie viele Menschen eine Aktion (etwa das Anmelden zu einem Newsletter) durchgeführt haben, nachdem sie eine Werbung gesehen haben – all dies jedoch ohne jegliche Informationen über die Aktivitäten irgendeiner Einzelperson gegenüber dem Dienst oder dem Werbetreibenden offenzulegen. Im Einzelnen werden die folgenden Schritte durchgeführt:

  • Verschlüsselung der Daten: Interagiert ein User mit einer Werbung oder einem Werbetreibenden, so wird im Browser ein Ereignis in Form eines Wertes protokolliert. Dieser Wert wird in einzelne, unkenntlich gemachte Segmente geteilt und dann verschlüsselt. Jedes Segment wird an eine jeweils andere Stelle adressiert – eines an Divvi Up und eins an Mozilla – auf diese Weise hat keine Stelle für sich jemals beide Segmente.
  • Maskierung: Als zusätzlichen Schutz werden die Segmente an Divvi Up und Mozilla über ein Oblivious HTTP-Relay übermittelt, das von einem Drittanbieter (Fastly) betrieben wird. So wird sichergestellt, dass weder Divvi Up noch Mozilla auch nur die IP-Adresse des unkenntlich gemachten Segments kennen, das sie erhalten. Der Traffic ist für Fastly nicht einsehbar und mit anderen Anfragearten gemischt, sodass auch sie keine Informationen daraus ziehen können.
  • Aggregation: Divvi Up und Mozilla führen jeweils bei sich all die unkenntlich gemachten Segmente zusammen, die sie erhalten, um einen (ebenfalls unkenntlich gemachten) Aggregationswert zu bilden. Das heißt, dass die Daten vieler User zusammengeführt werden, ohne dass irgendjemand der Beteiligten die Inhalte oder Quellen der jeweiligen individuellen Datenpunkte erfährt.
  • Randomisierung: Darüber hinaus wird jede Hälfte vor der Weitergabe noch mit zufälligem Rauschen versehen, sodass Sicherungen im Rahmen der Differential Privacy (differentielle Privatsphäre) eingebaut sind, bei denen mathematisch dafür gesorgt wird, dass aus Trends in den aggregierten Daten nicht auf individuelle Aktivitäten geschlossen werden kann.
  • Zusammenführung: Divvi Up und Mozilla senden dann ihre unkenntlich gemachten Werte im Ganzen an den Werbetreibenden, sodass daraus zusammengeführte informative Kenngrößen gebildet werden können. Dies sind aggregierte Kenngrößen zu allen Usern, die keinerlei Informationen zu Einzelpersonen offenbaren.

Durch die Verwendung fortschrittlicher Verschlüsselungsmethoden stellt PPA sicher, dass die Userdaten während des gesamten Werbemessungsprozesses privat und sicher bleiben. An keinem Punkt hat eine einzelne Partei Zugang zur individuellen Surfaktivität von bestimmten Usern – eine tiefgreifende Verbesserung im Vergleich zum derzeitigen Modell.

Zu erfüllende Vorgaben

Ein entscheidender Gesichtspunkt bei der Entwicklung der PPA war die Beachtung der Rechtsvorschriften zum Datenschutz, wie etwa der Datenschutz-Grundverordnung (DSGVO). Im Folgenden sind einige Gründe aufgeführt, warum wir glauben, dass die PPA den strengen Anforderungen dieser Rechtsvorschriften entspricht.

  1. Anonymisierung: Die von PPA genutzte Verbindung von IP-Schutz, Aggregation und differentieller Privatsphäre bricht die Verbindung zwischen einem Messungsereignis und einer bestimmten Einzelperson. Wir sind der Ansicht, dass dies die hohen Anforderungen der DSGVO zur Anonymisierung erfüllt.
  2. Datensparsamkeit: Für die vom Browser übermittelten Informationen gelten strenge Praktiken zur Datensparsamkeit. Die einzige in Berichten enthaltene Information ist ein einzelnes, begrenztes Histogramm.
  3. Unsichtbare Deaktivierung: Wenn PPA inaktiv ist, lässt es Attributionsberichte von Websites zu und verwirft sie unbemerkt. Das bedeutet, dass diese Websites nicht erkennen können, ob jemand PPA aktiviert hat oder nicht. Mit dieser Maßnahme wird eine Ungleichbehandlung oder Identifizierung (Fingerprinting) durch Websites aufgrund der Verfügbarkeit der Funktion verhindert.

Prototyp-Implementierung und User-Tests

Die aktuellen Implementierung von PPA in Firefox ist ein Prototyp, der das Konzept validieren und die aktuellen Arbeiten an Standards beim World Wide Web Consortium (W3C) unterstützen soll. Diese begrenzte Implementierung ist erforderlich, um das System unter Realbedingungen zu testen und wertvolle Rückmeldungen zu erhalten.

Der Prototyp ist mit einem aktivierten Origin Trial versehen – so wird verhindert, dass das API in irgendeiner Weise irgendeiner Website gegenüber sichtbar ist, sofern dies nicht spezifisch von Mozilla erlaubt wurde. Für den anfänglichen Test sind einzig von Mozilla betriebene Sites erlaubt – genauer, Werbung für Mozilla VPN, die im Mozilla Developer Network (MDN) angezeigt wird. Wir haben diesen Ansatz gewählt, um genügend Teilnahme zur Bewertung der Systemleistung und des Datenschutzes zu gewährleisten und gleichzeitig sicherzustellen, dass er unter streng kontrollierten Bedingungen getestet wird.

Nächste Schritte und Pläne für die Zukunft

Besucht ein User in relevanten Märkten während des Prototyp-Tests die MDN-Website mit Firefox und kommt die Person auf eine Werbung für Mozilla VPN, die Teil dieses Tests ist, so werden im Hintergrund alle im vorigen Abschnitt beschriebenen technischen Schritte durchgeführt, damit wir die Technik testen können. Weder verlassen dabei Daten zu individuellen Surfaktivitäten das Gerät, noch werden diese eindeutig identifizierbar. Wie stets haben die User die Möglichkeit, diese Funktion in ihren Firefox-Einstellungen abzuschalten.

Im weiteren Verlauf wird unser unmittelbarer Fokus darauf liegen, die PPA anhand der Rückmeldungen aus diesem ersten Prototyp zu verfeinern und zu verbessern. Dies werden die Themen der nächsten Monate sein:

  1. Ausweitung der Tests: Abhängig von den ersten Ergebnissen nehmen wir möglicherweise weitere Websites in der Testphase hinzu und überwachen sorgsam die Ergebnisse, um sicherzustellen, dass das System wie gewollt arbeitet. Wegen der laufenden Standard-Entwicklung nutzt der Prototyp ein nicht standardkonformes API und wird daher nie in seiner derzeitigen Form im Netz insgesamt zu sehen sein.
  2. Transparenz und Kommunikation: Wir stehen für Transparenz hinsichtlich der Funktionsweise der PPA und des Schutzes der Nutzerdaten. Wir werden weiterhin Updates veröffentlichen und die Community in Bezug auf etwaige Bedenken einbeziehen.
  3. Zusammenarbeit und Entwicklung von Standards: Mozilla wird weiterhin mit anderen Unternehmen und öffentlichen Normungsstellen daran arbeiten, Technologien zu entwickeln und zu standardisieren, die die Privatsphäre achten. Unser Ziel ist eine robuste, branchenweite Lösung, die allen Nutzern zugutekommt.

Schließlich ist es unsere Vision, privatsphärenfreundliche Technologien wie PPA mit dem Ziel zu entwickeln, zu validieren und bereitzustellen, am Ende invasive Trackingpraktiken überflüssig zu machen. Indem wir deren Machbarkeit nachweisen, wollen wir eine Online-Umgebung mit mehr Sicherheit und Privatsphäre für alle schaffen. Eine Organisation alleine kann diese Herausforderungen nicht meistern. Uns sind dabei Rückmeldungen immer willkommen, und wir hoffen, dass unsere Anstrengungen weitere Organisationen veranlassen, in ähnlicher Weise innovativ tätig zu werden. Vielen Danke für Ihre Unterstützung auf dieser Reise. Gemeinsam können wir ein besseres Internet schaffen, in dem die Privatsphäre stärker geachtet wird.

The post Datenschutzfreundliche Werbemessung: Testen für einen neuen Weg beim Datenschutz in der digitalen Werbung appeared first on Open Policy & Advocacy.

The Mozilla BlogCelebrating an important step forward for open source AI

TL;DR: Mozilla is excited about today’s new definition of open source AI, and we endorse it as an important step forward.

This past year has been marked by more and more people recognizing the societal benefits of open source AI. In October, a large coalition of people signed onto our statement emphasizing how openness and transparency are critical ingredients to safety and security in AI. In February, Mozilla and the Columbia Institute of Global Politics convened AI experts, who emphasized how openness in AI could help advance key societal goals. Policymakers have also been embracing open source AI. The U.S. National Telecommunications and Information Administration (NTIA) recently issued a seminal report embracing openness in AI. Even companies like Google, Microsoft, Apple, and Meta are beginning to open certain aspects of their AI systems.

The growing focus on open source AI makes it all the more important that we establish a shared understanding of what open source AI is. A definition should outline what must be shared and under what terms or conditions. Without this clarity, we risk a fragmented approach, where companies label their products as “open source” even when they aren’t, where civil society doesn’t have access to the AI components they need for testing and accountability, and policymakers create regulations that fail to address the complexities of the issue.

The Open Source Initiative (OSI) has recently released a new draft definition of open source AI, marking a critical juncture in the evolution of the internet. This moment comes after two years of conversations, debates, engagements, and late-night conversations across the technical and open source communities. It is critical not just for redefining what “open source” means in the context of AI; it’s about shaping the future of the technology and its impact on society.

The original Open Source Definition, introduced by the OSI in 1998, was more than just a set of guidelines; it was a manifesto for a new way of building software. This definition laid the foundation for open systems that have since become the backbone of the modern internet. From Linux to Apache, open source projects have driven innovation, collaboration and competition, enabling the internet to grow into a diverse and dynamic ecosystem. By ensuring that software could be freely used, modified, and shared, the original open source movement helped to expand access to technology, breaking down barriers to entry and fostering a culture of innovation and transparency, while making software safer and less vulnerable to cyberattacks.

This is a significant step toward bringing clarity and rigor to the open source AI discussion. It introduces a binary definition of “open source,” akin to the existing definition. While this is just one of several approaches to defining open source AI, it offers precision for developers, advocates and regulators who benefit from clear definitions in various working contexts. Specifically, it outlines that open source AI revolves around the ability to freely use, study, modify and share an AI system. And it also promotes the importance of access to key components needed to recreate substantially equivalent AI systems, like information on data used for training, the source code for AI development and the AI model itself.

And, this definition also offers an initial attempt to wrestle with the complex issue of whether and how training data for AI models should be shared as part of open source AI. The definition acknowledges that sharing full training datasets can be challenging in practice, and, therefore, avoids disqualifying a significant amount of otherwise open source AI development from being considered “open source.” We are working to change this state of play by making open datasets a more commonplace part of the AI ecosystem. Mozilla and Eleuther AI recently brought together experts to outline best practices for open datasets to support AI training, and we intend to publish a paper soon that promotes norms that support AI training data being more widely available.

We acknowledge that some may disagree with aspects of OSI’s definition, such as its treatment of training data, and that the definition will need refinement over time. However, we believe that the OSI’s community-driven process — which involved over a year of stakeholder engagement — has established a crucial reference point for discussions on open source AI. For instance, this definition will become a valuable resource to combat the widespread practice of “openwashing” that is becoming quite rampant, where non-open models (or even open-ish models like Meta’s Llama 3) are promoted as leading “open source” options without contributing to the commons. Researchers have shown that “the consequences of open-washing are considerable” and affect innovation, research and the public understanding of AI.

At its core, this effort embodies the open source community at its best — engaging in open discussions, addressing differences, acknowledging shortcomings and refining this definition together, to build something better. It effectively incorporates many key aspects of openness that the open source community has been grappling with, such as going beyond just considering openness in model weights and including broader model components, documentation, and licensing approaches as outlined in the Columbia Convening. In contrast, the closed source ecosystem operates in secrecy, with limited access and behind-the-scenes deals where large tech companies exchange compute power and talent. We prefer our sometimes imperfect but consistently transparent approach any day.

We, and many others, are eager to continue collaborating with OSI and the broader open source community to bring greater clarity to the open source AI discussion and continue unlocking the potential of open source AI for the benefit of society.

Get Firefox

Get the browser that protects what’s important

The post Celebrating an important step forward for open source AI appeared first on The Mozilla Blog.

Mozilla ThunderbirdVIDEO: How to Answer Thunderbird Questions on Mozilla Support

Not all heroes wear capes. Some of our favorite superheroes are the community members who provide Thunderbird support on the Mozilla Support (SUMO) forums. The community members who help others get to the root of their problems are everyday heroes, and this video shows what it takes to become one of them. Spoiler – you don’t need a spider bite or a tragic origin story! All it takes is patience, curiosity, and a little work.

In our next Office Hours, we’ll be chatting with our Thunderbird Council! One week before we record, we’ll put out a call for questions on social media and on the relevant TopicBox mailing lists. And if you have an idea for an Office Hours you’d like to see, let us know in the comments or email us at officehours@thunderbird.net.

Office Hours: Thunderbird Support (Part 2)

In the sleeper sequel hit of the summer, we sat down to chat with Wayne Mery, who in addition to his work with releases, is our Community Manager as well. Like Roland, Wayne has been with the project practically from the start, and was one of the first MZLA employees. If you’ve spent any time on SUMO, our subreddit, or Bugzilla, chances are you’ve seen Wayne in action helping users.

In this chat and demo, Wayne walks us through the steps to becoming a support superhero. The SUMO forums are community-driven, and (every additional contributor means more knowledge and hopefully fewer unanswered questions.) (This would be a good sport for something about the power of community in open source, and how many of us who got into open source as a career started as volunteers in forums like these.)

The video includes:

  • The structure and markup language of the SUMO Forums
  • How to find questions that need answering
  • Where to meet and chat with other volunteers online
  • A demonstration of the forum’s workflow
  • A very helpful DOs and DON’Ts guide
  • A demo where Wayne answers new questions to show his advice in action

Watch, Read, and Get Involved

This chat helps demystify how we and the global community provide support for Thunderbird users. We hope it and the included deck inspire you to share your knowledge, experience, and problem-solving skills. It’s a great way to get involved with Thunderbird – whether you’re a new or experienced user!

VIDEO (Also on Peertube):

WAYNE’S PRESENTATION:

The post VIDEO: How to Answer Thunderbird Questions on Mozilla Support appeared first on The Thunderbird Blog.

Mozilla Privacy BlogPrivacy-Preserving Attribution: Testing for a New Era of Privacy in Digital Advertising

The internet has become a massive web of surveillance, with advertisers and advertising platforms collecting detailed information about people’s online activity. At Mozilla, we believe this information belongs only to the individual and that its unfettered collection is an unacceptable violation of privacy. We have deployed and continue to deploy advanced anti-tracking technology in Firefox, but believe the ecosystem will continue to develop novel techniques to track users as long as they have a strong economic incentive to do so.

We are also deeply concerned by developments in some jurisdictions to restrict anti-tracking features in browsers. In a world where regulators have to balance competing interests, it is dangerous to have advertising and privacy in a zero-sum conflict.

To address these technical and regulatory threats to user privacy while advancing Mozilla’s mission, we are developing a new technology called Privacy-Preserving Attribution (PPA). The technology aims to demonstrate a way for advertisers to measure overall ad effectiveness without gathering information about specific individuals.

The Technology Behind PPA

Rather than collecting intimate information to determine when individual users have interacted with an ad, PPA is built on novel cryptographic techniques designed to protect user privacy while enabling aggregated attribution. This allows advertisers to obtain aggregate statistics to assess whether their ads are working. It does not enable any kind of ad targeting. At its core, PPA uses a Multi-Party Computation (MPC) system called the Distributed Aggregation Protocol (DAP), in partnership with the Divvi Up project at the Internet Security Research Group (ISRG), the organisation behind Let’s Encrypt.

Here’s how it works:

Instead of exposing individual browsing activity to determine who sees an ad, PPA uses mathematics to keep consumer information private. When a user interacts with an ad or advertiser, a record of that interaction is split into two indecipherable pieces on their device – each of which is encrypted and then sent to two independently operated services. Similar pieces from many users are then combined by these services to produce an aggregate number. This number represents how many people carried out an action (such as signing up for a newsletter) after seeing the ad — all without revealing any information about the activity of any individual to either service or to the advertiser. The precise steps are as follows:

  • Data Encryption: When a user interacts with an ad or advertiser, an event is logged in the browser in the form of a value. That value is then split into partial, indecipherable pieces and then encrypted. Each piece is addressed to a different entity — one to Divvi Up at ISRG and one to Mozilla — so that no single entity is ever in possession of both pieces.
  • Masking: As an additional protection, the pieces are submitted to Divvi Up and Mozilla using an Oblivious HTTP relay operated by a third organisation (Fastly). This ensures that Divvi Up and Mozilla do not even learn the IP address of the indecipherable piece they receive. The traffic is opaque to Fastly and intermixed with other kinds of requests such that they cannot learn any information either.
  • Aggregation: Divvi Up and Mozilla each combine all the indecipherable pieces they receive to produce a (still-indecipherable) aggregate value. This means that the data from many users is combined without any party learning the contents or source of any individual data point.
  • Randomisation: Random noise is also added to each half before being revealed to provide  differential privacy guarantees, which mathematically enforce that individual activity cannot be inferred from trends in the aggregate data.
  • Recombination: Divvi Up and Mozilla then send their indecipherable values in aggregate to the advertiser, leading to a combined statistic of interest. This is an aggregate statistic across all users and does not reveal any information about an individual.

By using these advanced cryptographic methods, PPA ensures that user data remains private and secure throughout the advertising measurement process. At no point does any single entity have access to a specific user’s individual browsing activity – making this a radical improvement to the current paradigm.

Rules of the Road

One of the critical considerations in developing PPA was alignment with privacy legislation, such as the General Data Protection Regulation (GDPR). Here are a few ways that we believe PPA meets the stringent requirements in these laws:

  1. Anonymization: The combination of IP protection, aggregation, and differential privacy used by PPA breaks the link between an attribution event and a specific individual. We believe this meets the high standards of the GDPR for anonymization.
  2. Data Minimization: The information reported by the browser follows strict data minimization practices. The only information included in reports is a single, bounded histogram.
  3. Undetectable Opt-Out: When PPA is inactive, it accepts attribution reports from sites and then silently discards them. This means that sites are unable to detect whether an individual has either enabled or disabled PPA. This measure prevents discrimination or fingerprinting by sites on the basis of the feature’s availability.

Prototype Rollout and User Testing

The current implementation of PPA in Firefox is a prototype, designed to validate the concept and inform ongoing standards work at the World Wide Web Consortium (W3C). This limited rollout is necessary to test the system under real-world conditions and gather valuable feedback.

The prototype is enabled with an Origin Trial — which prevents the API from being exposed in any form to any website unless it’s specifically allowed by Mozilla. For the initial test, the only allowed sites are operated by Mozilla – specifically ads for Mozilla VPN displayed on Mozilla Developer Network (MDN). We chose this approach to ensure sufficient participation to evaluate the system’s performance and privacy protections while ensuring that it is tested in tightly-controlled conditions.

Next Steps and Future Plans

During the prototype test, if a user visits the MDN website on Firefox in relevant markets and comes across an ad for Mozilla VPN that is a part of this trial, all of the technical steps in the previous section will occur in the background to allow us to test the technology. All this while individual browsing activity will never leave the device nor be uniquely identifiable. As always, users have the ability to turn off this functionality in their Firefox settings.

As we move forward, our immediate focus is on refining and improving PPA based on the feedback from this initial prototype. Here’s what to expect in the coming months:

  1. Expansion of Testing: Depending on initial results, we may expand the number of sites involved in the testing phase, carefully monitoring the results to ensure the system operates as intended. Due to ongoing standards development, the prototype uses a non-standardized API and thus will never be exposed in its current form to the web at large.
  2. Transparency and Communication: We are committed to being transparent about how PPA works and how user data is protected. We will continue to provide updates and engage with the community to address any concerns.
  3. Collaboration and Standards Development: Mozilla will continue to work with other companies and public standards bodies to develop and standardise privacy-preserving technologies. Our goal is to create a robust, industry-wide solution that benefits all users.

Ultimately, our vision is to develop, validate, and deploy privacy-preserving technologies like PPA with the goal of ultimately eliminating the need for invasive tracking practices. By proving their viability, we aim to create a more secure and private online environment for everyone. One organisation alone cannot solve these challenges. We invite feedback along the way and we hope that our efforts inspire more organisations to innovate in similar ways. Thank you for your support as we embark on this journey. Together, we can build a better, more private internet.

The post Privacy-Preserving Attribution: Testing for a New Era of Privacy in Digital Advertising appeared first on Open Policy & Advocacy.

This Week In RustThis Week in Rust 561

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is discret, a graphQL-based peer-to-peer implementation library.

Thanks to adsalais for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

426 pull requests were merged in the last week

Rust Compiler Performance Triage

A fairly noisy week (though most of that has been dropped from this report). Overall we saw several improvements, and ended the week on a net positive. Memory usage is down around 1.5-3% over the course of the week, primarily due to RawVec polymorphization and CloneToUninit impl expansion.

Triage done by @simulacrum. Revision range: 9cb1998e..4fe1e2bd

1 Regressions, 1 Improvements, 3 Mixed; 1 of them in rollups 53 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo Language Team
  • No Language Team Tracking Issues or PRs entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-08-21 - 2024-09-18 🦀

Virtual
Africa
Asia
Europe
North America

Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I'm trying to round up to next power of two (for fun).

I know that's perhaps not a lot of fun, but there's next_power_of_two() on all integer types.

That is indeed less fun.

Edeadlink and Riccardo Borgani on rust-users

Thanks to Jonas Fassbender for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla Privacy BlogMozilla, EleutherAI, and Hugging Face Provide Comments on California’s SB 1047

Update as of August 30, 2024: In recent weeks, as SB 1047 has made its way through the CA legislature, Mozilla has spoken about the risks the bill holds in publications like Semafor, The Hill, and the San Francisco Examiner. In light of the bill’s passage through the legislature on August 29, 2024, Mozilla issued a statement further detailing concerns about the legislation as it heads to the Governor’s desk. We hope that as Governor Newsom considers the merits of the legislation he considers the serious harms that this bill may do to the open-source ecosystem.

 

In early 2024, Senator Wiener of California introduced SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. The Act is intended to address some of the most critical, and as of now theoretical, harms resulting from large AI models.

However, since the bill was introduced, it has become the target of intense criticism, mainly due to its potential harmful impact on the open-source community and the many users of open-source AI models. Groups and individuals ranging from Y Combinator founders to AI pioneer Andrew Ng have publicly expressed concerns about the state of the legislation and its potential impact on the open-source and startup ecosystem.

As a champion of openness in the AI and broader tech ecosystem, Mozilla appreciates the open and constructive dialogue with Senator Wiener’s team regarding potential changes to the legislation which could mitigate some of the potential harms the bill is likely to cause and assuage fears from the open-source community. However, due to deep concerns over the state of the legislation, Mozilla, Hugging Face, and EleutherAI sent a letter to Senator Wiener, members of the California Assembly, and to the Office of Governor Newsom on August 8, 2024. The letter, in full below, details both potential risks and benefits of the legislation, options for the legislature to mitigate potential harms to the open-source community, and our desire to support the legislative process.

Open-source software has proven itself to be a social good time and again, speeding innovation, enabling public accountability, and facilitating the development of new research and products. Mozilla has long pushed to Accelerate Progress Towards Trustworthy AI and is highly aligned with the goals of mitigating risks from AI. Our research and a broad swath of historical evidence points to open-source being one of the clearest pathways towards mitigating risk, bias, and creating trustworthy AI.

 

August 8 Letter to Senator Wiener

The Honorable Scott Wiener

California State Senate

1021 O Street

Suite 8620

Sacramento, CA 95814-4900

 

Dear Senator Wiener,

We, the group of undersigned organizations, Mozilla, EleutherAI, and Hugging Face, are writing to express our concerns regarding SB 1047, the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,” as currently written. While we support the goals of all draftees to ensure that AI is responsibly developed and deployed, and appreciate the willingness of your team to engage with external parties, we believe that the bill has significant room to be improved so that it does not harm the open-source community.

As you noted in your open letter, “For decades, open sourcing has been a critical driver of innovation and security in the software world,” and we appreciate your commitment to ensure that openness can continue. Open source is already crucial to many of AI’s most promising applications in support of important societal goals, helping to solve critical challenges in health and the sciences. Open models reduce the barriers for startups, small businesses, academic institutions, and researchers to utilize AI, making scientific research more accessible and businesses more efficient. By advancing transparency, open models are also crucial to protecting civil and human rights, as well as ensuring safety and security. Researchers, civil society, and regulators can more easily test and assess open models’ capabilities, risks, and compliance with the law.

We appreciate that some parts of SB 1047 stand to actively support open science and research. Specifically, we applaud the bill’s proposal to create CalCompute to provide access to computational resources necessary for building AI and foster equitable innovation.

We also appreciate that ensuring safe and responsible development and deployment of AI is a shared responsibility.

At the same time, responsibility must be allocated in a way that is tailored and proportionate by taking into account the potential abilities of developers and deployers to either cause or mitigate harms while recognizing relevant distinctions in the role and capabilities of different actors. We believe that components of the legislation, as written and amended, will directly harm the research, academic, and small business communities which depend on open-source technology.

We thank your team for their willingness to work with stakeholders and urge you to review several pieces of the legislation which are likely to contribute to such unintended harms, including:

 

Lack of Clarity and Vague Definitions: In an ecosystem that is evolving as rapidly as AI, definitional specificity and clarity are critical for preventing unintended consequences that may harm the open AI ecosystem and ensuring that all actors have a clear understanding of the expected requirements, assurances, and responsibilities placed on each. We ask that you review the current legislation to ensure that risk management is proportionally distributed across the AI development process as determined by technical feasibility and end user impact.

In particular, we ask that the definition of “Reasonable assurance,” be further defined in consultation with the open-source, academic, and business community, as to exactly what the legislature requires from covered developers as the current definition of “…does not mean full certainty or practical certainty,” is open-ended.

 

Undue Burdens Placed on Developers: As written, SB 1047 places significant burdens on the developers of advanced AI models, including obligations related to certifying specific outcomes that will be difficult if not impossible to responsibly certify. The developer of an everyday computer program like a word processor cannot reasonably provide assurance that someone will not use their program to draft a ransom note that is then used in a crime, nor is it reasonable for authorities to expect that general purpose tools like open-source AI models should be able to control the actions of their end users without serious harms to fundamental user rights like privacy.

We urge you to consider emerging AI legislative practices and to re-examine how certain obligations within the bill are structured and the likelihood of an individual developer acting in good faith being able to reasonably apply with such obligations. This includes the requirement to identify specific tests and test results that would be sufficient to provide reasonable assurance of not causing or enabling a critical harm, especially as this requirement applies to covered model derivatives.

 

FMD Oversight of Computing Thresholds: In its current form, the legislation gives the Frontier Model Division (FMD) broad latitude after January 1, 2027, to determine which AI models should be considered covered under the proposed regulation. Given rapid advances in computing, it is likely that in a short time the current threshold set by the legislation will be surpassed, including by startups, researchers, and academic institutions. As such, these thresholds will quickly become obsolete.

We urge you to create clear statutory requirements for the FMD to ensure that the agency regularly updates the criteria for what is considered to be a covered model in consultation with academia, civil society, the open source community, and businesses. As AI advances and proves not to cause “critical harms,” regulators should quickly follow suit to ensure that innovation is not unnecessarily stymied.

 

Current Definition of Open-Source: As Mozilla research has noted, defining AI open source for foundation models is tricky. However, the current definition of an “Open-source artificial intelligence model,” in the legislation does not include the full spectrum of how researchers and businesses currently release openly available AI models. Today, developers often do so with some legal or technical limitations in place in an effort to make sure their work is used legally and safely. We urge you to broaden the definition and consider working with a body such as the Open Source Initiative to create a legal definition that fully encapsulates the spectrum of openly available AI.

Open-source has been a proven good for the health of society and the modern web, creating significant economic and social benefits. In early 2024, Mozilla and the Columbia Institute of Global Politics brought together over 40 leading scholars and practitioners working on openness and AI – where one of the key findings of the group was that “Openness in AI has the potential to advance key societal goals, including making AI safe and effective, unlocking innovation and competition in the AI market, and bring underserved communities into the AI ecosystem.”

We are strong proponents of effective AI regulation, but we believe that AI risk management and regulatory requirements should be proportionally distributed across the development process based on factors such as technical feasibility and end user impact.

We are committed to working with you to improve SB 1047 and other future legislation. However, as the bill currently stands, we believe that it requires significant changes related to the legislation’s fundamental structure in order to both achieve your stated goals and prevent significant harm to the open-source community.

 

Sincerely,

Mozilla,

EleutherAI,

Hugging Face

 

cc:

The Honorable Ash Kalra, Chair of the California Assembly Committee on Judiciary

The Honorable Rebecca Bauer-Kahan, Chair of the California Assembly Committee on Privacy

and Consumer Protection

The Honorable Buffy Wicks, Chair of the California Assembly Committee on Appropriations

Christine Aurre, Secretary of Legislative Affairs for the Honorable Governor Gavin Newsom

Liz Enea, Consultant, Assembly Republican Caucus

The post Mozilla, EleutherAI, and Hugging Face Provide Comments on California’s SB 1047 appeared first on Open Policy & Advocacy.

Adrian Gaudebert18 days of selling Dawnmaker

We released Dawnmaker 18 days ago, and I'm due for a report on numbers. Everybody loves numbers, right? Here are ours!

Dawnmaker's capsule

First a little context: Dawnmaker is a turn-based, solo strategy game mixing city building and deckbuilding. Basically, it's like a board game but digital and solo. We've been working on this title for 2.5 years, as a team of two people: myself, doing game design and programming, and Alexis, doing everything art-related. We've had some occasional help from feelancers, agencies and short-term hires, but it's mostly been just us 2. Dawnmaker is our second game, the first one being Phytomancer, a small game we made in 6 months and released on itch.io only.

We did not find a publisher for Dawnmaker — not for lack of trying — and thus had a very limited budget. The main consequence of this is that we skipped the production phase. We had a very long preproduction (about 2 years) and then went straight to postproduction in order to release what we had in a good state. Effects of this decision can be felt in some reviews of the game, complaining about the lack of content. We had big plans for new mechanics, but cut most of these in order to ship.

Marketing on Hard Mode

The second consequence of not having a publisher is that we did all the marketing ourselves. It was hard, not very good and not very efficient, but we did our best. We did not have a well-defined go-to-market strategy, and did things a bit organically. I'm comfortable with Twitter so I started using it, joining some communities like #TurnBasedThursday. I also did a bunch of reddit publications that worked quite well, though none of them went viral. Alexis is more of an instagram person so he handled that, as well as tiktok. Reddit is really the only social network that brought us actual wishlists and sales, the others had no impact that I could see.

Scratch that: YouTube is the platform that actually brought us wishlists and sales. We had a few videos, some by medium-sized youtubers, that brought big spikes in wishlists — see the graph below. And surprisingly, our launch trailer is currently being shown by YouTube on their front page, which is bringing us a nice boost in visibility! But that's pure luck: as far as I know, we have absolutely no control over the YouTube algorithm, and are all subject to its whims.

OK, let's start showing some numbers. Here's our lifetime wishlist actions graph:

Dawnmaker's wishlist actions graph on Steam

The spike at launch is free visibility offered by Steam: we did nothing other than making the page public on Steam. I assume it happened because we had tags that work well on Steam: city builder and deckbuilder mainly. At that time, the page only had screenshots and a basic description. No trailer, no demo.

I feel like we got lucky with our marketing. As I told earlier, we had no real go-to-market strategy, we just tried things. I spent a lot of time in the last 3 years reading about marketing, from howtomarketagame.com, GameDiscoverCo and other such sources. Basically I've been applying lessons learned from these sources, trying to make as little mistakes as possible — though we still made a lot of them, like: not having a go-to-market strategy… The reason why I feel we got lucky is that most of the spikes shown above came from unsolicited sources. Nookrium and Orbital Potato just happened to pick up our demo because they saw it during the Deckbuilders Fest. automaton-media.com, a popular Japanese website, made an article about Dawnmaker totally out of the blue — we did not even have a Japanese translation at the time. And when we did send keys of the game to youtubers and streamers, almost none of them responded. I feel like we just made our best to exist, being in festivals and social networks, and then waited for the Universe to notice.

Considering the lack of marketability of Dawnmaker, I'm still pretty proud that we reached Popular Upcoming on the front page of Steam a day before the release. We had a tad less than 6k wishlists when we reached that Holy Grail, and 7029 wishlists when we hit the release button.

Launching into… the neighbor's garden

Pricing the game was difficult. Our initial intention was to sell it for $20. But we never did our production phase, so our content was way too lacking to justify that price point. We decided to lower the price to $15, but then talked about it with a few French publishers. All of them agreed that it should be a $10 game, not because of the game's quality, but because in today's market, that's what players are ready to pay for the content we have. Pricing the game less also meant that players would feel less resistance in buying the game, hopefully leading to more sales, compensating for the money gap. And it would lower their expectations, leading to better reviews. We actually saw that: quite a few comments talk about the lack of content, but still give a positive review thanks to the low price.

Considering all this, here's how Dawnmaker sold:

Dawnmaker's summary of sales on Steam

These are our numbers after 18 days of being on Steam. We're currently sitting on 8.8k wishlists, with a conversion rate of 5.8%. We are getting close to 900 units really sold (total sold minus refunds). These numbers are very much in the range of estimations based on surveys from GameDiscoverCo. We'll be selling about 1k units in the first month, just like anticipated. It's good that we did not do less than that, but it's still far from what we would need to recoup. No surprises here, neither bad nor good.

The game shipped with English, French and Japanese localizations. The Japanese translation came really late in the process, the Steam page coming just 3 days before the release. Bit of a missed opportunity here that we didn't have it before we "went big in Japan" (the automaton-media.com article), I guess? We'll never know! Anyway, here are our sales per country:

Dawnmaker's sales by country graph

Quick side-note: we also put the game on itch.io, where we sold… 2 units of the game!

On a positive note

These numbers are not high, and are not nearly enough to make a studio of 2 financially stable. I intend to write a postmortem of Dawnmaker where I'll go deeper into all our failures. But for now, let's finish this section with more positive things. First, the reception of the game has been quite great! We have 94% positive reviews, with 53 reviews at the time of writing, giving us a "Very positive" rating on Steam, which I am very proud of. It is incredibly heartwarming to see that the game we spent 2.5 years of our lives on is loved by players. We have 50 players who played the game for more than 20 hours, and that's, seriously, so so cool:

Dawnmaker's lifetime play time graph

And if we did not have a big spike at launch, our players are still playing today:

Dawnmaker's daily number of players graph

That's it for the current state of Dawnmaker! We intend to ship a content update by the end of September, adding a bit more replayability, and then we'll likely move on to other projects. Hopefully more lucrative ones!

I'm happy to answer any questions you have, so shoot them in the comments.

The Mozilla BlogAt the Rise25 Awards, the future of AI is ethical, inclusive and accountable

The second annual Rise25 Awards in Dublin wasn’t just about celebrating 25 AI leaders. It was about mapping out the future.

The Gardiner Brothers, known for bringing Irish dance into the social media spotlight, kicked off the night with a performance that mixed tradition and innovation. Siobhán McSweeney of “Derry Girls” fame hosted the ceremony, and she kept the crowd engaged with humor, quipping, “AI touches everything like a child with sticky fingers that comes around to the house, just after you cleaned it.”

Then, the honorees took the stage to lay out the principles guiding their work. Here are the highlights:

The stories we tell about AI shape its future

Sinéad Bovell, one of the 2024 Rise25 honorees in the artist category, works on preparing young people for a future driven by advanced tech. She emphasized that the narratives we craft around AI are crucial; they frame public understanding and ultimately influence the direction of AI development. 

“It’s such an honor to be recognized in the artist category, because the stories that we tell about artificial intelligence matter deeply,” Sinéad said. She pointed out that it’s easy to feel trapped in a binary narrative about AI, with dangers and risks on one side and benefits and possibilities on the other. “But the truth is, these stories aren’t separate. They’re intertwined,” she said. 

Citing technologist Jaron Lanier, Sinéad argued that to be a true optimist about AI, you also need to be a fierce critic. “We have to continue to tell the stories of a future where we get AI right and where it transforms humanity for the better. But we also have to tell the stories of how we got there, the challenging decisions we made in the present, and where we chose to keep humanity at the center of technological advancements.” For Sinéad, understanding and telling these nuanced stories is essential for guiding AI toward an ethical and inclusive future.

AI’s effects on individuals can be profound

Gemma Galdon-Clavell, an honoree in the entrepreneur category, is focused on finding and fixing bias and flaws in predictive and large language model (LLM) tools. She shared a deeply personal story that underscored the far-reaching impact AI can have on individuals:

“If my school had used an AI system to assess my chances, I wouldn’t be here today. My mom was 14 when she had me. I had huge behavioral problems growing up. If you had inserted all that data into an AI system and asked, ‘Should this girl go to school? Should we invest in her?’ The answer would have been no.” 

Gemma highlighted the dangers of relying solely on algorithms to determine someone’s potential, as these systems often reduce complex lives to mere data points. “I am here because I managed to beat the odds — because no one set my odds in an algorithm.” 

Her story serves as a powerful reminder of the need for rigorous oversight and auditing of AI systems to ensure they don’t limit the futures of those who, like her, might defy expectations. “People, like everyone else, deserve a chance,” she concluded, advocating for a future where AI supports rather than stifles human potential.

Design choices in AI have far-reaching consequences

Philosopher and AI Ethics Lab founder Cansu Canca, a change agent honoree, spoke passionately about the critical importance of ethical design in AI, highlighting how every decision made in the design process has the potential to shape society. 

“When we design AI systems, we’re not just making technical choices. We’re making moral and philosophical decisions,” she said. Cansu challenged developers to consider questions that go beyond code: “What is a good life? What is a better society?” These questions, she argued, should guide every step of AI development. 

“The design choices we make today will determine whether AI becomes a tool for justice or a mechanism that perpetuates inequality,” Cansu warned. She called for an approach to AI that integrates ethical considerations from the outset, ensuring that systems are designed to promote fairness, transparency and respect for human dignity. “Ethical design isn’t an afterthought — it’s the foundation on which AI should be built,” she said, stressing the far-reaching impact of these decisions on our collective future.

AI’s untapped potential lies in open collaboration

Researcher Aaron Gokaslan, an honoree in the builder category, aims to keep generative model development open. He highlighted the immense, largely untapped potential of AI, particularly within the realm of open-source development. 

“We’re in the very early innings of AI today,” he remarked, pointing out that while AI has already made significant strides, its full potential is still on the horizon. Aaron emphasized that the true power of AI will be unlocked through collaboration and accessibility, which would enable a diverse range of innovators to contribute to its development. 

“By sharing knowledge and resources, we can drive AI forward in ways that benefit society as a whole,” Aaron said.

Thoughtful AI policy is essential for a fair future

Philip Thigo, an advocate honoree and the special envoy on technology for the Republic of Kenya, underscored the critical need for thoughtful and proactive AI policy, warning that without it, AI could deepen existing inequalities and erode public trust. 

“AI has the potential to revolutionize society, but without robust and thoughtful regulation, it could also exacerbate inequalities and undermine public trust,” he cautioned. Philip argued that AI policy must prioritize fairness, transparency and accountability to ensure that AI development benefits everyone, not just a privileged few. 

“We need policies that don’t just react to AI’s challenges, but anticipate them — setting clear guidelines for ethical development and use,” he said. Philip called for a collaborative approach to AI governance, involving not only policymakers and technologists, but also the broader public: “By engaging a diverse range of stakeholders, we can create a framework that guides AI toward serving the common good.”

As the honorees made clear, AI will leave its mark wherever it touches, much like those “sticky fingers.” The challenge ahead is making sure that mark is a positive one. The evening was a powerful reminder that the future of AI is not just about innovation — but about inclusivity, ethics and accountability. 

Get Firefox

Get the browser that protects what’s important

The post At the Rise25 Awards, the future of AI is ethical, inclusive and accountable appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox DevTools Newsletter — 129

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 129 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like Sebastian Zartner who added multiple warnings in the Rules view when resize (#1551579) and float related properties (#1551580) are used incorrectly, when box-sizing is used on elements that ignore width / height (#1583894) and when table-related CSS properties are used on non-table-related elements (#1868788). Thanks a lot Sebo!

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues

Performance boost ⚡

We’re very happy to report massive performance improvements throughout the whole toolbox:
displaying lots of logs in the console can be 60% faster, console reload 12%. 70% less time is spent sending the console messages to the client, opening the debugger got 10% faster, showing the variable tooltip takes 40% less time than before, reloading the debugger 15%, stepping in a new source 17%, reloading the inspector 50% and the network monitor can be used 50% earlier than what it used to be.

How did we achieve such impressive (in my eyes) number you may ask? And the answer is throttling. For a lot of panels, the DevTools server (i.e. the code that runs in the web page) sends events to the client (i.e. the DevTools panel) to indicate when a resource is available, updated or removed. A resource is a broad term and can cover console messages, CSS stylesheets or Javascript sources. We used to send a single event for each update the client wanted to be notified about. The webpage is logging a variable in a 10000 for-loop? 10000 events were sent and consumed by the client. Even if we’d then throttle the resources on the client side to avoid stressing out the UI, we were still paying a high cost for transmitting and receiving this high number of events. In Firefox 129, we now group updates that are made within a 100ms range and only send one event (#1824726), which really improve the cases where we are consuming a lot of resources in a small amount of time.

@starting-style support

Firefox 129 adds support for @starting-style rules:

The @starting-style CSS at-rule is used to define starting values for properties set on an element that you want to transition from when the element receives its first style update, i.e. when an element is first displayed on a previously loaded page.

https://developer.mozilla.org/en-US/docs/Web/CSS/@starting-style

This makes it super easy to add animation for an element being added to a page, where you would have need to use CSS animation before. Here, when a div is added to the page, it’s transparent and transition to fully opaque in half a second:

div {
  opacity: 1;
  transition: opacity 0.5s;

  @starting-style {
    opacity: 0;
  }
}

The @starting-style rules are displayed in the Inspector, alongside regular rules, and you can add/remove/edit declarations and values too (#1892192). The transition can be visualized and replayed using the animations panel, like any other transitions.

Firefox DevTools Inspector showing a @starting-style rule on an h1 element. The rule has a `background-color: transparent` declaration. A regular rule for h1 is displayed below it. It also has a `background-color` declaration, but the value is `gold`. There's also a `transition` declaration, animating the background-color. On the right of the image, the animation panel is displayed, and we can see a visualization of the transition applied to the h1 element<figcaption class="wp-element-caption">Slowly transition h1 background-color from transparent to gold on page load</figcaption>

One thing to be mindful of is that declarations inside @starting-style rules are impacted by order and specificity. This means that with the following rules:

div { 
  color: red !important; 
  transition: all 1s;
}

@​starting-style {
  div { 
    color: blue; 
    background: cyan;
  }
}

div { 
  background: transparent;
}

the declaration for color and background in the @starting-style rule are overridden, and there won’t be any visible transition. In such case, as we already do for regular rules, the overridden declaration will have a distinct style that should make it obvious why a propery isn’t being transitionned.

Firefox DevTools Inspector showing a @starting-style rule on an element. The rule has a `outline-color: blue` declaration, which is greyed out and striked-through, indicating that it's unused. A regular rule is also displayed below it, with a `outline-color: black !important` declaration.<figcaption class="wp-element-caption">There will be no transition applied to outline-color, as the @starting-style declaration is overridden by the regular one.</figcaption>

Custom properties (aka CSS variables) can also be declared into @starting-style rules and be animated. We thought it could be helpful to display the @starting-style value of a variable in the tooltip that is displayed when hovering a variable name in a regular rule (#1897931)

Firefox DevTools Inspector focusing on a declaration: `opacity: var(--vars-x)`. A tooltip is displayed, pointing to the css variable. The tooltip has a header with `--vars-x = 1`. Under it is a `@starting-style` section with `--vars-x = 0.5`<figcaption class="wp-element-caption">The new @starting-style section in the CSS variable tooltip makes it easy to understand that the opacity will be transitionned from 0.5 to 1</figcaption>

Invalid at Computed Value Time in computed panel

In Firefox 128, we added an icon next to Invalid At Computed Value Time registered custom property declarations

One of the main advantage of registered property is to be able to have type checking directly in CSS! Whenever a variable is set and doesn’t match the registered property syntax, it is invalid at computed value time. In such case, a new type of error icon is displayed in the rules view, and its tooltip indicate why the value is invalid and what is the expected syntax

https://fxdx.dev/firefox-devtools-newsletter-128/

In this release, we added the same icon and tooltip in the Computed panel, so it’s easier to understand the custom property computed value, be it the registered property initial value, or a valid inherited declaration (#1900070).

Firefox DevTools Computed panel showing 2 CSS variables, --a and --b<figcaption class="wp-element-caption">--a computed value is picked up from the registered property initial value, as the 1em set on body doesn’t match the expected registered property syntax.
--b computed value is the rgb value for the gold color, as picked up by the declaration on body. The h1 declaration is invalid as 10000rem doesn’t match the expected <color> syntax.</figcaption>


Accessibility fixes

If you’re a regular reader of our newsletter, you might remember that we had a big accessibility project at the end of last year, focusing on the most impactful issues we saw in DevTools. The project ended in the beginning of 2024, but there are still smaller things we need to address, so we took some time during this release to squash a few bugs:

  • prevent losing focus state in Debugger Scopes panel when blurring Firefox (#1843325)
  • add focus indicator on Debugger Watch expressions panel inputs (#1904339)
  • properly communicate Webconsole input filter state to screen readers (#1844087)
  • in the Inspector, add keyboard focus-ability to stylesheet location link (#1844054),flex and grid highlighter toggle buttons (#1901508), shape editor button (#1844264) and the link to container query element (#1901713)

We’re planning another couple-months long accessibility project by the end of the year to fix more issues and add High Contrast Mode support, so stay tuned!

And that’s it for this months folks, Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 129 release:

The Rust Programming Language BlogRust Project goals for 2024

With the merging of RFC #3672, the Rust project has selected a slate of 26 Project Goals for the second half of 2024 (2024H2). This is our first time running an experimental new roadmapping process; assuming all goes well, we expect to be running the process roughly every six months. Of these goals, we have designated three of them as our flagship goals, representing our most ambitious and most impactful efforts: (1) finalize preparations for the Rust 2024 edition; (2) bring the Async Rust experience closer to parity with sync Rust; and (3) resolve the biggest blockers to the Linux kernel building on stable Rust. As the year progresses we'll be posting regular updates on these 3 flagship goals along with the 23 others.

Rust’s mission

All the goals selected ultimately further Rust's mission of empowering everyone to build reliable and efficient software. Rust targets programs that prioritize

  • reliability and robustness;
  • performance, memory usage, and resource consumption; and
  • long-term maintenance and extensibility.

We consider "any two out of the three" to be the right heuristic for projects where Rust is a strong contender or possibly the best option, and we chose our goals in part so as to help ensure this is true.

Why these particular flagship goals?

2024 Edition. 2024 will mark the 4th Rust edition, following on the 2015, 2018, and 2021 editions. Similar to the 2021 edition, the 2024 edition is not a "major marketing push" but rather an opportunity to correct small ergonomic issues with Rust that will make it overall much easier to use. The changes planned for the 2024 edition include (1) supporting -> impl Trait and async fn in traits by aligning capture behavior; (2) permitting (async) generators to be added in the future by reserving the gen keyword; and (3) altering fallback for the ! type. The plan is to finalize development of 2024 features this year; the Edition itself is planned for Rust v1.85 (to be released to beta 2025-01-03 and to stable on 2025-02-20).

Async. In 2024 we plan to deliver several critical async Rust building block features, most notably support for async closures and Send bounds. This is part of a multi-year program aiming to raise the experience of authoring "async Rust" to the same level of quality as "sync Rust". Async Rust is widely used, with 52% of the respondents in the 2023 Rust survey indicating that they use Rust to build server-side or backend applications.

Rust for Linux. The experimental support for Rust development in the Linux kernel is a watershed moment for Rust, demonstrating to the world that Rust is indeed capable of targeting all manner of low-level systems applications. And yet today that support rests on a number of unstable features, blocking the effort from ever going beyond experimental status. For 2024H2 we will work to close the largest gaps that block support.

Highlights from the other goals

In addition to the flagship goals, the roadmap defines 23 other goals. Here is a subset to give you a flavor:

Check out the whole list! (Go ahead, we'll wait, but come back here afterwards!)

How to track progress

As the year progresses, we will be posting regular blog posts summarizing the progress on the various goals. If you'd like to see more detail, the 2024h2 milestone on the rust-lang/rust-project-goals repository has tracking issues for each of the goals. Each issue is assigned to the owner(s) of that particular goal. You can subscribe to the issue to receive regular updates, or monitor the #project-goals channel on the rust-lang Zulip. Over time we will likely create other ways to follow along, such as a page on rust-lang.org to visualize progress (if you'd like to help with that, reach out to @nikomatsakis, thanks!).

It's worth stating up front: we don't expect all of these goals to be completed. Many of them were proposed and owned by volunteers, and it's normal and expected that things don't always work out as planned. In the event that a goal seems to stall out, we can either look for a new owner or just consider the goal again in the next round of goal planning.

How we selected project goals

Each project goal began as a PR against the rust-lang/rust-project-goals repository. As each PR came in, the goals were socialized with the teams. This process sometimes resulted in edits to the goals or in breaking up larger goals into smaller chunks (e.g., a far-reaching goal for "higher level Rust" was broken into two specific deliverables, a user-wide build cache and ergonomic ref counting). Finally, the goals were collated into RFC #3672, which listed each goals as well as all the asks from the team. This RFC was approved by all the teams that are being asked for support or other requests.

Conclusion: Project Goals as a "front door" for Rust

To me, the most exciting thing about the Project Goals program has been seeing the goals coming from outside the existing Rust maintainers. My hope is that the Project Goal process can supplement RFCs as an effective "front door" for the project, offering people who have the resources and skill to drive changes a way to float that idea and get feedback from the Rust teams before they begin to work on it.

Project Goals also help ensure the sustainability of the Rust open source community. In the past, it was difficult to tell when starting work on a project whether it would be well-received by the Rust maintainers. This was an obstacle for those who would like to fund efforts to improve Rust, as people don't like to fund work without reasonable confidence it will succeed. Project goals are a way for project maintainers to "bless" a particular project and indicate their belief that it will be helpful to Rust. The Rust Foundation is using project goals as one of their criteria when considering fellowship applications, for example, and I expect over time other grant programs will do the same. But project goals are useful for others, too: having an approved project goal can help someone convince their employer to give them time to work on Rust open source efforts, for example, or give contractors the confidence they need to ensure their customer they'll be able to get the work done.

The next round of goal planning will be targeting 2025H1 and is expected to start in October. We look forward to seeing what great ideas are proposed!

The Talospace ProjectBaseline JIT patches available for Firefox ESR128 on OpenPOWER

It's been a long hot summer at $DAYJOB and I haven't had much time for much of anything, but I got granted some time this week to take care of an unrelated issue and seized the opportunity to get caught up.

The OpenPOWER Firefox JIT still crashes badly in Wasm and Ion for reasons I have yet to ascertain, but the Baseline Interpreter and Baseline Compiler stages of the JIT continue to work great and are significantly faster than the baseline Interpreter (even in a PGO-LTO build), so I did the needful and finally got them pulled up to the new Extended Support Release which is Firefox 128.

I then spent the last two days bashing out crashes and bugs, including a regression from Firefox's new WebAssembly-based in-browser translation engine. The browser chrome now assumes that WebAssembly is always present, but on JIT-less tier-3 machines (or partially implemented JITs like ours, and possibly where Wasm is disabled in prefs) it isn't, so it hits an uncaught error which then blows up substantial portions of the browser UI like the stop-reload button and context menus. The Fedora official ppc64le build of Firefox 128.0.3 is affected as well; I filed bug 1912623 with a provisional fix. Separately all JIT and JavaScript tests completely pass in multiple permutations of Baseline Interpreter and Baseline Compiler, single- and multi-threaded.

As a sign of confidence I've been dogfooding it for the last 24 hours with my typical massive number of tabs and add-ons and can't get it to crash anymore, so I'm typing this blog post in it and using it to upload its own changesets to Github. Grab the ESR source from Mozilla (either pull a tree with Mercurial or just download an archive) and apply the changesets in numerical order, though after bug 1912623 is fixed you won't need #823094. The necessary .mozconfig for building an LTO-PGO build, which is what I'm using, is also in that issue; it's pretty much the same as earlier ones except for --enable-jit.

Little-endian POWER9 remains the officially supported architecture. This version has not been tested on POWER8 or big-endian POWER9, though the JIT should still statically disable itself even if compiled with it on, so the browser should still otherwise work normally. If this is not the case, I consider that a bug, and will accept a fix (I don't have a POWER8 system here to test against). There are no Power10 specific instructions, but I don't see any reason why it wouldn't work on a Power10 machine or on a SolidSilicon S1 whenever we get one of those.

Comments always solicited, though backtraces and reliable STRs are needed to diagnose any bug, of course. Meanwhile I've got more work cut out for me but at least we're back in the saddle for another go.

Don Martihow to break up Google

Everybody* is on about plans for how to break up Google, so here’s my version. I’m trying to keep two awkward considerations in mind.

  • Any Google breakup plan has to fit in a tweet. Google will have more total lawyer time over more years to find the gaps in a complicated plan than could ever be invested in making the plan. Keep it simple, or Google will re-consolidate the way that AT&T did. (All right, maybe not fit in a tweet, but at least get it down to one side of a piece of paper.)

  • Leave Google with the ability to preserve shareholder value. Google is a big company that does a lot of things, so don’t drag it down with pointless micromanagement. Make as few breakup rules as possible but otherwise give them the ability to achieve the important goals in their own way.

The main point of the breakup is to protect users, not to protect any of the competing companies. A breakup does need to happen, though. Google’s tying of client and server products in an anticompetitive way enables the company to harm its users by funding illegal sites and serving fraudulent search ads while limiting the ability of their client software to protect people.

The common feature of all Google’s most problematic anticompetitive schemes is control of both the client and the server. For example, the reason that Google Chrome has such weird, clunky in-browser ad features is that it’s made by the same company that also owns YouTube. When the browser company owns a video sharing site with its own ad system, and the company as a whole earns more from YouTube than from open web ads, they have an incentive to develop in-browser ads in a way that a company that didn’t own both YouTube and Google Chrome would not.

So all right, here’s the break-up plan. Should fit on one page. Google is split into two companies, call them clientGoogle and serverGoogle for now.

  1. serverGoogle can’t do clients. The first company, call it serverGoogle, may not sell or rent any hardware, or release any proprietary software that runs outside a serverGoogle data center. Any code that this company makes available outside a data center must be licensed without any limitations on reverse engineering, and distributed in the preferred form for making modifications. No software released by serverGoogle may be a technological protection measure under section 1201 of Title 17 of the United States Code (DMCA anticircumvention).

  2. clientGoogle can’t do servers. The second company, call it clientGoogle, cannot operate any Internet services, except those necessary for the development and distribution of client software.

  3. clientGoogle and serverGoogle can’t communicate confidentially with each other. The two companies can’t enter into an NDA with each other or contract with the same third parties (such as directors or consulting firms) in such a way as to create a confidential communications channel between them. (Consultants will have to pick one company to work for.)

The reason to do it this way is that most of Google’s anticompetitive behavior is based on control of both the client and the server. Splitting client and server would force a flip from an anticompetitive collusion approach to an adversarial interoperability situation. Separating the client and server would address the problems with Google’s browser, now hard-coded to advantage Google’s YouTube, and Google’s ad blocking support designed to bypass Google’s ads. In those two examples, the ads and YouTube would be part of serverGoogle, and the browser and mobile platform would be clientGoogle.

The main monitoring that would be needed is enforcement of rule 3: keep the two companies from colluding. How long does a director or consultant have to sit out before going to work for the other company, that kind of thing. A whistleblower program with rewards big enough to retire on will help.

The two companies would need to coordinate, of course, but any communication would have to happen in open source projects and in organizations such as the Linux Foundation, W3C, IAB, and IETF. Opening up what had been intra-Google conversations to outsiders would not just be an antitrust win, it would also help avert some of the weird groupthink rat holes that all big companies tend to go down.

What about JavaScript? When serverGoogle operates a site with JavaScript, the license for the JavaScript code may not prohibit reverse engineering, the site must provide JavaScript Source Maps, and the terms of service for the site may not prohibit the use of the site with modified JavaScript.

What about servers for version control, CI, bug tracker, and downloads? The servers required to develop and release client software are the one exception to the no servers rule for clientGoogle. (That doesn’t mean clientGoogle gets to run any other servers. For example, if clientGoogle supports a browser with the ability to sync bookmarks, users must configure it to use their account with serverGoogle or some other party, as part of an add account process that users already go through to set up calendar or email accounts today.)

What about Google Fiber? (and other businesses that aren’t client software or Internet services?) Let Google management pick based on what is good for them—we don’t want to micromanage business unit by business unit, just make rules to prevent the known problems.

What about AI? Considering that Google is all on about Integration and Android now? AI is a good example of a win from a client/server split. Mobile devices won’t be stuck talking to a laggy AI server for anticompetitive tying reasons, and Internet services won’t be held back by underpowered on-device AI for anticompetitive tying reasons. Both client and server will be able to make the best implementation choices.

What about the Google Play Store? serverGoogle could run a mobile app store but not release its own apps, which run on the client. clientGoogle could release mobile devices or platforms that enable users to connect to and use an app store, and also release apps.

Could serverGoogle spin off the YouTube service, clientGoogle spin off the YouTube apps, then the service and app companies merge to re-form a standalone YouTube? Yes, if it passes normal FTC merger review. Some post-breakup splitting and trading is going to happen, so the FTC still has to keep an eye on things.

What about my 401(k)? Google is a big part of the stock market, and without anticompetitive collusion they’ll be making less money. But relax. You’re probably invested in an index fund that owns shares in both parasites and hosts—as the legit economy recovers from all this negative-sum value extraction, your total portfolio will do better.

Would this work for [other company] too? Probably not. (Let’s do Google first, which will make the web a lot more fun, then we’ll be on a roll and can move on to whatever other big company is giving everybody grief.)

Don’t cut soup with a knife, people

Here’s how not to break up Google: Some people are suggesting that the breakup plan should be a careful dividing of the big bowl of adtech alphabet soup. (Where on Ari Paparo’s simplified chart do you cut, exactly?) That would be a waste of time—if that’s all you do, Google will just tweak their clients, Chrome and Android, to move the profits out of whatever slice of the soup they have to get rid of, and keep the money flowing into whatever they get to keep.

Related

“Google is a Monopolist” – Wrong and Right Ways to Think About Remedies by Cristina Caffarra and Robin Berjon

A Brief List of Business Units Google Could Be Separated Into by Aram Zucker-Scharff

Breaking up Google would offer a chance to remodel the web by Natasha Lomas

Pluralistic: The paradox of choice screens by Cory Doctorow

What Should We Do About Google? by Tim Wu

Break up the Browsers.  A Proposal to Save the Open Web. - Movement For An Open Web (Interesting ideas but leaves native mobile apps and smart TVs out of the plan, which would be bad news)

Bonus links

This one important fact about current AI explains almost everything The simple fact is that current approaches to machine learning (which underlies most of the AI people talk about today) are lousy at outliers, which is to say that when they encounter unusual circumstances, like the subtly altered word problems that I mentioned a few days ago, they often say and do things that are absurd.

Malware scam on GitHub impersonates Google Authenticator ad A cybersecurity software provider has uncovered fraudulent advertising branded as Google, which links to a malicious version of Authenticator.

Does everyone hate Google now?

The DOJ Wins Its Search Antitrust Case Against Google. Next Up Is Ad Tech

Google loses its massive antitrust case against the DOJ

New Research: So Far, AI Is Not Disrupting Search or Making a Dent in Google

Here is another reason why you should never click on ads to download software The link looks good even though it is listed as sponsored. It shows Google’s official site as the URL. When you check the advertiser, which you can on Google Search, you get confirmation that Google has verified the advertisers identity. All good then?

Google AI fails the taste test

A Google Ads Glitch Likely Triggered A Data Breach Within Google Merchant Center

Should web browsers be regulated?

Google says “informed choice” is the future. We’re holding them to it.

Hacks.Mozilla.Org0Din: A GenAI Bug Bounty Program – Securing Tomorrow’s AI Together

Introduction

As AI continues to evolve, so do the threats against it. As these GenAI systems become more sophisticated and widely adopted, ensuring their security and ethical use becomes paramount. 0Din is a groundbreaking GenAI bug bounty program dedicated specifically to help secure GenAI systems and beyond. In this blog, you’ll learn about 0Din, how it works, and how you can participate and make a difference in securing our AI future.

What is 0Din?

0Din is an innovative GenAI bug bounty program that seeks to identify and mitigate vulnerabilities in AI systems. By harnessing the collective expertise of the global security community, 0Din aims to build a more secure AI landscape. The program rewards individuals who discover and report security flaws, ensuring that AI systems remain robust and trustworthy.

How the 0Din Bug Bounty Program Works

Participating in the 0Din bug bounty program is straightforward. Here’s a step-by-step overview:

  1. Identify Vulnerabilities: Participants search for security flaws within the scope defined by 0Din.
  2. Submit Reports: When a vulnerability is found, participants submit a detailed report outlining the issue.
  3. Review Process: 0Din’s team reviews the submission, verifies the vulnerability, and assesses its impact.
  4. Receive Rewards: Verified vulnerabilities are rewarded based on their severity and impact.

For detailed information on the vulnerability scope and processing policy, visit  the 0Din Policy Page 

Types of Vulnerabilities Covered

0Din covers a broad range of vulnerabilities. Here are some examples:

  1. Guardrail Jailbreak: Bypassing safety measures to make the AI perform harmful actions.
  2. Prompt Injection: Inserting malicious input to subvert the AI’s intended operations.
  3. Training Data Leakage: Extracting sensitive information from the training data used to build the AI.

Each type of vulnerability has a specific reward based on its severity, ranging from low to high. The Disclosure Mappings Guideline provides a comprehensive list of vulnerabilities and their corresponding rewards.

Eligibility and Participation

0Din welcomes participants from around the world. Here’s who can participate:

– Security Researchers: Professionals dedicated to discovering and mitigating security risks.

– Developers: Individuals with a strong understanding of AI and its underlying technologies.

– Tech Enthusiasts: Anyone with a keen interest in AI security and the technical skills to identify vulnerabilities.

To ensure a fair and effective program, participants must adhere to 0Din’s Vulnerability Processing and Disclosure Policy. This policy outlines the proper procedures for reporting vulnerabilities and ensures that all submissions are handled with integrity and respect.

Vulnerability Processing and Disclosure Policy

0Din’s vulnerability processing and disclosure policy is designed to ensure transparency and fairness. Key points include:

  1. Submission Review: Each submission is reviewed by a team of experts to verify the vulnerability and assess its impact.
  2. Response Time: 0Din commits to responding to submissions promptly, typically within a few days.
  3. Reward Allocation: Rewards are allocated based on the severity and impact of the vulnerability, following a predefined scale.
  4. Responsible Disclosure: Participants are expected to adhere to responsible disclosure practices, ensuring that vulnerabilities are reported privately and not exploited.

For a detailed policy overview, refer to the 0Din Policy Page 

Conclusion

In an era where AI plays an increasingly vital role in our lives, ensuring its security is paramount. 0Din offers a unique opportunity to contribute to this critical field while being rewarded for your expertise. By participating in the 0Din bug bounty program, you can help build a safer and more secure AI future. Join us today and make a difference in the world of GenAI security.

The post 0Din: A GenAI Bug Bounty Program – Securing Tomorrow’s AI Together appeared first on Mozilla Hacks - the Web developer blog.

Mozilla ThunderbirdMaximize Your Day: Templates to the Rescue

Hello! We’re back for the summer edition of our productivity series, and we’re here with a productivity tip that can save you time AND reduce email anxiety-induced procrastination. We’re talking about email templates.

Marketing and Comms Manager Natalie Ivanova shared why she’s a huge fan of email templates. When one of her three kids are sick, their school requires an email with lots of important details – their teacher’s name, their class number, and class division. She’d hunt through her sent messages for the last sick day email, then have to look up any new info for those key details. More often than not, this search led to procrastinating, which led to an annoyed phone call from her kid’s school.

To take the stress out of these emails, Natalie turned to templates. Templates take the hard work out of writing an email. Instead of facing a dreaded blank page, you have a structure you created, and all you have to do is fill in the blanks. In her case, she made a template for each kid, filled it with the info the school needed, and left blanks for any fields that would change.

Whether you’re updating teachers, sending regular updates to colleagues, or otherwise sending something over and over, let Thunderbird and the power of templates do the heavy lifting.

Creating a Template

Creating a template is a lot like writing an email. Click on ‘New Messages’ to get started. If your template is meant for one recipient – for example, your kid’s school – go ahead and enter the address. Your Subject Line will be how you find your template later – and can be part of the template itself! For a monthly report I send about Thunderbird in the media, I use ‘Media Sentiment Summary [MONTH YEAR]. It’s easy to find AND easy to change. You could almost say it’s magic!

The body of your email is where you put the power of templates to work. For the sick kid template, most of the information is already there. All you need to do is literally hit send. For that monthly report, I put the fields I need to fill in brackets (with text in ALL CAPS to help me notice it and avoid the shame of sending an unedited template), both in the subject and the body.

Writing an Email from Templates

So, you’ve made a template. Yay!

Now, how do you use it?

Thunderbird makes it very easy to find your new template. It lives in the ‘Templates’ folder in the Folder Pane window, just below the Drafts folder. Click on the Templates folder to open it, and click on the Message Menu in the upper right corner. Click ‘New Message from Template’, and your template is ready to edit and send. And every time you use your template, YOU are ready to have more time and less stress.

More Resources!

The post Maximize Your Day: Templates to the Rescue appeared first on The Thunderbird Blog.

Mozilla Performance BlogPerformance Testing Newsletter, Q2 Edition

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. See below for highlights from the changes made in the last quarter.

Highlights

Contributors

  • Myeongjun Go [:myeongjun]

  • Mayank Bansal [:mayankleoboy1]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

P.S. If you’re interested in including updates from your teams in a quarterly newsletter like this, and you are not currently covered by another newsletter, please reach out to me (:sparky). I’m interested in making a more general newsletter for these.

The Rust Programming Language BlogAnnouncing Rust 1.80.1

The Rust team has published a new point release of Rust, 1.80.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, getting Rust 1.80.1 is as easy as:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website.

What's in 1.80.1

Rust 1.80.1 fixes two regressions that were recently reported.

Miscompilation when comparing floats

In addition to the existing optimizations performed by LLVM, rustc is growing its own set of optimizations. Rust 1.78.0 added a new one, implementing "jump threading" (merging together two adjacent branches that perform the same comparison).

The optimization was also enabled on branches checking for floating point equality, but it didn't implement the special rules needed for floats comparison (NaN != NaN and 0.0 == -0.0). This caused the optimization to miscompile code performing those checks.

Rust 1.80.1 addresses the problem by preventing the optimization from being applied to float comparisons, while retaining the optimization on other supported types.

False positives in the dead_code lint

Rust 1.80.0 contained refactorings to the dead_code lint. We received multiple reports that the new lint implementation produces false positives, so we are reverting the changes in Rust 1.80.1. We'll continue to experiment on how to improve the accuracy of dead_code in future releases.

Contributors to 1.80.1

Many people came together to create Rust 1.80.1. We couldn't have done it without all of you. Thanks!

Firefox NightlyFirefox Sidebar and Vertical tabs: try them out in Nightly Firefox Labs 131

We are excited to share that vertical tabs and a new sidebar experience are now available in Nightly 131. This update has been highly anticipated and requested by the community, and we are looking forward to seeing how it enhances your browsing and productivity. To give this in-progress work a try:

  • update to the latest Nightly,
  • go to Settings > Firefox Labs,
  • activate Sidebar and Vertical tabs experiments,
  • click Customize toolbar in the toolbar right-click menu, and drag the sidebar icon to your toolbar. Clicking on the sidebar icon will allow you to expand vertical tabs.

We have designed the new sidebar and vertical tabs experience to make two core browsing workflows more seamless – context-switching and multitasking.

  • The new sidebar allows you to quickly cross-reference all of your favorite tools – be it tabs on your mobile phone, your favorite extension, or bookmarks with your main task at hand.
  • Vertical tabs make it easier to scan information and switch between tasks.

This work is still very much in progress. You’ll see us refine things in the coming months, and we appreciate your feedback to help bring these features to life.

We will also be sharing our backlog of improvements on Mozilla Connect, so you can get a sense for where we ultimately want these features to be.

What’s Next? 

We’re calling on our community to test the new sidebar and vertical tabs experience and your constructive feedback is crucial as we refine these features. Please share your thoughts on Mozilla Connect – we take your input into account and it helps us create a browsing experience that meets your needs 🦊.

If you’re a web extension developer and your extension uses sidebar APIs or works with tabs, we’d love you to test it with the new sidebar and vertical tabs enabled. While there are no changes to Web Extension APIs tied to these features, this is a good opportunity to anticipate unforeseen issues resulting from the updated UI.

Hacks.Mozilla.OrgAnnouncing Official Puppeteer Support for Firefox

We’re pleased to announce that, as of version 23, the Puppeteer browser automation library now has first-class support for Firefox. This means that it’s now easy to write automation and perform end-to-end testing using Puppeteer, and run against both Chrome and Firefox.

How to Use Puppeteer With Firefox

To get started, simply set the product to “firefox” when starting Puppeteer:

import puppeteer from "puppeteer";

const browser = await puppeteer.launch({
  browser: "firefox"
});

const page = await browser.newPage();
// ...
await browser.close();

As with Chrome, Puppeteer is able to download and launch the latest stable version of Firefox, so running against either browser should offer the same developer experience that Puppeteer users have come to expect.

Whilst the features offered by Puppeteer won’t be a surprise, bringing support to multiple browsers has been a significant undertaking. The Firefox support is not based on a Firefox-specific automation protocol, but on WebDriver BiDi, a cross browser protocol that’s undergoing standardization at the W3C, and currently has implementation in both Gecko and Chromium. This use of a cross-browser protocol should make it much easier to support many different browsers going forward.

Later in this post we’ll dive into some of the more technical background behind WebDriver BiDi. But first we’d like to call out how today’s announcement is a great demonstration of how productive collaboration can advance the state of the art on the web. Developing a new browser automation protocol is a lot of work, and great thanks goes to the Puppeteer team and the other members of the W3C Browser Testing and Tools Working Group, for all their efforts in getting us to this point.

You can also check out the Puppeteer team’s post about making WebDriver BiDi production ready.

Key Features

For long-time Puppeteer users, the features available are familiar. However for people in other automation and testing ecosystems — particularly those that until recently relied entirely on HTTP-based WebDriver — this section outlines some of the new functionality that WebDriver BiDi makes possible to implement in a cross-browser manner.

Capturing of Log Messages

A common requirement when testing web apps is to ensure that there are no unexpected errors reported to the console. This is also a case where an event-based protocol shines, since it avoids the need to poll the browser for new log messages.

import puppeteer from "puppeteer";

const browser = await puppeteer.launch({
  browser: "firefox"
});

const page = await browser.newPage();
page.on('console', msg => {
  console.log(`[console] ${msg.type()}: ${msg.text()}`);
});

await page.evaluate(() => console.debug('Some Info'));
await browser.close();

Output:

[console] debug: Some Info

Device Emulation

Often when testing a reactive layout it’s useful to be able to ensure that the layout works well at multiple screen dimensions, and device pixel ratios. This can be done by using a real mobile browser, either on a device, or on an emulator. However for simplicity it can be useful to perform the testing on a desktop set up to mimic the viewport of a mobile device. The example below shows loading a page with Firefox configured to emulate the viewport size and device pixel ratio of a Pixel 5 phone.

import puppeteer from "puppeteer";

const device = puppeteer.KnownDevices["Pixel 5"];

const browser = await puppeteer.launch({
  browser: "firefox"
});

const page = await browser.newPage();
await page.emulate(device);

const viewport = page.viewport();

console.log(
  `[emulate] Pixel 5: ${viewport.width}x${viewport.height}` +
  ` (dpr=${viewport.deviceScaleFactor}, mobile=${viewport.isMobile})`
);

await page.goto("https://www.mozilla.org");
await browser.close();

Output:

[emulate] Pixel 5: 393x851 (dpr=3, mobile=true)

Network Interception

A common requirement for testing is to be able to track and intercept network requests. Interception is especially useful for avoiding requests to third party services during tests, and providing mock response data. It can also be used to handle HTTP authentication dialogs, and override parts of the request and response, for example adding or removing headers. In the example below we use network request interception to block all requests to web fonts on a page, which might be useful to ensure that these fonts failing to load doesn’t break the site layout.

import puppeteer from "puppeteer";

const browser = await puppeteer.launch({
  browser: 'firefox'
});

const page = await browser.newPage();
await page.setRequestInterception(true);

page.on("request", request => {
  if (request.url().includes(".woff2")) {
    // Block requests to custom user fonts.
    console.log(`[intercept] Request aborted: ${request.url()}`);
    request.abort();
  } else {
    request.continue();
  }
});

const response = await page.goto("https://support.mozilla.org");
console.log(
  `[navigate] status=${response.status()} url=${response.url()}`
);
await browser.close();

Output:

[intercept] Request aborted: https://assets-prod.sumo.prod.webservices.mozgcp.net/static/Inter-Bold.3717db0be15085ac.woff2
[navigate] status=200 url=https://support.mozilla.org/en-US/

Preload Scripts

Often automation tooling wants to provide custom functionality that can be implemented in JavaScript. Whilst WebDriver has always allowed injecting scripts, it wasn’t possible to ensure that an injected script was always run before the page started loading, making it impossible to avoid races between the page scripts and the injected script.

WebDriver BiDi provides “preload” scripts which can be run before a page is loaded. It also provides a means to emit custom events from scripts. This can be used, for example, to avoid polling for expected elements, but instead using a mutation observer that fires as soon as the element is available. In the example below we wait for the <title> element to appear on the page, and log its contents.

import puppeteer from "puppeteer";

const browser = await puppeteer.launch({
  browser: 'firefox',
});

const page = await browser.newPage();

const gotMessage = new Promise(resolve =>
  page.exposeFunction("sendMessage", async message => {
    console.log(`[script] Message from pre-load script: ${message}`);
    resolve();
  })
);

await page.evaluateOnNewDocument(() => {
  const observer = new MutationObserver(mutationList => {
    for (const mutation of mutationList) {
      if (mutation.type === "childList") {
        for (const node of mutation.addedNodes) {
          if (node.tagName === "TITLE") {
            sendMessage(node.textContent);
          }
        }
      }
    };
  });

  observer.observe(document.documentElement, {
    subtree: true,
    childList: true,
  });
});

await page.goto("https://support.mozilla.org");
await gotMessage;
await browser.close();

Output:

[script] Message from pre-load script: Mozilla Support

Technical Background

Until recently people wishing to automate browsers had two main choices:

  • Use the W3C WebDriver API, which was based on earlier work by the Selenium project.
  • Use a browser-specific API for talking to each supported browser such as Chrome DevTools Protocol (CDP) for Chromium-based browsers, or Firefox’s Remote Debugging Protocol (RDP) for Gecko-based browsers.

Unfortunately both of those options come with significant tradeoffs. The “classic” WebDriver API is HTTP-based, and its model involves automation sending a command to the browser and waiting for a response. That works well for automation scenarios where you load a page and then verify, for example, that some element is displayed, but the inability to get events ­— e.g. console logs — back from the browser, or run multiple commands concurrently, makes the API a poor fit for more advanced use cases.

By contrast, browser-specific APIs have generally been designed around supporting the complex use cases of in-browser devtools. This has given them a feature set far in advance of what’s possible using WebDriver, as they need to support use cases such as recording console logs, or network requests.

Therefore, browser automation clients have been forced to make the choice between supporting many browsers using a single protocol and providing a limited feature set, or providing a richer feature set but having to implement multiple protocols to provide functionality separately for each supported browser. This obviously increased the cost and complexity of creating great cross-browser automation, which isn’t a good situation, especially when developers commonly cite cross-browser testing as one the main pain points in developing for the web.

Long time developers might notice the analogy here to the situation with editors before the development of Language Server Protocol (LSP). At that time each text editor or IDE had to implement bespoke support for each different programming language. That made it hard to get support for a new language into all the tools that developers were using. The advent of LSP changed that by providing a common protocol that could be supported by any combination of editor and programming language. For a new programming language like TypeScript to be supported across all editors it no longer needs to get them to add support one-by-one; it only needs to provide an LSP server and it will automatically be supported across any LSP-supporting editor. The advent of this common protocol has also enabled things that were hard to imagine before. For example specific libraries like Tailwind getting their own LSP implementation to enable bespoke editor functionality.

So to improve cross-browser automation we’ve taken a similar approach: developing WebDriver BiDi, which brings the automation featureset previously limited to browser-specific protocols to a standardized protocol that can be implemented by any browser and used by any automation tooling in any programming language.

At Mozilla we see this strategy of standardizing protocols in order to remove barriers to entry, allow a diverse ecosystem of interoperable implementations to flourish, and enable users to choose those best suited to their needs as a key part of our manifesto and web vision.

For more details about the design of WebDriver BiDi and how it relates to classic WebDriver, please see our earlier posts.

Removing experimental CDP support in Firefox

As part of our early work on improving cross-browser testing, we shipped a partial implementation of CDP, limited to a few commands and events needed to support testing use cases. This was previously the basis of experimental support for Firefox in Puppeteer. However, once it became clear that this was not the way forward for cross-browser automation, effort on this was stopped. As a result it is unmaintained and doesn’t work with modern Firefox features such as site isolation. Therefore support is scheduled to be removed at the end of 2024.

If you are currently using CDP with Firefox, and don’t know how to transition to WebDriver BiDi, please reach out using one of the channels listed at the bottom of this post, and we will discuss your requirements.

What’s Next?

Although Firefox is now officially supported in Puppeteer, and has enough functionality to cover many automation and testing scenarios, there are still some APIs that remain unsupported. These broadly fall into three categories (consult the Puppeteer documentation for a full list):

  • Highly CDP-specific APIs, notably those in the CDPSession module. These are unlikely to be supported directly, but specific use cases that currently require these APIs could be candidates for standardization.
  • APIs which require further standards work. For example page.accessibility.snapshot returns a dump of the Chromium accessibility tree. However because there’s currently no standardized description of what that tree should look like this is hard to make work in a cross-browser way. There are also cases which are much more straightforward, as they only require work on the WebDriver BiDi spec itself; for example page.setGeolocation.
  • APIs which have a standard but are not yet implemented, for example the ability to execute scripts in workers required for commands like WebWorker.evaluate.

We expect to fill these gaps going forward. To help prioritize, we’re interested in your feedback: Please try running your Puppeteer tests in Firefox! If you’re unable to get them in Firefox because of a bug or missing feature, please let us know using one of the methods below so that we can take it into account when planning our future standards and implementation work:

  • For Firefox implementation bugs, please file a bug on Bugzilla
  • If you’re confident that the issue is in Puppeteer, please file a bug in their issue tracker.
  • For features missing from the WebDriver BiDi specification, please file an issue on GitHub
  • If you want to talk to us about use cases or requirements, please use the #webdriver channel on Mozilla’s Matrix instance or email dev-webdriver@mozilla.org.

The post Announcing Official Puppeteer Support for Firefox appeared first on Mozilla Hacks - the Web developer blog.

Mozilla ThunderbirdThunderbird goes to GUADEC 2024

GUADEC is the annual GNOME conference and this year it was in beautiful Denver, Colorado. Why are we writing about this on the Thunderbird blog? I’m so glad you asked. Thunderbird was there and our very own Ryan Sipes gave a compelling keynote talk!

Ryan’s GUADEC 2024 Keynote

Ryan gave a brief history lesson of Thunderbird, detailed how we survived tough times, and what exciting new things we are working on, including our recent Supernova release.

<figcaption class="wp-element-caption">Thunderbird’s Ryan Sipes presenting at Guadec (Photo by: Dayne Pillow, 2024)</figcaption>

While Thunderbird is cross platform, Ryan highlighted our current focus on native integration with Linux systems, starting with an initial implementation of a Linux system tray icon. We are committed to our Linux users more than ever, no matter their choice of desktop environment, packaging type, or flavor of Linux.

<figcaption class="wp-element-caption">Thunderbird’s Ryan Sipes presenting at Guadec (Photo by: Dayne Pillow, 2024)</figcaption>

Thunderbird in the Hallway Track

Besides Ryan’s talk, there were several meaningful conversations, relevant to various aspects of Thunderbird.

  • There are shared struggles between our calendar and GNOME calendar, revealing opportunities to work together towards a common solution.
  • Since the Thunderbird flatpak is one of our supported packages for Linux systems, it was great to hear an update from the flatpak and xdg-desktop-portals developers. We can start to think of how we can leverage recent and upcoming changes to portals to improve the Thunderbird flatpak.
  • Ryan’s talk pointed out our need for privacy respecting telemetry, and it turns out that is shared by the GNOME app developers as well. Expect to hear more about this in future blog posts, as events develop.

Overall, this year’s GUADEC was an excellent week of collaboration, where we shared many wonderful ideas and strengthened our comradery. Thunderbird’s presence at this conference showed us where we can work with the broader GNOME community and support one another in a way that benefits all of our users. We thank the GNOME foundation for the excellent organization.

<figcaption class="wp-element-caption">Guadec 2024 attendees (Photo by: Dayne Pillow, 2024)</figcaption>

Let the collaboration continue!

The post Thunderbird goes to GUADEC 2024 appeared first on The Thunderbird Blog.

Firefox Developer ExperienceGeckodriver 0.35.0 Released

We are proud to announce the next major release of geckodriver 0.35.0. It ships with two new features: support for “Permissions” and a new flag to enable the crash reporter.

Contributions

With geckodriver being an open source project, we are grateful to get contributions from people outside of Mozilla:

  • Razvan Cojocaru added a command line flag to enable the crash reporter.
  • James Hendry updated the SwitchToFrame command to raises an “invalid argument” error when the id parameter is missing.
  • James Hendry removed support for session negotiation using the deprecated desiredCapabilities and requiredCapabilities.

Geckodriver code is written in Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for geckodriver.

New Features

Support for “Permissions”

Support for Permissions that allow controlling permission prompts within the browser. This enables automated tests to handle scenarios involving permissions like geolocation, notifications, and more.

Added flag to enable the crash reporter

The command line flag --enable-crash-reporter has been added, to allow the crash reporter in Firefox to automatically submit crash reports to Mozilla’s crash reporting system if a tab or the browser itself crashes.

Note that this feature is disabled by default and should only be used when a crash situation needs to be investigated. See our documentation for crash reports in how to share these with us.

Improved unhandledPromptBehavior capability

The validation of the unhandledPromptBehavior capability has been enhanced to support finer configuration options for the User Prompt Handler which are particularly used by WebDriver BiDi.

Removals

  • Removed support for session negotiation using the deprecated desiredCapabilities and requiredCapabilities.
  • Removed support for the moz:useNonSpecCompliantPointerOrigin capability, which has not been supported since Firefox 116.

Bug Fixes

  • The Switch To Frame command now correctly raises an “invalid argument” error when the id parameter is missing.

The Mozilla BlogFirefox hacks for everyone: From cozy gamers to minimalists and beyond

Illustration of a web browser window with multiple tabs, icons for search, security, plugins, and multimedia files, and a large cursor clicking a stack of buttons. Various abstract shapes and dotted lines connect the icons, representing online activities and interactions.

Firefox users, we’ve got tips for you. The Mozilla team has gathered some of our favorite tricks to help you get the most out of your browser – from customizing the look of Firefox and managing tabs, to watching videos on the sly and staying cozy while gaming. Let’s dive in.

For the cozy gamer

Our senior web UX designer, Elise, loves a cozy game. She finds player guides on her desktop during the day and accesses them on her phone at night through Firefox tab syncing. That way, she doesn’t have to leave her late-night gaming cocoon. 

Read more: Firefox tips and tricks for gamers

For the creative

Being a content creator is fun but demanding. For Steve, Mozilla’s video lead, Firefox features like the eyedropper tool, the built-in PDF editor and picture-in-picture come in handy.

Read more: Firefox tips and tricks for creatives

For the online shopper

Fakespot social producer Hannah is an eBay hawk, a casual Amazon browser and a Sephora VIB insider. Her tips, of course, include avoiding unreliable product reviews with Fakespot. She shares other tricks like how to discreetly shop for gifts online and finding deals without manually searching for coupon codes.

Read more: Firefox tips and tricks for online shopping

For the minimalist

As Mozilla’s blog editor, I do a lot of reading and research, so a minimal browser helps me stay focused. I have a step-by-step guide to turn Firefox into a distraction-free workspace.

Read more: Transform Firefox into the ultimate minimalist browser

A Firefox browser window displaying a new tab with a clean, minimalistic interface. The screen is primarily white with a search bar at the top center labeled "Search with Google or enter address" and a cogwheel icon on the right side for settings. The window’s tabs and menu options are also visible, indicating no distractions or additional elements on the screen.

For the tab maximalist

Tyler, a global product marketing manager at Mozilla, may not keep 7,000 tabs open. But you can find her with 50+ tabs open across multiple Firefox windows on any given day. From closing duplicate tabs to searching for that one tab you lost in the haystack, here’s her list of tricks to manage tabs.  

Read more: Top 5 Firefox features for tab maximalists

For the newshound

It’s Alex’s job as recommendations editor to find great content for users across Mozilla’s products. He uses Pocket to save and organize articles, plus a number of extensions to stay productive. Alex’s background in journalism also makes him particularly keen on Mozilla’s commitment to security and privacy in making products, including Firefox. 

Read more: Firefox tips and tricks for journalists

For the college student

As a master’s student, Gian has spent too much time searching online for free PDF editors – giving out his email address or downloading dubious software so that he can annotate lecture notes, complete projects and more. Enter Firefox’s built-in PDF editor.

Read more: Streamline your schoolwork with Firefox’s PDF editor

There are endless ways to make Firefox your own, however you choose to navigate the internet. We want to know how you customize Firefox. Let us know and tag us on X or Instagram at @Firefox.

Get Firefox
Get the browser that protects what’s important

The post Firefox hacks for everyone: From cozy gamers to minimalists and beyond appeared first on The Mozilla Blog.

Firefox Developer ExperienceFirefox WebDriver Newsletter 129

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 129 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. There were no external contributions during the Firefox 129 release cycle, but we already have 3 bugs fixed by external contributors for the next release! If you ever wanted to contribute to an open source project used by millions of users, or are interested in some experience in software development, jump in. We have many beginner-friendly available over at https://codetribute.mozilla.org/ and the documentation to get started is easy to follow.

General

Disabling CDP (Chrome DevTools Protocol) by default

As announced in our previous blog post (Deprecating CDP support in Firefox), with Firefox 129 CDP is now disabled by default. Our WebDriver BiDi implementation now provides more features than the experimental CDP support, so we strongly encourage all users relying on CDP to try out WebDriver BiDi. Please reach out to us if you stumble on any issue while trying to migrate to WebDriver BiDi. For the time being, you can still re-enable CDP by setting the remote.active-protocols preference to 2 (CDP only) or 3 (CDP+BiDi).

WebDriver BiDi

Added the network.setCacheBehavior command

The network.setCacheBehavior command allows to change the network cache behavior either globally or for a set of top-level browsing contexts. Disabling the network cache during tests can be useful if you want to ensure consistent results between test runs, without having to worry about the heuristics used by the browser in order to use the cache or not. It’s also recommended to disable the cache when using network interception features. This command expects a cacheBehavior parameter which can either be "default" or "bypass". Using "bypass" will instruct the browser to bypass the cache, in other words it will disable the network cache. You can also provide an optional contexts parameter, which should be an array of top level context ids.

-> { 
  "method": "network.setCacheBehavior", 
  "params": { 
    "cacheBehavior": "bypass", 
    "contexts": [ "08706bf7-1b79-4f92-bbd1-d066c469ed8f" ] 
  }, 
  "id": 61 
}

<- { "type": "success", "id": 61, "result": {} }

Note that you can call the command several times to apply different cache behaviors to different top-level contexts. You can for instance set the cache behavior to "bypass" globally first, and then selectively reset it do "default" for one or more contexts. However note that calling network.setCacheBehavior globally (ie. with no contexts argument) will override any specific behavior previously set for a context.

Support for “beforeUnload” type prompts

We now support the user prompts of type "beforeUnload", which can be triggered from the "beforeunload" event on a webpage. The usual user prompt events will be emitted for those prompts: browsingContext.userPromptOpened, browsingContext.userPromptClosed. And you can handle the prompts using the existing browsingContext.handleUserPrompt command.

In the example below, the client listens for user prompt events, and will dismiss the "beforeUnload" user prompt detected:

<- {
  "type": "event",
  "method": "browsingContext.userPromptOpened",
  "params": {
    "context": "08706bf7-1b79-4f92-bbd1-d066c469ed8f",
    "handler": "ignore",
    "message": "This page is asking you to confirm that you want to leave — information you’ve entered may not be saved.",
    "type": "beforeunload"
  }
}

-> {
  "method": "browsingContext.handleUserPrompt",
  "params": {
    "context": "08706bf7-1b79-4f92-bbd1-d066c469ed8f",
    "action": "dismiss"
  },
  "id": 63
}

<- {
  "type": "event",
  "method": "browsingContext.userPromptClosed",
  "params": {
    "context": "08706bf7-1b79-4f92-bbd1-d066c469ed8f",
    "accepted": true,
    "type": "beforeunload"
  }
}

<- {
  "type": "success",
  "id": 63,
  "result": {}
}

Support optional arguments for network.provideResponse

For requests intercepted in the beforeRequestSent phase, we now support all the optional arguments for the network.provideResponse command. The newly supported arguments are body, cookies, headers, reasonPhrase and statusCode. With this, you can now easily return mocked responses for any intercepted request. This request will not actually reach the network and instead the information you provided will be used to build a functional response. The names are quite self-explanatory: body allows to set the response body, cookies is a shortcut to add Set-Cookie headers, headers sets the regular response headers, reasonPhrase is the response’s status text (for instance "OK") and statusCode is the response’s status code (for instance 200).

In the following example, the client intercepts requests to a "script.js" URL and will use network.provideResponse to return a custom script which will log a message. We will then navigate to a page which tries to load this "script.js" file and we expect to receive a log.entryAdded event corresponding to the mock response.

First we setup the intercept and navigate to the test page.

-> {
  "method": "network.addIntercept",
  "params": {
    "phases": [
      "beforeRequestSent"
    ],
    "urlPatterns": [
      {
        "type": "string",
        "pattern": "https://test-provideresponse-mock-script.glitch.me/script.js"
      }
    ]
  },
  "id": 67
}

<- { "type": "success", "id": 67, "result": { "intercept": "876a5b3d-045b-49ca-80e4-c61de1d0d00d" } }

-> { 
  "method": "browsingContext.navigate", 
  "params": { 
    "context": "0f38a9a6-912c-4e83-9945-9aa8b9ee1f1f", 
    "url": "https://test-provideresponse-mock-script.glitch.me", 
    "wait": "none" 
  }, 
  "id": 70 
}

<- { 
  "type": "success", 
  "id": 70, 
  "result": { 
    "navigation": "89535996-58ea-4b18-b95e-43af6c137ecf", 
    "url": "https://test-provideresponse-mock-script.glitch.me/" 
  } 
}

Several network events will be received, but here we are only interested in the blocked one, which will be resumed using network.provideResponse:

<- {
  "type": "event",
  "method": "network.beforeRequestSent",
  "params": {
    "context": "0f38a9a6-912c-4e83-9945-9aa8b9ee1f1f",
    "isBlocked": true,
    "request": {
      "request": "41",
      "url": "https://test-provideresponse-mock-script.glitch.me/script.js",
      [...]
    },
    "intercepts": ["876a5b3d-045b-49ca-80e4-c61de1d0d00d"],
    [...]
  }
}

-> {
  "method": "network.provideResponse",
  "params": {
    "request": "41",
    "body": {
      "type": "string",
      "value": "console.log('Log from provideResponse script')"
    },
    "headers": [
      {
        "name": "Content-Type",
        "value": {
          "type": "string",
          "value": "text/javascript"
        }
      }
    ],
    "reasonPhrase": "OK",
    "statusCode": 200
  },
  "id": 73
}

<- { "type": "success", "id": 73, "result": {} }

Finally we receive the expected log.entryAdded event, which shows that the script was correctly received and handled by the test page.

<- {
  "type": "event",
  "method": "log.entryAdded",
  "params": {
    "type": "console",
    "method": "log",
    "source": {
      "realm": "765fbec6-14e0-427c-9984-527010763fcd",
      "context": "0f38a9a6-912c-4e83-9945-9aa8b9ee1f1f"
    },
    "args": [
      {
        "type": "string",
        "value": "Log from provideResponse script"
      }
    ],
    "level": "info",
    "text": "Log from provideResponse script",
    "timestamp": 1722860802687
  }
}

New handler field for browsingContext.userPromptOpened

The browsingContext.userPromptOpened event now contains an additional "handler" field which contains the user prompt handler type ("accept", "dismiss" or "ignore") configured for the prompt which triggered the event. The user prompt handler type can be configured via the unhandledPromptBehavior capability.

New originalOpener field for the BrowsingContextInfo type

The BrowsingContextInfo type now includes an "originalOpener" field which contains the context id of the context from which the browsing context was opened. For instance if the browsing context was created using window.open or by clicking on a link from another page, the "originalOpener" field will contain the context id of this page (even if the link has rel=noopener). If the context was not created from another page (for instance using the browser UI, or using the browsingContext.create command), the "originalOpener" field will have the value null.

See the example below, with a first window created manually, and a second one created using a rel=noopener link.

Two Firefox windows. The one in the background contains a link with the label "Link with rel=noopener". The one in the foreground contains the text "Page opened with rel=noopener". The DevTools console is opened on the second window, and shows that "window.opener" evaluates to null<figcaption class="wp-element-caption">Window opened via a rel=noopener link, window.opener is null in the Console</figcaption>

As you can see in the browsingContext.getTree result below, the first window has "originalOpener" set to null, because it was created using the Firefox UI. And the second window has the same field set to the context id of the first window (even though window.opener is null because we used a rel=noopener link).

{
  "type": "success",
  "id": 6,
  "result": {
    "contexts": [
      {
        "children": [],
        "context": "dab18807-4b2e-4b10-b046-5c60c2b15811",
        "originalOpener": null,
        "url": "https://test-bidi-openerfield.glitch.me/",
        "userContext": "default",
        "parent": null
      },
      {
        "children": [],
        "context": "6142ff68-1185-4a9f-b08e-0932d797ef15",
        "originalOpener": "dab18807-4b2e-4b10-b046-5c60c2b15811",
        "url": "https://test-bidi-openerfield.glitch.me/page2.html",
        "userContext": "default",
        "parent": null
      }
    ]
  }
}

Added support for data URLs with network events

Starting with Firefox 129, you will receive network events (network.beforeRequestSent, network.responseStarted and network.responseCompleted) for requests using data URLs. Note that those requests cannot be intercepted. At the moment we only emit events for navigation requests, but this limitation should be lifted in Firefox 130.

Firefox window with a single tab opened on a data url, which reads "Hello I am a data URL"<figcaption class="wp-element-caption">A page loaded using a data URL</figcaption>

A snippet from the network.responseCompleted event corresponding to the page displayed above can be found below:

{
  "type": "event",
  "method": "network.responseCompleted",
  "params": {
    "context": "307aaff5-164a-4e88-9453-cdddd26ad748",
    "isBlocked": false,
    "navigation": "25df080a-5492-4f53-b68c-b2dc685c202f",
    "redirectCount": 0,
    "request": {
      "request": "7",
      "url": "data:text/html,Hello I am a data URL!",
      "method": "GET",
      [...]
    },
    "timestamp": 1722873995763,
    "response": {
      "url": "data:text/html,Hello I am a data URL!",
      "protocol": "data",
      "status": 200,
      "statusText": "OK",
      "fromCache": false,
      [...]
    }
  }
}

Support for the promptUnload argument for browsingContext.close

The new argument promptUnload for browsingContext.close allows to bypass "beforeUnload" type prompts. This argument is a boolean, set it to true in order to show "beforeUnload" prompts or to false to bypass them. This parameter is optional, and if omitted it will default to false (meaning "beforeUnload" prompts will be bypassed).

A firefox window with a single tab showing a "beforeunload" popup.<figcaption class="wp-element-caption">Use promptUnload=true to allow "beforeUnload" prompts with browsingContext.close </figcaption>

Bug fixes

Mozilla Localization (L10N)L10n report: August 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet.

New content and projects

What’s new or coming up in Firefox desktop

Last month you may have seen “Firefox Labs” while translating in the Firefox project. In the coming months a number of new experimental features are being made available in Firefox through Firefox Labs, allowing users to test out and provide feedback (through Mozilla connect) on in-development features. You will be able to turn those features on and off by navigating to your about:settings page and clicking ”Firefox Labs.” You can test it out yourself in Nightly right now.

Starting from the upcoming Firefox version 131 you should start seeing strings to localize for a number of new experimental features.

AI Chatbot

You may have noticed this feature in the current version of Nightly already. With this enabled, you can add AI chatbots such as ChatGPT to the sidebar. When added, users can also select text on a page and use the context menu to choose a pre-generated prompt. This feature is being opened for localization in version 131, and in addition to the regular UI strings you would expect, the prompts for sending to the chatbot will also be available to localize.

Localizing chatbot prompts

You can localize these prompts as usual, but you may want to test potential prompts out to see the quality of the results returned and tweak if necessary. Please find some additional background information from the development team to help you when localizing these:

Starting with Firefox version 130, users can choose to add an AI chatbot to their browser. This feature will be added to the Settings > Firefox Labs page, where interested users can choose to try it out. The chatbots users can choose from: Anthropic Claude, ChatGPT, Google Gemini, Hugging Chat, Le Chat Mistral.

In addition to having the chatbot in the sidebar, when users select text on a webpage, we will suggest several actions the user can ask the chatbot to perform. Selecting an action sends the selection, the page title, and a prompt that we have written to the chatbot provider.

Prompts are the plain language ‘instructions’ for the chatbot and will be visible in the provider’s interface.

About our prompts

This table lists the actions, their purpose, and the prompt.

Action Purpose Prompt 
Summarize Help users understand what a selection covers at a glance Please summarize the selection using precise and concise language. Use headers and bulleted lists in the summary, to make it scannable. Maintain the meaning and factual accuracy. 
Explain this Help users understand unfamiliar words and topics Please explain the key concepts in this selection, using simple words. Also, use examples.
Simplify language Make a selection easier to read Please rewrite the selection using short sentences and simple words. Maintain the meaning and factual accuracy.
Quiz me Test understanding of selection in an interactive way Please quiz me on this selection. Ask me a variety of types of questions, for example multiple choice, true or false, and short answer. Wait for my response before moving on to the next question

Writing style of prompts

In English, we have made the prompts concise and direct for a few reasons:

Some providers have character restrictions around how much can be input into their chat interface (the ‘context window’). The length of the prompt plus the length of the selection are included in this character count.

Being direct provides less room for misinterpretation of the instructions.

When localizing, please strive also for being concise and direct, but not at the expense of losing meaning. We understand this style may feel more “formal” than some of our other strings.

Sidebar customization / Vertical tabs

In addition to the AI chatbot mentioned above, more changes to the sidebar are in the works including the addition of vertical tabs. Keep your eye out for this experiment and associated strings coming in 131.

Upcoming features

In addition to the experiments planned for 131, there are more new features we can look forward to in later versions. Currently in active development are features related to profile management as well as creation of encrypted backups of your Firefox data.

What’s new or coming up in mobile

Firefox for Android has two exciting new features, and we’d love your help testing them out! Please use the Nightly version in both cases (which is the version you should be using anyways in order to test your localization work).

The first one is the Translation feature, which you can access by navigating to any website, and then going to Settings > Translate page. Play around with the feature, for example you can translate a page from English to French, and then from French to another language you may speak.

If you encounter any problems whatsoever, please file a bug here, under the Component “Translations”. Under “Type”, chose “Defect”.

Secondly, there is an entire toolbar menu redesign! This is not available by default on Nightly yet, so you will have to enable it through Secret Settings. To do so, go to Settings > About Firefox Nightly, and click 5 times on the Firefox Nightly logo. This will enable the Secret Settings, which you can access by clicking on the back arrow (which brings you back to Settings). Scroll down until you see “Secret Settings”. Then select both “Enable Navigation Toolbar” and “Enable Menu Redesign”. You’ll immediately notice the difference once you navigate via the bottom toolbar.

Please play around with this new feature as much as possible in your language – look out especially for truncations, as we expect to see quite a few.

If you encounter any problems whatsoever, please file a bug here, under the Component “Toolbar”. Under “Type”, chose “Defect”.

Firefox for iOS is expected to incorporate these changes in the future; however, that work has not started yet.

What’s new or coming up in SUMO

The next community call is coming up on August 7, 2024. We’ll talk about what’s coming in Firefox 129 as well as have a discussion with the lead editor of the IRL podcast to talk about their next season, “AI and Me.” Join us on Wednesday, August 7, 5pm UTC!

If you want to get updated on the upcoming Firefox release, check out our release wiki page for Firefox 129 to stay updated with known issues/dot releases. We’ve been doing this since Firefox 126 and it’s pretty well-received by the community.

Recently, we also teamed up with the Firefox team to organize the Firefox third-party installer campaign. As a result, we received 1,844 reports in total, identified 683 unique third-party websites and 105 unique download links. The Firefox team is currently conducting further investigations with the QA team based on these reports.

Apart from that, check out the contributor spotlight content that we published recently, and learn more about what we’ve done in Q2 from this blog post.

Events

This month we hosted Erik Nordin, Marco Castelluccio, and Greg Tatum from the Firefox Translations team for a virtual interview. We covered topics such as how the Firefox translation feature works, privacy features, incorporating LLMs and AI, and more. The stream recording will be available to view at any time. You can watch the recording on Air Mozilla or YouTube.

Please provide your feedback on this event through this form so we can make our future events even better!

In June we also hosted a Pontoon demo, which covers all the basic functionality you’ll need to get started translating on Pontoon, plus handy tips and tricks to help you get the most out of this easy to use tool.

Come check out all our event videos here!

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Firefox Developer ExperienceFirefox DevTools Newsletter — 128

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 128 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla

  • Sebastian Zartner added warnings in the Rules view when properties only applying to replaced elements are used on non-replaced elements (#1583903), and when column-span is used on elements outside of multi-column containers (#1848705)
  • Pier Angelo Vendrame made the new request data in the Network Monitor not being persisted in private windows (#1892052)

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues

@property

Firefox 128 adds support for the @property at-rules.

It allows developers to explicitly define their CSS custom properties, allowing for property type checking and constraining, setting default values, and defining whether a custom property can inherit values or not.

https://developer.mozilla.org/en-US/docs/Web/CSS/@property

All registered properties (either via the at-rule or from the CSS.registerProperty function) are displayed in the Inspector Rules view, in a dedicated @property section (#1841023).

Firefox DevTools Rules view. There's a collapsible "@property" section. In the section, there are multiple custom properties declaration (showing the variable name, its syntax, inherits and initial-value properties)

The registered property information are also displayed in the variable tooltip that is displayed when hovering a CSS variable in the rule view (#1899489). When a registered property is not set, its initial value is used, and the tooltip does use it to show the variable value (#1857526)

Firefox DevTools rules view focusing on a single rule. There's a `color: var(-css-inherits)` declaration. A tooltip is displayed showing the CSS variable value and a new section below, showing the registered custom property syntax, inherits and initial-value properties

One of the main advantage of registered property is to be able to have type checking directly in CSS! Whenever a variable is set and doesn’t match the registered property syntax, it is invalid at computed value time. In such case, a new type of error icon is displayed in the rules view, and its tooltip indicate why the value is invalid and what is the expected syntax (#1866712).

Firefox DevTools rules view. There's a rule with a `--css: 1em` declaration, which has a purple error icon at the end. The tooltip for the icon is displayed, and has the following text: "Property value does not match expected "<color>" syntax" Below it, we can see the @property section, showing that the --css syntax indeed is "<color>".

We also added reference to @property rules in the Style Editor (#1886392), and all registered properties will be displayed in the Rules view var() autocomplete (#1867595).

Other Inspector updates

On top of registered properties, Firefox 128 also adds support for relative color syntax, which makes it easy to create a new color from an existing one. For example hsl(from red h 10 l) will create a lightly-saturated, red-ish grey color. The resulting color will be displayed before the color function, like we already do for regular colors.

And since we’re talking color swatches, we also display them for the light-dark() parameters (#1899106).

The specificity of a CSS rule impacts if the rule declarations will take precedence over other rules matching the same element. This is already indirectly visible in the Inspector, as the rules are displayed from the most to the least specific ones. You can now see the specificity of a rule by hovering its selectors in the Rules view (#977098).

Finally we identify two important bugs in the Inspector:

  • Inspecting some elements could cause a browser crash in very specific conditions (#1901233)
  • Entering a single quote in a declaration value in the Rules view could break the style of the page (regressed in Firefox 127)

Both issues are now fixed, we apologies for any inconvenience this might have caused.

Debugger

There wasn’t much (visible) activity in the Debugger for this release, but we also addressed a couple issues that could make the Debugger frustrating. A change in Firefox 126 was preventing to load credentials-protected source maps file and we did fix it in 128. There was also a long problem where the Debugger would stay blank on websites with workers using Atomics.wait (for example Stackblitz). The issue was addressed and everything should run smoothly now.

And that’s it for this months folks, Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 128 release:

The Mozilla BlogStreamline your schoolwork with Firefox’s PDF editor

As a student pursuing a master’s degree, I’ve spent too much time searching for PDF editors to fill out forms, take notes and complete projects. I discovered Firefox’s built-in PDF editor while interning at Mozilla as a corporate communications intern. No more giving out my email address or downloading dubious software, which often risks data. The built-in PDF tool on Firefox is a secure, efficient solution that saves me time. Here’s how it has made my academic life easier. 

Fill out applications and forms effortlessly

Remember those days when you had to print a form, fill it out and then scan it back into your computer? I know, tedious. With Firefox’s PDF editor, you can fill in forms online directly from your browser. Just open the PDF in Firefox on your smartphone or computer, click the “text” button, and you’re all set to type away. It’s a gamechanger for all those scholarship applications and administrative forms, or even adult-life documents we consistently have to fill. 

Using the text tool in a PDF editor to add and edit text with options for color and size.

Highlight and annotate lecture slides for efficient note-taking

I used to print my professors’ lecture slides and study materials just to add notes. Now, I keep my annotations within the browser – highlighting key points and adding notes. You can even choose your text size and color. This capability not only enhances my note-taking, it saves some trees too. No more losing 50-page printed slides around campus. 

Highlighting text and adding notes in a PDF using the highlight tool.

Sign documents electronically without hassle

Signing a PDF document was the single biggest dread I had as a millennial, a simple task made difficult. I used to have to search “free PDF editor” online, giving my personal information to make an account in order to use free software. Firefox makes it simple. Here’s how: Click the draw icon, select your preferred color and thickness, and draw directly on the document. Signing documents electronically finally feels like a 21st century achievement. 

Using the underline tool in a PDF editor to underline and correct text with options for color, thickness, and opacity.

Easily insert and customize images in your PDFs

Sometimes, adding an image to your PDF is necessary, whether it’s a graph for a report or a picture for a project. Firefox lets you upload and adjust images right within the PDF. You can even add alternative text or alt-text to make your documents more accessible, ensuring everyone in your group can understand your work.

A PDF editor displaying a red fox photo with an alt-text box open, suggesting "A red fox looking into the distance."

There are endless ways to make Firefox your own, however you choose to navigate the internet. We want to know how you customize Firefox. Let us know and tag us on X or Instagram at @Firefox.

Get Firefox

Get the browser that protects what’s important

The post Streamline your schoolwork with Firefox’s PDF editor appeared first on The Mozilla Blog.

About:CommunityHow community helped to shed the lights on Firefox unknown funnel

Community has always been a vital part of Firefox, from the first version, and even more so now. The recent Firefox third-party installer campaign, held online, underscores the importance of community participation to us. The main goal of the campaign was to help the Firefox team gather as much information as possible about third-party websites that offer Firefox desktop downloads.

As a disclaimer, for the best browsing experience, we recommend that users download Firefox through our official distribution channels on Mozilla.org  or the Microsoft store, even though Firefox is available for download on many third-party websites.

Anyone may distribute unaltered copies of Mozilla softwares from Mozilla.org without expressed permission, as long as they comply with our distribution policy. However, we noticed that the quality of these distributions can suffer from lack of maintenance.

The Firefox team aimed to audit these third-party websites in order to work with them towards higher-quality distribution. This is crucial because these installers can put Firefox users at risk by providing an outdated version, a build with the wrong locale, or even malicious installations, leading to security risks and a poor user experience. We also noticed that users who install Firefox from unofficial sources tend to have lower new user retention.

Knowing that the team couldn’t solve this alone, the Firefox team joined forces with the Customer Experience team, a.k.a. support.mozilla.org, to design a community campaign for this endeavor.

Preparation began around the end of May 2024. We used a past similar campaign as a blueprint for this activity. Finally, we launched the campaign on the community portal and used Alchemer to host our submission form, allowing us to localize it. Despite the short time to prepare the campaign, we engaged with around 25 community localizers to help translate the campaign materials into different languages. This enabled us to offer the campaign in 20 locales, including en-US, pt-BR, pt-PT, es-MX, es, fr, de, nl, it, pl, el, tr, hi-IN, id, zh-CN, zh-TW, tl, ko, ja, vi, and ru.

As a result of this campaign, we received 1,844 reports in total. From these reports, we identified 683 unique third-party websites and 105 unique download links. The Firefox team is currently conducting further investigations with the QA team based on these reports.

We would like to extend our appreciation to the community members who participated in this campaign. We identified 47 people who submitted at least 10 valid reports. Special thanks to jonas-w, Preet Vaishnav, ngoclong19, Santiago FN, Zeb, Virus Killer, MathDesigns, Shashank Shekhar, Igor Maciejewski, Paul Heil, J.D., HAKANKOKCU, DJ F.T.S, Berk Demirag, Cristian E. Rodriguez, twistqj, Sebastian Paczoski, Ella Akkaya, Deepak Kumar, Ali Fleih, linjingsong666, Marcelo Ghelman, Nathan Verkerk, Woksup604, Pavel “NaTRenKO” Bernatski, VIKRAM.S, Sören Hentzschel, Cody Ortt, Wedone, Adri, Josh S, Kuvam Bhanot, Caleb Hawkins, Yutaro U, william A, aquaponieee, williammmm, Magnus Bache, Khalid Duel, Ryan Pratt, Sean B, Léo (Leeo) P, Bentouati Imadeddine, AyJay, Romar Mayer Micabalo, and wrkus.

Additionally, we’d like to sincerely thank our community localizers who helped translate the campaign landing page and submission form. Thank you to Marcelo, Luis, Gerardo, Pierre, Artist, Cláudio, Mark Heijl, Wim, Tim Maks, Michele, Chris, Jim, Selim, Mahtab, Fauzan, Lidya, Haoran, Wxie, Irvin, Ronx, Bob, Hyeonseok, Daisuke, Quế Tùng, and Dmitry. We couldn’t have done this without your contributions, and we cannot thank you enough.

Of course, many other parties were involved, including many other participants whom we can’t mention one by one. You’ve all been awesome, and we sincerely appreciate all your contributions to this project.

Mozilla Privacy BlogNTIA Affirms the Importance of Openness in AI

This week, we got an eagerly anticipated look at how the US Government is thinking about openness and AI, a question that we’re focusing on a lot at Mozilla. On Tuesday, the National Telecommunications and Information Administration (NTIA) published their review of “open-weight” foundation models, and the risks, benefits, and potential policy approaches. NTIA’s report is the result of a public consultation process coming out of the Biden Administration’s Executive Order on AI. Mozilla weighed in with our views, sharing our history in the open source movement and the value of openness in AI.

We welcome NTIA’s recommendation that the US government take no current action to restrict open foundation models; this approach supports the ability of open source AI to flourish, promoting the benefits that openness can bring, like innovation, competition, and accountability.

In their report, NTIA rightly notes that “current evidence is not sufficient to definitively determine either that restrictions on such open-weight models are warranted or that restrictions will never be appropriate in the future.” Instead of recommending restrictions, NTIA suggests that the government “actively monitor…risks that could arise from dual-use foundation models with widely available model weights and take steps to ensure that the government is prepared to act if heightened risks emerge.”

NTIA’s recommendations for collecting and evaluating relevant evidence include support for external research, increasing transparency, and bolstering federal government expert capabilities. We welcome this approach, and in our comments we called for governments to play a role in “promoting and funding research”; we agree that it will help us all better understand and navigate the AI landscape.

Competition is also featured in NTIA’s report. While it’s recognized that open-weight models alone aren’t sufficient to bring about lasting and meaningful competition in the ecosystem, NTIA notes that foundation models can lower market barriers to entry and “decentralize AI market control from a few large AI developers.” Mozilla’s story began when we sought to bring much-needed competition to the browser market. We’ve watched with concern as similar patterns of concentration emerged in the AI ecosystem, and agree that openness in AI can contribute to a solution.

As we noted in our submission to NTIA, “[g]ood policymaking on AI, and on openness in AI in particular, therefore requires a careful balancing of benefits and risks as well as analytical rigor in taking into account the various dimensions and actors in the AI ecosystem. Rash decisions and ill-considered solutions may cause irreparable damage to the ‘open source’ AI ecosystem.”

We appreciate that NTIA has heard these concerns in their work and wholeheartedly support the broader lens through which they considered these tough questions around openness and AI – including recognizing the spectrum of openness, and centering analysis on marginal risk. Building on NTIA’s own recommendations, we also reiterate the importance of providing robust support to the open source AI ecosystem and involving a broad range of stakeholders in making decisions about its governance, including through measures such as:

  • Supporting the open source AI community in developing norms and practices around responsibly developing and openly releasing AI models and components.
  • Investing in and providing resources for the development and maintenance of ‘open source’ AI.
  • Involving federal agencies responsible for protecting civil rights, promoting competition, and advancing scientific research in the development of policy touching on openness in AI.

Mozilla was grateful for the opportunity to share our views as part of this important process. We believe that supporting openness in AI and tackling the challenging questions that the technology presents – including as a partner to policymakers in efforts like this one – can create a safer, more trustworthy, and better AI future for all.

The post NTIA Affirms the Importance of Openness in AI appeared first on Open Policy & Advocacy.

Don Martia new browser feature?

The Web’s hottest new feature is Privacy-Preserving Corporate Information Sharing (PPCIS).

When a corporate employee uses a PPCIS browser to log in to any of their employer’s web applications, such as

  • shared document editor

  • webmail

  • bug tracker

  • Slack

PPCIS automatically uses its built-in AI to make a totally privacy-preserving summary of the employee’s work activity, then posts the summary to a PPCIS server using really cool math that makes it possible to identify the employer but not the individual.

The PPCIS server then aggregates all PPCIS summaries from all the users at a company to make a report that is shared with any customer or prospective customer who visits the company’s public web site.

PPCIS is not a real feature, but do you think that, if it existed, corporate IT departments would leave it turned on? If the answer is no, why would people want privacy-preserving tracking of their personal web activity? More: PET projects or real privacy?

Related

Return of the power user Growth hacking and endshittification have gone far enough that the gain from computer dinking skills is now greater than it was in the days of X modelines and autoexec.bat files.

Bonus links

San Francisco to ban software that “enables price collusion” by landlords (oh, no! somebody think of the innovation!)

Using the term ‘AI’ in product descriptions reduces purchase intentions Researchers also discovered that negative response to AI disclosure was even stronger for high-risk products and services, those which people commonly feel more uncertain or anxious about buying, such as expensive electronics, medical devices or financial services. Because failure carries more potential risk, which may include monetary loss or danger to physical safety, mentioning AI for these types of descriptions may make consumers more wary and less likely to purchase…

Amazon forced to recall 400K products that could kill, electrocute people An Amazon spokesperson told Ars that Amazon plans to appeal the ruling.

The Mozilla BlogBrowsers, cookies and surfing the web: The quirky history of internet lingo

An illustration of speech balloons

A smiling woman with long dark hair, wearing colorful earrings and a navy blue polka dot top, in front of a turquoise background.
Dr. Erica Brozovsky is a sociolinguist, a public scholar and a lover of words. She is the host of Otherwords, a PBS series on language and linguistics, and a professor of writing and rhetoric at Worcester Polytechnic Institute. You can find her at @ericabrozovsky on most platforms. Photo: Kelly Zhu

The internet is ubiquitous: on our desks, in our pockets, even in the air around us, as radio waves transmit between devices so we can be online on the move. It’s a sprawling web of interconnectivity, linking people and gadgets around the world. When computer scientist Tim Berners-Lee wrote his first proposal for a hypertext project called WorldWideWeb in 1989, there’s no way he could have known the impact his invention would have on billions of people across the globe, which he confirmed in a 2014 Reddit AMA

In the 35 years since the invention of the World Wide Web, an explosion of new internet words has emerged. As new technologies develop, we adopt words or create novel ones to fill in the linguistic gaps. For example, to describe one of the advancements of the Industrial Revolution, the word train was extended from its older definitions as a procession or sequence of objects in a row. And the steam-powered vehicle pulling a train of railway cars? That brand new technology needed an innovative name: locomotive. People often incorrectly think locomotives and trains are synonymous, and are similarly mistaken with the internet and the World Wide Web. To keep the transportation analogy going, the internet is the railway system, the data that moves between sites or sends emails is the train, and the World Wide Web is the scenery and points of interest along the route.

A stylized white "W" with green shadows on a blank background. Text: "Let's share what we know. World Wide Web."<figcaption class="wp-element-caption">WWW’s “historical” logo, created by Robert Cailliau in 1990. Source: Wikimedia Commons</figcaption>

As far as names go, internet and World Wide Web make sense. The words visualize interconnectedness. Other internet terms like bookmark, which functions digitally the same way as a tangible piece of material is used to denote a place in a book, and email (an abbreviation of electronic mail), show clear parallels in meaning to their analog counterparts. Websites are locations, or sites, on the web. Domains are subsets of the internet under the control of a single authority, much like a physical territory that a ruler would have dominion over. And if you know that the prefix hyper- means above or beyond, you’ll understand that hypertext and hyperlinks essentially go beyond the constraints of normal text and links. But not all internet words are so etymologically evident, and some even come with stories. Let’s start at the very beginning.

Illustration of a large "Click me" button with a hand-shaped cursor hovering over it, surrounded by retro-style web browser windows in pink, blue, and purple hues.<figcaption class="wp-element-caption">If you know that the prefix hyper- means above or beyond, you’ll understand that hypertext and hyperlinks essentially go beyond the constraints of normal text and links. </figcaption>

When you access the internet, you open a browser (which first appeared as the acronym BROWSER for BRowsing On-Line With SElective Retrieval) and begin to navigate around, otherwise known as surfing the internet. The term is often attributed to librarian Jean Armour Polly, who wanted a pithy metaphor for the fun and chaos of navigating the online world for her 1992 article’s title. Polly wasn’t the only one with a penchant for riding waves online: a 1991 comic book “The Adventures of Captain Internet and CERF Boy” published by CERFnet depicted a superhero who literally surfed around on a surfboard answering internet cries for help.

Keeping it oceanic, the term phishing is attributed to hacker Khan C. Smith in the mid 1990s, allegedly based on the homophone fishing: trawling for sensitive information from a sea of internet users. The alternative spelling is a nod to phreaking, which was a way of hacking telephones (hence “ph”) to avoid paying long-distance phone charges (remember those?). And speaking of pesky things, the word spam comes from an iconic 1970 Monty Python’s Flying Circus sketch wherein a horde of Viking cafe-goers repeatedly sing the menu item Spam, drowning out all other conversation.

That’s not the only internet food you’ll encounter. It seems like every website you access will ask you to accept cookies in order to personalize your experience, but weren’t we all raised not to accept sweets from strangers? So where did the name come from? Programmer Lou Montulli got the idea for the web version of cookies from the Unix data token term magic cookie, which sounds even more questionable to accept. There has been no confirmed origin of “magic cookie,” but three main theories prevail: drugs, fairy tales, and literal cookies. Perhaps it comes from the 1960s comic strip “Odd Bodkins” that uses magic cookie as a euphemism for LSD. Or maybe much like the Hansel and Gretel crumb trail, browsing the internet leaves a stream of cookie data in your wake. Or potentially the connection is less imaginative: cookie jars store cookies the way browsers store information.

When you go to delete cookies on your machine, you’ll also be asked if you want to clear your cache. Cache has been around since the turn of the 18th century as a hiding place for goods and treasures, from the French cacher meaning “to hide.” The word was first applied to computing in 1967 by IBM Systems Journal editor, Lyle R. Johnson. Apparently no one had any suggestions for a substitute for the clunky phrase “High Speed Buffer” so Johnson sat down with a thesaurus and came up with cache. 

Another cache on your device is the download cache. Downloading was originally used in military contexts to refer to unloading people or goods from various military vehicles (and uploading was the reverse). By 1968, the US Air Force extended the meaning to computers, as discussed in a quantitative study that referenced downloading records from the IBM 305 RAMAC computer to the newer IBM 1050, which took almost two weeks.

“While we stare at our phones and computer screens, it’s a nice reminder that the intention behind these technologies was to connect us together.”

Dr. Erica Brozovsky, sociolinguist

Thirty years later, Jorn Barger coined the term WebLog, a portmanteau of web and log, to refer to online personal journals. In 1999, perhaps as a joke, Peter Merholz posted in the sidebar of his own website: “”For What It’s Worth, I’ve decided to pronounce the word “weblog” as wee’- blog. Or “blog” for short.”” And now blog has generated other new words like vlog, blogosphere, and blogger. 

Avatar derives from the Sanskrit avatāra, meaning “descent,” which in Hinduism referred to the manifestation of a deity into an Earthly terrestrial form. The 1985 computer game Ultima IV: Quest of the Avatar was the first application of the concept of an on-screen character as the digital incarnation of the human user. Neal Stephenson’s novel Snow Crash popularized the idea, which continues to be applied across a wide variety of genres: video games, social media, virtual worlds, even Hollywood blockbusters.

Speaking of width, bandwidth was initially very literal in the 1800s (the width of a band of color or material) and then evolved significantly over 200 years. We can follow the logical progression to physics and mathematics (a range of values within a limited band), to physics and telecommunications (the difference between two frequencies which represents transmission capacity) to computers and telecommunications (data transfer capacity) to general life (emotional or physical capacity). It’s curious how the term moved beyond computers and technology back to the human experience.

We’re humans after all, and the internet and World Wide Web are tools for expanding our human experience. While we stare at our phones and computer screens, it’s a nice reminder that the intention behind these technologies was to connect us together. After all, Tim Berners-Lee said the thing he’s most proud of about the World Wide Web is “the wonderful global collaborative spirit of all the people who turned up to help build it and build things on it.”

Get Firefox

Get the browser that protects what’s important

The post Browsers, cookies and surfing the web: The quirky history of internet lingo appeared first on The Mozilla Blog.

Tantek ÇelikChoosing Tools

One of the biggest challenges with tools for making things, even specific to making web things, is there are so many tools to choose from. Nearly every tool has a learning curve to overcome before being able to use it efficiently. With proficiency, comes the ability to pursue more efficient use of tools, and find limitations, papercuts, or outright bugs in the tools. If it’s an open source tool or you know its creator you can file or submit a bug report or feature request accordingly, which might result in an improved tool, eventually, or not. You have to decide whether any such tool is good enough, with tolerable faults, or if they’re bad enough to consider switching tools, or so bad that you are compelled to make your own.

This post is my entry for the 2024 July IndieWeb Carnival theme of tools, hosted by James G., and also syndicated to IndieNews.

Criteria

I have many criteria for how I choose the tools I use, but nearly all of them come down to maximizing trust, efficiency, and focus, while minimizing frustration, overhead, and distraction. Some of these are baseline criteria for whether I will use a tool or not, and some are comparative criteria which help me decide which tool I choose from several options.

Trustworthy tools should be:

  • Predictable — it should be clear what the tool will do
  • Dependable — the tool should “just work” as close to 100% of the time as possible
  • Acting as you direct it — the tool should do exactly what you direct it to do, and not what other parties, such as its creator or service provider, direct it to do
  • Forgiving — if you make a mistake, you should be able to undo or otherwise correct your mistake
  • Robust enough to keep working even when not used for a while

Efficient tools should:

  • Be quick and easy to start using
  • Be responsive, with as low a latency as possible, ideally zero perceptible latency
  • Reduce the work necessary to complete a task, or complete multiple tasks with same amount of work otherwise
  • Reduce the time necessary to complete a task, or complete multiple tasks in the same amount of time otherwise
  • Be quick and easy to shut down, or otherwise put away
  • Use little or no energy when not in use

Focused and focusing tools should

  • Provide clear features for accomplishing your goals
  • Encourage or reinforce focusing on your tasks and goals
  • Never interrupt you when you are using the tool to accomplish a task

Bad tools can have many sources of frustration, and nearly all of these involve inversions of the desirable qualities noted above. Frustrating tools are less predictable, work only some of the time, randomly do things because some other party directed them to (like auto-upgrade), ask you to confirm before doing actions because they have no capability to undo, or stop working for no particular reason.

Inefficient tools take too long to be “ready” to use, are unresponsive of otherwise have a delay when you provide input before they respond, cause you more work to complete a task, or make you take more time than simpler older tools would, require waiting for them to shut down, or use energy even when you are not doing anything with them.

Unfocused tools have many (nearly) useless features that have nothing to do with your goals, encourage or distract you with actions irrelevant to your tasks or goals, or interrupt you when you are working on a task.

Baseline Writing Tools

Examples of tools that satisfy all the above:

  • Pencil (with eraser) and paper
  • A typewriter (ideally with a whiteout band) and paper

That’s it, those are the baseline. When considering an alternative tool for similar tasks, such as writing, see if it measures up to those.

Tools I Like Using

For myself, I prefer to use:

Tools I Tolerate Using

I do also use the iOS and MacOS “Notes” app notes to sometimes write parts of posts, and sync text notes across devices, both of which have unfortunately become just frustrating enough to be barely tolerable to use.

iOS Notes (as of iOS 17.5) are buggy when you scroll them and try to add to or edit the middle of notes. MacOS Notes have a very annoying feature where it tries to autocomplete names of people in your contacts when you type even just the first letter of their name or an @-sign, when you rarely if ever want that. MacOS Notes also forces anything that starts with a # (hash or pound) sign into a weird auto-linked hashtag that is nearly useless and breaks text selection.

There are no options or preferences to turn off or disable these annoying supposedly “helpful” automatic features.

There’s definitely an opportunity for a simple, reliable, easy to sync across devices, plain text notes application to replace iOS and MacOS notes, that doesn’t require signing up to some third-party service that will inevitably shut down or sell your information to advertisers or companies training their LLMs or leak your information due to poor security practices.

Similarly I also frequently use Gmail and Google Docs in my day-to-day work, and I’ve grown relatively used to their lagginess, limitations, and weird user interface quirks. I use them as necessary for work and collaboration and otherwise do my best to minimize time spent in them.

Better Tools

I have focused primarily on writing tools, however I have made many distinct choices for personal web development tools as well, from writing mostly directly in HTML and CSS, to bits in PHP and JavaScript, rather than frameworks that demand regular updates that I cannot trust to not break my code. I myself try to build tools that aspire to the criteria listed above.

At a high level, new tools should provide at least one of three things:

  1. Higher efficiency and/or quality: tools should help you do what you already could do, but faster, better, cheaper, and more precisely
  2. Democratization: tools should help more people do what only a few could do before
  3. Novelty: tools should help you do new things that were either impossible before or not even imagined

Mostly I prefer to focus on the first of those, as there are plenty of “obvious” improvements to be made beyond existing tools, and such improvements have much more predictable effects. While democratization of tools is nearly always a good thing, I can think of a small handful of examples that demonstrate that sometimes it is not. That’s worth a separate post.

Lastly, tools that help accomplish novel tasks that were previously impossible or not even imagined perhaps have the greatest risks and uncertainty, and thus I am ok with postponing exploring them for now.

I wrote a few general thoughts on what tools and innovations to pursue and considerations thereof in my prior post: Responsible Inventing.

Referencing Articles

Articles also written about this topic that reference this article.

  1. 2024-08-01 Tom Morris: A world run by tools

Mozilla ThunderbirdThunderbird Monthly Development Digest: July 2024

Graphic with text "Thunderbird Dev Digest July 2024," featuring abstract ASCII art of a dark Thunderbird logo background.

Hello Thunderbird Community! As we say goodbye to the month of July, we look back at our major accomplishments and the release of a new ESR version.

ESR Released!

Thunderbird 128 “Nebula” is finally out and we couldn’t be more thrilled. 

We fixed more than 1400 bugs, included multiple new features, cleaned up a lot of old code, and enabled Rust development. There’s too much to list so if you’re interested please visit our fancy 128 What’s New Page, blog post, and Release notes to get a much deeper overview of all the juicy things you will get.

We do lots of QA and beta testing, but sometimes major issues are only exposed after significant public testing. That’s why we always roll out a new ESR release gradually. Once we’re confident no problems or regressions exist, we’ll turn on automatic updates — probably towards the end of August.

However, we have enabled manual updates for Windows and macOS users. If you open the About dialogue, you should receive a prompt to update. 

If you’re using Flatpak or Snap on Linux, you are probably on version 128 already. For those who receive Thunderbird updates through their Linux distribution’s repositories, the experience may vary depending on the package maintainer. We don’t have control over that, so please reach out to your distro’s maintainer and ask if they have a timeline.

Linux System Tray

A 25-year-old bug was finally fixed!

If you’re running Daily on Linux, you probably noticed a fancy new system tray icon with a quick action to shut down Thunderbird. This is merely the first step towards a more native integration of Thunderbird inside your operating system, not just Linux.

Stay tuned for more improvements and expansion of this new feature. We promise we’ll try to not take another 25 years!

Folder Compaction Cleanup

Our fight to improve folder compaction and solve for good the issue of tmp files bubbling up in size seems to have come to an end. It was challenging to identify the problem, and even more to create a consistent reproducible scenario.

As all the users affected seem to confirm the disappearance of the issue, we took some time to create a migration to clean up those large leftover temporary files polluting your profile.

We’re gonna run this code in Daily and Beta for a few more weeks to make sure it’s safe and tested properly before uplifting it to ESR.

Exchange

As we continue the implementation of a few more features to make sure the full experience is reliable and complete-looking, we decided to switch the preference ON by default on Daily, in order to invite more testing and gather feedback and bugs as early as possible.

If you’re running Daily and have an Exchange account, please consider setting it up in Thunderbird and report any bug you might encounter.

As usual, if you want to see things as they land you can always check the pushlog and try running daily, which would be immensely helpful for catching bugs early.

See ya next month,

Alessandro Castellani (he, him)
Director, Desktop and Mobile Apps

If you’re interested in joining the technical discussion around Thunderbird development, consider joining one or several of our mailing list groups here.

The post Thunderbird Monthly Development Digest: July 2024 appeared first on The Thunderbird Blog.

The Servo BlogThis month in Servo: console logging, parallel tables, OpenXR, and more!

Servo displaying WebXR content on a Quest 3 in Quest Link mode <figcaption>Figure 1: Servo can now render to XR headsets via OpenXR. Image: Daniel Adams (Twitter)</figcaption>

Servo has had several new features land in our nightly builds over the last month:

We’ve also landed an experimental OpenXR backend (@msub2, #32817), allowing Servo to display WebXR content on actual headsets like the Quest 3 in Quest Link mode. You can enable it with --pref dom.webxr.openxr.enabled, though the backend currently only works on Windows.

Servo nightly showing a table with a caption, containing demos of several other new features <figcaption>Figure 2: a table with a caption, containing demos of several other new features.</figcaption>

Rendering changes

Parallel table layout is now enabled (@mrobinson, #32477), spreading the work for laying out rows and their columns over all available CPU cores. This change is a great example of the strengths of Rayon and the opportunistic parallelism in Servo’s layout engine.

We‘ve also made progress on our new flexbox layout engine (--pref layout.flexbox.enabled), landing support for ‘min-height’ and ‘max-height’ on row containers (@delan, @mrobinson, @mukilan, #32785), as well as baseline alignment of row containers with their siblings (@mrobinson, @mukilan, @delan, #32841, #32810) and for their items by setting ‘align-items’ or ‘align-self’ to ‘baseline’, ‘first baseline’, or ‘last baseline’ (@delan, @mrobinson, @mukilan, @nicoburns, #32787, #32790).

We’ve landed support for generic font families like ‘sans-serif’ and ‘monospace’ (@mrobinson, @mukilan, #32673), as well as commas in <font face> (@mrobinson, #32622) and fixes for font matching on Android and OpenHarmony (@jschwe, #32725, #32731).

For replaced elements like <img> and <canvas>, the ‘min-width’, ‘max-width’, ‘min-height’, and ‘max-height’ properties now respect the aspect ratio of the element (@valadaptive, #32777), and you can now change that aspect ratio with the ‘aspect-ratio’ property (@valadaptive, #32800, #32803).

Firefox devtools connected to Servo, showing several console errors <figcaption>Figure 3: console logging is now supported when using the Firefox devtools.</figcaption>

Devtools and servoshell changes

When debugging in Servo with the Firefox devtools, you can now see console messages from the page (@eerii, #32727), as shown in Figure 3, and you can even debug the devtools connection itself with our new devtools protocol analyzer (@eerii, #32684).

servoshell now has experimental OpenHarmony support (@jschwe, #32594), in addition to our experimental Android support and nightly releases for Windows, macOS, and Linux. We’ve also landed directory listings for local files (@Bobulous, @mrobinson, #32580), made the location bar behave more consistently on Android (@jschwe, #32586), and servoshell no longer quits when you press Escape (@mrego, #32603).

Version and build config servo binary size
Before #32651 126364k
With #32651 110111k (−12.8%)
With #32651
• Without debug symbols
102878k (−18.5%)
With #32759
• Without layout_2013
107652k (−14.8%)
With #32759
• Without debug symbols
• Without layout_2013
100886k (−20.1%)
<figcaption>Figure 4: servoshell binary size improvements on Linux (amd64).</figcaption>

To reduce servoshell’s binary size, we now build our nightly releases with ThinLTO (@jschwe, #32651), and you can go even further by building Servo without debug symbols (@jschwe, #32651) or without the legacy layout engine (@jschwe, #32759). Note that these builds use the production profile in Cargo, not the release profile.

Changes for Servo developers

The Servo book is now the place to go for Servo’s documentation (@delan, #32743). It includes our architecture and design docs, a link to our API docs, as well as docs on building, running, testing, debugging, and profiling Servo.

Servo now builds without the crown linter by default (@jschwe, #32494), simplifying the build process in some cases. If you’re working on DOM code, you can enable it again with ./mach build --use-crown.

GitHub checks popup showing the “DCO” check failing and a link to “Details” <figcaption>Figure 5: the DCO check will now fail unless you sign off your commits.</figcaption>

When contributing to Servo, your commits must now be “signed off”, which is essentially a promise that you own (or are allowed to contribute) the code in your patch. If the DCO check fails, click Details for help on signing off your commits (Figure 5).

Donations

Thanks again for your generous support! We are now receiving 2955 USD/month (+32.6% over June) in recurring donations.

Servo is now on thanks.dev, and already three GitHub orgs that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

We are still receiving donations from 14 people on LFX, but we will stop accepting donations there soon — please move your recurring donations to GitHub or Open Collective. In the meantime, we’ve transferred 2723 USD of donations from LFX to our Open Collective account.

2955 USD/month
10000

As always, use of these funds will be decided transparently in the Technical Steering Committee. Our updated proposal for a dedicated server for CI runners (@delan, @sagudev, @nicoburns) was accepted, which should reduce build times significantly, and this is just the start!

For more details, head to our Sponsorship page.

Conferences and blogs

Alan Jeffrey (1967–2024)

Alan Jeffrey, an early member of the Servo team and a key part of helping the Servo project find a new life outside of Mozilla, passed away on 4 July.

His research has furthered a wide range of fields, including concurrent and distributed systems, programming languages, formal verification, software semantics, typesetting, protocol security, and circuit design.

Alan’s family have also written about his kindness, curiosity, and persistence on his LinkedIn page.

Firefox NightlyConcise and compact – These Weeks in Firefox: Issue 165

Highlights

  • Thanks to Alexandre Poirot for the latest DevTools Toolbox JS tracer improvements. The tracer will likely evolve in the next few weeks, so stay tuned for more updates. You can check out this meta bug as well to see what’s planned.
    • The tracer toggle icon was moved from the Debugger to the Toolbox for easier access (#1873060)

A checkbox for toggling JavaScript Tracer under a menu header "Available Toolbox Buttons", as well as a Toolbox icon at the top right corner of DevTools to access JavaScript Tracer

JavaScript Tracer data for a DOM mousedown event logged within a DevTools Debugger side panel. There is also a context menu item called "Trace in the debugger sidebar" to enable or disable this panel.

  • The Search and Navigation team recently landed some Address Bar bug fixes
    • Marco improved handling of copying and editing the scheme parts of URLs
    • Yazan fixed an issue where omnibox keywords (add-on customized behaviour in the Address Bar) could be triggered when in search mode

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Sebastian Zartner [:sebo]

New contributors (🌟 =  first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Host permissions subsumed by optional host permissions and granted by the user on permissions.request() calls are now going to still be listed for the entire browsing session after the users have revoked them from the Desktop and Android add-ons manager UI – Bug 1899674

WebExtensions Framework
  • MV3 host permissions:
    • The notification dot (shown on the extensions button and/or on the extension browser action toolbar button) is not going to be shown anymore when the current tab url is matching only MV3 optional host permissions – Bug 1905931
    • Fixed potential race between notification dot internals and extensions startup – Bug 1905392
  • Fixed a browser desktop UI accessibility issue hit by users when an extension sidebar panel is opened (regressed in Firefox 127, fixed in Firefox >= 128) – Bug 1905771
  • Fixed race condition that could potentially lead to issues with event page respawning behaviours – Bug 1905153
WebExtension APIs
  • Align browser.runtime.getURL() result when called with a non-extension absolute urls as agreed with other vendors in the WebExtensions W3C Community Group – Bug 1795082
  • Improved error messages logged on invalid application paths found in native messaging manifests – Bug 1908201

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Andrew Nicols fixed a bug where the SendElementKeys command in WebDriver classic (http) would unnecessarily scroll elements into view, even if they were already visible in the viewport. (#1906095)
    • Balarama Raju reviewed and updated all the error messages for WebDriver classic and BiDi to make sure they all used pretty printing when necessary, and that they all looked consistent. Those messages are displayed to clients when they use commands with invalid or unexpected arguments, so it’s important to make sure the error messages are clear and understandable. (#1788437)
  • Updates:
    • Sasha removed a restriction for our initial implementation for the network.setCacheBehavior command, to allow clients to set the cache behavior both globally and for specific contexts. Previously, we only allowed clients to use one or the other. Now you can for instance disable the network cache for all contexts as a default setting, but still enable the cache in some of them. (#1905307)
    • Sasha fixed a bug with the user prompt events on Android. When several user prompts were displayed simultaneously, we were always picking the first one to emit new events. (#1902264)
    • Julian updated the vendored puppeteer version to 22.13.0, which is the first puppeteer release for which Firefox with WebDriver BiDi is passing 100% of the prioritized puppeteer unit tests.(#1907533)
    • Henrik fixed a bug with the unsupportedPromptBehavior capability, which could only be used on WebDriver classic sessions (or classic + BiDi). It can now be used on BiDi-only sessions, such as the one used by puppeteer. (#1907935)

Lint, Docs and Workflow

Picture-in-Picture

  • Thanks to niklas for resolving uncaught exception: unknown (can’t convert to string) errors whenever we close the PiP window for sites using custom PiP wrappers (Bug 1816756)

Search and Navigation

  • Search
    • Cleanup of the previous search configuration after having switched to v2 is now under way.
    • Moritz has removed most of the old code, including various add-on related support files for tests that are no longer required.
    • Standard8 has simplified most of the search service related tests – we no longer need to start the add-on manager by default, and when we do, our test utils will do it for you.
  • Places

Support.Mozilla.OrgContributor spotlight – Jefferson Scher

Hey folks,

In this edition of the contributor spotlight, you’ll hear from Jefferson Scher, one of our top contributors in our community forums. Jefferson possesses extensive knowledge about Firefox, having contributed since version 0.8. Over the years, he has built numerous tools that help both users and fellow contributors. His dedication and expertise have been invaluable to the community, providing support and creating resources that enhance the Firefox experience. Stay tuned to learn more about Jefferson’s journey, his contributions, and the advice he has for new contributors.

… just because I think I know a lot about the product, users constantly surprise me with new uses and requirements that never occurred to me.

Q: Please tell us about yourself!

I’m a native Californian and have lived nearly all of my life in the San Francisco Bay Area. I am naturally nerdy and enjoy taking deep dives into topics. It’s great when this intersects with being able to help others in a meaningful way, and gratitude is always appreciated.

Q: You’ve been involved with Mozilla for a long time. Can you tell us more about how you got started?

I originally started posting replies on technology forums for Microsoft Office, and gradually expanded to other programs. I never used Mozilla Suite, but I learned about Firefox around version 0.8, and first posted on mozillaZine toward the end of 2004. Initially I was more focused on web development topics, but gradually I grew more confident about answering browser questions as I came to know the product better. I can’t recall when I started posting on the official forum (it was a few years later).

Q: Your job as an intellectual property attorney doesn’t seem to correlate much with your volunteering activity in SUMO. Can you tell us if you see any benefits of contributing to Mozilla in your career?

Reading and researching support questions provides a lot of insights that you can’t get by listening to podcasts or reading tech news. It helps me understand the products and features a wide range of users find useful (or annoying), and how they react to different company service cultures. Some users also make assertions about how the law works in other countries, which is interesting because we do serve clients around the world and it’s useful to understand their expectations.

Q: You’re one of the contributors with the highest solution rate in the forum. What do you think is the most important thing to keep in mind when helping users in the forum? Are there any best practices you can share?

It’s difficult to pick one most important thing. Certainly staying on the case (following up) is important because it would be overwhelming to provide a five page flow chart of “if this, then that” all in one reply. I think it’s useful to express concern, and I have to maintain humility: just because I think I know a lot about the product, users constantly surprise me with new uses and requirements that never occurred to me.

Q: You’ve built many tools to help users (like the tool for reading compressed files) and fellow contributors (like the SUMO advanced search), and even developed extensions over the years. Can you tell us how you decide to start a new project or invent a new tool?

Almost all of these have arisen from problems that I first spotted on the forum that were difficult or impossible to solve with built-in features or other add-ons. Perhaps you could trace it back to a January 2011 thread in Eileen’s Lounge about a Greasemonkey user script to block unwanted sites in Google results. That led me to create the “Google Hit Hider” script. I find it fun to try to solve puzzles like that one, and am always thinking about the next one (if time and skills permit).

Q: Many people contribute to Mozilla because they love Firefox and resonate with Mozilla’s mission. What about you? Is Mozilla’s mission an important driving force for you to contribute, and can you tell us what you like the most about Firefox?

Over the years, Mozilla has worked hard to advance many projects related to the open web, from net neutrality to a range of privacy protections. Since I’m not an activist, the way I can contribute to the mission is to help users love Firefox. I have used Firefox almost exclusively for the past 20 years, and when I try to use other browsers, I can’t find many features that I look for. I’m sure users switching to Firefox from other browsers feel the same way, and we need to help them get on board smoothly with a good support experience.

Q: You’ve been contributing for more than a decade and have seen many changes. Can you share what excites you the most about the current state of our product?

Like many long-time users, I have mixed feelings. I know it’s not possible to be all things to all people. It’s great that performance ratchets up over time, since no one likes watching pages load and render. It’s interesting hearing from users of other browsers who are looking for improved privacy or a better add-on experience, since they often have completely different priorities. But streamlining the interface can lead to extra steps in the workflow for people used to doing things “the old way.” The Connect site has been quite positive for collecting user sentiment and suggestions and allowing developers to be more in touch with our concerns.

Q: What are the biggest challenges you’re facing as a SUMO contributor at the moment? What do you think is the most critical issue we need to address?

We often would benefit from a screenshot or screen video of a problem, but many users find it difficult to provide this in a manner that is convenient for them. It would be handy if the screenshot feature had the option to capture the toolbar area, and video, with a simple way to share.

Q: What advice would you give to someone new who wants to contribute to the forum?

Look for topics that interest you. Maybe it’s threads about privacy features, or online games, or graphics, or video. Get a sense of the kinds of problems users have and think about the kind of help you would want if you were in the same situation. Then think about resources, like Knowledge Base articles on the topic, or solved threads that cover the same problem. You will have to find your own style for how to write the response, but as long as you have a sympathetic tone, you’ll probably do fine.


I hope you enjoy your read. If you’re interested in joining our product community just like Jefferson, please go to our contribute page to learn more. You can also reach out to us through the following channels:

SUMO contributor discussions: https://support.mozilla.org/forums/
SUMO Matrix room: https://matrix.to/#/#sumo:mozilla.org
Twitter/X: https://x.com/SUMO_Mozilla

 

Don Marticolophon

This site is built with a variety of tools.

I like static site generators but the way this site works I don’t have to learn a static site generator, just incrementally add on tools I already know as I need the site to do more.

All of these do a lot more than just what I use them for on this site.

Related

Automatically run make when a file changes

blog fix: remove stray files

planning for SCALE 2025

Block AI training on a web site

Using GitHub Pages to host a locally built site

The Rust Programming Language Blogcrates.io: development update

Since crates.io does not have releases in the classical sense, there are no release notes either. However, the crates.io team still wants to keep you all updated about the ongoing development of crates.io. This blog post is a summary of the most significant changes that we have made to crates.io in the past months.

cargo install

When looking at crates like ripgrep you will notice that the installation instructions now say cargo install ripgrep instead of cargo add ripgrep. We implemented this change to make it easier for users to install crates that have binary targets. cargo add is still the correct command to use when adding a crate as a dependency to your project, but for binary-only crates like ripgrep, cargo install is the way to go.

We achieved this by analyzing the uploaded crate files when they are published to crates.io. If a crate has binary targets, the names of the binaries will now be saved in our database and then conveniently displayed on the crate page:

Dark Mode Screenshot

After shipping this feature we got notified that some library crates use binaries for local development purposes and the author would prefer to not have the binaries listed on the crate page. The cargo team has been working on a solution for this by using the exclude manifest field, which will be shipped soon.

Dark mode

If your operating system is set to dark mode, you may have noticed that crates.io now automatically switches to a dark user interface theme. If you don't like the dark theme, you can still switch back to the light theme by clicking the color theme icon in the top right corner of the page. By default, the theme will be set based on your operating system's theme settings, but you can also override this setting manually.

Dark Mode Screenshot

Similar to GitHub, we now also have dark/light theme support for images in your README.md files:

<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://test.crates.io/logo_dark.svg">
  <img src="https://test.crates.io/logo.svg" alt="logo" width="200">
</picture>

RSS feeds

Inspired by our friends at the Python Package Index, we have introduced a couple of experimental RSS feeds for crates.io:

This will allow you to keep track of the latest crate releases and updates in your favorite RSS reader. The original GitHub issue requested a feed for all the crates you "follow" on crates.io, but we decided that per-crate feeds would be more useful for now. If you have any feedback on this feature, please let us know!

API token expiry notifications

Our crates.io team member @hi-rustin has been very active in improving our API tokens user experience. If you create an API token with an expiry date, you will now receive a notification email three days before the token expires. This will help you to remember to renew your token before it expires and your scripts stop working.

Following this change, he also implemented a way to create new API tokens based on the configuration of existing tokens, which will make it much easier to renew tokens without having to reconfigure all the permissions. The user interface on the "API tokens" settings page now shows a "Regenerate" button, which will allow you to copy the permissions of existing tokens. Similarly, the token expiry notifications will now also contain a link that directly fills in the permissions of the expiring token, so you can easily create a new token with the same permissions.

Dark Mode Screenshot

Database performance optimizations

Our latest addition to the crates.io team, @eth3lbert, has been working on optimizing the database queries that power crates.io. He has been working on a couple of pull requests that aim to reduce the load on the database server and make the website faster for everyone. Some of the changes he has made include:

  • #7865: Further speed-up reverse dependencies query
  • #7941: Improve crates endpoint performance
  • #8734: Add partial index on versions table
  • #8737: Improve the performance of reverse dependencies using the default_versions table

In addition to that, we have recently migrated our database servers to a new provider with more memory and faster storage. This has also improved the performance of the website and allowed us to run more complex queries without running into performance issues. It was previously taking multiple seconds to load e.g. https://crates.io/crates/syn/reverse_dependencies, but now the server usually responds in much less than a second.

Another piece of the puzzle was archiving old data that is no longer needed for the website. We have moved the download counts older than 90 days into CSV files that are stored on S3 and will soon be publicly available for download via our CDNs. This has reduced the size of the database significantly and improved the performance of some of our background jobs.

Feedback

We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!

Don Martilinks for 28 Jul 2024

Kamala Harris’ $7M support from LinkedIn founder comes with a request: Fire Lina Khan (Today’s IT industry big shots are used to the level of respect that they got from the Blackberry generation of politicians, but that was back when the industry was doing transformative innovation. Now that the industry has pivoted to rent-seeking and crime to keep the numbers going up, they’re not going to get the same treatment. Bonus link: The FTC Orders Companies To Disclose Info On “Surveillance Pricing”)

California Forges Ahead With Social Media Rules Despite Legal Barriers (More First Amendement questions on how recommendation algorithms work. It seems like requiring a Parental Control Protocol and a content-neutral surveillance licensing system would be more likely to hold up in court.)

End Single Family Zoning by Overturning Euclid V Ambler Cities around the country and around the world mix land uses, building heights, and lot sizes with no ill effects on health or safety. Indeed, mixed use cities may have improve health and safety by reducing driving and putting empty lots to use which reduces crime. (icmyi: “You Don’t Own Web3”: A Coinbase Curse and How VCs Sell Crypto to Retail)

The CrowdStrike Outage and Market-Driven Brittleness Read the whole thing. Today’s internet systems are too complex to hope that if we are smart and build each piece correctly the sum total will work right. We have to deliberately break things and keep breaking them. This repeated process of breaking and fixing will make these systems reliable.

The sentiment disconnect on ‘AI’ between tech and the public To many, “AI” seems to have become a tech asshole signifier: the tech asshole is a person who works in tech, only cares about bullshit tech trends, and doesn’t care about the larger consequences of their work or their industry. Or, even worse, aspires to become a person who gets rich from working in a harmful industry. (related: Does AI increase productivity at work? New study suggests otherwise, The average AI criticism has gotten lazy, and that’s dangerous)

Some coverage of the Google Chrome third-party cookies news:

(As a gatekeeper company, they’re not going to be able to get away with a setting that turns off third-party cookies but not tracking/personalization on Google Search or YouTube.)

And finally some random good reads.

California Grid Breezes Through Heat Wave due to Renewables, Batteries

Congress Accidentally Legalized Weed Six Years Ago

Costco in Cancún

Not Lost In Translation: How Barbarian Books Laid the Foundation for Japan’s Industrial Revoluton

Adrian GaudebertThe frustration of (never really) finishing Dawnmaker

We are 5 days away from the release of Dawnmaker! It is a time of excitement, of stress of course, but also of regrets as we realize that there are so, so many things that we will not be able to add to our game. Let's introduce today's topic with a short video that is very à propos:

A game is never truly done. There's always the next thing you want to add, the little detail you want to change, the obvious problem you want to solve. But we cannot work on our game forever, because we simply need to sell it at some point and, hopefully, get some money to pay the bills and the foods. Such is the weight of reality on our dreams.

So we had to just make the cut somewhere, and decide on a release date. July 31 it is! Why? We wanted to release earlier, but there was a Steam Next Fest in early June, quickly followed by the Steam Summer Sale in early July, two events during which it is not advised to release a game. We've read that releasing on a Wednesday is a bit better for indies because there are usually less games coming out. So there we ended up! (That sounds easy but it took us a while to find a date that worked well for us… )

In just 5 days, Dawnmaker will be available for all to buy, to play, and to judge. That's a terribly exciting experience, but also a terribly scary one. Because we know that the game is not perfect. We know that it has weak points, that it is lacking in some places. But we have to release it anyway, we have to put it into your hands, and we have to accept that, yes, this is the game we're going to sell.

Believe me: it is truly heartbreaking to see all those things that could have been, all the ways this game could have been better, if only we had had more time, more money, more people… We are going to release a product that is not exactly what we had in mind, but a product that is what we have been able to create in the time we allowed ourselves.

In an attempt to amuse you, and maybe to grieve these features that will never be, let's go through some of the main elements that we wanted to add to Dawnmaker but couldn't.

Scientific research

The science part of the game has always, in our plans, been so much deeper than it currently is. In the game, science can be spent in some buildings to generate Eclairium. That's fairly basic. We had much bigger ambitions for that aspect of the game: we wanted science to be spent to research new technologies. Some scientific buildings were meant to have a scientific line, each step giving a one-off or a permanent bonus. For example, a research would have improved the production of your fields, another would have made harvesting better. Some would have replaced cards in your hand with a better version.

The reason why this never happened was that it required a lot of programming. We needed to change the way the core of the game worked, to add a big layer of complexity allowing to handle these types of effects. It also required some heavy UI work, which was partly done by Agathe when she worked with us, but that we never integrated. Sorry Agathe, it is very unlikely that this work of yours will ever be in the game.

Drafting cards

Deckbuilding is the poor child of Dawnmaker. When you mix two genres like we did, you usually end up favoring one over the other. We definitely did that with the city building part, at the expense of the deckbuilding part. But I have a hunch that we could have made the cards a lot more interesting for a cost that was not too high. Here's what I had in mind, but never had the time to try.

Some buildings in the game give you a card when you build them. That card is always the same, you know what it will be before choosing to buy the building. I wanted to change that, at least on some of the buildings, to instead make them offer you a choice between 3 different cards of the same type and level. So instead of an Exploration post always giving you an Exploration card, it would let you choose a new card from three random level 1, industry cards. Sometime you would get to pick an Exploration, sometimes you'd get an Optimization, and sometimes, rarely, you would find a card that you had never seen before.

From an economical point of view, this would also have allowed us to produce a lot more content for very little cost: since our cards do not have illustrations, adding a new card to the game would just be a matter of designing it. As the game currently stands, adding a new card means adding a new building, which means creating a new graphical asset for it, which is expensive! I sincerely wished I had thought about this much earlier in the development of Dawnmaker, but I did not, and here we are not having this in the game.

Starting characters

Much like in Slay the Spire, we wanted to have a little cast of characters that you could choose from when starting a new game. They would not have been as distinct as in Slay the Spire, but would allow player to start each game with a different deck and roster of buildings. We had plans for 3 different characters, each opening a different way of playing the game.

Nomad buildings

You might have played with the buildings that give you resources when you build something adjacent to them, and thought that they were weak? Well, that's because we made them planning for another kind of buildings: nomads. We wanted to have buildings that could be moved from one tile to another, triggering the adjacent build effects each time. We think it would have added a more puzzle-y element to the game.

Smog effects

When we added the cards to represent the Smog's behavior, we knew we had an opportunity for more than just Luminoil consumption. Basically, anything that a building could do, we could make the Smog do. We wanted to have the Smog give you Curse cards, half your Luminoil stock, destroy or deactivate some buildings, and so on.


I'm going to stop here because there is a lot more small things that I wished we could have added to Dawnmaker. But the game is what it is, and we're still very proud of all the work we've done over the last 2.5 years!

This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head out to Dawnmaker's presentation page and fill the form. You'll receive regular stories about how we're making this game and the latest news of its development!

Join our community!

Support.Mozilla.OrgContent intake workflow and how you can contribute to SUMO Knowledge Base

Hello, SUMO community members!

If you contribute to the Knowledge Base in SUMO, please read this blog post carefully as we explain how others can request content from the SUMO team.

Historically, we didn’t have a structured workflow for content requests, relying on personal engagement or public groups to act reactively. With a larger content team, establishing a proper workflow is essential for task distribution and transparency within the team.

In general, the content intake workflow can be summarized in 4 steps:

Step 1: Submitting a content request

The process begins with submitting a content request through a Bugzilla form. Typically, feature/product owners make these requests, but anyone with ideas for improving support content can submit, including contributors. Documenting requirements helps us act appropriately.

This is a crucial step, and we require each field in the form to be filled out. Each piece of information helps us determine the necessary steps moving forward. All internal teams must use the Bugzilla form for SUMO content requests, whether for new articles or updates. Exceptions are for minor fixes, which can be submitted directly in the KB article. To learn more about what we consider as minor fixes, please see this.

Step 2: Determining content access restrictions

After submission, the workflow diverges based on the content access restriction chosen:

  • Non-confidential: All bugs and drafts within these requests are visible to anyone with a link and can receive comments and suggestions, benefiting from community contributions.
  • Confidential: These bugs are restricted and will be handled internally by staff members, due to sensitive information, such as upcoming features yet to be publicly announced, or information related to partnerships or other business strategies.

Step 3: Content creation

Once the necessary information is provided, the content team assigns the bug to a responsible person. This usually involves creating a draft in Google Docs before publishing it as a revision. The content team also creates in-product links if needed. Areas of responsibility for SUMO technical writers are:

  • Lucas: Firefox (desktop, Android, and iOS)
  • Dayani: Privacy & Security products, Pocket, Firefox Accounts

Step 4: Publishing & resolution

Once the content draft is ready and approved by all parties, the person responsible for it can submit it as a new revision.

How contributors can help

Contributors remain essential to the article creation process. With this update, we’re aiming to make sure that the contribution workflow is integrated and aligned with our internal workflow.

For non-confidential content requests, contributors are encouraged to get involved. And here’s how you can help:

  • Identify a content request: Keep an eye on new content request bugs. When you find one you’re interested in, please directly comment on the bug to notify the content team that you want to help out with the request. If you’d like to get notifications on new content requests, consider watching the Knowledge Base Content component on Bugzilla. To do this, go to your Bugzilla profile → Edit Profile & Preferences → Component Watching. Choose support.mozilla.org on the product selection and select Knowledge Base Content for the Component field. And don’t forget to click on the Add button to save your changes.
  • Get assigned: Wait for the content team to assign the ticket before starting. Please do not work on the actual content creation before the content team assigns the ticket to you.
  • Content creation:
    • Review the ticket: Make sure to review the ticket thoroughly and understand the request. Also, ensure you can complete the work by the due date for publication. If anything changes, and you can’t finish the content on time, let the content team know as soon as possible.
    • Create a draft: Use Google Docs to start working on the draft. If it’s not possible, you can also share the content file as an attachment in Bugzilla for others to review.
  • Get Feedback:
    • Share the draft: Post the link to your draft in the Bugzilla ticket for review.
    • Open for comments: Ensure that the Google Docs settings allow for comments.
    • Work with the requester: Collaborate closely with the requester to cover all points in the article. If any information you need to complete the work is missing, don’t hesitate to reach out to the requester directly for additional details.
    • Final review: Once the draft is finalized and approved by all parties, including the requester, you can submit the content as a new revision on the actual Knowledge Base article. Once the KB revision is submitted, please also assign the ticket back to the technical writer responsible for the product.
  • Publication: The technical writer will review and publish the content.

If you have questions about this update, please submit your comments in this contributor discussion thread!

The Mozilla BlogPicture-in-Picture lets you watch videos while ‘working’

Some days there’s something extra interesting to watch online — a sports event, election coverage, a certain show is leaving Netflix so you gotta binge — but you’ve got work to do. The Picture-in-Picture feature in Firefox makes multitasking with video content smooth and easy, no window shuffling necessary.

Picture-in-Picture allows a video to be played in a separate, small window, and still be viewable when you switch tabs or away from the Firefox browser.

To use it on videos longer than 45 seconds, hover your mouse over to see a small Picture-in-Picture button. Click the button to pop open a floating window so you can keep watching while working in other tabs.

Screen icon with an arrow pointing from inside the screen to outside, symbolizing screen sharing.

You can also right-click on a video and select “Watch in Picture-in-Picture.” (This will work on shorter videos like the one below.)

Move the video around your screen and drag the corner to shrink or enlarge it. If you need to mute it, just tap the speaker icon on the right.

Check it out. Just don’t blame us if you end up with a gold for procrastination instead of getting that monthly report done.

Get Firefox

Get the browser that protects what’s important

The post Picture-in-Picture lets you watch videos while ‘working’ appeared first on The Mozilla Blog.

Don Martihello page for Don Marti

This is a Hello page for me.

Email

The best way to reach me is probably either personal email mailto:dmarti@zgp.org or my work email.

If you want to get this blog by email, I recommend Feedrabbit. Go to their site and put in this URL: https://blog.zgp.org/feed/

You can email me for Signal or phone info if you want to communicate that way.

Services I check

Besides email, I generally check these fairly often.

In-person events I attend

I can usually make it to Southern California Linux Expo (I missed this year) and it’s a good place to meet up with me in person.

Lately I have been going to W3C TPAC and plan to attend in 2024.

Other services

These are places I have accounts for certain purposes but aren’t a good way to reach me in general.

Services I might have an abandoned account on

I don’t know about the status of my accounts on these. Might have been taken over by spammers by now.

  • YouTube

  • Telegram

  • LiveJournal

  • I made accounts on Meta Instagram and Threads to try something on Threads but don’t check them.

Services I know I don’t have an account on

  • WhatsApp: I know users say it isn’t enshittified yet, but I don’t feel the need to be in suspense about when.

Mozilla ThunderbirdMeet The Thunderbird Team: Sol Valverde, UI/UX Designer

Welcome back to our Meet The Team series! I recently had a very entertaining conversation with Sol Valverde, one of the creative minds behind Thunderbird’s user experience and interface design. During our chat, Sol explained how growing up around developers influenced her career path, and discusses the thought process behind designing and improving Thunderbird’s visuals.

Sol also shared a hilarious and heartwarming anecdote about her family’s reaction to her joining our team. It’s a story that underscores the importance of maintaining core Thunderbird features that long-time users rely on, while still modernizing the interface.

For the best and most complete experience, listen to our entire conversation above. Or, you can read select excerpts below.


Q: Can you start by sharing your origin story? How did you end up in UI/UX design?

A: As a kid I always used to draw a lot. I did want to become uh some sort of artistic area professional. However, I do come from a family of programmers. My dad and uncle are both developers. My uncle, he’s been a huge Thunderbird fan for 20 years. But when he found out I got the job he was terrified. He was like “oh my God that’s cool! And also please don’t change anything. It’s perfect the way it is. Don’t touch it!

Q: What does your role entail?

A: I tend to take the first pass at evaluating how a user is going to interact with something. Like for example the first user experience. When I look at the screen, of course I want to make sure it’s attractive. But I ask things like “will the user understand what they need to do in this screen? Is it intuitive?”

A good experience is potentially one that you will forget. Because if you remember, it probably means that you struggled.

Sol Valverde

Q: How do you ensure that a design is intuitive for users?

A: I love the example of a door. If you have a door without a handle, you can assume it should be pushed. But how do you interact with a door if you don’t know? A lot of doors have “Push” or “Pull” signs. But then you kind of also get the extra interaction with the handle. Sometimes it’s a handle you can grab, but sometimes it’s just a bar that has to be pushed. The design lets you know intuitively what should be done, without needing to read anything. We want to guide the user without hiding anything from the user.

I grew up and learned by grabbing things, breaking things, interacting with them. And that kind of learning for me is crucial. So if the user is going to come into this room and and learn what I want them to learn, I have to make it easy for them to figure it out. I do a lot of research. If I’m working on K-9 Mail, for example, I not only look at other email apps, but also at various social media apps. How easy it to switch accounts? What do I dislike about those applications?

Q: Are there any mobile apps that stand out? Where the user experience is so straightforward there wasn’t any kind of learning curve?

A: The simpler ones tend to be the most intuitive ones. So for example, when you’re using an app to read comics or manga, you tap the book you want, and then you swipe back and forth to turn the pages. Like mimicking the physical actions of reading.

The image shows an email application interface displaying a list of email threads. Here are the details for each email thread shown:      Alessandro Castellani         Subject: Improve your Accounting with AP Process         Date and Time: 2024-07-10, 12:00 p.m.         Replies: 2 replies      Laurel Terlesky         Subject: Let's Fly! It's Time for Thunderbird 128         Date and Time: 2024-07-10, 12:00 p.m.         Replies: 2 replies      Micah Ilbery         Subject: Improve your Accounting with AP Process         Date and Time: 2024-07-10, 12:00 p.m.         Replies: 2 replies      Solange Valverde         Subject: Acme Corp newsletter: December edition         Date and Time: 2024-07-10, 12:00 p.m.         Replies: 2 replies         Following this thread:             Melissa Autumn                 Subject: RE: Acme Corp newsletter: December edition                 Date and Time: 2024-07-10, 12:00 p.m.             Monica Ayhens-Madon                 Subject: RE: Acme Corp newsletter: December edition                 Date and Time: 2024-07-10, 12:00 p.m.  The email from Laurel Terlesky about Thunderbird 128 is highlighted.<figcaption class="wp-element-caption">Sol played a big role in improving Thunderbird’s Cards View. </figcaption>

Q: What has been one of the most rewarding projects you’ve worked on at Thunderbird so far?

A: Definitely the Cards View revamp. We redid the first big chunk of code code, but then realized we hadn’t accounted for high contrast and other accessibility needs. We had to address those because accessibility is a must. So, when Micah and I started reworking the design, we thought, “What if we make it ten times better than we originally planned?” Thankfully, Alex was crazy enough to let us do it.

Q: How important is community feedback in your design process?

A: It’s invaluable! The community has a lot of opinions which is great. We design and extrapolate based on our own experiences and those of people we know. We do our best to put ourselves in others’ shoes and predict how they’ll interact with the design. Some comments were straightforward, like “I wish for this or that because it serves me better” or “I just like how it looks.” For UI, as long as it looks cohesive, I’m happy. However, some users provided deeper insights and explained their use personal use cases and concerns. That kind of feedback is so eye-opening, because it addresses things we hadn’t considered. I’m really grateful that they bring those perspectives forward.

Q: OK, big picture question: What’s your overall vision for the user experience in Thunderbird?

A: My whole desire for Thunderbird is it’s something easy to use. It’s something friendly and inviting. However, it can be as complicated or as easy as you want it to be. Intuitive at first glance, but powerful when you need it to be!

The post Meet The Thunderbird Team: Sol Valverde, UI/UX Designer appeared first on The Thunderbird Blog.

The Rust Programming Language BlogAnnouncing Rust 1.80.0

The Rust team is happy to announce a new version of Rust, 1.80.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.80.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.80.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.80.0 stable

LazyCell and LazyLock

These "lazy" types delay the initialization of their data until first access. They are similar to the OnceCell and OnceLock types stabilized in 1.70, but with the initialization function included in the cell. This completes the stabilization of functionality adopted into the standard library from the popular lazy_static and once_cell crates.

LazyLock is the thread-safe option, making it suitable for places like static values. For example, both the spawn thread and the main scope will see the exact same duration below, since LAZY_TIME will be initialized once, by whichever ends up accessing the static first. Neither use has to know how to initialize it, unlike they would with OnceLock::get_or_init().

use std::sync::LazyLock;
use std::time::Instant;

static LAZY_TIME: LazyLock<Instant> = LazyLock::new(Instant::now);

fn main() {
    let start = Instant::now();
    std::thread::scope(|s| {
        s.spawn(|| {
            println!("Thread lazy time is {:?}", LAZY_TIME.duration_since(start));
        });
        println!("Main lazy time is {:?}", LAZY_TIME.duration_since(start));
    });
}

LazyCell does the same thing without thread synchronization, so it doesn't implement Sync, which is needed for static, but it can still be used in thread_local! statics (with distinct initialization per thread). Either type can also be used in other data structures as well, depending on thread-safety needs, so lazy initialization is available everywhere!

Checked cfg names and values

In 1.79, rustc stabilized a --check-cfg flag, and now Cargo 1.80 is enabling those checks for all cfg names and values that it knows (in addition to the well known names and values from rustc). This includes feature names from Cargo.toml as well as new cargo::rustc-check-cfg output from build scripts.

Unexpected cfgs are reported by the warn-by-default unexpected_cfgs lint, which is meant to catch typos or other misconfiguration. For example, in a project with an optional rayon dependency, this code is configured for the wrong feature value:

fn main() {
    println!("Hello, world!");

    #[cfg(feature = "crayon")]
    rayon::join(
        || println!("Hello, Thing One!"),
        || println!("Hello, Thing Two!"),
    );
}
warning: unexpected `cfg` condition value: `crayon`
 --> src/main.rs:4:11
  |
4 |     #[cfg(feature = "crayon")]
  |           ^^^^^^^^^^--------
  |                     |
  |                     help: there is a expected value with a similar name: `"rayon"`
  |
  = note: expected values for `feature` are: `rayon`
  = help: consider adding `crayon` as a feature in `Cargo.toml`
  = note: see <https://doc.rust-lang.org/nightly/rustc/check-cfg/cargo-specifics.html> for more information about checking conditional configuration
  = note: `#[warn(unexpected_cfgs)]` on by default

The same warning is reported regardless of whether the actual rayon feature is enabled or not.

The [lints] table in the Cargo.toml manifest can also be used to extend the list of known names and values for custom cfg. rustc automatically provides the syntax to use in the warning.

[lints.rust]
unexpected_cfgs = { level = "warn", check-cfg = ['cfg(foo, values("bar"))'] }

You can read more about this feature in a previous blog post announcing the availability of the feature on nightly.

Exclusive ranges in patterns

Rust ranged patterns can now use exclusive endpoints, written a..b or ..b similar to the Range and RangeTo expression types. For example, the following patterns can now use the same constants for the end of one pattern and the start of the next:

pub fn size_prefix(n: u32) -> &'static str {
    const K: u32 = 10u32.pow(3);
    const M: u32 = 10u32.pow(6);
    const G: u32 = 10u32.pow(9);
    match n {
        ..K => "",
        K..M => "k",
        M..G => "M",
        G.. => "G",
    }
}

Previously, only inclusive (a..=b or ..=b) or open (a..) ranges were allowed in patterns, so code like this would require separate constants for inclusive endpoints like K - 1.

Exclusive ranges have been implemented as an unstable feature for a long time, but the blocking concern was that they might add confusion and increase the chance of off-by-one errors in patterns. To that end, exhaustiveness checking has been enhanced to better detect gaps in pattern matching, and new lints non_contiguous_range_endpoints and overlapping_range_endpoints will help detect cases where you might want to switch exclusive patterns to inclusive, or vice versa.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.80.0

Many people came together to create Rust 1.80.0. We couldn't have done it without all of you. Thanks!

The Mozilla BlogBAFTA Award-Winner Siobhán McSweeney to host Mozilla’s 2nd Annual Rise25 Awards in Dublin, Ireland on Aug. 13

Following the news of our 25 honorees for The 2nd Annual Rise25 Awards, Mozilla is thrilled to announce that actress and presenter Siobhán McSweeney will be hosting this year’s ceremony which will celebrate these individuals for leading the next wave of AI. The Irish actress, best known for her BAFTA award-winning performance as Sister Michael in Channel 4’s (Netflix in the U.S.) series “Derry Girls” and most recently in Hulu’s “Extraordinary,” will take the helm during this year’s ceremony which will take place the evening of Tuesday, August 13 at the Convention Centre in Dublin, Ireland

“I’m so excited to host these awards. AI is one of the biggest issues facing us, not only in my industry but across the board. To recognise and award individuals who are working for the benefit of society and not corporations is a great honor,” said McSweeney. She continued: “I’m so looking forward to meeting them. And having them explain what AI is.”

Rise25 is more than an award ceremony—it’s a platform to spark discussion, forge connections and inspire a wave of new ideas that will shape the future of AI. Siobhán’s contributions to television and theater, characterized by depth and charisma, make her an ideal voice to help us highlight these themes.

“The Rise25 awards are committed to bridging the gap between complex technological innovations and the very human stories at their core,” said Mark Surman, President of Mozilla. “Ms. McSweeney’s portrayal of Sister Michael in ’Derry Girls’ has left a lasting impression on audiences around the world (we’re big fans!). Her ability to deliver lines with sharp wit, while maintaining a warm presence, perfectly encapsulates the blend of insight and approachability we covet at our event.”

The awards show will also feature a special performance by Galway, Ireland-based Irish dancers The Gardiner Brothers. The American-born Irish dancers have won over 40 major Irish dancing titles between them and have performed to audiences all over the globe, including with the world famous Riverdance cast. They are known for their fast paced and rhythmic style of dance that they developed after training at the Hession School of Irish Dance in Galway, Ireland.

Bridget Todd, host of Mozilla’s Webby Award-winning “IRL” podcast, will be on hand to present the award categories at this year’s ceremony. Bridget is also the host of the iHeart Radio Podcast Award-winning podcast “There Are No Girls On The Internet,” and is a Shorty Award winner for “Best Podcast Miniseries” for DISINFORMED, a miniseries exploring how misinformation, and conspiracy theories around COVID, gender, and race hurt marginalized communities. Bridget’s writing and work on technology, race, gender and culture have been featured at The Atlantic, Newsweek, “The Daily Show” and more.

Mozilla’s 2nd Annual Rise25 Awards build upon the success of last year’s Rise25 Awards which were held in Berlin, Germany, bringing to life what a future trustworthy Internet could look like.

This year’s awards ceremony will be available on demand on Mozilla’s YouTube channel the morning of Friday, August 16, 2024. For more information, please visit https://rise25.mozilla.org/ 

The post BAFTA Award-Winner Siobhán McSweeney to host Mozilla’s 2nd Annual Rise25 Awards in Dublin, Ireland on Aug. 13 appeared first on The Mozilla Blog.

The Mozilla BlogTop 5 Firefox features for tab maximalists

Illustration of a web browser with a search bar, icons, and connected elements symbolizing features like shopping, bookmarks, and user profile.

I am a tab maximalist. On any given day, you can find me with 50+ tabs open across multiple windows on Firefox. Having this many tabs open can seem chaotic, but rest assured there is a method to the madness.

As a global product marketing manager at Mozilla, a large part of my job is to think critically about various inputs, synthesize and pass information from one team to another. Unsurprisingly, one of my guilty pleasures is being the first to provide a resource when in group conversation (e.g. a link to an insight or framework). These are not just any links. These are links to tabs that have been open for weeks… months … that I can recite like the alphabet.

Now, I may not keep 7,000 tabs open, but I do know five features that can help you manage yours… however many your heart desires.

1. Pinned Tabs

Pinned Tabs are my go-to for keeping essential tabs easily accessible. By pinning tabs, they stay in a fixed position on the left side of the tab bar, saving space and preventing accidental closure. I pin my active work and resources like documents in development, recent insights or my favorite playlist. Unlike bookmarks, which are great for long-term link storage, I use pinned tabs for resources I need to access frequently throughout the day but don’t need to hold onto for longer than a month or two. They also offer reduced page load times since they are technically still open in the tab bar and less likely to be unloaded  when your memory is low. 

To try it out, just right click on the tab you want to pin, and choose “Pin Tab” from the menu.

Browser window with a pinned Gmail tab on the left, displaying the URL mail.google.com in the address bar.

2. Search tabs

Having several pinned tabs can also become overwhelming. That’s when the search tabs feature becomes a lifesaver. When I need to find a specific tab among the dozens I have open, I can search for any open tab by typing a keyword into the address bar. This feature saves me from endlessly navigating tabs and quickly locating the exact information I need, ensuring I stay efficient and productive.

Click the “List all tabs” button in the tab bar, then choose “Search Tabs” from the menu.

Browser tab menu open with the 'Search Tabs' option highlighted, multiple Mozilla tabs open in the background.

3. Pocket integration

If you are a tab maximalist, you probably need a place to get away from the noise. Pocket is a great escape, like your own personal library. Luckily, Pocket is integrated directly into Firefox, allowing me to save articles, videos, and web pages for later. When I need to take a beat from work, this is the perfect place to catch up on my favorite topics – which currently includes House of Dragons fan theory and recaps. This doesn’t fit easily into my workday though, so it is great to revisit later when I have the time to dive into the rabbit hole.

Hit the “Save to Pocket” button in the toolbar.

Pocket extension pop-up in the browser showing a saved article titled 'The mysterious doodles hidden in a 1,300-year-old book' from bbc.com.

4. Close duplicate tabs

Close duplicate tabs is exactly as it sounds, a handy feature that can detect and close duplicate tabs with a simple right click. As of Firefox 127, this feature is directly integrated into the browser for greater ease of use. With this feature, I avoid the clutter and confusion of having multiple tabs open for the same webpage. It’s a small but powerful tool that keeps my browser organized and streamlined. It is no wonder why this was a top requested feature from our community. For those moments when my tab habits become unwieldy, this feature is a real lifesaver.

To try it out, just right click on the tab you want to pin, and choose “Close Duplicate Tabs.”

Browser tab context menu open on the Mozilla website, showing options like 'New Tab,' 'Reload Tab,' 'Mute Tab,' and 'Duplicate Tab.'

5. Multi-Account Containers 

If you have interests you want to keep private, Multi-Account Containers are for you. They allow you to separate different browsing activities into different containers, enhancing privacy and organization. Click here for a quick tutorial on using Multi-Account Containers.

For a tab maximalist, this is a game-changer. With Multi-Account Containers, you can keep your tabs organized by context, making it easier to find what you need without the clutter of unrelated tabs.

Browser window with multiple tabs open, including the Mozilla Blog, MDN Web Docs, and an Etsy shopping cart.

With these features, I hope you explore your greatest curiosities and become the most efficient version of yourself. Never lose a link again. Be a maximalist with Firefox. 😉

There are endless ways to make Firefox your own, whether you’re a tab maximalist, a minimalist or however you choose to navigate the internet. We want to know how you customize Firefox. Let us know and tag us on X or Instagram at @Firefox. 

Get Firefox

Get the browser that protects what’s important

The post Top 5 Firefox features for tab maximalists appeared first on The Mozilla Blog.

Don Martisurveillance licensing in practice

I wrote about how states should avoid free speech questions around Big Tech by establishing a licensing system for surveillance, and got some questions about how that would work.

The problem to watch out for is that state privacy regulators tend to be diligent high achiever types who aren’t afraid of doing a bunch of extra work. But what we want here is for most of the work of the licensing system to be done on the surveillance company side. The people who are getting paid by the taxpayers should spend as little time on it as possible. So here’s a possible way to do it.

  1. Pass a state law with a very general definition of surveillance, and say that anybody who surveils more than 20% of the population (to start with) needs to get a license. Appoint a surveillance licensing board.

  2. Design a surveillance licensing application, a one-page PDF. Name of company, contact person, and so on. Last form field is describe your surveillance practices in detail (attach additional pages if needed)

  3. When a company applies, put their application including the additional pages on the web, and have a public meeting.

  4. The meeting will be full of concerned citizens, NGOs, businesses that use the surveillance in some way, and other random members of the public. (Yes, people who got kicked off of Facebook because of getting hacked will show up at the Facebook meeting to complain.) Ideally this meeting would be organized in such a way that the Big Tech lawyers have to wait in a speakers’ waiting room next to random users of their clients’ platforms with devices off, need to figure this out.

  5. Realistically some speakers at the meeting will come up with something that the surveillance company left out of their application, and some will mention harmful effects of surveillance practices. The board gives the company a temporary surveillance license and tells them to re-submit. While on a temporary license they can’t sign up any new users from this state.

  6. Go to step 3. When the company cleans up their act, then the board can give them a longer term license. If they persist the board might deny them a license and that’s when a lawsuit could kick in. But most of the steps of the process have already worked.

No speech mentioned, it’s all about non-speech conduct, so very difficult for surveillance industry sockpuppet orgs to get a court to block.

Update: pricing

So how much should a surveillance license cost? For a Big Tech company with a double-digit percentage of a state’s residents in their database, say $5-10 per person surveilled.

In general the license should be priced by count of people records, so a company would pay more per person surveilled as they surveil more people. As surveillance licensing comes into effect for smaller firms, they would pay less per record, and licensing would never be required for databases of less than a certain size.

Pricing a surveillance license proportional to (n log n) would help address some of the competition and centralization concerns raised by some kinds of privacy regulation. (see The Antitrust and Privacy Interface: Lessons for Regulators from the Data  by Brijesh Pinto, D. Daniel Sokol, Feng Zhu)

Bonus links

After years of uncertainty, Google says it won’t be ‘deprecating third-party cookies’ in Chrome (not such a big deal. Before the announcement, you needed to turn off the Google Chrome ad features before using it, and after the announcement you need to do the same.)

Why Privacy Badger Opts You Out of Google’s “Privacy Sandbox” Despite sounding like a feature that protects your privacy, Privacy Sandbox ultimately protects Google’s advertising business.

Firefox’s New ‘Privacy’ Feature Actually Gives Your Data to Advertisers (All the major browsers have privacy settings you need to check, not just Google Chrome)

Don MartiSunday Internet optimism

Over on the social media sites there have been a bunch of very serious posts from very serious people explaining how surveillance advertising is here to stay and the best we can do is put some privacy-enhancing technologies on it. This sounds dismal and awful—ads according to the faufreluches so the big shots get ads for sweet cars and good jobs, retirees get precious metals scams, those with money get legit investments, those without get predatory finance, you know, all the same tricks and discrimination but with more math to make it harder to understand. So instead I’m going to do some Internet optimism today. What happens if instead of reimplementing surveillance advertising, we just get rid of it?

Step one: people start buying better stuff. If you figure out how to turn the surveillance advertising off, you start buying goods and services that you are more satisfied with (Lin et. al) and buying less overpriced crap (Mustri et al). The main reason I’m pretty confident about this effect is because of some research that hasn’t been published. If people who use ad blockers and privacy tools were making worse purchases, then someone in the surveillance business would have published research saying so.

Step two: marketers look for alternatives. If I can somehow avoid being exposed to the surveillance ads, that doesn’t mean that people still aren’t going to try to sell me stuff. But instead of surveillance ads, which let them target the most valuable possible audience for the lowest possible ad rates they have to fall back to the next most profitable options, which might be

  • spending more money on better ad-supported content

  • reviewer and influencer programs

  • content marketing

  • increase product quality

  • lower price

Those options probably have less attractive profits or predictability for the company than the surveillance ads do, or the company would have chosen them in the first place. But by removing the surveillance ad option, as a shopper you get more money to flow to more win-win options.

Step three: what happened to the ad-supported content? A lot of ad-supported content does get money from surveillance ads. It could turn out that the legit ad-supported sites end up better off, just by supply and demand. The number of available crap ad spots—that are only saleable because of surveillance—will go down. And after steps one and two, the customers will be sitting on more money, and can put some of it into subscriptions and crowdfunding. And subscription and crowdfunding models tend to send a higher percentage of the money to the content creator than advertising models do.

Of course, the market isn’t going to change because one person is harder to reach with surveillance ads. Ad reform is a collective problem, and needs tool building, education, and lobbying work.

We might be able to get some good data about this soon, thanks to the EDPB decision on Facebook ad tracking. It looks like some users are going to be able to use the exact same social site, but with random ads instead of personalized ones. When the users who picked Facebook’s non-personalized option turn out to own better stuff that they’re more satisfied with, that will help build toward a surveillance advertising ban. It’s a lot easier to justify a ban when it’s not about balancing harms and benefits, but more about stacking consumer benefits on top of the existing privacy and national security benefits.

Related

privacy: I’d buy that for 20 dollars! How much does the content business depend on surveillance advertising anyway?

turn off advertising features in Firefox

Google Chrome ad features checklist

turn off advertising measurement in Apple Safari

There is almost enough material for a PETs are going just great blog by now… Some ad tech vendors are pulling back from Google’s Privacy Sandbox amid uncertainty Ad execs sound the alarm over Google’s risky Privacy Sandbox terms Publishers’ Privacy Sandbox pauses settle into a deep freeze following reports of poor performance ‘It’s in Google’s best interest’: Sources urge more formal Privacy Sandbox legal terms

Bonus links

Google Is Mind-Bogglingly Bad Why not keep agreeing with meaningless metrics instead of fixing the problems? (Result of the search quality crisis: The Real Money In Modern ‘Journalism’ Now Involves Filling The Internet With ‘AI’-Generated Garbage)

Academic Publishing is a Lucrative Scam I think the reason more academics haven’t already migrated to Diamond Open Access journals is that there are relatively few such journals. The reason for that is that although there are lots of people talking about Diamond Open Access there are many fewer actually taking steps to implement it. The initiative mentioned in the Guardian article is therefore very welcome. Although I think in the long run this transition is inevitable, it won’t happen by itself. (Links to Academic journals are a lucrative scam – and we’re determined to change that)

USPS shared customer postal addresses with Meta, LinkedIn and Snap | TechCrunch On Wednesday, the USPS said it addressed the issue and stopped the practice, claiming that it was unaware of it. (via schoolinfosystem.org)

Data Broker Files: How data brokers sell our location data and jeopardise national security, Under Surveillance: Location Data Jeopardizes German Security… We received the data as a free sample, which was intended to serve as a preview for a monthly subscription: For around USD 14,000, the broker offers a continuous stream of fresh location data from millions of smartphones around the world, almost in real time.

Firefox Nightly100% WebDriver BiDi and 101% more! – These Weeks in Firefox: Issue 164

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Sebastian Zartner [:sebo]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
    • As part of follow ups to the Manifest V3 improvements:
      • Investigated and fixed issues related to the event page lifetime and event page ability to be respawned by persisted event listeners, and improve Manifest V3 background script reliability (Bug 1905505, Bug 1905153, Bug 1830144)
      • Fixed a bug related to the extension button’s “attention dot”, which was making it always shown for Manifest V3 extensions with an activeTab permission (Bug 1851083
    • Fixed theme API internal issue that could make the add-on database to grow unnecessarily (Bug 1830136)
    • Fixed zooming on the extension devtools panels (Bug 1583716)
  • Thanks to Gregory Pappas for contributing this fix!
  • Fixed extension sidebar bug leading extension sidebar to always be open automatically on add-on updates and reloads (Bug 1892667)

 

WebExtension APIs
  • Fixed a downloads API regression that was preventing files containing % character from being saved successfully (Bug 1898498)
  • From Firefox 129, declarativeNetRequest API rules will be able to intercept and modify web requests originate from web pages loaded from a file: URI (Bug 1887869)

 

Addon Manager & about:addons
  • Checkbox that allows users to grant access to private browsing windows as part of the install flow has been moved to the first install dialog in Firefox 129 (Bug 1842832).

DevTools

DevTools Toolbox
  • Sebastian Zartner added warning for inactive CSS for resize (#1551579), box-sizing (#1583894), float-related (#1551580) and table-related (#1868788) properties in the Rules view
  • Valentin Gosu fixed a NetworkError that could happen for fetch calls when Responsive Design Mode was enabled (#1885308)
  • Brad Werth fixed a Browser crash that was occurring when displaying the highlighter for flexbox items (#1901233)
  • Arai fixed Debugger pretty-printing when there was escape sequence inside template literals (#1905653)
  • Alexandre is still working on improving the JS tracer. The max depth can now be set through a pref (#1903791), and when recording to the profiler, the stack chart panel is selected instead of the call tree (#1903792)
  • Julian made network blocking from DevTools actually block the request, not only the response (#1756770)
  • Nicolas fixed an issue in the Rules view that could break the style of the page when writing property values with quotes (#1904752)
  • Alexandre fixed a nasty bug that could prevent DevTools to open (#1903980)
  • Nicolas made some interactive elements in the Inspector keyboard focusable:
    • Stylesheet location link (#1844054)
    • Shape editor button (#1844264)
    • Flex/Grid highlighter toggle button (#1901508)
    • Container query “Jump to container node” button (#1901713)
  • Nicolas Watch expression input is missing focus indicator (#1904339)
  • Nicolas landed a few patches to start supporting high Contrast Mode in DevTools (#1904764, #1904765, #1904839)
  • Nicolas Indicate @starting-style CSS custom properties value in var() tooltip (#1897931)
  • Nicolas Don’t retrieve @starting-style rules in the Rules view until Bug 1905035 is fixed (#1906228)
  • Nicolas added information for registered properties (aka @property) in the Computed panel:
    • Show initial value of registered properties (#1900069)
    • Show invalid at computed-value time declarations (#1900070)
  • Hubert is almost done migrating the Debugger to CodeMirror 6. All major features are now supported and we’re only looking at smaller bugs and test failures before enabling it on Nightly (#1904488)
WebDriver BiDi
  • External:
    • Thanks to James Hendry for removing the deprecated “desiredCapabilities” and “requiredCapabilities” from geckodriver (#1823907)
  • Thanks to :haik and to everyone involved on Bug 1893921 for solving a sandboxing issue with the latest macos arm workers provided for Github actions. This was preventing several projects using Github actions to run their CI on Firefox.
  • Updates:
    • Sasha and Henrik implemented the network.setCacheBehavior command, which allows to disable the network cache either globally or for a set of top-level browsing contexts. This is particularly useful to ensure consistent network behavior during repeated tests (#1901032 and #1906100)
    • Sasha added support for the “originalOpener” field in BrowsingContextInfo, which allows clients to find the opener of a given browsing context, even if it was opened using “rel=noopener”. (#1898004)
    • Julian added support for all arguments to the “network.provideResponse” command, for requests blocked in the “beforeRequestSent” phase. Clients can now build a custom response to any request by providing its body, headers, cookies, status code and status phrase. This way users can easily mock backend responses without having to change their server. (#1853882)
    • Sasha added support for network events using data URLs. At the moment we only emit events for data URLs requests used to load a document, but we will follow up to add support for all requests to data URLs. (#1805176)
    • Henrik implemented the handler field of the browsingContext.userPromptOpened event which will indicate the configured prompt handler for the opened prompt (eg “accept”, “dismiss” etc…). (#1904822)
    • Henrik added support for “beforeunload” prompts, which can now be handled as any other prompt in WebDriver BiDi sessions (they are still automatically dismissed in WebDriver classic sessions). (#1824220)
    • Henrik added support for the “promptUnload” argument to the browsingContext.close command, which allows to skip beforeunload prompts. (#1862380)
    • Henrik updated the default value of the “remote.active-protocols” preference to “1”, which means that from now on, CDP (Chrome DevTools Protocol) is no longer enabled by default. If clients still want to enable it, they can either set this preference to “2” (CDP only) or “3” (WebDriver BiDi + CDP). This is a temporary step before CDP is fully disabled and removed from Firefox around the end of the year. (#1882089)
  • Bug fixes (read-only):
    • Julian fixed a bug with network.continueRequest where you could not provide multiple values for a single header name (#1904379)
    • Julian fixed an issue with authentication flows where we would emit too many network.responseCompleted events. This is still being discussed on the spec side and might change in the future, but for now having a single event to mark the end of the authentication is easier to handle for WebDriver BiDi clients. (#1906106)
    • Henrik fixed a bug in browsingContext.navigate where the command would resolve too early if a script was performing a history navigation via a “beforeunload” event listener. (#1879163)
    • Sasha updated browsingContext.userPromptOpened to always contain the “defaultValue” for prompts of type “prompt”. (#1859814)
    • Henrik updated the browser.close command to silently discard all beforeunload prompts when closing the browser. (#1873196)

Lint, Docs and Workflow

Migration Improvements

  • We just closed the metabug for creating the single-file archive! This is because we now:
    • Create a single-file archive (optionally encrypted)
    • The single-file archive is a specially prepared HTML page that provides instructions on how to recover from it when viewed in Firefox, and download links for Firefox when viewed in other browsers.
    • Moves the single-file archive into a user-configured directory
    • Generates the backup in the background, relatively quickly. Right now, it’s created maximum once an hour when there’s at least 5 minutes of user idle time.
  • The team is focusing on getting the UI for managing and configuring backups completed, as well as doing cleanup, measurement and maintenance bugs

Search and Navigation

Search
  • Moritz fixed a bug to not allow for empty search using search bar one-off buttons. Bug 1904014,
  • Moritz is helping with post search-config-v2 clean up and removed icons from extensions that’s no longer used, and removed SearchEngine.searchForm Bug 1895873, Bug 1903247

 

Scotch Bonnet / Address Bar Refresh initiative
Address bar
  • Yazan fixed accessibility issue where urlbar-zoom-button announcement did not indicate the zoom can be reset to 100%. Bug 1882564
  • Daisuke fixed the search mode chiclet close button so it’s visible in dark mode.  Bug 1905572

 

Suggest
  • Drew enabled Yelp suggestions by default in 129 for users enrolled in Suggest.  Bug 906185

 

Storybook/Reusable Components

Lee.isaacy for fixing Bug 1904113 – Add space tokens to moz-message-bar.css