Firefox Nightly: New Address Bar Updates are Here – These Weeks in Firefox: Issue 172

Highlights

  • Our newly updated address bar, also known as “Scotch Bonnet”, is available in Nightly builds! 🎉
  • Weather suggestions have also been enabled in Nightly, as part of Firefox Suggest. The feature is US-only at this time. 🌧️
  • robwu fixed a regression introduced in Firefox 132 that was triggering the default built-in theme to be re-enabled on every browser startup – Bug 1928082
  • Love Firefox Profiler and DevTools? Check out the latest DevTools updates and see how they can better help you track down issues.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • abhijeetchawla[:ff2400t]
  • Collin Richards
  • John Bieling (:TbSync)
  • kernp25

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As part of Bug 1928082, the new test_default_theme.js xpcshell test now fails if the default theme version declared in the manifest and the one passed to maybeInstallBuiltinAddon in the XPIProvider startup call fall out of sync
WebExtensions Framework
  • Fixed a leak in ext-theme that was hit when an extension set a per-window theme using the theme WebExtensions API – Bug 1579943
  • The ExtensionPolicyService content script helper methods have been tweaked to fix a low-frequency crash in ExtensionPolicyService::ExecuteContentScripts – Bug 1916569
  • Fixed an unexpected issue with loading moz-extension URLs as subframes of the background page for extensions loaded temporarily from a directory – Bug 1926106
  • Prevented window.close() calls originating from a WebExtension’s registered DevTools panel from closing the browser chrome window (when there is only a single tab open) – Bug 1926373
    • Thanks to Becca King for contributing this fix 🎉
  • Native messaging support for snap-packaged Firefox (default on Ubuntu):
    • Thanks to Alexandre Lissy for working on finalizing the patches from Bug 1661935
    • Fixed a regression hit by the snap-packaged Firefox 133 build – Bug 1930119
WebExtension APIs
  • Fixed a bug that prevented declarativeNetRequest API dynamic rules from working correctly after a browser restart for extensions without any registered static rules – Bug 1921353

DevTools

DevTools Toolbox

DevTools debugger log points being marked in a profiler instance

Lint, Docs and Workflow

  • A change to the mozilla/reject-addtask-only rule has just landed on Autoland.
    • This makes it so that when the rule flags a .only() call in a test, only the .only() is highlighted, not the whole test:

a before screenshot of the Firefox code linter highlighting a whole test

an after screenshot of the Firefox code linter highlighting the ".only" part of a test
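For illustration, here is a minimal, runnable sketch of the pattern the rule rejects; the `add_task` stub below is made up and only stands in for the real mochitest/xpcshell harness:

```javascript
// Illustrative stub of the test harness's `add_task` API (not the real
// implementation), just to show the `.only()` pattern the lint rule flags.
const tasks = [];
function add_task(fn) {
  const task = { fn, only: false };
  tasks.push(task);
  return {
    // Calling `.only()` restricts the run to this one task -- useful while
    // debugging locally, but landing it would skip every other test,
    // which is why mozilla/reject-addtask-only exists.
    only() {
      task.only = true;
      return task;
    },
  };
}

add_task(async function test_example() {
  // ... test body ...
}).only(); // <- the rule now highlights just this `.only()` call

const flagged = tasks.filter(t => t.only);
```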

Migration Improvements

New Tab Page

  • The team is working on some new section layout and organization variations – specifically, we’re testing whether or not recommended stories should be grouped into various configurable topic sections. Stay tuned!

Picture-in-Picture

  • Thanks to contributor kernp25 for:
    • Updating our Dailymotion site-specific wrapper (bug), which also happens to fix broken PiP captions (bug).
    • Updating our videojs site-specific wrapper (bug) to recognize multiple cue elements. This fixes PiP captions rendering incorrectly on Windows for some sites.

Search and Navigation

Firefox Nightly: Celebrating 20 years of Firefox – These Weeks in Firefox: Issue 171

Highlights

  • Firefox is turning 20 years old! Here’s a sneak peek of what’s to come for the browser.
  • We completed work on the new messaging surface for the AppMenu / FxA avatar menu. There’s a new FXA_ACCOUNTS_APPMENU_PROTECT_BROWSING_DATA entry in about:asrouter for people who’d like to try it. Here’s another variation:

a message with an illustration of a cute fox sitting on a cloud, as well as a sign-up button, encouraging users to create a Mozilla account

  • The experiment will also test new copy for the state of the sign-in button when this message is dismissed:

  • Alexandre Poirot added an option in the Debugger Sources panel to control the visibility of WebExtension content scripts (#1698068)

  • Hubert Boma Manilla improved the Debugger by adding the paused line location in the “paused” section, and making it a live region so it’s announced to screen readers when pausing/stepping (#1843320)

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • abhijeetchawla[:ff2400t]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • In Firefox >= 133, WebExtensions sidebar panels can close themselves using window.close() (Bug 1921631)
    • Thanks to Becca King for contributing this enhancement to the WebExtensions sidebar panels 🎉
WebExtension APIs
  • A new telemetry probe related to the storage.sync quota has been introduced in Firefox 133 (Bug 1915183). The new probe is meant to help plan the replacement of the deprecated Kinto-based backend with a Rust-based storage.sync implementation in Firefox for Android (similar to the one introduced in Firefox for desktop in version 79).

DevTools

DevTools Toolbox

Lint, Docs and Workflow

  • The source documentation generate and upload tasks on CI will now output specific TEST-UNEXPECTED-FAILURE lines for new warnings/errors.
    • Running ./mach doc locally should generally do the same.
    • The previous “max n warnings” has been replaced by an allow list of current warnings/errors.
  • Flat config and ESLint v9 support have now been added to eslint-plugin-mozilla.
    • This is a big step in preparing to switch mozilla-central over to the new flat configuration & then v9.
  • hjones upgraded stylelint to the latest version and swapped its plugins to use ES modules.

New Tab Page

  • The New Tab team is analyzing the results from an experiment that tried different layouts, to see how it impacted usage. Our Data Scientists are poring over the data to help inform design directions moving forward.
  • Another experiment is primed to run once Firefox 132 fully ships to release – the new “big rectangle” vertical widget will be tested to see whether or not users find this new affordance useful.
  • Work completed on the Fakespot experiment that we’re going to be running for Firefox 133 in December. We’ll be using the vertical widget to display products identified as high-quality, with reliable reviews.

Search and Navigation

  • 2024 Address Bar Scotch Bonnet Project
    • Various bugs were fixed by Mandy, Dale, and Yazan
      • The quick actions search mode preview was formatted incorrectly (1923550)
      • The dedicated Search button was getting stuck after being clicked twice (1913193)
      • About chiclets were not showing up when Scotch Bonnet is enabled (1925643)
      • Tab-to-search was not shown when Scotch Bonnet is enabled (1925129)
      • The search mode switcher now works when the Search Service fails (1906541)
      • Strings for the search mode switcher button were localized (1924228)
      • The secondary actions UX was updated so actions are shown between the heuristic result and the first search suggestion (1922570)
    • To try out these Scotch Bonnet features, enable the pref browser.urlbar.scotchBonnet.enableOverride
  • Address Bar
    • Moritz added deduplication of bookmark and history results that share the same URL but differ in their ref (fragment), behind the pref browser.urlbar.deduplication.enabled (1924968)
    • Daisuke fixed overlapping remote tab text in compact mode (1924911)
    • Richardscollin, a volunteer contributor, fixed focus handling so that pressing Esc while the address bar is selected now returns focus to the window (1086524)
    • Daisuke fixed the “Not Secure” label being illegible when the window width is too small (1925332)
  • Suggest
    • adw has been working on city-based weather suggestions (1921126, 1925734, 1925735, 1927010)
    • adw is working on integrating machine learning (MLSuggest) with UrlbarProviderQuickSuggest (1926381)
  • Search
    • Moritz landed a patch to localize the keyword for the Wikipedia search engine (1687153, 1925735)
  • Places
    • Yazan landed a favicon improvement to how Firefox picks the best favicon for page-icon URLs without a path. (1664001)
    • Mak landed a patch that significantly improved performance and memory usage when checking for visited URIs, by executing a single query for the entire batch of URIs instead of running one query per URI. (1594368)
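The batching idea behind that Places change can be sketched as follows; the function names and the mock "database" are assumptions for illustration, not the actual Places code:

```javascript
// Sketch (assumed names): checking visited state for a whole batch of URIs
// with one parameterized query instead of issuing one query per URI.
function buildBatchQuery(uris) {
  const placeholders = uris.map(() => "?").join(",");
  return {
    sql: `SELECT url FROM moz_places WHERE url IN (${placeholders})`,
    params: uris,
  };
}

// Mock database standing in for the real connection: a Set of visited URLs.
function isVisitedBatch(visitedDb, uris) {
  const { params } = buildBatchQuery(uris); // one query for the whole batch
  const found = new Set(params.filter(u => visitedDb.has(u)));
  return uris.map(u => found.has(u));
}
```

A single `IN (...)` query amortizes the per-query overhead (parsing, index lookups, round trips) across the whole batch, which is where the performance and memory win comes from.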

Firefox Nightly: Experimental address bar deduplication, better auto-open Picture-in-Picture, and more – These Weeks in Firefox: Issue 170

Highlights

  • A new messaging surface for the AppMenu and PXI menu is landing imminently so that we can experiment with some messages to help users understand the value of signing up for / signing into a Mozilla account

a message with a cute fox illustration and a sign-up button in Firefox's app menu encouraging users to create a Mozilla account

  • mconley landed a patch to make the heuristics for the automatic Picture-in-Picture feature a bit smarter. This should make it less likely to auto-pip silent or small videos.
  • Moritz fixed an older bug for the address bar where duplicate Google Docs results had been appearing in the address bar dropdown. This fix is currently behind a disabled pref – people are free to test the behavior by flipping browser.urlbar.deduplication.enabled to true, and feedback is welcome. We’re still investigating UI treatments to eventually show the duplicates. (1389229)
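The core deduplication idea can be shown in a few lines; the function name is invented for illustration, and the real implementation behind the pref is more involved:

```javascript
// Minimal sketch: treat address bar results whose URLs differ only in the
// fragment (#ref) as duplicates, keeping the first occurrence.
function dedupeByRef(urls) {
  const seen = new Set();
  return urls.filter(url => {
    const key = url.split("#")[0]; // strip the ref before comparing
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```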

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
WebExtensions Framework
  • Thanks to Florian for moving WebExtensions and AddonsManager telemetry probes away from the legacy telemetry API (Bug 1920073, Bug 1923015)
WebExtension APIs
  • The cookies API now sorts cookies according to RFC 6265 (Bug 1818968), fixing a small Chrome incompatibility issue
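The ordering RFC 6265 prescribes (longer paths first; among equal-length paths, earlier creation time first) can be sketched as a comparator. This is an illustrative snippet, not the browser's implementation:

```javascript
// RFC 6265 §5.4 ordering: cookies with longer paths are listed first, and
// among cookies with equal-length paths, earlier creation times come first.
function rfc6265Sort(cookies) {
  return [...cookies].sort(
    (a, b) => b.path.length - a.path.length || a.creationTime - b.creationTime
  );
}
```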

Migration Improvements

New Tab Page

  • We will be running an experiment in December featuring a Fakespot feed in the vertical list on newtab. This list will show products that have been identified as high-quality, and with reliable product reviews. They will link to more detailed Fakespot product pages that will give a breakdown of the product analysis.

Picture-in-Picture

  • Special shout-out to volunteer contributor def00111 who has been helping out with our site-specific wrappers!

Search and Navigation

  • 2024 Address Bar Updates (previously known as “Project Scotch Bonnet”)
    • Intuitive Search Keywords
      • Mandy added new telemetry related to intuitive search keywords (1919180)
      • Mandy also landed a patch to list the keywords in the results panel when a user types `@` (1921549)
    • Unified Search Button
      • Daisuke refined our telemetry so that user interactions with the unified search button are differentiated from user interactions with the original one-off search button row (1919857)
    • Persisted Search
      • James fixed a bug related to persisting search terms for non-default search engines (1921092)
    • Search Config v2
      • Moritz landed a patch that streamlines how we handle search parameter names for search engine URLs (1895934)
    • Search & Suggest
      • Nan landed a patch that allows us to integrate a user-interest-based relevance ranking into the address bar suggestions we receive from our Merino server (1923187)
    • Places Database
      • Daisuke landed a series of patches so that the Places database no longer fetches any icons over the network. Icon fetching is now delegated to consumers, which have better knowledge of how to do it safely. (1894633)
    • Favicons
      • Yazan landed several patches related to favicons which improve the way we pick the best favicon, avoiding excessive downscaling of large favicons that could make them unrecognizable. (1494016, 1556396, 1923175)

Mozilla Thunderbird: Maximize Your Day: Make Important Messages Stand Out with Filters

For the past two decades, I’ve been trying to get on Jeopardy. This is harder than answering a Final Jeopardy question in your toughest subject. Roughly a tenth of people who take the exam get invited to auditions, and only a tenth of those who make it to auditions make it to the Contestant Pool and into the show. During this time, there are two emails you DON’T want to miss: the first saying you made it to auditions, and the second that you’re in the Contestant Pool. (This second email comes with your contestant form, and yes, I have my short, fun anecdotes to share with host Ken Jennings ready to go.)

The next time I audition, reader, I am eliminating refreshing my inbox every five minutes. Instead, I’ll use Thunderbird Filters to make any emails from the Jeopardy Contestant department STAND OUT.

Whether you’re hoping to be called up for a game show, waiting on important life news, or otherwise needing to be alert, Thunderbird is here to help you out.

Make Important Messages Stand Out with Filters

Most of our previous posts have focused on cleaning out your inbox. Now, in addition to showing you how Thunderbird can clear visual and mental clutter out of the way, we’re using filters to make important messages stand out.

  1. Click the Application menu button, then Tools, followed by Message Filters.
  2. Click New. A Filter Rules dialog box will appear.
  3. In the “Filter Name” field, type a name for your filter.
  4. Under “Apply filter when”, check one or both of the options. (You probably won’t want to change the default “Getting New Mail” and “Manually Run” options.)
  5. In the “Getting New Mail:” dropdown menu, choose either Filter before Junk Classification or Filter after Junk Classification. (As for me, I’m choosing Filter before Junk Classification. Just in case.)
  6. Choose a property, a test and a value for each rule you want to apply:
  • A property is a message element or characteristic such as “Subject” or “From”
  • A test is a check on the property, such as “contains” or “is in my address book”
  • A value completes the test with a specific detail, such as an email address or keyword
  7. Choose one or more actions for messages that meet those criteria. (For extra caution, I put THREE actions on my sample filter. You might only need one!)
<figcaption class="wp-element-caption">(Note – not the actual Jeopardy addresses!)</figcaption>
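The property / test / value structure above can be modeled in a few lines. This is an illustrative sketch, not Thunderbird's internals, and the address below is made up:

```javascript
// Toy model of a filter rule: a property names a message field, a test
// names a predicate, and a value completes the predicate.
const TESTS = {
  contains: (field, value) => field.includes(value),
  is: (field, value) => field === value,
};

function ruleMatches(message, rule) {
  return TESTS[rule.test](message[rule.property], rule.value);
}

// Example rule: "From contains @jeopardy.example" (address invented).
const rule = { property: "from", test: "contains", value: "@jeopardy.example" };
```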

Find (and Filter) Your Important Messages

Thunderbird also lets you create a filter directly from a message. Say you’re organizing your inbox and you see a message you don’t want to miss in the future. Highlight the email, and click on the Message menu button. Scroll down to and click on ‘Create Filter from Message.’ This will open a New Filter window, automatically filled with the sender’s address. Add any other properties, tests, or values, as above. Choose your actions, name your filter, and ta-da! Your new filter will help you know when that next important email arrives.

Resources

As with last month’s article, this post was inspired by a Mastodon post (sadly, this one was deleted, but thank you, original poster!). Many thanks to our amazing Knowledge Base writers at Mozilla Support who wrote our guide to filters. Also, thanks to Martin Brinkmann and his ghacks website for this and many other helpful Thunderbird guides!

Getting Started with Filters Mozilla Support article: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters

How to Make Important Messages Stick Out in Thunderbird: https://www.ghacks.net/2022/12/02/how-to-make-important-emails-stick-out-in-thunderbird/

The post Maximize Your Day: Make Important Messages Stand Out with Filters appeared first on The Thunderbird Blog.

The Mozilla Blog: 20 years of Firefox: How a community project changed the web

What was browsing the web like in 2004? People said things like “surfing the internet,” for starters. Excessive pop-up ads were annoying but they felt like the norm. The search bar and multiple tabs did not exist, and there seemed to be only one browser in sight. That is, until Firefox 1.0 arrived and gave it real competition.

Built by a group of passionate developers who believed the web should be open, safe and not controlled by a single tech giant, Firefox became the choice for anyone who wanted to experience the internet differently. Millions made the switch, and the web felt bigger. 

As the internet started to evolve, so did Firefox — becoming a symbol of open innovation, digital privacy and, above all, the ability to experience the web on your own terms. Here are some key moments of the last 20 years of Firefox.

2004: Firefox 1.0 launch

Firefox 1.0 launched on Nov. 9, 2004. As an open-source project, Firefox was developed by a global community of volunteers who collaborated to make a browser that’s more secure, user-friendly and customizable. With built-in pop-up blocking, users could finally decide when and if they wanted to see pop-ups. Firefox introduced tabbed browsing, which let people open multiple sites in one window. It also made online safety a priority, with fraud protection to guard against phishing and spoofing. 

<figcaption class="wp-element-caption">On Dec. 15, 2004, Firefox’s community-funded, two-page ad appeared in The New York Times, featuring the names of thousands of supporters and declaring to millions that a faster, safer, and more open browser was here to stay.</figcaption>

2005: Mozilla Developer Center

Mozilla launched the Mozilla Developer Center (now MDN Web Docs) as a hub for web standards and developer resources. Today, MDN remains a trusted resource maintained by Mozilla and a global community of contributors.

A crop circle of the Firefox logo.<figcaption class="wp-element-caption">Local Firefox fans in Oregon made a Firefox crop circle in an oat field in August 2006. </figcaption>

2007: Open-source community support

The SUMO (support.mozilla.org) platform was originally built in 2007 to provide an open-source community support channel for users, and to help us collaborate more effectively with our volunteer contributors. Over the years, SUMO has become a powerful platform that helps users get the most out of Firefox, provides opportunities for users to connect and learn more from each other, and allows us to gather important insights – all powered by our community of contributors. Six active contributors have been with us since day one (shout outs to cor-el, jscher2000, James, mozbrowser, AliceWyman and marsf) and 16 contributors have been here for 15+ years!

<figcaption class="wp-element-caption">A Mozilla contributor story by Chris Hoffman.</figcaption>

2008: A Guinness World Record

Firefox 3.0 made history by setting a Guinness World Record for the most software downloads – over 8 million – in a single day. The event known as Download Day was celebrated across Mozilla communities worldwide, marking a moment of pride for developers, contributors and fans. 

2010: Firefox goes mobile

Firefox made its debut on mobile on Nokia N900. It brought beloved features like tabbed browsing, the Awesome Bar, and Weave Sync, allowing users to sync between desktop and mobile. It also became the first mobile browser to support add-ons, giving users the freedom to customize their browsing on the go.

A blue denim pocket with an orange fox tail sticking out from the top.<figcaption class="wp-element-caption">Pocketfox by Yaroslaff Chekunov, the winner of the “Firefox Goes Mobile” design challenge. </figcaption>

2013: Hello Chrome, it’s Firefox calling

Firefox made a major leap with WebRTC (Web Real-Time Communication), allowing users to make video and voice calls directly between Firefox and Chrome without needing plugins. This cross-browser communication was a breakthrough for open web standards, making it easier for users to connect seamlessly. Firefox also introduced RTCPeerConnection, enabling users to share files during video calls, further enhancing online collaboration.

2014: Privacy on the web

Firefox has shipped a steady drumbeat of anti-tracking features over the years, greatly increasing the privacy of the web. The impact has gone beyond just Firefox users, as online privacy is now a table-stakes deliverable for all browsers.

  • 2014: Block trackers from loading
  • 2016: Containers can isolate sites within Firefox
  • 2018: Enhanced tracking protection blocks tracking cookies (more on this below)
  • 2020: Significant improvements to prevent sites from “fingerprinting” users
  • 2022: Total Cookie Protection isolates all third party tracking cookies (more on this below)

2017: Twice as fast, 30% less memory

The firefox logo on an abstract background in different shades of blue. Text: The new Firefox. Fast for Good

Firefox took a huge step forward with Firefox Quantum, an update that made browsing twice as fast. Thanks to a new engine built using Mozilla’s Rust programming language, Firefox Quantum made pages load faster and used 30% less memory than Chrome. It was all about speed and efficiency, letting users browse quicker without slowing down their computer.

2018: Firefox blocks trackers 

Enhanced Tracking Protection (ETP) was introduced as a new feature that blocks third-party cookies, the primary tool used by companies to track users across websites. ETP made it simple for users to protect their privacy by automatically blocking trackers while ensuring websites still functioned smoothly. Initially an optional feature, ETP became the default setting by early 2019, marking a significant step in giving users better privacy without sacrificing browsing experience.

2019: Advocacy for media formats not encumbered by patents


Mozilla played a significant role in the standardization and adoption of AV1 and AVIF as part of its commitment to open, royalty-free and high-quality media standards for the web. Shipping early support in Firefox for AV1 and AVIF, along with Mozilla’s advocacy, accelerated adoption by platforms like YouTube, Netflix and Twitch. The result is a next-generation, royalty-free video codec that provides high-quality video compression without licensing fees, making it an open and accessible choice for the entire web.

2020: Adobe Flash is discontinued

Adobe retired Flash on Dec. 31, 2020. Mozilla and Firefox played a pivotal role in the end of Adobe Flash by leading the transition toward more secure, performant and open web standards like HTML5, WebGL and WebAssembly. As Firefox and other browsers adopted HTML5, it helped establish these as viable alternatives to Flash. This shift supported more secure and efficient ways to deliver multimedia content, minimizing the web’s reliance on proprietary plugins like Flash.

2022: Total Cookie Protection 

Firefox took privacy further with Total Cookie Protection (TCP), building on the foundation of ETP. Cookies, while helpful for site-specific tasks like keeping you logged in, can also be used by advertisers to track you across multiple sites. TCP isolates cookies by keeping them locked to the site they came from, preventing cross-site tracking. Inspired by the Tor Browser’s privacy features, Firefox’s approach integrates this tool directly into ETP, giving users more control over their data and stopping trackers in their tracks.

2024: 20 years of Firefox

These milestones are just a snapshot of Firefox’s story, full of many chapters that have shaped the web as we know it. Today, Firefox remains at the forefront of championing privacy, open innovation and choice. And while the last 20 years have been transformative, the best is yet to come.

<figcaption class="wp-element-caption">From left to right: Stuart Parmenter, Tracy Walker, Scott McGregor, Ben Goodger, Myk Melez, Chris Hofmann, Asa Dotzler, Johnny Stenbeck, Rafael Ebron, Jay Patel, Vlad Vucecevic and Bryan Ryner. Sitting, from left to right: Chase Philips, David Baron, Mitchell Baker, Brendan Eich, Dan Mosedale, Chris Beard and Doug Turner in 2004. Credit: Mozilla</figcaption>
<figcaption class="wp-element-caption">Mozillians and Foxy in Dublin, Ireland in August 2024. Credit: Mozilla</figcaption>

Get Firefox

Get the browser that protects what’s important

The post 20 years of Firefox: How a community project changed the web appeared first on The Mozilla Blog.

The Mozilla Blog: Charging ahead on AI openness and safety

On the official “road to the French Government’s AI Action Summit,” Mozilla and Columbia University’s Institute of Global Politics are bringing together AI experts and practitioners to advance AI safety approaches that embody the values of open source.

On Tuesday in San Francisco, Mozilla and Columbia University’s Institute of Global Politics will hold the Columbia Convening on AI Openness and Safety. The convening, which takes place on the eve of the meeting of the International Network of AI Safety Institutes, will bring together leading researchers and practitioners to advance practical approaches to AI safety that embody the values of openness, transparency, community-centeredness and pragmatism. The Convening seeks to make these values actionable, and to demonstrate the power of centering pluralism in AI safety to ultimately empower developers to create safer AI systems.

The Columbia Convening series started in October 2023 before the UK Safety Summit, where over 1,800 leading experts and community members jointly stated in an open letter coordinated by Mozilla and Columbia that “when it comes to AI Safety and Security, openness is an antidote not a poison.” In February 2024, the first Columbia Convening was held with this community to explore the complexities of openness in AI. It culminated in a collective framework characterizing the dimensions of openness throughout the stack of foundation models.

This second convening holds particular significance as an official event on the road to the AI Action Summit, to be held in France in February 2025. The outputs and conclusions from the collective work will directly shape the agenda and actions for the Summit, offering a crucial opportunity to foreground openness, pluralism and practicality in high-level conversations on AI safety.

The timing is particularly relevant as the open ecosystem gains unprecedented momentum among AI practitioners. Open models now cover a large range of modalities and sizes with performance almost on par with the best closed models, making them suitable for most AI use cases. This growth is reflected in the numbers: Hugging Face reported an 880% increase in the number of generative AI model repositories in two years, from 160,000 to 1.57 million. In the private sector, according to a 2024 study by the investment firm a16z, 46% of Fortune 500 company leaders say they strongly prefer to leverage open source models.

In this context, many researchers, policymakers and companies are embracing openness in AI as a benefit to safety, rather than a risk. There is also an increased recognition that safety is as much a system property as a model property (if not more so), making it critical to extend open safety research and tooling to address risks arising at other stages of the AI development lifecycle.

The technical and research communities invested in openness in AI systems have been developing tools to make AI safer for years — including building better evaluations and benchmarks, deploying content moderation systems, and creating clear documentation for datasets and AI models. This second Columbia Convening seeks to address the needs of these AI systems developers to ensure the safe and trustworthy deployment of their systems, and to accelerate building safety tools, systems, and interventions that incorporate and reflect the values of openness.

Working with a group of leading researchers and practitioners, the convening is structured around five key tracks:

  1. What’s missing from taxonomies of harm and safety definitions? The convening will examine gaps in popular taxonomies of harms and explore what notions of safety popularized by governments and big tech companies fail to capture, working to put critical concerns back on the agenda.
  2. Safety tooling in open AI stacks. As the ecosystem of open source tools for AI safety continues to grow, developers need better ways to navigate it. This work will focus on mapping technical interventions and related tooling, and will help identify gaps that need to be addressed for safer system deployment.
  3. The future of content safety classifiers. This discussion will chart a future roadmap for foundation models based on open source content safety classifiers, addressing key questions, necessary resources, and research agenda requirements, while drawing insights from past and current classifier system deployments. Participants will explore gaps in the content safety filtering ecosystem, considering both developer needs and future technological developments.
  4. Agentic risks for AI systems interfacing with the web. With growing interest in “agentic applications,” participants will work toward a robust working definition and map the specific needs of AI-system developers in developing safe agentic systems, while identifying current gaps to address.
  5. Participatory inputs in safety systems. The convening will examine how participatory inputs and democratic engagement can support safety tools and systems throughout development and deployment pipelines, making them more pluralistic and better adapted to specific communities and contexts.

Through these tracks, the convening will develop a community-informed research agenda at the intersection of safety and openness in AI, which will inform the AI Action Summit. In keeping with the principles of openness and working in public, we look forward to sharing our work on these issues.

The post Charging ahead on AI openness and safety appeared first on The Mozilla Blog.

The Mozilla Blog: A civic tech creative on modernizing government sites, MySpace coding and pre-internet memories

A person wearing a blue blazer, smiling at the camera, with yellow grid background and decorative icons in orange and purple speech bubbles.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and what reclaiming the internet really looks like.

This month, we caught up with Senongo Akpem, a creative in civic tech. He’s currently VP of design at Nava, a public benefit corporation that takes a human-centered approach to modernizing government technology and making it more accessible. We talked to him about his MySpace coding days, his fascination with the Internet Archive and why he thinks smart design might just be the bridge we need between government and the people it serves.

What is your favorite corner of the internet? 

The Internet Archive. It’s stunning how much of the Western world’s knowledge is captured there, metadata and all. You can spend hours examining typography choices in old travel magazines, or old-school VHS-quality shows from Japan, or an intro to modern architecture that was written in 1962.  

What is an internet deep dive that you can’t wait to jump back into?

There are a growing number of sites like The People Say that act as research indexes and databases to capture the voices of the public. One of Nava’s core strengths is our research practice, which includes human-centered design. We strive to speak to the same priority populations when we conduct research with our government partners in the benefits delivery and health care spaces. 

I’m eager to get back into the data and read more of the stories in there. 

What is the one tab you always regret closing?

Semafor Africa has a great newsletter that explores in detail the political, social and cultural news across the continent. In one post, you might read about clean energy projects and the cost of their capital investments. In another, you might read about the backstory of an Africa Cup of Nations (AFCON) match delay. 

There are so many complex, 21st century stories to be told about the African continent.

What can you not stop talking about on the internet right now?

For the past year, I’ve been part of a Nava team working on an effort to modernize Grants.gov. Grants.gov is the front door for grants across the federal government, and disburses more than $300 billion (yes, that’s a B!!) in grants throughout the country every year. These grants go to a range of grantees from small, community-based organizations to large, national nonprofits. Nava also supports the Office of Grants to help ensure the federal government doesn’t underserve any communities. 

I’ve mainly been leading strategic branding and communications efforts on the project, which often means nerding out with our government partners and coworkers on things like accessible color palettes, type scales and image banks. It’s a facet of civic tech that people often don’t think about. 

In 2023, the Office of Management and Budget released guidance directing agencies to deliver a “digital-first public experience.” Their guidance gives agencies details and deadlines for the implementation of the 21st Century IDEA Act, which was signed into law four years ago.

In multiple places, the memo describes how brand identity, visual design and design systems play a role in building trust in government systems — specifically, that clear and consistent use of an agency’s brand identity and visual design help the public identify official government entities.  

How do you see your work with Nava helping improve public trust in digital services?

Nava is a public benefit corporation (PBC), which is pretty unique in our space, and was intentionally set up that way by our founders. Being a PBC is not just a best practice or a label — it has legal weight, and is part of the company DNA. The people I work with at Nava have a fiduciary duty not only to our stakeholders, but to our stated mission: to improve the access, effectiveness, and simplicity of government services.

Nava believes that for companies like ours — that are paid with taxpayer dollars, whose work affects millions of lives — social responsibility should be the norm, not the exception.

The human-centered approach we take creates a better experience for end-users and the agencies we partner with. It ultimately builds trust in public institutions and the digital services provided. I see huge opportunities for the researchers, service designers, content strategists, frontend designers and communications designers at Nava to contribute to this. 

As Nava grows — we’ve recently entered the mid-sized category — we continue to place our mission at the forefront, and strive to set a good example of what’s possible. 

What was the first online community you engaged with?

My first sustained experience with an online community (not counting email) was probably MySpace around 2005-06. As I’m sure many people remember, it was a hit as soon as people got on there and started adding content. I was living in Japan at the time, and used the CSS/HTML hack to put a skin on my page while adding music, friends, you name it. I think that was one of the first times I felt the internet converging across cultures, rather than just the Web 1.0 model of static blocks of information. 

What articles and/or videos are you waiting to read/watch right now?

I got about two-thirds of the way through Scavengers Reign before I had to take a pause. It’s about the survivors of a spaceship crash on a distant planet that is teeming with strange life. When I started it, I assumed it would be a beautiful, quiet anime like a Moebius illustration. Spoiler Alert: It turned into a horror show! Every episode was more desperate than the last one. I’m waiting to build up the nerve to finish the first season. 

If you could create your own corner of the internet, what would it look like?

It would probably be something dedicated to archiving cultural/family ephemera. For the past few years, I have been slowly scanning in my parents’ photos, letters, postcards, passports and other small pieces of their lives that I have managed to save. 

A woman smiles as she sits on a motorbike in the foreground, while in the background, a man rides a motorcycle with a young child sitting in front of him. They are outdoors on a dirt road, with trees and buildings in the background.<figcaption class="wp-element-caption">Senongo’s mother, father, and older sister sit on motorbikes in Benue State, Nigeria, around 1975-76.</figcaption>

A few years back, while in a taxi in Denver, I told the driver about my project and we began to chat about how important it is to save those family memories. The driver explained that she was from New Orleans, and her grandmother had been a Voodoo priestess. The family had sadly not been able to capture any of her stories or memories before she passed. 

My own corner of the internet would be a set of these poignant little memories from before the internet, scanned or recorded for future generations to share. 


Senongo Akpem is the vice president of design at Nava, a public benefit corporation working to make government services simple, effective and accessible to all. For the past two decades, he has specialized in collaborating with clients across the world on flexible, impactful digital experiences. Prior to joining Nava, he was design director at Constructive, a social impact design agency, and an art director at Cambridge University Press, where he led a global design team. Senongo is the author of “Cross-Cultural Design,” a book about creating culturally relevant and responsible experiences that reach a truly global audience.

The child of a Nigerian father and a Dutch-American mother, Senongo grew up in Nigeria, lived in Japan for almost a decade, and now calls New York City home. Living in constantly shifting cultural and physical spaces has given him unique insight into the influence of culture on communication and creativity. Senongo speaks at conferences around the world about cross-cultural design, digital storytelling, and transmedia. He loves any and all science fiction.

The post A civic tech creative on modernizing government sites, MySpace coding and pre-internet memories appeared first on The Mozilla Blog.

About:CommunityA tribute to Dian Ina Mahendra

It is with a heavy heart that I share the passing of my dear friend, Dian Ina Mahendra, who left us after a long battle with illness. Dian Ina was a remarkable woman whose warmth, kindness, and ever-present support touched everyone around her. Her ability to offer solutions to even the most challenging problems was truly a gift, and she had an uncanny knack for finding a way out of every situation.

Dian Ina’s contributions to Mozilla date back to the launch of Firefox 4 in 2011. She had also been heavily involved during the days of Firefox OS, the Webmaker campaign, FoxYeah, and most recently, Firefox Rocket (later renamed Firefox Lite) when it first launched in Indonesia. Additionally, she had been a dedicated contributor to localization through Pontoon.

Those who knew Dian Ina were constantly drawn to her, not just for her brilliant ideas, but for her open heart and listening ear. She was the person people turned to when they needed advice or simply someone to talk to. No matter how big or small the problem, she always knew just what to say, offering guidance with grace and clarity.

Beyond her wisdom, Dian Ina was a source of light and laughter. Her fun-loving nature and infectious energy made her the key person everyone turned to when they were looking for recommendations, whether it was for the best restaurant in town, a great book, or even advice on life itself. Her opinions were trusted, not only for their insight but also for the care she took in considering what would truly benefit others.

Her impact on those around her was immeasurable. She leaves behind a legacy of warmth, wisdom, and a deep sense of trust from everyone who had the privilege of knowing her. We will miss her dearly, but her spirit and the lessons she shared will live on in the hearts of all who knew her.

Here are some of the memories that people shared about Dian Ina:

  • Franc: Ina was a funny person, always with a smile. We shared many events like All Hands, Leadership Summit and more. Que la tierra te sea leve (may the earth rest lightly upon you).

  • Rosana Ardila: Dian Ina was a wonderful human being. I remember her warm smile, when she was supporting the community, talking about art or food. She was independent and principled and so incredibly fun to be around. I was looking forward to seeing her again, touring her museum in Jakarta, discovering more food together, talking about art and digital life, the little things you do with people you like. She was so multifaceted, so smart and passionate. She left a mark on me and I will remember her, I’ll keep the memory of her big smile with me.
  • Delphine: I am deeply saddened to hear of Dian Ina’s passing. She was a truly kind and gentle soul, always willing to lend a hand. I will cherish the memories of our conversations and her dedication to her work as a localizer and valued member of the Mozilla community. Her presence will be profoundly missed.
  • Fauzan: For me, Ina is the best mentor in conflict resolution, design, art, and L10n. She is totally irreplaceable in the Indonesian community. We already miss her a lot.
  • William: I will never forget that smile and that contagious laughter of yours. I have such fond memories of my many trips to Jakarta, in large part thanks to you. May you rest in peace dearest Dian Ina.

  • Amira Dhalla: I’m going to remember Ina as the thoughtful, kind, and warm person she always was to everyone around her. We have many memories together but I specifically remember us giggling and jumping around together on the grounds of a castle in Scotland. We had so many fun memories together talking technology, art, and Indonesia. I’m saddened by the news of her passing but comforted by the Mozilla community honoring her in a special way and know we will keep her legacy alive.

  • Kiki: Mbak Ina was one of the female leaders I looked up to within the Mozilla Indonesia Community. She embodied the very definition of a smart and capable woman: the kind who was brave, assertive and, above all, so fun to be around. I liked that she kept things real by not being afraid of sharing the hard truth, which is truly appreciated within a community setting. I always thought about her and her partner (Mas Mahen) as a fun and intelligent couple. Deep condolences to Mas Mahen and her entire family in Malang and Bandung. She left a huge mark on the Mozilla Indonesia Community, and she’ll be deeply missed.

  • Joe Cheng: I am deeply saddened to hear of Dian Ina’s passing. As the Product Manager for Firefox Lite, I had the privilege of witnessing her invaluable contributions firsthand. Dian was not only a crucial part of Mozilla’s community in Indonesia but also a driving force behind the success of Firefox Lite and other Mozilla projects. Her enthusiasm, unwavering support, and kindness left an indelible mark on everyone who met her. I fondly remember the time my team and I spent with her during our visit to Jakarta, where her vibrant spirit and warm smiles brought joy to our interactions. Dian’s positive energy and dedication will be remembered always, and her legacy will live on in the Mozilla community and beyond. She will be dearly missed.

This Week In RustThis Week in Rust 573

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is struct-split, a proc macro to implement partial borrows.

Thanks to Felix for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

403 pull requests were merged in the last week

Rust Compiler Performance Triage

Regressions primarily in doc builds. No significant changes in cycle or max-rss counts.

Triage done by @simulacrum. Revision range: 27e38f8f..d4822c2d

1 Regression, 1 Improvement, 4 Mixed; 1 of them in rollups. 47 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-11-13 - 2024-12-11 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Netstack3 encompasses 63 crates and 60 developer-years of code. It contains more code than the top ten crates on crates.io combined. ... For the past eleven months, they have been running the new networking stack on 60 devices, full time. In that time, Liebow-Feeser said, most code would have been expected to show "mountains of bugs". Netstack3 had only three; he attributed that low number to the team's approach of encoding as many important invariants in the type system as possible.

Joshua Liebow-Feeser at RustConf, as reported by Daroc Alden on Linux Weekly News

Thanks to Anton Fetisov for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla BlogExploring the Firefox community on r/firefox

Open source thrives because of its people. Firefox, like so many successful open-source projects, is powered by passionate contributors and dedicated supporters. Their collective efforts have transformed Firefox from just a web browser into the cornerstone of a global community, bringing together users and developers with a shared vision for the open web. Reddit, one of the most visited websites in the world, is a platform where millions of users — called Redditors — share and vote on content in self-moderated subreddits. One such space is r/firefox, a vibrant community of over 195,000 Firefox enthusiasts. Unlike a corporate-managed forum, this is an organic, user-driven environment where members engage in everything from technical discussions and support to passionate rants and heartfelt expressions of appreciation for Firefox. Let’s explore this dynamic corner of the “front page of the internet” by diving into r/firefox, the Reddit community for all things Firefox.

<figcaption class="wp-element-caption">r/firefox in 2008, courtesy of the Internet Archive’s Wayback Machine.</figcaption>

Mozilla’s online community and contributors live across a wide variety of digital spaces. Mozilla Connect, the official portal for ideas and discussion, receives millions of visits and has over 200 employees registered there. There are communities on Discord, Matrix, GitHub, Discourse, Bugzilla, support, MDN, the list goes on… But among the endless corners of the internet, the r/firefox subreddit stands out — not as a space managed by Mozilla, but as an organic community of passionate Firefox users. Though it’s been around since 2008, most of its members have joined in just the past five years, with nearly 100,000 new members in the last four alone.

<figcaption class="wp-element-caption">Which Firefox logo do you like the most? asks Redditor aphaits.</figcaption>


Who are the members of r/firefox, and what drives their posts? In many online communities, a small group of users tends to drive most of the conversation. The 90-9-1 rule is often used as a general guideline to describe this, where 1% of users create the majority of the content, 9% contribute occasionally, and the remaining 90% are passive consumers. However, this is just a rough yardstick, not an exact science—every community is unique in terms of who posts, who drives content, and how others engage. While we don’t have precise numbers for r/firefox, it seems to follow this general trend, with a core group of passionate Redditors contributing the most in-depth discussions and keeping the community vibrant.

As we explore the community on the Firefox subreddit, we can broadly identify a few archetypes for this group of super contributors to the Firefox Community to give us a better sense of what kinds of posts we can find there.

The Developer: Engages in technical discussions and may even contribute to Firefox’s code or features.

The Privacy and Open Source Advocate: Values Firefox’s commitment to privacy, web standards, and open source.

<figcaption class="wp-element-caption">Mozilla employees also have a history of participating directly in r/firefox.</figcaption>

The Customizer: Thrives on Firefox’s extensive customization options, especially add-ons and themes.

<figcaption class="wp-element-caption">OctoNezd sharing their Firefox add-on in this post. </figcaption>

The Challenger: Engaged Firefox users who want the product to improve, posting critical feedback about bugs, performance issues, or changes they don’t agree with.

<figcaption class="wp-element-caption">While sometimes harsh, their feedback can highlight areas for improvement. </figcaption>

The Firefox Supporter: Loyal to Firefox for its open-source values and commitment to a better internet. Participates in light-hearted discussions, from cool browser themes to quirky extensions, and loves helping others.

<figcaption class="wp-element-caption">Tracking down cute drawings with this post from janka12fsdf</figcaption>

Flair and moderators help highlight the diverse range of contributors who keep r/firefox lively. Each member brings something unique to the conversation with moderators playing a crucial role in ensuring these interactions remain productive. Flair allows contributors to display their identity and expertise, helping to shape the community’s culture and focus.

<figcaption class="wp-element-caption">The flair of r/firefox</figcaption>

The current Moderator team of r/firefox:
u/Antabaka
u/yoasif
u/rctgamer3
u/TimVdEynde
u/Alan976 (Mario583)
u/SKITTLE_LA


Moderators play a crucial role in managing online communities like r/firefox. They ensure the subreddit remains organized, safe, and aligned with community guidelines.

Not just another browser 

At the core of this community is a shared belief: Firefox isn’t just another browser; it’s a symbol of a better, more human-centered internet. This passion comes from Firefox’s open-source roots and its commitment to privacy and customization. In a world where tech giants dominate the market, Firefox offers something different—something people feel deeply connected to.

The users of r/firefox prove that a browser can be more than just a tool for browsing the internet. For many, it’s a symbol of their commitment to an open, people-first web. In this corner of “the front page of the internet,” their contributions—whether coding, troubleshooting, or sharing memes—are collectively helping shape the future of the web.

Appendix: r/firefox Through the Years


2008:

  1. Firefox global market share reaches 21.5% | Mozilla Links
  2. They Shrunk My Firefox! Mozilla Shows off Mobile Mockups

2009:

  1. Firefox 3.5 RC3 coming this week
  2. Mozilla’s internal tools for its most popular add-on, how its creator wants to let you use it! (Firefox Sync Interview)
  3. Mozilla Firefox 3.5 Release Candidate 3 now available
  4. Mozilla has more than 750 million users
  5. Speed tweaks for Firefox without Linux
  6. Speed tip for Firefox: Try increasing the Page File to quadruple RAM (Linux only)
  7. Drowning under 30 tabs? Help is on the way
  8. Google Toolbar in Firefox 3.5?
  9. Firefox memory usage control plan (Linux only)

2010:

  1. Why do I have 8 different versions of Java extensions in my Firefox? Shouldn’t there even be one delete button for all?
  2. 2010 Best of Show prize for top 10 browsers at CES
  3. Add-on recommendations in which Firefox addon depends please add a mini Firefox icon on the addons search results
  4. Mozilla releases Firefox 3.6.13; fixes multiple plugin crashes for uninterrupted browsing experience
  5. Plugin for Firefox’s ‘undead’ status: boosts performance and fixes crashes
  6. Firefox is gradually making addons less bloated every day, we won’t have to laugh back then. It’s been a long wait but it’s coming!

2011:

  1. Chrome’s RiverZoom extension ported to Firefox via Scriptish/Greasemonkey script, w00t!
  2. How do you open a new tab next to the current focused tab?
  3. [Support Request] Firefox Aurora 9.0.2 Won’t download anything, details inside

2012:

  1. Mozilla releases: Shuts, blocks IE, Chrome and Internet Explorer down!
  2. This is how to make Firefox actually do the process when you click close!
  3. Firefox 11 is out now and you can access it (I got 1 problem, what’s up?)
  4. Any advice for experimental branches of Firefox?
  5. Many users: is it becoming superfluous?
  6. Mozilla Firefox on the long-awaited Multi-account extensions experiment
  7. Is Firefox doing what the interface wants with my brain’s memory?
  8. [Fixed] In Firefox, when you run Java Runtime, the result will be mind-blowing

2016:

  1. Mozilla releases Windows nightlies and all updates address all issues
  2. Pick the One: There are experimental browsers and extensions by Mozilla and Google

2018:

  1. [Sticky] Trying to use Firefox with no extensions has surfaced numerous user complaints.
  2. We need to stop the sync from showing a default response. It’s gone too far for being Firefox’s choice.
  3. [Sticky] Synonymity EVER Privacy in Firefox; Mozilla’s opinion

2020:

  1. Mozilla’s Daily Note 2020-09-11
  2. Introducing r/firefox subreddit design update Megathread
  3. Nightly Discussion for Builds in Firefox Nightly

2022:

  1. Weekly Addon suggestions! “I have an addon for that!” post for 2022-03-09
  2. How to easily transfer bookmarks to the Firefox bookmarks bar in Windows?
  3. Firefox tabs are bringing back web-clips – an update for Chrome users

2024:

  1. 2024 is the best year for Firefox
  2. Opportunity to contribute to Multi-Account Containers extension

Get Firefox

Get the browser that protects what’s important

The post Exploring the Firefox community on r/firefox appeared first on The Mozilla Blog.

The Mozilla BlogHow AI is reshaping creativity: Insights from art, tech and policy

AI is shaking things up in the creative world, and I get why a lot of artists feel anxious. Whenever new technology comes along — especially in industries like ours — it brings fear. Fear of losing control, fear of being replaced. That’s real. But there’s another side to this: AI can open doors we never thought possible.

In “Creativity in the Age of AI: Insights, Ethics & Opportunities,” a report I co-wrote with digital policy expert Natalia Domagala and technologist Angela Lungati — in collaboration with Mozilla and Skillshare — we explore both the anxieties and opportunities AI brings to creatives.

Together, we explore the future of creativity in this paper, touching on the ethical challenges and the immense possibilities AI brings.

Key points explored in the report:

  • Ethical concerns about ownership: One of the biggest issues is ownership, as Natalia explores in our paper. AI pulls from massive datasets, often without the original creators’ consent. This raises serious concerns about transparency and who owns the work AI generates. Copyright laws weren’t built for this. 
  • Bias in AI: AI is only as good as the data it’s trained on. If AI is trained on biased data, it will reproduce those biases. It’s crucial to make sure AI tools are built with diverse datasets, and that the people designing them understand the importance of inclusivity.
  • AI can lower barriers to creative innovation: The rise of generative AI tools like ChatGPT and OpenAI Sora is making high-level creative outputs accessible to non-experts. These tools can help level the playing field, allowing creatives to explore ideas and storytelling that wouldn’t have been possible before.
  • A tool for cultural preservation: With “Protopica,” a short film I co-directed with Will Selviz, AI allowed us to blend Caribbean heritage and futurism, creating a new form of storytelling that wouldn’t have been achievable with traditional methods. This shows how AI can preserve culture while pushing creative boundaries.
  • AI as a force for social change: Angela highlighted how AI-powered tools are supporting civic education and engagement in Kenya. For example, Corrupt Politicians GPT exposed corruption cases involving Kenyan politicians, while Finance Bill GPT simplified the complex provisions of a controversial finance bill. These tools have helped local communities understand the implications of proposed laws, contributing to nationwide protests and civic participation.
  • AI amplifying, not replacing, human creativity: There’s a real fear among creatives that AI could replace their jobs, and that fear is legitimate. In a world driven by productivity, companies often cut human roles first. But AI shouldn’t be about replacing humans — it’s about amplifying what we can do. It should be used to empower, not replace, human creativity.

If you’re curious about how AI is changing the creative world — whether you’re excited or skeptical — this paper is for you. We explore the risks, the rewards and what AI means for the future of creativity. It’s the start of a crucial conversation about creativity and control, with insights from the worlds of art, technology and policy — offering a glimpse into how AI is reshaping the future.

Creativity in the Age of AI

Read the paper

Manuel Sainsily is a TEDx speaker and an XR and AI instructor at McGill University and UMass Boston. Born in Guadeloupe, he is a Canadian citizen based in Montreal, where he completed his Master of Science in computer science. A trilingual public speaker, designer, and educator with over 15 years of experience, he champions the responsible use and understanding of artificial intelligence. From delivering a masterclass on AI ethics and speaking at worldwide tech, film, and gaming conferences to being celebrated by NVIDIA, Mozilla Rise25, and Skillshare, and producing art exhibitions with Meta, OpenAI, and VIFFest, Manuel amplifies the conversation around cultural preservation and emerging technologies such as spatial computing, AI, real-time 3D, haptics, and BCI through powerful keynotes and curated events.

The post How AI is reshaping creativity: Insights from art, tech and policy appeared first on The Mozilla Blog.

About:CommunityContributor spotlight – MyeongJun Go

The beauty of open source software lies in the collaborative spirit of its contributors. In this post, we’re highlighting the story of MyeongJun Go (Jun), who has been a dedicated contributor to the Performance Tools team. His contributions have made a remarkable impact on performance testing and tooling, from local tools like Mach Try Perf and Raptor to web-based tools such as Treeherder. Thanks to Jun, developers are even more empowered to improve the performance of our products.

Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews, writing test cases, to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high quality code.

Q: Can you tell us a little about how you first got involved with Mozilla?

I felt a constant thirst for development while working on company services. I wanted to create something that could benefit the world and collaborate with developers globally. That’s when I decided to dive into open source development.

Around that time, I was already using Firefox as my primary browser, and I frequently referenced MDN for work, naturally familiarizing myself with Mozilla’s services. One day, I thought, how amazing would it be to contribute to a Mozilla open source project used by people worldwide? So, I joined an open source challenge.

At first, I wondered, can I really contribute to Firefox? But thanks to the supportive Mozilla staff, I was able to tackle one issue at a time and gradually build my experience.

Q: Your contributions have had a major impact on performance testing and tooling. What has been your favourite or most rewarding project to work on so far?

I’ve genuinely found every project and task rewarding—and enjoyable too. Each time I completed a task, I felt a strong sense of accomplishment.

If I had to pick one particularly memorable project, it would be the Perfdocs tool. It was my first significant project when I started contributing more actively, and its purpose is to automate documentation for the various performance tools scattered across the ecosystem. With every code push, Perfdocs automatically generates documentation in “Firefox Source Docs”.

Working on this tool gave me the chance to familiarize myself with various performance tools one by one, while also building confidence in contributing. It was rewarding to enhance the features and see the resulting documentation instantly, making the impact very tangible. Hearing from other developers about how much it simplified their work was incredibly motivating and made the experience even more fulfilling.

Q: Performance tools are critical for developers. Can you walk us through how your work helps improve the overall performance of Mozilla products?

I’ve applied various patches across multiple areas, but updates to tools like Mach Try Perf and Perfherder, which many users rely on, have had a particularly strong impact.

With Mach Try Perf, developers can easily perform performance tests by platform and category, comparing results between the base commit (before changes) and the head commit (after changes). However, since each test can take considerable time, I developed a caching feature that stores test results from previous runs when the base commit is the same. This allows us to reuse existing results instead of re-running tests, significantly reducing the time needed for performance testing.
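The caching approach Jun describes can be sketched in a few lines of Python. This is a hypothetical illustration, not Mach Try Perf’s actual code: the function names, on-disk layout, and JSON format are all assumptions; only the core idea — keying stored results on the base commit so an unchanged base skips a re-run — comes from the interview.

```python
import hashlib
import json
from pathlib import Path

CACHE_DIR = Path("perf-cache")  # hypothetical cache location


def cache_key(base_commit: str, test_name: str) -> str:
    """Derive a stable cache key from the base commit and test name."""
    return hashlib.sha256(f"{base_commit}:{test_name}".encode()).hexdigest()


def get_or_run(base_commit: str, test_name: str, run_test):
    """Return cached results for this base commit if present;
    otherwise run the (expensive) test and store its results."""
    CACHE_DIR.mkdir(exist_ok=True)
    path = CACHE_DIR / f"{cache_key(base_commit, test_name)}.json"
    if path.exists():
        return json.loads(path.read_text())  # reuse the previous run
    results = run_test()  # the real perf test would go here
    path.write_text(json.dumps(results))
    return results
```

A real implementation would also need cache invalidation and expiry; the sketch only shows the reuse path.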

I also developed several convenient flags to enhance testing efficiency. For instance, when an alert occurs in Perfherder, developers can now re-run tests simply by using the "--alert" flag with the alert ID in the Mach Try Perf command.

Additionally, I recently integrated Perfherder with Bugzilla to automatically file bugs. Now, with just a click of the ‘file bug’ button, related bugs are filed automatically, reducing the need for manual follow-up.

These patches, I believe, have collectively helped improve the productivity of Mozilla’s developers and contributors, saving a lot of time in the development process.

Q: How much of a challenge do you find being in a different time zone to the rest of the team? How do you manage this?

I currently live in South Korea (GMT+9), and most team meetings are scheduled from 10 PM to midnight my time. During the day, I focus on my job, and in the evening, I contribute to the project. This setup actually helps me use my time more efficiently. In fact, I sometimes feel that if we were in the same time zone, balancing both my work and attending team meetings might be even more challenging.

Q: What are some tools or methodologies you rely on?

When developing Firefox, I mainly rely on two tools: Visual Studio Code (VSC) on Linux and SearchFox. SearchFox is incredibly useful for navigating Mozilla’s vast codebase, especially as it’s web-based and makes sharing code with teammates easy.

Since Mozilla’s code is open source, it’s accessible for the world to see and contribute to. This openness encourages me to seek feedback from mentors regularly and to focus on refactoring through detailed code reviews, with the goal of continually improving code quality.

I’ve learned so much in this process, especially about reducing code complexity and enhancing quality. I’m always grateful for the detailed reviews and constructive feedback that help me improve.

Q: Are there any exciting projects you’d like to work on?

I’m currently finding plenty of challenge and growth working with testing components, so rather than seeking new projects, I’m focused on my current tasks. I’m also interested in learning Rust and exploring trends like AI and blockchain.

Recently, I’ve considered ways to improve user convenience in tools like Mach Try Perf and Perfherder, such as making test results clearer and easier to review. I’m happy with my work and growth here, but I keep an open mind toward new opportunities. After all, one thing I’ve learned in open source is to never say, ‘I can’t do this.’

Q: What advice would you give to someone new to contributing?

If you’re starting as a contributor to the codebase, building it alone might feel challenging. You might wonder, “Can I really do this?” But remember, you absolutely can. There’s one thing you’ll need: persistence. Hold on to a single issue and keep challenging yourself. As you solve each issue, you’ll find your skills growing over time. It’s a meaningful challenge, knowing that your contributions can make a difference. Contributing will make you more resilient and help you grow into a better developer.

Q: What’s something you’ve learned during your time working on performance tools?

Working with performance tools has given me valuable experience across a variety of tools, from local ones like Mach Try Perf, Raptor, and Perfdocs to web-based tools such as Treeherder and Perfherder. Not only have I deepened my technical skills, but I also became comfortable using Python, which wasn’t my primary language before.

Since Firefox runs across diverse environments, I learned how to execute individual tests for different conditions and manage and visualize performance test results efficiently. This experience taught me the full extent of automation’s capabilities and inspired me to explore how far we can push it.

Through this large-scale project, I’ve learned how to approach development from scratch, analyze requirements, and carry out development while considering the impact of changes. My skills in impact analysis and debugging have grown significantly.

Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews, writing test cases, to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high quality code.

Q: What do you enjoy doing in your spare time when you’re not contributing to Mozilla?

I really enjoy reading and learning new things in my spare time. Books offer me a chance to grow, and I find it exciting to dive into new subjects. I also prioritize staying active with running and swimming to keep both my body and mind healthy. It’s a great balance that keeps me feeling refreshed and engaged.


Interested in contributing to performance tools like Jun? Check out our wiki to learn more.

The Servo BlogBehind the code: an interview with msub2

Behind the Code is a new series of interviews with the contributors who help propel Servo forward. Ever wondered why people choose to work on web browsers, or how they get started? We invite you to look beyond the project’s pull requests and issue reports, and get to know the humans who make it happen.


msub2

Some representative contributions:

Tell us about yourself!

My name is Daniel, though I more commonly go by my online handle “msub2”. I’m something of a generalist, but my primary interests are developing for the web, XR, and games. I created and run the WebXR Discord, which has members from both the Immersive Web Working Group and the Meta Browser team, among others. In my free time (when I’m not working, doing Servo things, or tending to my other programming projects) I’m typically watching videos from YouTube/Dropout/Nebula/etc and playing video games.

Why did you start contributing to Servo?

A confluence of interests, to put it simply. I was just starting to really get into Rust, having built a CHIP-8 emulator and an NES emulator to get my hands dirty, but I also had prior experience contributing to other browser projects like Chromium and Gecko. I was also eyeing Servo’s WebXR implementation (which I had submitted a couple small fixes for last year) as I could see there was still plenty of work that could be done there. To get started though, I looked for an adjacent area that I could work on to get familiar with the main Servo codebase, which led to my first contribution being support for non-XR gamepads!

What was challenging about your first contribution?

I’d say the most challenging part of my first contribution was twofold: the first was just getting oriented with how data flows in and out of Servo via the embedding API and the second was understanding how DOM structs, methods, and codegen all worked together in the script crate. Servo is a big project, but luckily I got lots of good help and feedback as I was working through it, which definitely made things easier. Looking at existing examples in the codebase of the things I was trying to do got me the rest of the way there I’d say.

What do you like about contributing to the project? What do you get out of it?

The thing I like most about Servo (and perhaps the web platform as an extension) is the amount of interesting problems that there are to solve when it comes to implementing/supporting all of its different features. While most of my contributions so far have been focused around Gamepad and WebXR, recently I’ve been working to help implement SubtleCrypto alongside another community member, which has been really interesting! In addition to the satisfaction I get just from being able to solve interesting problems, I also rather enjoy the feeling of contributing to a large, communal, open-source project.

Any final thoughts you’d like to share?

I’d encourage anyone who’s intrigued by the idea of contributing to Servo to give it a shot! The recent waves of attention for projects like Verso and Ladybird have shown that there is an appetite for new browsers and browser engines, and with Servo’s history it just feels right that it should finally be able to rise to a more prominent status in the ecosystem.

Don MartiLinks for 10 November 2024

Signal Is Now a Great Encrypted Alternative to Zoom and Google Meet These updates mean that Signal is now a free, robust, and secure video conferencing service that can hang with the best of them. It lets you add up to 50 people to a group call and there is no time limit on each call.

The New Alt Media and the Future of Publishing - Anil Dash

I’m a neuroscientist who taught rats to drive − their joy suggests how anticipating fun can enrich human life

Ecosia and Qwant, two European search engines, join forces

What can McCain’s Grand Prix win teach us? Nothing new Ever since Byron Sharp decided he was going for red for his book cover, marketing thinkers have assembled a quite extraordinary disciplinary playbook. And it’s one that looks nothing like the existing stuff that it replaced. Of course, the majority of marketers know nothing about any of it. They inhabit the murkier corners of marketing, where training is rejected because change is held up as a circuit-breaker for learning anything from the past. AI and the ‘new consumer’ mean everything we once knew is pointless now. Better to be ignorant and untrained than waste time on irrelevant historical stuff. But for those who know that is bullshit, who study, who respect marketing knowledge, who know the foundations do not change, the McCain case is a jewel sparkling with everything we have learned in these very fruitful 15 years.

The Counterculture Switch: creating in a hostile environment

Why Right-Wing Media Thrives While The Left Gets Left Behind

The Rogue Emperor, And What To Do About Them Anywhere there is an organisation or group that is centred around an individual, from the smallest organisation upwards, it’s possible for it to enter an almost cult-like state in which the leader both accumulates too much power, and loses track of some of the responsibilities which go with it. If it’s a tech company or a bowls club we can shrug our shoulders and move to something else, but when it occurs in an open source project and a benevolent dictator figure goes rogue it has landed directly on our own doorstep as the open-source community.

We need a Wirecutter for groceries

Historic calculators invented in Nazi concentration camp will be on exhibit at Seattle Holocaust center

One Company A/B Tested Hybrid Work. Here’s What They Found. According to the Society of Human Resource Management, each quit costs companies at least 50% of the employees’ annual salary, which for Trip.com would mean $30,000 for each quit. In Trip.com’s experiment, employees liked hybrid so much that their quit rates fell by more than a third — and saved the company millions of dollars a year.

Mozilla ThunderbirdVIDEO: Q&A with Mark Surman

Last month we had a great chat with two members of the Thunderbird Council, our community governance body. This month, we’re looking at the relationship between Thunderbird and our parent organization, MZLA, and the broader Mozilla Foundation. We couldn’t think of a better way to do this than sitting down for a Q&A with Mark Surman, president of the Mozilla Foundation.

We’d love to hear your suggestions for topics or guests for the Thunderbird Community Office Hours! You can always send them to officehours@thunderbird.org.

October Office Hours: Q&A with Mark Surman

In many ways, last month’s office hours was a perfect lead-in to this month’s, as our community and Mozilla have been big parts of the Thunderbird Story. Even though this year marks 20 years since Thunderbird 1.0, Thunderbird started as ‘Minotaur’ alongside ‘Phoenix,’ the original name for Firefox, in 2003. Heather, Monica, and Mark all discuss Thunderbird’s now decades-long journey, but this chat isn’t just about our past. We talk about what we hope is a long future, and how and where we can lead the way.

If you’ve been a long-time user of Thunderbird, or are curious about how Thunderbird, MZLA, and the Mozilla Foundation all relate to each other, this video is for you.

Watch, Read, and Get Involved

We’re so grateful to Mark for joining us, and turning an invite during a chat at Mozweek into reality! We hope this video gives a richer context to Thunderbird’s past as it highlights one of the main characters in our long story.

VIDEO (Also on Peertube):

Thunderbird and Mozilla Resources:

The post VIDEO: Q&A with Mark Surman appeared first on The Thunderbird Blog.

Andrew HalberstadtJujutsu: A Haven for Mercurial Users at Mozilla

One of the pleasures of working at Mozilla has been learning and using the Mercurial version control system. Over the past decade, I’ve spent countless hours tinkering with my workflow to get it just so: reading docs and articles, meticulously tweaking settings, and even writing an extension.

I used to be very passionate about Mercurial. But as time went on, the culture at Mozilla started changing. More and more repos were created in Github, and more and more developers started using git-cinnabar to work on mozilla-central. Then my role changed and I found that 90% of my work was happening outside of mozilla-central and the Mercurial garden I had created for myself.

So it was with a sense of resigned inevitability that I took the news that Mozilla would be migrating mozilla-central to Git. The fire in me was all but extinguished; I was resigned to my fate. And what’s more, I had to agree. The time had come for Mozilla to officially make the switch.

Glandium wrote an excellent post outlining some of the history of the decisions made around version control, putting them into the context of the time. In that post, he offers some compelling wisdom to Mercurial holdouts like myself:

I’ll swim against the current here, and say this: the earlier you can switch to git, the earlier you’ll find out what works and what doesn’t work for you, whether you already know Git or not.

When I read that, I had to agree. But, I just couldn’t bring myself to do it. No, if I was going to have to give up my revsets and changeset obsolescence and my carefully curated workflows, then so be it. But damnit! I was going to continue using them for as long as possible.

And I’m glad I didn’t switch because then I stumbled upon Jujutsu.

The Mozilla BlogWe asked why you love Firefox. Here’s what you said.

For two decades, Firefox has been at the heart of an open, user-centered web. From the early days of tabbed browsing and pop-up blocking to today’s privacy protections and customization options, Firefox has empowered users like you with control and freedom to explore the internet on your own terms. So, to mark our 20th anniversary, we asked: What made you fall in love with Firefox? 

Whether you’ve been with us since the very first version or joined more recently, your answers remind us of the deep connections Firefox has built over the years. Some of you love Firefox for the features that make it stand out from other browsers. Others value Firefox for the trust it has earned over time. And for many, it’s been a loyal companion from the very beginning.

Here’s a look at what makes Firefox special to so many of you.

Features that keep you coming back

These are the features that make Firefox your go-to browser.

“It’s just that other browsers are too privacy invasive. And Firefox has a lot of great features not just one.”
— @xonidev

“Containers is the killer feature.”
— @Kaegun

“PIP (picture-in-picture) in every video”
— @JanakXD

“Add-ons on mobile 👀”
— @kotulp

“switched on V 1.0 never gone back to other browsers. Using sync between multiple desktops /  Laptops & mobile is great [especially] for adblock extensions!”
— @satanas_g

Improvements over the years

As Firefox has grown, so has our commitment to making your browsing experience better and faster.

“Stability improvements, cleaner UI, rust under the hood, adblock”
— @lee_official_the_real_one

“It’s fast”
— @blessedwithsins

The trust factor

Beyond features, many of you choose Firefox for its transparency, commitment to open-source, and user-first principles.

“It wasn’t a feature. It was trust.”
— @JimConnolly

“I moved from Opera to Firefox because it was open-source and obeyed most of the standards. It’s been my default since version 1.5. Why wouldn’t it be?”
— @omarwilley

In it from the beginning

Some of you have been here since the early days, and Firefox has become part of your internet history.

“my dad used it first before installing it on the family laptop 18 years ago where every one could use it. Gotta say i never switched to another browser after i got my own computer 16 years ago.”
— @032Zero

“Tabbed browsing. Never left since.”
— @ergosteur

“I just liked Mozilla’s logo at the time, this was 20 years ago”
— @SneedPlays

“I don’t remember using anything else except firefox (as main web browser)”
— @Miki1877852468

“Well, it was a successor to Netscape. Though migration was in 2008, I started using Firefox around 2005-2006. It was the best browser at the time. It is still the best browser for me now.”
— @erolcanulutas

Whatever it is that made you fall in love with Firefox, we’re so glad you’re here. Thanks for being part of our story and helping us keep the web open, safe and truly yours.

Get Firefox

Get the browser that protects what’s important

The post We asked why you love Firefox. Here’s what you said. appeared first on The Mozilla Blog.

The Servo BlogThis month in Servo: faster fonts, fetches, and flexbox!

Servo nightly showing new support for non-ASCII characters in <img srcset>, ‘transition-behavior: allow-discrete’, ‘mix-blend-mode: plus-lighter’, and ‘width: stretch’

Servo now supports ‘mix-blend-mode: plus-lighter’ (@mrobinson, #34057) and ‘transition-behavior: allow-discrete’ (@Loirooriol, #33991), including in the ‘transition’ shorthand (@Loirooriol, #34005), along with the fetch metadata request headers ‘Sec-Fetch-Site’, ‘Sec-Fetch-Mode’, ‘Sec-Fetch-User’, and ‘Sec-Fetch-Dest’ (@simonwuelker, #33830).

We now have partial support for the CSS size keywords ‘min-content’, ‘max-content’, ‘fit-content’, and ‘stretch’ (@Loirooriol, #33558, #33659, #33854, #33951), including in floats (@Loirooriol, #33666), atomic inlines (@Loirooriol, #33737), and elements with ‘position: absolute’ or ‘fixed’ (@Loirooriol, #33950).

We’re implementing the SubtleCrypto API, starting with full support for crypto.subtle.digest() (@simonwuelker, #34034), partial support for generateKey() with AES-CBC and AES-CTR (@msub2, #33628, #33963), and partial support for encrypt() and decrypt() with AES-CBC (@msub2, #33795).

More engine changes

Servo’s architecture is improving, with a new cross-process compositor API that reduces memory copy overhead for video (@mrobinson, @crbrz, #33619, #33660, #33817). We’ve also started phasing out our old OpenGL bindings (gleam and sparkle) in favour of glow, which should reduce Servo’s complexity and binary size (@sagudev, @mrobinson, surfman#318, webxr#248, #33538, #33910, #33911).

We’ve updated to Stylo 2024-10-04 (@Loirooriol, #33767) and wgpu 23 (@sagudev, #34073, #33819, #33635). The new version of wgpu includes several patches from @sagudev, adding support for const_assert, as well as accessing const arrays with runtime index values. We’ve also reworked WebGPU canvas presentation to ensure that we never use old buffers by mistake (@sagudev, #33613).

We’ve also landed a bunch of improvements to our DOM geometry APIs, with DOMMatrix now supporting toString() (@simonwuelker, #33792) and updating is2D on mutation (@simonwuelker, #33796), support for DOMRect.fromRect() (@simonwuelker, #33798), and getBounds() on DOMQuad now handling NaN correctly (@simonwuelker, #33794).

We now correctly handle non-ASCII characters in <img srcset> (@evuez, #33873), correctly handle data: URLs in more situations (@webbeef, #33500), and no longer throw an uncaught exception when pages try to use IntersectionObserver (@mrobinson, #33989).

Outreachy contributors are doing great work in Servo again, helping us land many of this month’s improvements to GC static analysis (@taniishkaa, @webbeef, @chickenleaf, @jdm, @jahielkomu, @wulanseruniati, @lauwwulan, #33692, #33706, #33800, #33774, #33816, #33808, #33827, #33822, #33820, #33828, #33852, #33843, #33836, #33865, #33862, #33891, #33888, #33880, #33902, #33892, #33893, #33895, #33931, #33924, #33917, #33921, #33958, #33920, #33973, #33960, #33928, #33985, #33984, #33978, #33975, #34003, #34002) and code health (@chickenleaf, @DileepReddyP, @taniishkaa, @mercybassey, @jahielkomu, @cashall-0, @tony-nyagah, @lwz23, @Noble14477, #33959, #33713, #33804, #33618, #33625, #33631, #33632, #33633, #33643, #33646, #33648, #33653, #33664, #33685, #33686, #33689, #33690, #33705, #33707, #33724, #33727, #33728, #33729, #33730, #33740, #33744, #33757, #33771, #33782, #33790, #33809, #33818, #33821, #33835, #33840, #33853, #33849, #33860, #33878, #33881, #33894, #33935, #33936, #33943).

Performance improvements

Our font system is faster now, with reduced latency when loading system fonts (@mrobinson, #33638), layout no longer blocking on sending font data to WebRender (@mrobinson, #33600), and memory mapped system fonts on macOS and FreeType platforms like Linux (@mrobinson, @mukilan, #33747).

Servo now has a dedicated fetch thread (@mrobinson, #33863). This greatly reduces the number of IPC channels we create for individual requests, and should fix crashes related to file descriptor exhaustion on some platforms. Brotli-compressed responses are also handled more efficiently, such that we run the parser with up to 8 KiB of decompressed data at a time, rather than only 10 bytes of compressed data at a time (@crbrz, #33611).
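The chunked-decompression idea above is easy to illustrate: instead of handing the parser tiny compressed reads, the response body is decompressed incrementally and the parser is fed fixed-size chunks of decompressed output. The sketch below uses Python’s standard-library zlib as a stand-in for Brotli (whose Python bindings are third-party); the 8 KiB figure mirrors the post, but the names and structure are illustrative, not Servo’s actual (Rust) implementation.

```python
import zlib

CHUNK = 8 * 1024  # feed the parser up to 8 KiB of decompressed data at a time


def stream_decompress(compressed: bytes, feed_parser) -> None:
    """Decompress incrementally, handing the parser decompressed
    chunks rather than raw compressed bytes."""
    d = zlib.decompressobj()
    data = compressed
    while data:
        # max_length caps the output; unused input is kept in unconsumed_tail
        chunk = d.decompress(data, CHUNK)
        if chunk:
            feed_parser(chunk)
        if d.eof:  # end of stream reached; all output has been emitted
            break
        data = d.unconsumed_tail
```

Feeding the parser larger decompressed chunks amortises per-call overhead, which is the gist of the efficiency win described above.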

Flexbox layout now uses caching to avoid doing unnecessary work (@mrobinson, @Loirooriol, #33964, #33967), and now has experimental tracing-based profiling support (@mrobinson, #33647), which in turn no longer spams RUST_LOG=info when not enabled (@delan, #33845). We’ve also landed optimisations in table layout (@Loirooriol, #33575) and in our layout engine as a whole (@Loirooriol, #33806).

Work continues on making our massive script crate build faster, with improved incremental builds (@sagudev, @mrobinson, #33502) and further patches towards splitting script into smaller crates (@sagudev, @jdm, #33627, #33665).

We’ve also fixed several crashes, including when initiating a WebXR session on macOS (@jdm, #33962), when laying out replaced elements (@Loirooriol, #34006), when running JavaScript modules (@jdm, #33938), and in many situations when garbage collection occurs (@chickenleaf, @taniishkaa, @Loirooriol, @jdm, #33857, #33875, #33904, #33929, #33942, #33976, #34019, #34020, #33965, #33937).

servoshell, embedding, and devtools

Devtools support (--devtools 6080) is now compatible with Firefox 131+ (@eerii, #33661), and no longer lists iframes as if they were inspectable tabs (@eerii, #34032).

Servo-the-browser now avoids unnecessary redraws (@webbeef, #34008), massively reducing its CPU usage, and no longer scrolls too slowly on HiDPI systems (@nicoburns, #34063). We now update the location bar when redirects happen (@rwakulszowa, #34004), and these updates are sent to all embedders of Servo, not just servoshell.

We’ve added a new --unminify-css option (@Taym95, #33919), allowing you to dump the CSS used by a page like you can for JavaScript. This will pave the way for allowing you to modify that CSS for debugging site compat issues, which is not yet implemented.

We’ve also added a new --screen-size option that can help with testing mobile websites (@mrobinson, #34038), renaming the old --resolution option to --window-size, and we’ve removed --no-minibrowser mode (@Taym95, #33677).

We now publish nightly builds for OpenHarmony on servo.org (@mukilan, #33801). When running servoshell on OpenHarmony, we now display toasts when pages load or panic (@jschwe, #33621), and you can now pass certain Servo options via hdc shell aa start or a test app (@jschwe, #33588).

Donations

Thanks again for your generous support! We are now receiving 4201 USD/month (+1.3% over September) in recurring donations. We are no longer accepting donations on LFX — if you were donating there, please move your recurring donations to GitHub or Open Collective.

Servo is also on thanks.dev, and already ten GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


With this money, we’ve been able to pay for a second Outreachy intern in this upcoming round, plus our web hosting and self-hosted CI runners for Windows and Linux builds. When the time comes, we’ll also be able to afford macOS runners and perf bots! As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conference talks

Support.Mozilla.OrgCelebrating our top contributors on Firefox’s 20th anniversary

Firefox was built by a group of passionate developers, and has been supported by a dedicated community of caring contributors since day one.

The SUMO platform was originally built in 2007 to provide an open-source community support channel for users, and to help us collaborate more effectively with our volunteer contributors.

Over the years, SUMO has become a powerful platform that helps users get the most out of Firefox, provides opportunities for users to connect and learn more from each other, and allows us to gather important insights – all powered by our community of contributors.

SUMO is not just a support platform but a place where other like-minded users, who care about making the internet a better place for everyone, can find opportunities to grow their skills and contribute.

Our contributor community has been integral to Firefox’s success. Contributors humanize the experience across our support channels, champion meaningful fixes and changes, and help us onboard the next generation of Firefox users (and potential contributors!).

Fun facts about our community:

  • We’re global! We have active contributors in 63 countries.
  • 6 active contributors have been with us since day one (Shout outs to Cor-el, jscher2000, James, mozbrowser, AliceWyman, and marsf) and 16 contributors have been here for 15+ years!
  • In 2024*, our contributor community responded to 18,390 forum inquiries, made 747 en-US revisions and 5,684 l10n revisions to our Knowledge Base, responded to 441 Tweets, and issued 1,296 Play Store review responses (*from Jan-Oct 2024 for Firefox desktop, Android, and iOS. Non OP and non staff)

Screenshot of the top contributors from Jan-Oct 2024

Chart reflects top contributors for Firefox (Desktop, Android, and iOS)

Highlights from throughout the years:

Started in October 2007, SUMO has evolved in many different ways, but its spirit remains the same. It supports our wider user community while also allowing us to build strong relationships with our contributors. Below is a timeline of some key moments in SUMO’s history:

  • 2 October 2007 – SUMO launched on TikiWiki. Knowledge Base was implemented in this initial phase, but article localization wasn’t supported until February 2008.
  • 18 December 2007 – Forum went live
  • 28 December 2007 – Live chat launched
  • 5 February 2009 – SUMO logo was introduced
  • 11 October 2010 – We expanded to Twitter (now X) supported by the Army of Awesome
  • December 2010 – SUMO migrated from TikiWiki to Kitsune. The migration was done in stages and lasted most of 2010.
  • 14 March 2021 – We expanded to take on Play Store support and consolidated our social support platforms in Conversocial/Verint
  • 9 November 2024 – Our SUMO channels are largely powered by active contributors across forums, Knowledge Base and social

We are so grateful for our active community of contributors who bring our mission to life every day. Special thanks to those of you who have been with us since the beginning.

And to celebrate this milestone, we are going to reward top contributors (>99 contributions) for all products in 2024 with a special SUMO badge. Additionally, contributors with more than 999 contributions throughout SUMO’s existence and those with >99 contributions in 2024 will be given swag vouchers to shop at Mozilla’s swag stores.

Cheers to the progress we’ve made, and the incredible foundation we’ve built together. The best is yet to come!

 

P.S. Thanks to Chris Ilias for additional notes on SUMO's history.

Mozilla Privacy BlogJoin Us to Mark 20 Years of Firefox

You’re invited to Firefox’s 20th birthday!

 

We’re marking 20 years of Firefox — the independent open-source browser that has reshaped the way millions of people explore and experience the internet. Since its launch, Firefox has championed privacy, security, transparency, and put control back in the hands of people online.

Come celebrate two decades of innovation, advocacy, and community — while looking forward to what’s to come.

The post Join Us to Mark 20 Years of Firefox appeared first on Open Policy & Advocacy.

Mozilla Privacy BlogBehind the Scenes of eIDAS: A Look at Article 45 and Its Implications

On October 21, 2024, Mozilla hosted a panel discussion during the Global Encryption Summit to explore the ongoing debate around Article 45 of the eIDAS regulation. Moderated by Robin Wilton from the Internet Society, the panel featured experts Dennis Jackson from Mozilla, Alexis Hancock from Certbot at EFF, and Thomas Lohninger from epicenter.works. Our panelists provided their insights on the technical, legal, and privacy concerns surrounding Article 45 and the potential impact on internet security and privacy. The panel, facilitated by Mozilla in connection with its membership on the Global Encryption Coalition Steering Committee, was part of the annual celebration of Global Encryption Day on October 21.

What is eIDAS and Why is Article 45 Important?

The original eIDAS regulation, introduced in 2014, aimed to create a unified framework for secure electronic identification (eID) and trust services across the European Union. Such trust services, provided by designated Trust Service Providers (TSPs), included electronic signatures, timestamps, and website authentication certificates. Subsequently, Qualified Web Authentication Certificates (QWACs) were also recognized as a method to verify that the entity behind a website also controls its domain, in an effort to increase users’ trust that they are accessing a legitimate website.

Over the years, the cybersecurity community has expressed concerns about users’ privacy and security regarding the use of QWACs, as they can lead to a false sense of security. Despite this criticism, an updated EU proposal to the original law, introduced in 2021, in essence aimed to mandate the recognition of QWACs as long as they were issued by qualified TSPs. This, in practice, would undermine decades of web security measures and put users’ privacy and security at stake.

The Security Risk Ahead campaign raised awareness of these issues by engaging widely with policymakers, including through a public letter signed by more than 500 experts and endorsed by organizations such as the Internet Society, European Digital Rights (EDRi), EFF, and Epicenter.works.

The European Parliament introduced last-minute changes to mitigate risks of surveillance and fraud, but these safeguards now need to be technically implemented to protect EU citizens from potential exposure.

Technical Concerns and Security Risks

Thomas Lohninger provided context on how Article 45 fits into the larger eIDAS framework. He explained that while eIDAS aims to secure the wider digital ecosystem, QWACs under Article 45 could erode trust in website security, affecting both European and global users.

Dennis Jackson, a member of Mozilla’s cryptography team, cautioned that without robust safeguards, Qualified Website Authentication Certificates (QWACs) could be misused, leading to an increased risk of fraud. He noted that the limited involvement of technical experts in drafting Article 45 resulted in significant gaps in the law. The version of Article 45, as originally proposed in 2021, radically expanded the capabilities of EU governments to surveil their citizens by ensuring that cryptographic keys under government control can be used to intercept encrypted web traffic across the EU.

Why Extended Validation Certificates (EVs) Didn’t Work—and Why Article 45 Might Not Either

Alexis Hancock compared Article 45 to extended validation (EV) certificates, which were introduced years ago with similar intentions but ultimately failed to achieve their goals. EV certificates were designed to offer more information about the identity of websites but ended up being expensive and ineffective as most users didn’t even notice them.

Hancock cautioned that QWACs could suffer from the same problems. Instead of focusing on complex authentication mechanisms, she argued, the priority should be on improving encryption and keeping the internet secure for everyone, regardless of whether a website has paid for a specific type of certificate.

Balancing Security and Privacy: A Tough Trade-Off

A key theme was balancing online transparency and protecting user privacy. All the panelists agreed that while identifying websites more clearly may have its advantages, it should not come at the expense of privacy and security. The risk is that requiring more authentication online could lead to reduced anonymity and greater potential for surveillance, undermining the principles of free expression and privacy on the internet.

The panelists also pointed out that Article 45 could lead to a fragmented internet, with different regions adopting conflicting rules for registering and asserting ownership of a website. This fragmentation would make it harder to maintain a secure and unified web, complicating global web security.

The Role of Web Browsers in Protecting Users

Web browsers, like Firefox, play a crucial role in protecting users. The panelists stressed that browsers have a responsibility to push back against policies that could compromise user privacy or weaken internet security.

Looking Ahead: What’s Next for eIDAS and Web Security?

Thomas Lohninger raised the possibility of legal challenges to Article 45. If the regulation is implemented in a way that violates privacy rights or data protection laws, it could be contested under the EU’s legal frameworks, including the General Data Protection Regulation (GDPR) and the ePrivacy Directive. Such battles, however, could be lengthy and complex, underscoring the need for continued advocacy.

As the panel drew to a close, the speakers emphasized that while the recent changes to Article 45 represent progress, the fight is far from over. The implementation of eIDAS continues to evolve, and it’s crucial that stakeholders, including browsers, cybersecurity experts, and civil society groups, remain vigilant in advocating for a secure and open internet.

The consensus from the panel was clear: as long as threats to encryption and web security exist, the community must stay engaged in these debates. Scrutinizing policies like eIDAS is essential to ensure they truly serve the interests of internet users, not just large institutions or governments.

The panelists concluded by calling for ongoing collaboration between policymakers, technical experts, and the public to protect the open web and ensure that any changes to digital identity laws enhance, rather than undermine, security and privacy for all.


You can watch the panel discussion here.

The post Behind the Scenes of eIDAS: A Look at Article 45 and Its Implications appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogGoogle Summer of Code 2024 results

As we have previously announced, the Rust Project participated in Google Summer of Code (GSoC) for the first time this year. Nine contributors worked tirelessly on their exciting projects for several months. The projects had various durations; some of them ended in August, while the last one concluded in mid-October. Now that the final reports of all the projects have been submitted, we can happily announce that all nine contributors have passed the final review! That means that we have deemed all of their projects to be successful, even though they might not have fulfilled all of their original goals (but that was expected).

We had a lot of great interactions with our GSoC contributors, and based on their feedback, it seems that they were also quite happy with the GSoC program and that they had learned a lot. We are of course also incredibly grateful for all their contributions - some of them have even continued contributing after their projects ended, which is really awesome. In general, we think that Google Summer of Code 2024 was a success for the Rust Project, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our project idea list.

Below you can find a brief summary of each of our GSoC 2024 projects, including feedback from the contributors and mentors themselves. You can find more information about the projects here.

Adding lint-level configuration to cargo-semver-checks

cargo-semver-checks is a tool designed for automatically detecting semantic versioning conflicts, which is planned to one day become a part of Cargo itself. The goal of this project was to enable cargo-semver-checks to ship additional opt-in lints by allowing users to configure which lints run in which cases, and whether their findings are reported as errors or warnings. Max achieved this goal by implementing a comprehensive system for configuring cargo-semver-checks lints directly in the Cargo.toml manifest file. He also extensively discussed the design with the Cargo team to ensure that it is compatible with how other Cargo lints are configured, and won't present a future compatibility problem for merging cargo-semver-checks into Cargo.
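As a rough sketch of what such manifest-based configuration might look like, an entry along these lines could downgrade one lint to a warning and opt in to another. Note that the table path and lint names here are illustrative, not authoritative; consult the cargo-semver-checks documentation for the exact syntax.

```toml
# Hypothetical Cargo.toml fragment configuring cargo-semver-checks lints.
# The metadata table path and lint names below are examples only.
[package.metadata.cargo-semver-checks.lints]
# Report this lint's findings as warnings instead of errors.
enum_missing = "warn"
# Enable an opt-in lint that is not on by default.
some_optional_lint = "deny"
```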

Predrag, who is the author of cargo-semver-checks and who mentored Max on this project, was very happy with his contributions that even went beyond his original project scope:

He designed and built one of our most-requested features, and produced design prototypes of several more features our users would love. He also observed that writing quality CLI and functional tests was hard, so he overhauled our test system to make better tests easier to make. Future work on cargo-semver-checks will be much easier thanks to the work Max put in this summer.

Great work, Max!

Implementation of a faster register allocator for Cranelift

The Rust compiler can use various backends for generating executable code. The main one is of course the LLVM backend, but there are other backends, such as GCC, .NET or Cranelift. Cranelift is a code generator for various hardware targets, essentially something similar to LLVM. The Cranelift backend uses Cranelift to compile Rust code into executable code, with the goal of improving compilation performance, especially for debug (unoptimized) builds. Even though this backend can already be faster than the LLVM backend, we have identified that it was slowed down by the register allocator used by Cranelift.

Register allocation is a well-known compiler task where the compiler decides which registers should hold variables and temporary expressions of a program. Usually, the goal of register allocation is to perform the register assignment in a way that maximizes the runtime performance of the compiled program. However, for unoptimized builds, we often care more about the compilation speed instead.
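To make the trade-off concrete, here is a deliberately naive sketch, in ordinary Rust and unrelated to Cranelift's actual fastalloc implementation, of what a "fast over good" allocator does: hand out physical registers in first-come order and spill everything that doesn't fit, with no lifetime analysis at all. It is fast precisely because it never tries to compute a good assignment.

```rust
use std::collections::HashMap;

/// Where a virtual register ends up: a physical register or a stack slot.
#[derive(Debug, PartialEq)]
enum Loc {
    Reg(usize),   // physical register index
    Spill(usize), // stack slot index
}

/// Toy "fast" allocation: scan the virtual registers in order of first
/// use, assign the next free physical register, and spill once we run
/// out. Real allocators reuse registers once values die; this one does
/// not, trading code quality for a single linear pass.
fn allocate(uses: &[u32], num_phys_regs: usize) -> HashMap<u32, Loc> {
    let mut assignment = HashMap::new();
    let mut next_reg = 0;
    let mut next_slot = 0;
    for &vreg in uses {
        assignment.entry(vreg).or_insert_with(|| {
            if next_reg < num_phys_regs {
                next_reg += 1;
                Loc::Reg(next_reg - 1)
            } else {
                next_slot += 1;
                Loc::Spill(next_slot - 1)
            }
        });
    }
    assignment
}

fn main() {
    // Four virtual registers, only two physical registers available:
    // v0 and v1 get registers, v2 and v3 are spilled.
    let alloc = allocate(&[0, 1, 0, 2, 3], 2);
    assert_eq!(alloc[&0], Loc::Reg(0));
    assert_eq!(alloc[&3], Loc::Spill(1));
    println!("{alloc:?}");
}
```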

Demilade thus proposed to implement a new Cranelift register allocator called fastalloc, with the goal of making it as fast as possible, at the cost of the quality of the generated code. He was very well prepared; in fact, he had a prototype implementation ready even before his GSoC project started! However, register allocation is a complex problem, and it took several months to finish the implementation and optimize it as much as possible. Demilade also made extensive use of fuzzing to make sure that his allocator is robust even in the presence of various edge cases.

Once the allocator was ready, Demilade benchmarked the Cranelift backend both with the original and his new register allocator using our compiler benchmark suite. And the performance results look awesome! With his faster register allocator, the Rust compiler executes up to 18% fewer instructions across several benchmarks, including complex ones like performing a debug build of Cargo itself. Note that this is an end-to-end performance improvement of the time needed to compile a whole crate, which is really impressive. If you would like to examine the results in more detail or even run the benchmark yourself, check out Demilade's final report, which includes detailed instructions on how to reproduce the benchmark.

Apart from having the potential to speed up compilation of Rust code, the new register allocator can also be useful for other use cases, as it can be used in Cranelift on its own (outside the Cranelift codegen backend). What can we say other than that we are very happy with Demilade's work! Note that the new register allocator is not yet available in the Cranelift codegen backend out of the box, but we expect that it will eventually become the default choice for debug builds, and that it will thus make compilation of Rust crates using the Cranelift backend faster in the future.

Improve Rust benchmark suite

This project was relatively loosely defined, with the overarching goal of improving the user interface of the Rust compiler benchmark suite. Eitaro tackled this challenge from various angles at once. He improved the visualization of runtime benchmarks, which were previously a second-class citizen in the benchmark suite, by adding them to our dashboard and by implementing historical charts of runtime benchmark results, which help us figure out how a given benchmark behaves over a longer time span.

Another improvement that he worked on was embedding a profiler trace visualizer directly within the rustc-perf website. This was a challenging task, which required him to evaluate several visualizers and figure out how to include them within the source code of the benchmark suite in a non-disruptive way. In the end, he managed to integrate Perfetto within the suite website, and also performed various optimizations to improve the performance of loading compilation profiles.

Last, but not least, Eitaro also created a completely new user interface for the benchmark suite, which runs entirely in the terminal. Using this interface, Rust compiler contributors can examine the performance of the compiler without having to start the rustc-perf website, which can be challenging to deploy locally.

Apart from the mentioned contributions, Eitaro also made a lot of other smaller improvements to various parts of the benchmark suite. Thank you for all your work!

Move cargo shell completions to Rust

Cargo's completion scripts have been hand maintained and frequently broken when changed. The goal for this effort was to have the completions automatically generated from the definition of Cargo's command-line, with extension points for dynamically generated results.

shanmu took the prototype for dynamic completions in clap (the command-line parser used by Cargo), got it working and tested for common shells, and extended the parser to cover more cases. They then added extension points for CLIs to provide custom completion results that can be generated on the fly.

In the next phase, shanmu added this to nightly Cargo and added different custom completers to match what the handwritten completions do. As an example, with this feature enabled, when you type cargo test --test= and hit the Tab key, your shell will autocomplete all the test targets in your current Rust crate! If you are interested, see the instructions for trying this out. The link also lists where you can provide feedback.

You can also check out the following issues to find out what is left before this can be stabilized:

Rewriting esoteric, error-prone makefile tests using robust Rust features

The Rust compiler has several test suites that make sure that it is working correctly under various conditions. One of these suites is the run-make test suite, whose tests were previously written using Makefiles. However, this setup posed several problems. It was not possible to run the suite on the Tier 1 Windows MSVC target (x86_64-pc-windows-msvc) and getting it running on Windows at all was quite challenging. Furthermore, the syntax of Makefiles is quite esoteric, which frequently caused mistakes to go unnoticed even when reviewed by multiple people.

Julien helped to convert the Makefile-based run-make tests into plain Rust-based tests, supported by a test support library called run_make_support. However, it was not a trivial "rewrite this in Rust" kind of deal. In this project, Julien:

  • Significantly improved the test documentation;
  • Fixed multiple bugs that were present in the Makefile versions and had gone unnoticed for years -- some tests were never testing anything or silently ignored failures, so even if the subject being tested regressed, these tests would not have caught that;
  • Added to and improved the test support library API and implementation; and
  • Improved code organization within the tests to make them easier to understand and maintain.

Just to give you an idea of the scope of his work, he has ported almost 250 Makefile tests over the span of his GSoC project! If you like puns, check out the branch names of Julien's PRs, as they are simply fantestic.

As a result, Julien has significantly improved the robustness of the run-make test suite, and improved the ergonomics of modifying existing run-make tests and authoring new run-make tests. Multiple contributors have expressed that they were more willing to work with the Rust-based run-make tests over the previous Makefile versions.

The vast majority of run-make tests now use the Rust-based test infrastructure, with a few holdouts remaining due to various quirks. After these are resolved, we can finally rip out the legacy Makefile test infrastructure.

Rewriting the Rewrite trait

rustfmt is a Rust code formatter that is widely used across the Rust ecosystem thanks to its direct integration within Cargo. Usually, you just run cargo fmt and you can immediately enjoy a properly formatted Rust project. However, there are edge cases in which rustfmt can fail to format your code. That is not such an issue on its own, but it becomes more problematic when it fails silently, without giving the user any context about what went wrong. This is what was happening in rustfmt, as many functions simply returned an Option instead of a Result, which made it difficult to add proper error reporting.

The goal of SeoYoung's project was to perform a large internal refactoring of rustfmt that would allow tracking context about what went wrong during reformatting. In turn, this would enable turning silent failures into proper error messages that could help users examine and debug what went wrong, and could even allow rustfmt to retry formatting in more situations.
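The core of such a refactor can be illustrated with a toy example. The names `RewriteError` and `rewrite_new` below are hypothetical, and rustfmt's real trait and error types are considerably more involved; the point is only that an `Option` collapses every failure into `None`, while a `Result` carries the context needed for a useful error message.

```rust
// Hypothetical, heavily simplified illustration of the Option -> Result
// refactoring pattern described above. Not rustfmt's actual API.
#[derive(Debug, PartialEq)]
enum RewriteError {
    // One possible failure: the formatted line would exceed the limit.
    ExceedsMaxWidth { max_width: usize, actual: usize },
}

// Old-style API: a `None` tells the caller nothing about what went wrong.
fn rewrite_old(line: &str, max_width: usize) -> Option<String> {
    if line.len() <= max_width {
        Some(line.trim().to_string())
    } else {
        None
    }
}

// New-style API: the error explains exactly why formatting failed,
// so it can be surfaced to the user or used to retry differently.
fn rewrite_new(line: &str, max_width: usize) -> Result<String, RewriteError> {
    if line.len() <= max_width {
        Ok(line.trim().to_string())
    } else {
        Err(RewriteError::ExceedsMaxWidth { max_width, actual: line.len() })
    }
}

fn main() {
    assert_eq!(rewrite_old("let x = 1;  ", 100), Some("let x = 1;".to_string()));
    assert_eq!(rewrite_old(&"x".repeat(200), 100), None); // no context at all
    // The Result variant can be turned into a real error message:
    let err = rewrite_new(&"x".repeat(200), 100).unwrap_err();
    assert_eq!(err, RewriteError::ExceedsMaxWidth { max_width: 100, actual: 200 });
}
```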

At first, this might sound like an easy task, but performing such large-scale refactoring within a complex project such as rustfmt is not so simple. SeoYoung needed to come up with an approach to incrementally apply these refactors, so that they would be easy to review and wouldn't impact the entire code base at once. She introduced a new trait that enhanced the original Rewrite trait, and modified existing implementations to align with it. She also had to deal with various edge cases that we hadn't anticipated before the project started. SeoYoung was meticulous and systematic with her approach, and made sure that no formatting functions or methods were missed.

Ultimately, the refactor was a success! Internally, rustfmt now keeps track of more information related to formatting failures, including errors that it could not possibly report before, such as issues with macro formatting. It also has the ability to provide information about source code spans, which helps identify parts of code that require spacing adjustments when exceeding the maximum line width. We don't yet propagate that additional failure context as user facing error messages, as that was a stretch goal that we didn't have time to complete, but SeoYoung has expressed interest in continuing to work on that as a future improvement!

Apart from working on error context propagation, SeoYoung also made various other improvements that enhanced the overall quality of the codebase, and she was also helping other contributors understand rustfmt. Thank you for making the foundations of formatting better for everyone!

Rust to .NET compiler - add support for compiling & running cargo tests

As was already mentioned above, the Rust compiler can be used with various codegen backends. One of these is the .NET backend, which compiles Rust code to the Common Intermediate Language (CIL), which can then be executed by the .NET Common Language Runtime (CLR). This backend allows interoperability of Rust and .NET (e.g. C#) code, in an effort to bring these two ecosystems closer together.

At the start of this year, the .NET backend was already able to compile complex Rust programs, but it was still lacking certain crucial features. The goal of this GSoC project, implemented by Michał, who is in fact the sole author of the backend, was to extend the functionality of this backend in various areas. As a target goal, he set out to extend the backend so that it could be used to run tests using the cargo test command. Even though it might sound trivial, properly compiling and running the Rust test harness is non-trivial, as it makes use of complex features such as dynamic trait objects, atomics, panics, unwinding or multithreading. These features were especially tricky to implement in this codegen backend, because the LLVM intermediate representation (IR) and CIL have fundamental differences, and not all LLVM intrinsics have .NET equivalents.

However, this did not stop Michał. He worked on this project tirelessly, implementing new features, fixing various issues, and learning more about the compiler's internals every day. He also documented his journey with (almost) daily updates on Zulip, which were fascinating to read. Once he had reached his original goal, he moved the goalposts up another level and attempted to run the compiler's own test suite using the .NET backend. This helped him uncover additional edge cases and also led to a refactoring of the whole backend that resulted in significant performance improvements.

By the end of the GSoC project, the .NET backend was able to properly compile and run almost 90% of the standard library core and std test suite. That is an incredibly impressive number, since the suite contains thousands of tests, some of which are quite arcane. Michał's pace has not slowed even after the project ended, and he is still continuously improving the backend. Oh, and did we already mention that his backend also has experimental support for emitting C code, effectively acting as a C codegen backend?! Michał has been very busy over the summer.

We thank Michał for all his work on the .NET backend, as it was truly inspirational, and led to fruitful discussions that were relevant also to other codegen backends. Michał's next goal is to get his backend upstreamed and create an official .NET compilation target, which could open up the doors to Rust becoming a first-class citizen in the .NET ecosystem.

Sandboxed and deterministic proc macro using WebAssembly

Rust procedural (proc) macros are currently run as native code that gets compiled to a shared object which is loaded directly into the process of the Rust compiler. Because of this design, these macros can do whatever they want, for example arbitrarily access the filesystem or communicate through a network. This has not only obvious security implications, but it also affects performance, as this design makes it difficult to cache proc macro invocations. Over the years, there have been various discussions about making proc macros more hermetic, for example by compiling them to WebAssembly modules, which can be easily executed in a sandbox. This would also open the possibility of distributing precompiled versions of proc macros via crates.io, to speed up fresh builds of crates that depend on proc macros.

The goal of this project was to examine what it would take to implement WebAssembly module support for proc macros and create a prototype of this idea. We knew this would be a very ambitious project, especially since Apurva did not have prior experience with contributing to the Rust compiler, and because proc macro internals are very complex. Nevertheless, some progress was made. With the help of his mentor, David, Apurva was able to create a prototype that can load WebAssembly code into the compiler via a shared object. Some work was also done to make use of the existing TokenStream serialization and deserialization code in the compiler's proc_macro crate.

Even though this project did not fulfill its original goals and more work will be needed in the future to get a functional prototype of WebAssembly proc macros, we are thankful for Apurva's contributions. The WebAssembly loading prototype is a good start, and Apurva's exploration of proc macro internals should serve as a useful reference for anyone working on this feature in the future. Going forward, we will try to describe more incremental steps for our GSoC projects, as this project was perhaps too ambitious from the start.

Tokio async support in Miri

miri is an interpreter that can find possible instances of undefined behavior in Rust code. It is used across the Rust ecosystem, but previously it was not possible to run it on any non-trivial programs (those that ever await on anything) that use tokio, due to a fundamental missing feature: support for the epoll syscall on Linux (and similar APIs on other major platforms).

Tiffany implemented the basic epoll operations needed to cover the majority of the tokio test suite, by crafting pure libc code examples that exercised those epoll operations, and then implementing their emulation in miri itself. At times, this required refactoring core miri components like file descriptor handling, as they were originally not created with syscalls like epoll in mind.

Surprising to everyone (though probably not to tokio-internals experts), once these core epoll operations were finished, operations like async file reading and writing started working in miri out of the box! Due to limitations of the non-blocking file operations offered by operating systems, tokio wraps these file operations in dedicated threads, which was already supported by miri.

Once Tiffany had finished the project, including stretch goals like implementing async file operations, she proceeded to contact the tokio maintainers and worked with them to run miri on most tokio tests in CI. And we have good news: so far, no soundness problems have been discovered! Tiffany has become a regular contributor to miri, focusing on continuing to expand the set of supported file descriptor operations. We thank her for all her contributions!

Conclusion

We are grateful that we could have been a part of the Google Summer of Code 2024 program, and we would also like to extend our gratitude to all our contributors! We are looking forward to joining the GSoC program again next year.

The Rust Programming Language Bloggccrs: An alternative compiler for Rust

This is a guest post from the gccrs project, at the invitation of the Rust Project, to clarify the relationship with the Rust Project and the opportunities for collaboration.

gccrs is a work-in-progress alternative compiler for Rust being developed as part of the GCC project. GCC is a collection of compilers for various programming languages that all share a common compilation framework. You may have heard about gccgo, gfortran, or g++, which are all binaries within that project, the GNU Compiler Collection. The aim of gccrs is to add support for the Rust programming language to that collection, with the goal of having the exact same behavior as rustc.

First and foremost, gccrs was started as a project because it is fun. Compilers are incredibly rewarding pieces of software, and are great fun to put together. The project was started back in 2014, before Rust 1.0 was released, but was quickly put aside due to the shifting nature of the language back then. Around 2019, work on the compiler started again, led by Philip Herron and funded by Open Source Security and Embecosm. Since then, we have kept steadily progressing towards support for the Rust language as a whole, and our team has kept growing with around a dozen contributors working regularly on the project. We have participated in the Google Summer of Code program for the past four years, and multiple students have joined the effort.

The main goal of gccrs is to provide an alternative option for compiling Rust. GCC is an old project, as it was first released in 1987. Over the years, it has accumulated numerous contributions and support for multiple targets, including some not supported by LLVM, the main backend used by rustc. A practical example of that reach is the homebrew Dreamcast scene, where passionate engineers develop games for the Dreamcast console. Its processor architecture, SuperH, is supported by GCC but not by LLVM. This means that Rust cannot be used on those platforms, except through efforts like gccrs or the rustc-codegen-gcc backend - whose main differences will be explained later.

GCC also benefits from the decades of software written in unsafe languages. As such, a high number of safety features have been developed for the project as external plugins, or even within the project as static analyzers. These analyzers and plugins are executed on GCC's internal representations, meaning that they are language-agnostic, and can thus be used on all the programming languages supported by GCC. Likewise, many GCC plugins are used for increasing the safety of critical projects such as the Linux kernel, which has recently gained support for the Rust programming language. This makes gccrs a useful tool for analyzing unsafe Rust code, and more generally Rust code which has to interact with existing C code. We also want gccrs to be a useful tool for rustc itself by helping flesh out the Rust specification effort with a unique viewpoint - that of a tool trying to replicate another's functionality, oftentimes through careful experimentation and source reading where the existing documentation did not go into enough detail. We are also in the process of developing various tools around gccrs and rustc, for the sole purpose of ensuring gccrs is as correct as rustc - which could help in discovering surprising behavior, unexpected functionality, or unspoken assumptions.

We would like to point out that our goal in aiding the Rust specification effort is not to turn it into a document for certifying alternative compilers as "Rust compilers" - while we believe that the specification will be useful to gccrs, our main goal is to contribute to it, by reviewing and adding to it as much as possible.

Furthermore, the project is still "young", and still requires a huge amount of work. There are a lot of places to make your mark, and a lot of easy things to work on for contributors interested in compilers. We have strived to create a safe, fun, and interesting space for all of our team and our GSoC students. We encourage anyone interested to come chat with us on our various communication platforms, and offer mentorship for you to learn how to contribute to the project and to compilers in general.

Maybe more importantly, however, there are a number of things that gccrs is NOT for. The project has multiple explicit non-goals, which we value just as highly as our goals.

The most crucial of these non-goals is for gccrs not to become a gateway for an alternative or extended Rust-like programming language. We do not wish to create a GNU-specific version of Rust, with different semantics or slightly different functionality. gccrs is not a way to introduce new Rust features, and will not be used to circumvent the RFC process - which we will be using, should we want to see something introduced to Rust. Rust is not C, and we do not intend to introduce subtle differences in standard by making some features available only to gccrs users. We know about the pain caused by compiler-specific standards, and have learned from the history of older programming languages.

We do not want gccrs to be a competitor to the rustc_codegen_gcc backend. While both projects will effectively achieve the same goal, which is to compile Rust code using the GCC compiler framework, there are subtle differences in what each of these projects will unlock for the language. For example, rustc_codegen_gcc makes it easy to benefit from all of rustc's amazing diagnostics and helpful error messages, and makes Rust easily usable on GCC-specific platforms. On the other hand, it requires rustc to be available in the first place, whereas gccrs is part of a separate project entirely. This is important for some users and core Linux developers for example, who believe that having the ability to compile the entire kernel (C and Rust parts) using a single compiler is essential. gccrs can also offer more plugin entrypoints by virtue of it being its own separate GCC frontend. It also allows Rust to be used on GCC-specific platforms with an older GCC where libgccjit is not available. Nonetheless, we are very good friends with the folks working on rustc_codegen_gcc, and have helped each other multiple times, especially in dealing with the patch-based contribution process that GCC uses.

All of this ties into a much more global goal, which we could summarize as the following: We do not want to split the Rust ecosystem. We want gccrs to help the language reach even more people, and even more platforms.

To ensure that, we have taken multiple measures to make sure the values of the Rust project are respected and exposed properly. One of the features we feel most strongly about is the addition of a very annoying command line flag to the compiler, -frust-incomplete-and-experimental-compiler-do-not-use. Without it, you are not able to compile any code with gccrs, and the compiler will output the following error message:

crab1: fatal error: gccrs is not yet able to compile Rust code properly. Most of the errors produced will be the fault of gccrs and not the crate you are trying to compile. Because of this, please report errors directly to us instead of opening issues on said crate's repository.

Our github repository: https://github.com/rust-gcc/gccrs

Our bugzilla tracker: https://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=__open__&component=rust&product=gcc

If you understand this, and understand that the binaries produced might not behave accordingly, you may attempt to use gccrs in an experimental manner by passing the following flag:

-frust-incomplete-and-experimental-compiler-do-not-use

or by defining the following environment variable (any value will do)

GCCRS_INCOMPLETE_AND_EXPERIMENTAL_COMPILER_DO_NOT_USE

For cargo-gccrs, this means passing

GCCRS_EXTRA_ARGS="-frust-incomplete-and-experimental-compiler-do-not-use"

as an environment variable.
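Putting the three opt-in mechanisms above together, here is a minimal shell sketch (the commented compile line is illustrative and assumes a gccrs install; the file name main.rs is made up):

```shell
# 1. Via the command-line flag (requires gccrs to be installed):
# gccrs main.rs -frust-incomplete-and-experimental-compiler-do-not-use

# 2. Via the environment variable -- any value will do:
export GCCRS_INCOMPLETE_AND_EXPERIMENTAL_COMPILER_DO_NOT_USE=1

# 3. For cargo-gccrs, forward the flag through GCCRS_EXTRA_ARGS:
export GCCRS_EXTRA_ARGS="-frust-incomplete-and-experimental-compiler-do-not-use"
echo "$GCCRS_EXTRA_ARGS"
```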

Until the compiler can compile correct Rust and, most importantly, reject incorrect Rust, we will be keeping this command line option in the compiler. The hope is that it will prevent users from potentially annoying existing Rust crate maintainers with issues about code not compiling, when it is most likely our fault for not having implemented part of the language yet. Our goal of creating an alternative compiler for the Rust language must not have a negative effect on any member of the Rust community. Of course, this command line flag is not to the taste of everyone, and there has been significant pushback to its presence... but we believe it to be a good representation of our main values.

In a similar vein, gccrs separates itself from the rest of the GCC project by not using a mailing list as its main mode of communication. The compiler we are building will be used by the Rust community, and we believe we should make it easy for that community to get in touch with us and report the problems they encounter. Since Rustaceans are used to GitHub, this is also the development platform we have been using for the past five years. Similarly, we use a Zulip instance as our main communication platform, and encourage anyone wanting to chat with us to join it. Note that we still have a mailing list, as well as an IRC channel (gcc-rust@gcc.gnu.org and #gccrust on oftc.net), where all are welcome.

To further ensure that gccrs does not create friction in the ecosystem, we want to be extremely careful about the finer details of the compiler, which to us means reusing rustc components where possible, sharing effort on those components, and communicating extensively with Rust experts in the community. Two Rust components are already in use by gccrs: a slightly older version of polonius, the next-generation Rust borrow-checker, and the rustc_parse_format crate of the compiler. There are multiple reasons for reusing these crates, with the main one being correctness. Borrow checking is a complex topic and a pillar of the Rust programming language. Having subtle differences between rustc and gccrs regarding the borrow rules would be annoying and unproductive to users - but by making an effort to start integrating polonius into our compilation pipeline, we help ensure that the results we produce will be equivalent to rustc. You can read more about the various components we use, and we plan to reuse even more here. We would also like to contribute to the polonius project itself and help make it better if possible. This cross-pollination of components will obviously benefit us, but we believe it will also be useful for the Rust project and ecosystem as a whole, and will help strengthen these implementations.

Reusing rustc components could also be extended to other areas of the compiler: Various components of the type system, such as the trait solver, an essential and complex piece of software, could be integrated into gccrs. Simpler things such as parsing, as we have done for the format string parser and inline assembly parser, also make sense to us. They will help ensure that the internal representation we deal with will correspond to the one expected by the Rust standard library.

On a final note, we believe that one of the most important steps we could take to prevent breakage within the Rust ecosystem is to further improve our relationship with the Rust community. The amount of help we have received from Rust folks is great, and we think gccrs can be an interesting project for a wide range of users. We would love to hear about your hopes for the project and your ideas for reducing ecosystem breakage or lowering friction with the crates you have published. We had a great time chatting about gccrs at RustConf 2024, and everyone's interest in the project was heartwarming. Please get in touch with us if you have any ideas on how we could further contribute to Rust.

This Week In RustThis Week in Rust 572

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Research
Miscellaneous

Crate of the Week

This week's crate is wtransport, an implementation of the WebTransport specification, a successor to WebSockets with many additional features.

Thanks to Josh Triplett for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

473 pull requests were merged in the last week

Rust Compiler Performance Triage

A week dominated by one large improvement and one large regression where luckily the improvement had a larger impact. The regression seems to have been caused by a newly introduced lint that might have performance issues. The improvement was in building rustc with protected visibility which reduces the number of dynamic relocations needed leading to some nice performance gains. Across a large swath of the perf suite, the compiler is on average 1% faster after this week compared to last week.

Triage done by @rylev. Revision range: c8a8c820..27e38f8f

Summary:

(instructions:u)            mean    range             count
Regressions ❌ (primary)     0.8%   [0.1%, 2.0%]      80
Regressions ❌ (secondary)   1.9%   [0.2%, 3.4%]      45
Improvements ✅ (primary)   -1.9%   [-31.6%, -0.1%]   148
Improvements ✅ (secondary) -5.1%   [-27.8%, -0.1%]   180
All ❌✅ (primary)           -1.0%   [-31.6%, 2.0%]    228

1 Regression, 1 Improvement, 5 Mixed; 3 of them in rollups. 46 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-11-06 - 2024-12-04 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Any sufficiently complicated C project contains an adhoc, informally specified, bug ridden, slow implementation of half of cargo.

Folkert de Vries at RustNL 2024 (youtube recording)

Thanks to Collin Richards for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language BlogNext Steps on the Rust Trademark Policy

As many of you know, the Rust language trademark policy has been the subject of an extended revision process dating back to 2022. In 2023, the Rust Foundation released an updated draft of the policy for input following an initial survey about community trademark priorities from the previous year along with review by other key stakeholders, such as the Project Directors. Many members of our community were concerned about this initial draft and shared their thoughts through the feedback form. Since then, the Rust Foundation has continued to engage with the Project Directors, the Leadership Council, and the wider Rust project (primarily via all@) for guidance on how to best incorporate as much feedback as possible.

After extensive discussion, we are happy to circulate an updated draft with the wider community today for final feedback. An effective trademark policy for an open source community should reflect our collective priorities while remaining legally sound. While the revised trademark policy cannot perfectly address every individual perspective on this important topic, its goal is to establish a framework to help guide appropriate use of the Rust trademark and reflect as many common values and interests as possible. In short, this policy is designed to steer our community toward a shared objective: to maintain and protect the integrity of the Rust programming language.

The Leadership Council is confident that this updated version of the policy has addressed the prevailing concerns about the initial draft and honors the variety of voices that have contributed to its development. Thank you to those who took the time to submit well-considered feedback for the initial draft last year or who otherwise participated in this long-running process to update our policy to continue to satisfy our goals.

Please review the updated Rust trademark policy here, and share any critical concerns you might have via this form by November 20, 2024. The Foundation has also published a blog post which goes into more detail on the changes made so far. The Leadership Council and Project Directors look forward to reviewing concerns raised and approving any final revisions prior to an official update of the policy later this year.

Niko MatsakisMinPin: yet another pin proposal

This post floats a variation of boats’ UnpinCell proposal that I’m calling MinPin.1 MinPin’s goal is to integrate Pin into the language in a “minimally disruptive” way2 – and in particular a way that is fully backwards compatible. Unlike Overwrite, MinPin does not attempt to make Pin and &mut “play nicely” together. It does however leave the door open to add Overwrite in the future, and I think helps to clarify the positives and negatives that Overwrite would bring.

TL;DR: Key design decisions

Here is a brief summary of MinPin’s rules:

  • The pinned keyword can be used to get pinned variations of things:
    • In types, pinned P is equivalent to Pin<P>, so pinned &mut T and pinned Box<T> are equivalent to Pin<&mut T> and Pin<Box<T>> respectively.
    • In function signatures, pinned &mut self can be used instead of self: Pin<&mut Self>.
    • In expressions, pinned &mut $place is used to get a pinned &mut that refers to the value in $place.
  • The Drop trait is modified to have fn drop(pinned &mut self) instead of fn drop(&mut self).
    • However, impls of Drop are still permitted (even encouraged!) to use fn drop(&mut self), but it means that your type will not be able to use (safe) pin-projection. For many types that is not an issue; for futures or other “address sensitive” types, you should use fn drop(pinned &mut self).
  • The rules for field projection from a s: pinned &mut S reference are based on whether or not Unpin is implemented:
    • Projection is always allowed for fields whose type implements Unpin.
    • For fields whose types are not known to implement Unpin:
      • If the struct S is Unpin, &mut projection is allowed but not pinned &mut.
      • If the struct S is !Unpin and does not have a fn drop(&mut self) method, pinned &mut projection is allowed but not &mut.
      • If the type checker does not know whether S is Unpin or not, or if the type S has a Drop impl with fn drop(&mut self), neither form of projection is allowed for fields that are not Unpin.
  • There is a type struct Unpinnable<T> { value: T } that always implements Unpin.
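The Unpinnable wrapper from the last bullet can already be sketched in today’s Rust — a minimal sketch, assuming only that the proposal’s Unpinnable<T> is a plain wrapper with an unconditional Unpin impl (the assert_unpin helper is made up for illustration):

```rust
use std::marker::PhantomPinned;

// Sketch of the proposed `Unpinnable<T>`: a wrapper that is always `Unpin`,
// even when `T` itself is not. (Not part of std; name from the proposal.)
pub struct Unpinnable<T> {
    pub value: T,
}

// Unconditional impl: note the absence of a `T: Unpin` bound.
impl<T> Unpin for Unpinnable<T> {}

// Compile-time check helper (hypothetical, for demonstration only).
fn assert_unpin<T: Unpin>() {}

fn main() {
    // `PhantomPinned` is `!Unpin`, yet the wrapper remains `Unpin`,
    // so `&mut` projection out of a pinned place stays legal for it.
    assert_unpin::<Unpinnable<PhantomPinned>>();
    println!("Unpinnable<PhantomPinned> is Unpin");
}
```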

Design axioms

Before I go further I want to layout some of my design axioms (beliefs that motivate and justify my design).

  • Pin is part of the Rust language. Despite Pin being entirely a “library-based” abstraction at present, it is very much a part of the language semantics, and it deserves first-class support. It should be possible to create pinned references and do pin projections in safe Rust.
  • Pin is its own world. Pin is only relevant in specific use cases, like futures or in-place linked lists.
  • Pin should have zero-conceptual-cost. Unless you are writing a Pin-using abstraction, you shouldn’t have to know or think about pin at all.
  • Explicit is possible. Automatic operations are nice but it should always be possible to write operations explicitly when needed.
  • Backwards compatible. Existing code should continue to compile and work.

Frequently asked questions

For the rest of the post I’m just going to go into FAQ mode.

I see the rules, but can you summarize how MinPin would feel to use?

Yes. I think the rule of thumb would be this. For any given type, you should decide whether your type cares about pinning or not.

Most types do not care about pinning. They just go on using &self and &mut self as normal. Everything works as today (this is the “zero-conceptual-cost” goal).

But some types do care about pinning. These are typically future implementations but they could be other special case things. In that case, you should explicitly implement !Unpin to declare yourself as pinnable. When you declare your methods, you have to make a choice

  • Is the method read-only? Then use &self, that always works.
  • Otherwise, use &mut self or pinned &mut self, depending…
    • If the method is meant to be called before pinning, use &mut self.
    • If the method is meant to be called after pinning, use pinned &mut self.

This design works well so long as all mutating methods can be categorized into before-or-after pinning. If you have methods that need to be used in both settings, you have to start using workarounds – in the limit, you make two copies.

How does MinPin compare to UnpinCell?

Those of you who have been following the various posts in this area will recognize many elements from boats’ recent UnpinCell. While the proposals share many elements, there is one key difference between them that significantly changes how they would feel to use. Which is overall better is not yet clear to me.

Let’s start with what they have in common. Both propose syntax for pinned references/borrows (albeit slightly different syntax) and both include a type for “opting out” from pinning (the eponymous UnpinCell<T> in UnpinCell, Unpinnable<T> in MinPin). Both also have a similar “special case” around Drop in which writing a drop impl with fn drop(&mut self) disables safe pin-projection.

Where they differ is how they manage generic structs like WrapFuture<F>, where it is not known whether or not they are Unpin.

struct WrapFuture<F: Future> {
    future: F,
}

Given a reference of type pinned &mut WrapFuture<F> (e.g., pinned &mut self inside a method), the question is whether we can project the field future:

impl<F: Future> WrapFuture<F> {
    fn method(pinned &mut self) {
        let f = pinned &mut self.future;
        //      -----------------------
        //      Is this allowed?
    }
}

There is a specific danger case that both sets of rules are trying to avoid. Imagine that WrapFuture<F> implements Unpin but F does not – e.g., imagine that you have a impl<F: Future> Unpin for WrapFuture<F>. In that case, the referent of the pinned &mut WrapFuture<F> reference is not actually pinned, because the type is unpinnable. If we permitted the creation of a pinned &mut F, where F: !Unpin, we would be under the (mistaken) impression that F is pinned. Bad.
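This danger case can be shown with today’s Pin API — a sketch under the assumption that someone writes the blanket Unpin impl discussed above (the Future bound is dropped for brevity, and AddrSensitive is a made-up stand-in for a !Unpin future):

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// Stand-in for an address-sensitive (!Unpin) type such as a future.
struct AddrSensitive(PhantomPinned);

struct WrapFuture<F> {
    future: F,
}

// The problematic explicit impl: the wrapper claims to be movable
// regardless of whether `F` is.
impl<F> Unpin for WrapFuture<F> {}

fn main() {
    let mut w = WrapFuture { future: AddrSensitive(PhantomPinned) };

    // Because of the impl above, safe code can "pin" the wrapper...
    let pinned: Pin<&mut WrapFuture<AddrSensitive>> = Pin::new(&mut w);
    drop(pinned);

    // ...and then move it anyway: the referent was never really pinned.
    // Handing out a pinned reference to `w.future` here would therefore
    // be unsound, which is exactly what both rule sets try to prevent.
    let _moved = w;
    println!("wrapper moved after being \"pinned\"");
}
```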

UnpinCell handles this case by saying that projecting from a pinned &mut is only allowed so long as there is no explicit impl of Unpin for WrapFuture (“if [WrapFuture<F>] implements Unpin, it does so using the auto-trait mechanism, not a manually written impl”). Basically: if the user doesn’t say whether the type is Unpin or not, then you can do pin-projection. The idea is that if the self type is Unpin, that will only be because all fields are unpin (in which case it is fine to make pinned &mut references to them); if the self type is not Unpin, then the field future is pinned, so it is safe.

In contrast, in MinPin, this case is only allowed if there is an explicit !Unpin impl for WrapFuture:

impl<F: Future> !Unpin for WrapFuture<F> {
    // This impl is required in MinPin, but not in UnpinCell
}

Explicit negative impls are not allowed on stable, but they were included in the original auto trait RFC. The idea is that a negative impl is an explicit, semver-binding commitment not to implement a trait. This is different from simply not including an impl at all, which allows for impls to be added later.

Why would you prefer MinPin over UnpinCell or vice versa?

I’m not totally sure which of these is better. I came to the !Unpin impl based on my axiom that pin is its own world – the idea was that it was better to push types to be explicitly unpin all the time than to have “dual-mode” types that masquerade as sometimes pinned and sometimes not.

In general I feel like it’s better to justify language rules by the presence of a declaration than the absence of one. So I don’t like the idea of saying “the absence of an Unpin impl allows for pin-projection” – after all, adding impls is supposed to be semver-compliant. Of course, that’s much less true for auto traits, but it can still be true.

In fact, Pin has had some unsoundness in the past based on unsafe reasoning that was justified by the lack of an impl. We assumed that &T could never implement DerefMut, but it turned out to be possible to add weird impls of DerefMut in very specific cases. We fixed this by adding an explicit impl<T> !DerefMut for &T impl.

On the other hand, I can imagine that many explicitly implemented futures might benefit from being able to be ambiguous about whether they are Unpin.

What does your design axiom “Pin is its own world” mean?

The way I see it is that, in Rust today (and in MinPin, pinned places, UnpinCell, etc), if you have a T: !Unpin type (that is, a type that is pinnable), it lives a double life. Initially, it is unpinned, and you can move it, &-ref it, or &mut-ref it, just like any other Rust value. But once a !Unpin value becomes pinned to a place, it enters a different state, in which you can no longer move it or use &mut, you have to use pinned &mut:

flowchart TD
Unpinned[
    Unpinned: can access 'v' with '&' and '&mut'
]

Pinned[
    Pinned: can access 'v' with '&' and 'pinned &mut'
]

Unpinned --
    pin 'v' in place (only if T is '!Unpin')
--> Pinned
  

One-way transitions like this limit the amount of interop and composability you get in the language. For example, if my type has &mut methods, I can’t use them once the type is pinned, and I have to use some workaround, such as duplicating the method with pinned &mut.3 In this specific case, however, I don’t think this transition is so painful, and that’s because of the specifics of the domain: futures go through a pretty hard state change where they start in “preparation mode” and then eventually start executing. The set of methods you need at these two phases are quite distinct. So this is what I meant by “pin is its own world”: pin is not very interopable with Rust, but this is not as bad as it sounds, because you don’t often need that kind of interoperability.
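Today’s Rust already exhibits this one-way transition; a small sketch (Addr is a hypothetical address-sensitive type, made !Unpin via PhantomPinned):

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// Hypothetical address-sensitive type; `PhantomPinned` opts it out of `Unpin`.
struct Addr {
    _pinned: PhantomPinned,
}

fn main() {
    // Unpinned state: moves and `&mut` access work like any other value.
    let mut v = Addr { _pinned: PhantomPinned };
    let _r: &mut Addr = &mut v;

    // One-way transition: once pinned, safe code only ever sees
    // `Pin<&mut Addr>`, never `&mut Addr`, because `Addr: !Unpin`.
    let mut pinned: Pin<Box<Addr>> = Box::pin(v);
    let _pm: Pin<&mut Addr> = pinned.as_mut();
    println!("pinned; &mut is no longer available in safe code");
}
```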

How would Overwrite affect pin being in its own world?

With Overwrite, when you pin a value in place, you just gain the ability to use pinned &mut, you don’t give up the ability to use &mut:

flowchart TD
Unpinned[
    Unpinned: can access 'v' with '&' and '&mut'
]

Pinned[
    Pinned: can additionally access 'v' with 'pinned &mut'
]

Unpinned --
    pin 'v' in place (only if T is '!Unpin')
--> Pinned
  

Making pinning into a “superset” of the capabilities of unpinned means that pinned &mut can be coerced into an &mut (it could even be a “true subtype”, in Rust terms). This in turn means that a pinned &mut Self method can invoke &mut self methods, which helps to make pin feel like a smoothly integrated part of the language.3

So does the axiom mean you think Overwrite is a bad idea?

Not exactly, but I do think that if Overwrite is justified, it is not on the basis of Pin, it is on the basis of immutable fields. If you just look at Pin, then Overwrite does make Pin work better, but it does that by limiting the capabilities of &mut to those that are compatible with Pin. There is no free lunch! As Eric Holk memorably put it to me in privmsg:

It seems like there’s a fixed amount of inherent complexity to pinning, but it’s up to us how we distribute it. Pin keeps it concentrated in a small area which makes it seem absolutely terrible, because you have to face the whole horror at once.4

I think Pin as designed is a “zero-conceptual-cost” abstraction, meaning that if you are not trying to use it, you don’t really have to care about it. That’s worth maintaining, if we can. If we are going to limit what &mut can do, the reason to do it is primarily to get other benefits, not to benefit pin code specifically.

To be clear, this is largely a function of where we are in Rust’s evolution. If we were still in the early days of Rust, I would say Overwrite is the correct call. It reminds me very much of the IMHTWAMA, the core “mutability xor sharing” rule at the heart of Rust’s borrow checker. When we decided to adopt the current borrow checker rules, the code was about 85-95% in conformance. That is, although there was plenty of aliased mutation, it was clear that “mutability xor sharing” was capturing a rule that we already mostly followed, but not completely. Because combining aliased state with memory safety is more complicated, that meant that a small minority of code was pushing complexity onto the entire language. Confining shared mutation to types like Cell and Mutex made most code simpler at the cost of more complexity around shared state in particular.

There’s a similar dynamic around replace and swap. Replace and swap are only used in a few isolated places and in a few particular ways, but all code has to be more conservative to account for that possibility. If we could go back, I think limiting Replace to some kind of Replaceable<T> type would be a good move, because it would mean that the more common case can enjoy the benefits: fewer borrow check errors and more precise programs due to immutable fields and the ability to pass an &mut SomeType and be sure that your callee is not swapping the value under your feet (useful for the “scope pattern” and also enables Pin<&mut> to be a subtype of &mut).

Why did you adopt pinned &mut and not &pin mut as the syntax?

The main reason was that I wanted a syntax that scaled to Pin<Box<T>>. But also the pin! macro exists, making the pin keyword somewhat awkward (though not impossible).

One thing I was wondering about is the phrase “pinned reference” or “pinned pointer”. On the one hand, it is really a reference to a pinned value (which suggests &pin mut). On the other hand, I think this kind of ambiguity is pretty common. The main thing I have found is that my brain has trouble with Pin<P> because it wants to think of Pin as a “smart pointer” versus a modifier on another smart pointer. pinned Box<T> feels much better this way.

Can you show me an example? What about the MaybeDone example?

Yeah, totally. So boats’ pinned places post introduced two futures, MaybeDone and Join. Here is how MaybeDone would look in MinPin, along with some inline comments:

enum MaybeDone<F: Future> {
    Polling(F),
    Done(Unpinnable<Option<F::Output>>),
    //   ---------- see below
}

impl<F: Future> !Unpin for MaybeDone<F> { }
//              -----------------------
//
// `MaybeDone` is address-sensitive, so we
// opt out from `Unpin` explicitly. I assumed
// opting out from `Unpin` was the *default* in
// my other posts.

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(pinned &mut self, cx: &mut Context<'_>) {
        if let MaybeDone::Polling(fut) = self {
            //                    ---
            // This is in fact pin-projection, although
            // it's happening implicitly as part of pattern
            // matching. `fut` here has type `pinned &mut F`.
            // We are permitted to do this pin-projection
            // to `F` because we know that `Self: !Unpin`
            // (because we declared that to be true).
            
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Unpinnable { value: Some(res) });
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(pinned &mut self) -> Option<F::Output> {
        //         ----------------
        //     This method is called after pinning, so it
        //     needs a `pinned &mut` reference...  

        if let MaybeDone::Done(res) = self {
            res.value.take()
            //  ------------
            //
            //  ...but take is an `&mut self` method
            //  and `F::Output: Unpin` is not known to hold.
            //  
            //  Therefore we have made the type in `Done`
            //  be `Unpinnable`, so that we can do this
            //  swap.
        } else {
            None
        }
    }
}

Can you translate the Join example?

Yep! Here is Join:

struct Join<F1: Future, F2: Future> {
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

impl<F1: Future, F2: Future> !Unpin for Join<F1, F2> { }
//                           -------------------------
//
// Join is a custom future, so implement `!Unpin`
// to gain access to pin-projection.

impl<F1: Future, F2: Future> Future for Join<F1, F2> {
    type Output = (F1::Output, F2::Output);

    fn poll(pinned &mut self, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // The calls to `maybe_poll` and `take_output` below
        // are doing pin-projection from `pinned &mut self`
        // to a `pinned &mut MaybeDone<F1>` (or `F2`) type.
        // This is allowed because we opted out from `Unpin`
        // above.

        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);
        
        if self.fut1.is_done() && self.fut2.is_done() {
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}

What’s the story with Drop and why does it matter?

Drop’s current signature takes &mut self. But recall that once a !Unpin type is pinned, it is only safe to use pinned &mut. This is a combustible combination. It means that, for example, I can write a Drop that uses mem::replace or swap to move values out from my fields, even though they have been pinned.

For types that are always Unpin, this is no problem, because &mut self and pinned &mut self are equivalent. For types that are always !Unpin, I’m not too worried, because Drop as-is is a poor fit for them, and pinned &mut self will be better.

The tricky bit is types that are conditionally Unpin. Consider something like this:

struct LogWrapper<T> {
    value: T,
}

impl<T> Drop for LogWrapper<T> {
    fn drop(&mut self) {
        ...
    }
}

At least today, whether or not LogWrapper is Unpin depends on whether T: Unpin, so we can’t know it for sure.

The solution that boats and I both landed on effectively creates three categories of types:5

  • those that implement Unpin, which are unpinnable;
  • those that do not implement Unpin but which have fn drop(&mut self), which are unsafely pinnable;
  • those that do not implement Unpin and do not have fn drop(&mut self), which are safely pinnable.

The idea is that using fn drop(&mut self) puts you in this purgatory category of being “unsafely pinnable” (it might be more accurate to say being “maybe unsafely pinnable”, since often at compilation time with generics we won’t know if there is an Unpin impl or not). You don’t get access to safe pin projection or other goodies, but you can do projection with unsafe code (e.g., the way the pin-project-lite crate does it today).
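The three categories can be sketched in the proposal’s hypothetical syntax (pinned parameters and explicit !Unpin impls are not valid Rust today; all type names here are made up):

```rust
// 1. `Unpin`: unpinnable. Ordinary `&mut self` everywhere, including Drop.
struct Plain { x: u32 }
impl Drop for Plain { fn drop(&mut self) { /* ... */ } }

// 2. Not known to be `Unpin`, with `fn drop(&mut self)`: "unsafely pinnable".
//    No safe pin-projection; only unsafe projection (pin-project-lite style).
struct Legacy<T> { value: T }
impl<T> Drop for Legacy<T> { fn drop(&mut self) { /* ... */ } }

// 3. `!Unpin` and no `fn drop(&mut self)`: "safely pinnable".
//    Safe pin-projection allowed; Drop, if any, takes `pinned &mut self`.
struct MyFuture<F> { fut: F }
impl<F> !Unpin for MyFuture<F> { }
impl<F> Drop for MyFuture<F> { fn drop(pinned &mut self) { /* ... */ } }
```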

It feels weird to have Drop let you use &mut self when other traits don’t.

Yes, it does, but in fact any method whose trait uses pinned &mut self can be implemented safely with &mut self so long as Self: Unpin. So we could just allow that in general. This would be cool because many hand-written futures are in fact Unpin, and so they could implement the poll method with &mut self.

Wait, so if Unpin types can use &mut self, why do we need special rules for Drop?

Well, it’s true that an Unpin type can use &mut self in place of pinned &mut self, but in fact we don’t always know when types are Unpin. Moreover, per the zero-conceptual-cost axiom, we don’t want people to have to know anything about Pin to use Drop. The obvious approaches I could think of all either violated that axiom or just… well… seemed weird:

  • Permit fn drop(&mut self) but only if Self: Unpin seems like it would work, since most types are Unpin. But in fact types, by default, are only Unpin if their fields are Unpin, and so generic types are not known to be Unpin. This means that if you write a Drop impl for a generic type and you use fn drop(&mut self), you will get an error that can only be fixed by implementing Unpin unconditionally. Because “pin is its own world”, I believe adding the impl is fine, but it violates “zero-conceptual-cost” because it means that you are forced to understand what Unpin even means in the first place.
  • To address that, I considered treating fn drop(&mut self) as implicitly declaring Self: Unpin. This doesn’t violate our axioms but just seems weird and kind of surprising. It’s also backwards incompatible with pin-project-lite.

These considerations led me to conclude that the current design kind of puts us in a place where we want three categories. I think in retrospect it’d be better if Unpin were implemented by default but not as an auto trait (i.e., all types were unconditionally Unpin unless they declare otherwise), but oh well.

What is the forwards compatibility story for Overwrite?

I mentioned early on that MinPin could be seen as a first step that can later be extended with Overwrite if we choose. How would that work?

Basically, if we did the s/Unpin/Overwrite/ change, then we would

  • rename Unpin to Overwrite (literally rename, they would be the same trait);
  • prevent overwriting, replacing, or swapping the referent of an &mut T unless T: Overwrite.

These changes mean that &mut T is pin-preserving. If T: !Overwrite, then T may be pinned, but then &mut T won’t allow it to be overwritten, replaced, or swapped, and so pinning guarantees are preserved (and then some, since technically overwrites are ok, just not replacing or swapping). As a result, we can simplify the MinPin rules for pin-projection to the following:

Given a reference s: pinned &mut S, the rules for projection of the field f are as follows:

  • &mut projection is allowed via &mut s.f.
  • pinned &mut projection is allowed via pinned &mut s.f if S: !Unpin.

What would it feel like if we adopted Overwrite?

We actually got a bit of a preview when we talked about MaybeDone. Remember how we had to introduce Unpinnable around the final value so that we could swap it out? If we adopted Overwrite, I think the TL;DR of how code would be different is that most any code that today uses std::mem::replace or std::mem::swap would probably wind up using an explicit Unpinnable-like wrapper. I’ll cover this later.

This goes a bit to show what I meant about there being a certain amount of inherent complexity that we can choose to distribute: in MinPin, this pattern of wrapping “swappable” data is isolated to pinned &mut self methods in !Unpin types. With Overwrite, it would be more widespread (but you would get more widespread benefits, as well).

Conclusion

My conclusion is that this is a fascinating space to think about!6 So fun.


  1. Hat tip to Tyler Mandry and Eric Holk who discussed these ideas with me in detail. ↩︎

  2. MinPin is the “minimal” proposal that I feel meets my desiderata; I think you could devise a maximally minimal proposal that is even smaller if you truly wanted. ↩︎

  3. It’s worth noting, though, that coercions and subtyping only go so far. For example, &mut can be coerced to &, but we often need methods that return “the same kind of reference they took in”, which can’t be managed with coercions. That’s why you see things like last and last_mut. ↩︎

  4. I would say that the current complexity of pinning is, in no small part, due to accidental complexity, as demonstrated by the recent round of exploration, but Eric’s wider point stands. ↩︎

  5. Here I am talking about the category of a particular monomorphized type in a particular version of the crate. At that point, every type either implements Unpin or it doesn’t. Note that at compilation time there is more grey area, as there can be types that may or may not be pinnable, etc. ↩︎

  6. Also that I spent way too much time iterating on this post. JUST GONNA POST IT. ↩︎

Mozilla Thunderbird: Thunderbird Monthly Development Digest: October 2024

Hello again Thunderbird Community! The last few months have involved a lot of learning for me, but I have a much better appreciation (and appetite!) for the variety of challenges and opportunities ahead for our team and the broader developer community. Catch up with last month’s update, and here’s a quick summary of what’s been happening across the different teams:

Exchange Web Services support in Rust

An important member of our team left recently, and while we’ll very much miss their spirit and leadership, we all learned a lot and are in a good position to carry the project forward. We’ve managed to unstick a few pieces of the backlog and have a few sprints left to complete work on move/copy operations, protocol logging and priority two operations (flagging messages, folder rename & delete, etc). New team members have moved past the most painful stages and have patches that have landed. Kudos to the patient mentors involved in this process!

QR Code Cross-Device Account Import

Thunderbird for Android launched this week, and the desktop client (Daily, Beta & ESR 128.4.0) now provides a simple and secure account transfer mechanism, so that account settings don’t have to be re-entered for new users of the mobile app. Download Thunderbird for Android from the Play Store.

Account Hub

Development of a refreshed account hub is moving forward apace and with the critical path broken down into sprints, our entire front end team is working to complete things in the next two weeks. Meta bug & progress tracking.

Clean up on aisle 2

In addition to our project work, we’ve had to be fairly nimble this month, with a number of upstream changes breaking our builds and pipelines. We get a ton of benefit from the platforms we inherit but at times it feels like we’re dealing with many things out of our control. Mental note: stay calm and focus on future improvements!

Global Database, Conversation View & folder corruption issues

On top of the conversation view feature and core refactoring to tackle the inner workings of thread-safe folder and message manipulation, work to implement a long term database replacement is well underway. Preliminary patches are regularly pumped into the development ecosystem for discussion and review, for which we’re very excited!

In-App Notifications

With phase 1 of this project now complete, we’ve scoped out additions that will make it even more flexible and suitable for a variety of purposes. Beta users will likely see the first notifications coming in November, so keep your eyes peeled. Meta Bug & progress tracking.

New Features Landing Soon

Several requested features are expected to debut this month (or very soon) and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

See you next month.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: October 2024 appeared first on The Thunderbird Blog.

The Mozilla Blog: Help us improve our alt text generation model

Image generated by DALL-E in response to a request for a photorealistic image of a fox standing in a grassy landscape.

Firefox 130 introduces automatic alt text for PDF images and an improved alt text flow. In addition to protecting users’ privacy with a small language model that operates locally on their device, these improvements help ensure more images receive alt text resulting in more accessible PDFs. 

You can read more about our work on the model in the earlier Hacks blog post Experimenting with local alt text generation in Firefox.

The work on the model happens outside of the mozilla-central code base, but as with the rest of the Firefox code, we want to keep the process open to our community. The language models used in our product are just weight files, and we want to ensure the Mozilla community understands how it was built and can help improve it. The open source AI definition from OSI is a work in progress, and our long-term aspiration is to follow the OSI’s guidelines for our local models.

Here’s how you can contribute to improving the model and helping with the accessibility of PDF documents. 

What can be improved?

The first version of the model is a work in progress, and it will make mistakes, especially when describing complex images. This is why we designed the feature to:

  • Encourage human review so that our users can correct inaccuracies and include any missing details before saving the alt text.
  • Set expectations for users interacting with PDFs that have alt text generated:
    • When you see the This alt text was created automatically message below the text box in the alt text editor, you’ll know that the alt text was generated using our model.
    • All users who are reading the PDF outside of the Firefox editor will experience a disclaimer that comes before the alt text. This is so people reading the alt text with a screen reader or directly on the PDF can be informed that the alt text was not human-generated. For example: “Created automatically: [alt text description will go here]”.

We hope to improve the model over time, and, as with Firefox’s source code, anyone interested in helping us refine it is welcome to contribute. You don’t have to be an AI expert – but if you are an expert and spot specific areas of improvement, we’d love to hear from you.

You can contribute by adding a new issue to our repository and choosing a topic from the issue templates:

  • Model architecture
  • Training Data
  • Training code

Here’s some information to help you file an issue under one of these topics:

Model architecture

Our vision encoder-decoder model has 180M parameters and is based on two pre-trained models: a ViT image encoder and a distilled GPT-2 text decoder.

The ViT model was pre-trained on millions of images covering the 21,000 ImageNet-21k classes, which are drawn from the WordNet hierarchy, to find objects in images.

The version of GPT-2 used for the text decoder is a distilled version of the GPT-2 model – distillation is a process used to transfer knowledge from a larger model to a smaller one with minimal accuracy loss – which makes it a good trade-off in terms of size and accuracy. Additionally, we built a ~800-word stop list to avoid generating profanity.

The whole model is 180M parameters and was quantized by converting float32 weights to int8, allowing us to shrink the size on disk to ~180MB, which sped up inference time in the browser.
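For illustration only (this is not the optimum/ONNX pipeline itself), symmetric int8 quantization boils down to storing one scale per tensor and rounding each weight to the nearest representable level; the 4x shrink from 4-byte floats to 1-byte ints is what takes a 180M-parameter model to roughly 180MB on disk:

```rust
// Illustrative sketch of symmetric per-tensor int8 quantization.
fn quantize(weights: &[f32]) -> (Vec<i8>, f32) {
    // One scale for the whole tensor, chosen so the largest weight maps to 127.
    let max_abs = weights.iter().fold(0f32, |m, w| m.max(w.abs()));
    let scale = if max_abs == 0.0 { 1.0 } else { max_abs / 127.0 };
    let q = weights.iter().map(|w| (w / scale).round() as i8).collect();
    (q, scale)
}

fn dequantize(q: &[i8], scale: f32) -> Vec<f32> {
    q.iter().map(|&v| v as f32 * scale).collect()
}

fn main() {
    let w = [0.5f32, -1.0, 0.25];
    let (q, scale) = quantize(&w);
    let back = dequantize(&q, scale);
    // Round-tripped weights are within one quantization step of the originals.
    for (a, b) in w.iter().zip(back.iter()) {
        assert!((a - b).abs() < scale);
    }
}
```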

There are many other architectures that could have been used for this job, or different quantization levels. If you believe there is a better combination, we’d love to try it.

The constraints are:

  • Everything needs to be open source under a permissive license like APLv2.
  • The model needs to be converted into ONNX using optimum.
  • The model needs to work in Transformers.js.

Training data

To train our model, we initially used the COCO and Flickr30k datasets and eventually adapted them to remove some of the annotator biases we’ve found along the way:

  • Some annotators use gender-specific descriptions. People in an image may be described as a man or a woman, which can lead to the model misgendering people. For instance, a person on a skateboard is almost always described as a man. Similar problems exist with age-specific terms (e.g., man, boy, etc.). 
  • Some descriptions may also use less-than-inclusive language or be culturally or personally offensive in some rare cases. For instance, we have spotted annotations that were only acceptable for use by and within specific demographics, were replaced in common speech by other terms decades ago, or imposed a reductive value (e.g., sexy).

To deal with these issues, we rewrote annotations with GPT-4o using a prompt that asks for a short image description. You can find that code here, and the transformed datasets are published on Hugging Face: Mozilla/flickr30k-transformed-captions-gpt4o and Mozilla/coco-gpt4o. You can read more about our process here.

Training our model using these new annotations greatly improved the results; however, we still detected some class imbalance – some types of images are underrepresented, like transportation, and some are overrepresented, like… cats. To address this, we’ve created a new complementary dataset using Pexels, with this script and GPT-4o annotations. You can find it at Mozilla/pexels-gpt4o.

We know this is still insufficient, so if you would like to help us improve our datasets, here’s what you can do:

  • If you used the feature and detected a poorly described image, send it to us so we can add it to our training datasets.
  • Create a dataset on HuggingFace to fix one or more specific class imbalances.
  • Create a dataset on HuggingFace to simply add more diverse, high-quality data.

We ask the datasets to contain the following fields:

  • Image: the image in PNG, with a maximum width or height of 700 pixels.
  • Source: the source of the image.
  • License: the license of the image. Please ensure the images you’re adding have public domain or public-domain-equivalent licenses, so they can be used for training without infringing on the rights of copyright holders. 

This will allow us to automatically generate descriptions using our prompt, and create a new dataset that we will include in the training loop.

Training code

To train the model, we are using Transformers’ Seq2SeqTrainer in a somewhat standard way (see more details here).

Let us know if you spot a problem or find a potential improvement in the code or in our hyperparameters!

The post Help us improve our alt text generation model appeared first on The Mozilla Blog.

Don Marti: links for 3 November 2024

Remote Startups Will Win the War for Top Talent Ironically, in another strike against the spontaneous collaboration argument, a study of two Fortune 500 headquarters found that transitioning from cubicles to an open office layout actually reduced face-to-face interactions by 70 percent.

Why Strava Is a Privacy Risk for the President (and You Too) Not everybody uses their real names or photos on Strava, but many do. And if a Strava account is always in the same place as the President, you can start to connect a few dots.

Why Getting Your Neighborhood Declared a Historic District Is a Bad Idea Historic designations are commonly used to control what people can do with their own private property, and can be a way of creating a kind of “backdoor” homeowners association. Some historic neighborhoods (many of which have dubious claims to the designation) around the country have HOA-like restrictions on renovations, repairs, and even landscaping.

Donald Trump Talked About Fixing McDonald’s Ice Cream Machines. Lina Khan Actually Did. Back in March, the FTC submitted a comment to the US Copyright Office asking to extend the right to repair certain equipment, including commercial soft-serve equipment.

An awful lot of FOSS should thank the Academy Linux and open source in general seem to be huge components of the movie special effects industry – to an extent that we had not previously realized. (unless you have a stack of old Linux Journal back issues from the early 2000s—we did a lot of movie covers at the time that much of this software was being developed.)

Using an 8K TV as a Monitor For programming, word processing, and other productive work, consider getting an 8K TV instead of a multi-monitor setup. An 8K TV will have superior image quality, resolution, and versatility compared to multiple 4K displays, at roughly the same size. (huge TVs are an under-rated, subsidized technology, like POTS lines. Most or all of the huge TVs available today are smart and sold with the expectation that they’ll drive subscription and advertising revenue, which means a discount for those who use them as monitors.)

Suchir Balaji, who spent four years at OpenAI, says OpenAI’s use of copyrighted data broke the law and failed to meet fair use criteria; he left in August 2024 Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.

The Unlikely Inventor of the Automatic Rice Cooker Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

Comments on TSA proposal for decentralized nonstandard ID requirements Compliance with the REAL-ID Act requires a state to electronically share information concerning all driver’s licenses and state-issued IDs with all other states, but not all states do so. Because no state complies with this provision of the REAL-ID Act, or could do so unless and until all states do so, no state-issued driver’s licenses or ID cards comply with the REAL-ID Act.

Don Marti: or we could just not

previously: Sunday Internet optimism

The consensus, dismal future of the Internet is usually wrong. Dystopias make great fiction, but the Internet is surprisingly good at muddling through and reducing each one to nuisance level.

  • We don’t have Clipper Chip dystopia that would have put backdoors in all cryptography.

  • We don’t have software patent cartel dystopia that would have locked everyone in to limited software choices and functionality, and a stagnant market.

  • We don’t have Fritz Chip dystopia that would have mandated Digital Rights Management on all devices.

None of these problems have gone away entirely—encryption backdoors, patent trolls, and DRM are all still there—but none have reached either Internet-wide catastrophe level or faded away entirely.

Today’s hottest new dystopia narrative is that we’re going to end up with surveillance advertising features in web browsers. They’ll be mathematically different from old-school cookie tracking, so technically they won’t make it possible to identify anyone individually, but they’ll still impose the same old surveillance risks on users, since real-world privacy risks are collective.

Compromising with the dystopia narrative always looks like the realistic or grown-up path forward, until it doesn’t. And then the non-dystopia timeline generally looks inevitable once you get far enough along it. This time it’s the same way. We don’t need cross-context personalized (surveillance) advertising in our web browsers any more than we need SCO licenses in our operating systems (I’m not counting the SCO license timeline as dystopia, but it’s another good example of a dismal timeline averted). Let’s look at the numbers. I’m going to make all the assumptions most favorable to the surveillance advertising argument. It’s actually probably a lot better than this. And it’s probably better in other countries, since the USA is relatively advanced in the commercial surveillance field. (If you have these figures for other countries, please let me know and I’ll link to them.)

Total money spent on advertising in the USA: $389.49 billion

USA population: 335,893,238

That comes out to about $1,160 spent on advertising to reach the average person in the USA every year. That’s $97 per month.

So let’s assume (again, making the assumption most favorable to the surveillance side) that all advertising is surveillance advertising. And ads without the surveillance, according to Professor Garrett Johnson, are worth 52 percent less than the surveillance ads.

So if you get rid of the surveillance, your ad subsidy goes from $97 to $46. Advertisers would be spending $51 less to advertise to you, and the missing $51 is a good-sized amount of extra money to come up with every month. But remember, that’s advertising money, total, not the amount that actually makes it to the people who make the ad-supported resources you want. Since the problem is how to replace the income for the artists, writers, and everyone else who makes ad-supported content, we need to multiply the missing ad subsidy by the fraction of that top-level advertising total that makes it through to the content creator in order to come up with the amount of money that needs to be filled in from other sources like subscriptions and memberships.

How much do you need to spend on subscriptions to replace $51 in ad money? That’s going to depend on your habits. But even if you have everything set up totally right, a dollar spent on ads to reach you will buy you less than a dollar you spend yourself. Thomas Baekdal writes, in How independent publishing has changed from the 1990s until today,

Up until this point, every publisher had focused on ‘traffic at scale’, but with the new direct funding focus, every individual publisher realized that traffic does not equal money, and you could actually make more money by having an audience who paid you directly, rather than having a bunch of random clicks for the sake of advertising. The ratio was something like 1:10,000. Meaning that for every one person you could convince to subscribe, donate, become a member, or support you on Patreon … you would need 10,000 visitors to make the same amount from advertising. Or to put that into perspective, with only 100 subscribers, I could make the same amount of money as I used to earn from having one million visitors.

All surveillance ad media add some kind of adtech tax. The Association of National Advertisers found that about 1/3 of the money spent to buy ad space makes it through to the publisher.

A subscription platform and subscriber services impose some costs too. To be generous to the surveillance side, let’s say that a subscription dollar is only three times as valuable as an advertising dollar. So that $51 in missing ad money means you need to come up with $17 from somewhere. This estimate is really on the high side in practice. A lot of ad money goes to overhead and to stuff like retail ad networks (online sellers bidding for better spots in shopping search results) and to ad media like billboards that don’t pay for content at all.
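The arithmetic above is easy to check, using the post's own assumptions (all advertising is surveillance advertising; non-surveillance ads are worth 52 percent less; a subscription dollar works about three times as hard as an ad dollar):

```rust
// Back-of-the-envelope check of the figures above.
const TOTAL_US_AD_SPEND: f64 = 389.49e9; // dollars per year
const US_POPULATION: f64 = 335_893_238.0;

/// Ad money spent to reach the average person, per month (~$97).
fn monthly_ad_subsidy() -> f64 {
    TOTAL_US_AD_SPEND / US_POPULATION / 12.0
}

/// The same subsidy without surveillance, at 52% less (~$46).
fn non_surveillance_subsidy() -> f64 {
    monthly_ad_subsidy() * (1.0 - 0.52)
}

/// Missing ad money, discounted because a subscription dollar is
/// assumed to be 3x as valuable as an ad dollar (~$17).
fn subscription_gap() -> f64 {
    (monthly_ad_subsidy() - non_surveillance_subsidy()) / 3.0
}

fn main() {
    println!("monthly ad subsidy:   ${:.0}", monthly_ad_subsidy());
    println!("without surveillance: ${:.0}", non_surveillance_subsidy());
    println!("gap to fill:          ${:.0}", subscription_gap());
}
```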

So, worst case, where do you get the $17? From buying less crap, that’s where. Mustri et al.(PDF) write,

[behaviorally] targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products…

You also get a piece of the national security and other collective security benefits of eliminating surveillance, some savings in bandwidth and computing resources, and a lower likelihood of becoming a victim of fraud and identity theft. But that’s pure bonus benefit on top of the win from saving money by spending less on overpriced, personally targeted, low-quality products. (If privacy protection didn’t help you buy better stuff, the surveillance companies would have said so by now.) Because surveillance advertising gives an advantage to deceptive advertisers over legit ones, the end of surveillance advertising would also mean an increase in sales for legit brands.

And we’re not done. As a wise man once said, But wait! There’s more! Before you rush to do effective privacy tips or write to your state legislators to support anti-surveillance laws, there’s one more benefit of getting rid of surveillance/personalized advertising. Remember that extra $51 that went away? It didn’t get burned up in a fire just because it didn’t get spent on surveillance advertising. Companies still have it, and they still want to sell you stuff. Without surveillance, they’ll have to look for other ways to spend it. And many of the options are win-win for the customer. In Product is the P all marketers should strive to influence, Mark Ritson points out the marketing wins from incremental product improvements, and that’s the kind of work that often gets ignored in favor of niftier, short-term, surveillance advertising projects. Improving service and pricing are other areas that will also do better without surveillance advertising contending for budgets. There is a lot of potential gain for a lot of people in getting rid of surveillance advertising, so let’s not waste the opportunity. Don’t worry, we’ll get another Internet dystopia narrative to worry about eventually.

More: stop putting privacy-enhancing technologies in web browsers

Related

Product is the P all marketers should strive to influence If there is one thing I have learned from a thousand customers discussing a hundred different products it’s that the things a company thinks are small are, from a consumer perspective, big. And the grand improvements the company is spending bazillions on are probably of little significance. Finding out from the source what needs to be fixed or changed and then getting it done is the quiet product work of proper marketers. (yes, I linked to this twice.)

I Bought Tech Dupes on Temu. The Shoddy Gear Wasn’t Worth the $1,260 in Savings My journey into the shady side of shopping brought me to the world of dupes — from budget alternatives to bad knockoffs of your favorite tech.

Political fundraisers WinRed and ActBlue are taking millions of dollars in donations from elderly dementia patients to fuel their campaigns [S]ome of these elderly, vulnerable consumers have unwittingly given away six-figure sums – most often to Republican candidates – making them among the country’s largest grassroots political donors.

Bonus links

Marketers in a dying internet: Why the only option is a return to simplicity With machine-generated content now cluttering the most visible online touchpoints (like the frontpage of Google, or your Facebook timeline), it feels inevitable that consumer behaviors will shift as a result. And so marketers need to change how they reach target audiences.

I attended Google’s creator conversation event, and it turned into a funeral

Is AI advertising going to be too easy for its own good? As Rory Sutherland said, When human beings process a message, we sort of process how much effort and love has gone into the creation of this message and we pay attention to to the message accordingly. It’s costly signaling of a kind.

How Google is Killing Bloggers and Small Publishers – And Why

Exploiting Meta’s Weaknesses, Deceptive Political Ads Thrived on Facebook and Instagram in Run-Up to Election

Ninth Circuit Upholds AADC Ban on “Dark Patterns”

Economist ‘future-proofing’ bid brings back brand advertising and targets students

The Talospace Project: Updated Baseline JIT OpenPOWER patches for Firefox 128ESR

I updated the Baseline JIT patches to apply against Firefox 128ESR, though if you use the Mercurial rebase extension (and you should), it will rebase automatically and only one file had to be merged — which it did for me also. Nevertheless, everything is up to date against tip again, and this patchset works fine for both Firefox and Thunderbird. I kept the fix for bug 1912623 because I think Mozilla's fix in bug 1909204 is wrong (or at least suboptimal) and this is faster on systems without working Wasm. Speaking of, I need to get back into porting rr to ppc64le so I can solve those startup crashes.

The Mozilla Blog: After Ticketmaster’s data breach, it’s time to secure your info

Still in its “anti-hero” era, Ticketmaster has users reeling from a data breach last May, when a hacker group claimed to have stolen data from more than 500 million people.

The breach coincided with Taylor Swift’s Eras Tour, one of the biggest tours ever that just so happened to have one of the most problematic rollouts ever. (So many fans tried to buy presale tickets that Ticketmaster’s system crashed, forcing the company to cancel the general sale — yet bots and scalpers still managed to grab tickets.)

So what do you do after a massive data breach?

Use 2FA

Two-factor authentication (2FA if you’re into brevity) is a simple and effective way to add an extra layer of security to your logins.

Change old passwords

Look. We get it. “FearlessSwiftie13!” is a pretty solid password. But if you’ve been using it since 2008, it’s time to update it. Make it something less obvious, maybe even use Firefox’s password generator. Don’t re-use passwords. If they’re easy to remember, they’re easy to hack.

Mozilla Monitor

Not to plug our own thing, but Mozilla Monitor does a pretty good job of showing what personal data was actually breached. We recommend the free scan; it’ll tell you if your phone number, passwords or home address have been leaked and alert you to future breaches, so you can act accordingly and stay in the loop.

No phish

Because the Ticketmaster data breach was so big, many people’s information could now be in the hands of scammers, who may use the data they got to pose as Ticketmaster or concert venues, to steal even more of your information. Be on the lookout for any emails or texts that seem suspicious or off.

Keep tabs on your statements

Regularly review your credit card statements. Pick a day and make it a habit. Even if you haven’t been part of a headline-making breach, it’s smart – you’ll catch any unfamiliar charges and can report them to your card issuer right away.

Data breaches are no fun, but they do help people snap out of their old (and easily hackable) habits. By using a combination of these steps above and some good ol’-fashioned common sense, you’ll minimize the risk of them happening again. 


The post After Ticketmaster’s data breach, it’s time to secure your info appeared first on The Mozilla Blog.

Mozilla Performance Blog: Performance Testing Newsletter (Q3 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products.

Last quarter was MozWeek, and we had a great time meeting a number of you in our PerfTest Regression Workshop – thank you all for joining us, and making it a huge success! If you didn’t get a chance to make it, you can find the slides here, and most of the information from the workshop (including some additional bits) can be found in this documentation page. We will be running this workshop again next MozWeek, along with a more advanced version.

See below for highlights from the changes made in the last quarter.

Highlights

Blog Posts ✍️

Contributors

  • Myeongjun Go [:myeongjun]
  • Mayank Bansal [:mayankleoboy1]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

The Mozilla Blog: The AI problem we can’t ignore

In August 2020, as the pandemic confined people to their homes, the U.K. canceled A-level exams and turned to an algorithm to calculate grades, key for university admissions. Based on historical data that reflected the resource advantages of private schools, the algorithm disproportionately downgraded state students. Those who attended private schools, meanwhile, received inflated grades. News of the results set off widespread backlash. The system reinforced social inequities, critics said.

This isn’t just a one-off mistake – it’s a sign of AI bias creeping into our lives, according to Gemma Galdon-Clavell, a tech policy expert and one of Mozilla’s 2025 Rise25 honorees. Whether it’s deciding who gets into college or a job, who qualifies for a loan, or how health care is distributed, bias in AI can set back efforts toward a more equitable society.

In an opinion piece for Context by the Thomson Reuters Foundation, Gemma asks us to consider the consequences of not addressing this issue. She argues that bias and fairness are the biggest yet often overlooked threats of AI. You can read her essay here.

We chatted with Gemma about her piece below. 

Can you give examples of how AI is already affecting us?

AI is involved in nearly everything — whether you’re applying for a job, seeing a doctor, or applying for housing or benefits. Your resume might be screened by an AI, your wait time at the hospital could be determined by an AI triage system, and decisions about loans or mortgages are often assisted by AI. It’s woven into so many aspects of decision-making, but we don’t always see it.

Why is bias in AI so problematic?

AI systems look for patterns and then replicate them. These patterns are based on majority data, which means that minorities — people who don’t fit the majority patterns — are often disadvantaged. Without specific measures built into AI systems to address this, they will inevitably reinforce existing biases. Bias is probably the most dangerous technical challenge in AI, and it’s not being tackled head-on.

How can we address these issues?

At Eticas, we build software to identify outliers — people who don’t fit into majority patterns. We assess whether these outliers are relevant and make sure they aren’t excluded from positive outcomes. We also run a nonprofit that helps communities affected by biased AI systems. If a community feels they’ve been negatively impacted by an AI system, we work with them to reverse-engineer it, helping them understand how it works and giving them the tools to advocate for fairer systems.

What can someone do if an AI system affects them, but they don’t fully understand how it works?

Unfortunately, not much right now. Often, people don’t even know an AI system made a decision about their lives. And there aren’t many mechanisms in place for contesting those decisions. It’s different from buying a faulty product, where you have recourse. If AI makes a decision you don’t agree with, there’s very little you can do. That’s one of the biggest challenges we need to address — creating systems of accountability for when AI makes mistakes.

You’ve highlighted the challenges. What gives you hope about the future of AI?

The progress of our work on AI auditing! For years now we’ve been showing how there is an alternative AI future, one where AI products are built with trust and safety at heart, where AI audits are seen as proof of responsibility and accountability — and ultimately, safety. I often mention how my work is to build the seatbelts of AI, the pieces that make innovation safer and better. A world where we find non-audited AI as unthinkable as cars without seatbelts or brakes, that’s an AI future worth fighting for.

The post The AI problem we can’t ignore appeared first on The Mozilla Blog.

The Rust Programming Language BlogOctober project goals update

The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as flagship goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

The biggest elements of our goal are solving the "send bound" problem via return-type notation (RTN) and adding support for async closures. This month we made progress towards both. For RTN, @compiler-errors landed support for using RTN in self-types like where Self::method(): Send. He also authored a blog post with a call for testing explaining what RTN is and how it works. For async closures, the lang team reached a preliminary consensus on the async Fn syntax, with the understanding that it will also include some "async type" syntax. This rationale was documented in RFC #3710, which is now open for feedback. The team held a design meeting on Oct 23 and @nikomatsakis will be updating the RFC with the conclusions.

We have also been working towards a release of the dynosaur crate that enables dynamic dispatch for traits with async functions. This is intended as a transitional step before we implement true dynamic dispatch. The next steps are to polish the implementation and issue a public call for testing.

With respect to async drop experiments, @nikomatsakis began reviews. It is expected that reviews will continue for some time as this is a large PR.

Finally, no progress has been made towards async WG reorganization. A meeting was scheduled but deferred. @tmandry is currently drafting an initial proposal.

We have made significant progress on resolving blockers to Linux building on stable. Support for struct fields in the offset_of! macro has been stabilized. The final naming for the "derive-smart-pointer" feature has been decided as #[derive(CoercePointee)]; @dingxiangfei2009 prepared PR #131284 for the rename and is working on modifying the rust-for-linux repository to use the new name. Once that is complete, we will be able to stabilize. We decided to stabilize support for references to statics in constants (the pointers-refs-to-static feature) and are now awaiting a stabilization PR from @dingxiangfei2009.

Rust for Linux (RfL) is one of the major users of the asm-goto feature (and inline assembly in general) and we have been examining various extensions. @nbdd0121 authored a hackmd document detailing RfL's experiences and identifying areas for improvement. This led to two immediate action items: making target blocks safe-by-default (rust-lang/rust#119364) and extending const to support embedded pointers (rust-lang/rust#128464).

Finally, we have been finding an increasing number of stabilization requests at the compiler level, and so @wesleywiser and @davidtwco from the compiler team have started attending meetings to create a faster response. One of the results of that collaboration is RFC #3716, authored by Alice Ryhl, which proposes a method to manage compiler flags that modify the target ABI. Our previous approach has been to create distinct targets for each combination of flags, but the number of flags needed by the kernel makes that impractical. Authoring the RFC revealed more such flags than previously recognized, including those that modify LLVM behavior.

The Rust 2024 edition is progressing well and is on track to be released on schedule. The major milestones include preparing to stabilize the edition by November 22, 2024, with the actual stabilization occurring on November 28, 2024. The edition will then be cut to beta on January 3, 2025, followed by an announcement on January 9, 2025, indicating that Rust 2024 is pending release. The final release is scheduled for February 20, 2025.

The priorities for this edition have been to ensure its success without requiring excessive effort from any individual. The team is pleased with the progress, noting that this edition will be the largest since Rust 2015, introducing many new and exciting features. The process has been carefully managed to maintain high standards without the need for high-stress heroics that were common in past editions. Notably, the team has managed to avoid cutting many items from the edition late in the development process, which helps prevent wasted work and burnout.

All priority language items for Rust 2024 have been completed and are ready for release. These include several key issues and enhancements. Additionally, there are three changes to the standard library, several updates to Cargo, and an exciting improvement to rustdoc that will significantly speed up doctests.

This edition also introduces a new style edition for rustfmt, which includes several formatting changes.

The team is preparing to start final quality assurance crater runs. Once these are triaged, the nightly beta for Rust 2024 will be announced, and wider testing will be solicited.

Rust 2024 will be stabilized in nightly in late November 2024, cut to beta on January 3, 2025, and officially released on February 20, 2025. More details about the edition items can be found in the Edition Guide.

Goals with updates

  • camelid has started working on using the new lowering schema for more than just const parameters, which once done will allow the introduction of a min_generic_const_args feature gate.
  • compiler-errors has been working on removing the eval_x methods on Const that do not perform proper normalization and are incompatible with this feature.
  • Posted the September update.
  • Created more automated infrastructure to prepare the October update, utilizing an LLM to summarize updates into one or two sentences for a concise table.
  • No progress has been made on this goal.
  • The goal will be closed as consensus indicates stabilization will not be achieved in this period; it will be revisited in the next goal period.
  • No major updates to report.
  • Preparing a talk for next week's EuroRust has taken away most of the free time.
  • Key developments: with the PR for supporting implied super trait bounds landed (#129499), the current implementation is mostly complete: it allows most code that should compile and rejects all code that shouldn't.
  • Further testing is required, with the next steps being improving diagnostics (#131152), and fixing more holes before const traits are added back to core.
  • A work-in-progress pull request is available at https://github.com/weihanglo/cargo/pull/66.
  • The use of wasm32-wasip1 as a default sandbox environment is unlikely due to its lack of support for POSIX process spawning, which is essential for various build script use cases.
  • The Autodiff frontend was merged, including over 2k LoC and 30 files, making the remaining diff much smaller.
  • The Autodiff middle-end is likely getting a redesign, moving from a library-based to a pass-based approach for LLVM.
  • Significant progress was made with contributions by @x-hgg-x, improving the resolver test suite in Cargo to check feature unification against a SAT solver.
  • This was followed by porting the test cases that tripped up PubGrub to Cargo's test suite, laying the groundwork to prevent regression on important behaviors when Cargo switches to PubGrub and preparing for fuzzing of features in dependency resolution.
  • The team is working on a consensus for handling generic parameters, with both PRs currently blocked on this issue.
  • Attempted stabilization of -Znext-solver=coherence was reverted due to a hang in nalgebra, with subsequent fixes improving but not fully resolving performance issues.
  • No significant changes to the new solver have been made in the last month.
  • GnomedDev pushed rust-lang/rust#130553, which replaced an old Clippy infrastructure with a faster one (switching from string matching to symbol matching).
  • Inspections into Clippy's type sizes and cache alignment are being started, but nothing fruitful yet.
  • The linting behavior was reverted until an unspecified date.
  • The next steps are to decide on the future of linting and to write the never patterns RFC.
  • The PR https://github.com/rust-lang/crates.io/pull/9423 has been merged.
  • Work on the frontend feature is in progress.
  • Key developments in the 'Scalable Polonius support on nightly' project include fixing test failures due to off-by-one errors from old mid-points, and ongoing debugging of test failures with a focus on automating the tracing work.
  • Efforts have been made to accept variations of issue #47680, with potential adjustments to active loans computation and locations of effects. Amanda has been cleaning up placeholders in the work-in-progress PR #130227.
  • rust-lang/cargo#14404 and rust-lang/cargo#14591 have been addressed.
  • Waiting on time to focus on this in a couple of weeks.
  • Key developments: Added the cases in the issue list to the UI test to reproduce the bug or verify the non-reproducibility.
  • Blockers: none.
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue.
  • Students from the CMU Practicum Project have started writing function contracts that include safety conditions for some unsafe functions in the core library, and verifying that safe abstractions respect those pre-conditions and are indeed safe.
  • Help is needed to write more contracts, integrate new tools, review pull requests, or participate in the repository discussions.
  • Progress has been made in matching rustc suggestion output within annotate-snippets, with most cases now aligned.
  • The focus has been on understanding and adapting different rendering styles for suggestions to fit within annotate-snippets.

Goals without updates

The following goals have not received updates in the last month:

Mozilla ThunderbirdThunderbird for Android 8.0 Takes Flight

Just over two years ago, we announced our plans to bring Thunderbird to Android by taking K-9 Mail under our wing. The journey took a little longer than we had originally anticipated and there was a lot to learn along the way, but the wait is finally over! For all of you who have ever asked “when is Thunderbird for Android coming out?”, the answer is – today! We are excited to announce that the first stable release of Thunderbird for Android is out now, and we couldn’t be prouder of the newest, most mobile member of the Thunderbird family.

Resources

Thanks for Helping Thunderbird for Android Fly

Thank you for being a part of the community and sharing this adventure on Android with us! We’re especially grateful to all of you who have helped us test the beta and release candidate images. Your feedback helped us find and fix bugs, test key features, and polish the stable release. We hope you enjoy using the newest Thunderbird, now and for a long time to come!

The post Thunderbird for Android 8.0 Takes Flight appeared first on The Thunderbird Blog.

Wladimir PalantThe Karma connection in Chrome Web Store

Somebody brought to my attention that the Hide YouTube Shorts extension for Chrome changed hands and turned malicious. I looked into it and could confirm that it contained two undisclosed components: one performing affiliate fraud and the other sending users’ every move to some Amazon cloud server. But that wasn’t all of it: I discovered eleven more extensions written by the same people. Some contained only the affiliate fraud component, some only the user tracking, some both. A few don’t appear to be malicious yet.

While most of these extensions were supposedly developed or bought by a person without any other traces online, one broke this pattern. Karma shopping assistant has been on Chrome Web Store since 2020, the company behind it founded in 2013. This company employs more than 50 people and secured tons of cash in venture capital. Maybe a mistake on my part?

After a thorough look, this explanation seems unlikely. Not only does Karma share some backend infrastructure and considerable amounts of code with the malicious extensions. Not only does Karma Shopping Ltd. admit to selling users’ browsing profiles in their privacy policy. There is even more tying them together, including a mobile app developed by Karma Shopping Ltd. whereas the identical Chrome extension is supposedly developed by the mysterious evildoer.

Screenshot of the karmanow.com website, with the Karma logo visible and a yellow button “Add to Chrome - It’s Free”

The affected extensions

Most of the extensions in question changed hands relatively recently, the first ones in the summer of 2023. The malicious code was added immediately after the ownership transfer, with some extensions even requesting additional privileges while citing bogus reasons. A few extensions have been developed this year by whoever is behind this.

Some extensions from the latter group don’t have any obvious malicious functionality at this point. If there is tracking, it only covers the usage of the extension’s user interface rather than the entire browsing behavior. This can change at any time of course.

Name Weekly active users Extension ID Malicious functionality
Hide YouTube Shorts 100,000 aljlkinhomaaahfdojalfmimeidofpih Affiliate fraud, browsing profile collection
DarkPDF 40,000 cfemcmeknmapecneeeaajnbhhgfgkfhp Affiliate fraud, browsing profile collection
Sudoku On The Rocks 1,000 dncejofenelddljaidedboiegklahijo Affiliate fraud
Dynamics 365 Power Pane 70,000 eadknamngiibbmjdfokmppfooolhdidc Affiliate fraud, browsing profile collection
Israel everywhere 70 eiccbajfmdnmkfhhknldadnheilniafp
Karma | Online shopping, but better 500,000 emalgedpdlghbkikiaeocoblajamonoh Browsing profile collection
Where is Cookie? 93 emedckhdnioeieppmeojgegjfkhdlaeo
Visual Effects for Google Meet 1,000,000 hodiladlefdpcbemnbbcpclbmknkiaem Affiliate fraud
Quick Stickies 106 ihdjofjnmhebaiaanaeeoebjcgaildmk
Nucleus: A Pomodoro Timer and Website Blocker 20,000 koebbleaefghpjjmghelhjboilcmfpad Affiliate fraud, browsing profile collection
Hidden Airline Baggage Fees 496 kolnaamcekefalgibbpffeccknaiblpi Affiliate fraud
M3U8 Downloader 100,000 pibnhedpldjakfpnfkabbnifhmokakfb Affiliate fraud

Update (2024-11-11): Hide YouTube Shorts, DarkPDF, Nucleus and Hidden Airline Baggage Fees have been taken down. Two of them have been marked as malware and one as violating Chrome Web Store policies, meaning that existing extension users will be notified. I cannot see the reason for the different categorization, as the functionality is identical in all of these extensions. The other extensions currently remain active.

Hiding in plain sight

Whoever wrote the malicious code chose not to obfuscate it but to make it blend in with the legitimate functionality of the extension. Clearly, the expectation was that nobody would look at the code too closely. So there is for example this:

if (window.location.href.startsWith("http") ||   // true on every http(s) page
    window.location.href.includes("m.youtube.com")) {
  // … (the calls to the malicious functions went here)
}

It looks like the code inside the block would only run on YouTube. Only when you stop and consider the logic properly do you realize that it runs on every website: the startsWith("http") check already matches every regular web address. In fact, that’s the block wrapping the calls to malicious functions.
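The trap is easy to verify in isolation. Here is the extension’s condition extracted into a plain function (the function name is mine, for illustration):

```javascript
// The condition from the extension, extracted into a plain function.
// startsWith("http") matches both http:// and https:// pages, so the
// second clause never matters – the check passes on every website.
function looksLikeYouTubeOnly(href) {
  return href.startsWith("http") || href.includes("m.youtube.com");
}

console.log(looksLikeYouTubeOnly("https://example.com/"));        // true
console.log(looksLikeYouTubeOnly("https://m.youtube.com/watch")); // true
console.log(looksLikeYouTubeOnly("about:blank"));                 // false
```

Only non-web pages like about:blank fail the check, and content scripts don’t run there anyway.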

The malicious functionality is split between content script and background worker for the same reason, even though it could have been kept in one place. This way each part looks innocuous enough: there is some data collection in the content script, and then it sends a check_shorts message to the background worker. And the background worker “checks shorts” by querying some web server. Together this just happens to send your entire browsing history into the Amazon cloud.

Similarly, there are some complicated checks in the content script which eventually result in a loadPdfTab message to the background worker. The background worker dutifully opens a new tab for that address and, strangely, closes it after 9 seconds. Only when you sort through the layers does it become obvious that this is actually about adding an affiliate cookie.
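The split can be modelled in a few lines of plain JavaScript. This is an illustrative sketch, not the extensions’ actual code: the message names come from the analysis above, while the handler bodies and the in-memory dispatch standing in for chrome.runtime messaging are assumptions.

```javascript
// Minimal model of the content-script/background split described above.
const backgroundHandlers = {
  // Innocuous in isolation: "checks shorts" by asking some web server.
  check_shorts(payload) {
    return { report: `would send ${payload.url} to the tracking backend` };
  },
  // Innocuous in isolation: opens a tab and closes it after 9 seconds –
  // just long enough for an affiliate cookie to be set.
  loadPdfTab(payload) {
    return { action: `open ${payload.url}, close after 9s` };
  },
};

// The content script only ever sends messages; neither half looks
// malicious until you read them together.
function sendMessage(type, payload) {
  return backgroundHandlers[type](payload);
}

console.log(sendMessage("check_shorts", { url: "https://example.com/" }));
```

Reviewed separately, each handler has a plausible cover story; the malice only emerges from the combination.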

And of course there is the usual bunch of complicated conditions, making sure that this functionality is not triggered too soon after installation and never fires reliably enough for users to trace it back to this extension.

Affiliate fraud functionality

The affiliate fraud functionality is tied to the kra18.com domain. When this functionality is active, the extension will regularly download data from https://www.kra18.com/v1/selectors_list?&ex=90 (90 being the extension ID here, the server accepts eight different extension IDs). That’s a long list containing 6,553 host names:

Screenshot of JSON data displayed in the browser. The selectors key is expanded, twenty domain names like drinkag1.com are visible in the list.

Update (2024-11-19): As of now, the owners of this server disabled the endpoints mentioned here. You can still see the original responses on archive.today however.

Whenever one of these domains is visited and the moons are aligned in the right order, another request to the server is made with the full address of the page you are on. For example, the extension could request https://www.kra18.com/v1/extension_selectors?u=https://www.tink.de/&ex=90:

Screenshot of JSON data displayed in the browser. There are keys shortsNavButtonSelector, url and others. The url key contains a lengthy URL from awin1.com domain.

The shortsNavButtonSelector key is another red herring, the code only appears to be using it. The important key is url, the address to be opened in order to set the affiliate cookie. And that’s the address sent via loadPdfTab message mentioned before if the extension decides that right now is a good time to collect an affiliate commission.

There are also additional “selectors,” downloaded from https://www.kra18.com/v1/selectors_list_lr?&ex=90. Currently this functionality is only used on the amazon.com domain and will replace some product links with links going through jdoqocy.com domain, again making sure an affiliate commission is collected. That domain is owned by Common Junction LLC, an affiliate marketing company that published a case study on how their partnership with Karma Shopping Ltd. (named Shoptagr Ltd. back then) helped drive profits.
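The link-replacement pattern itself is simple. Here is a hypothetical sketch in plain JavaScript – the jdoqocy.com path and parameter names are assumptions for illustration, not the affiliate network’s documented scheme:

```javascript
// Illustrative sketch of the rewriting pattern: replace a product link
// with a redirect through the affiliate network, carrying the original
// destination along so a commission is credited on the resulting visit.
// The query format shown here is an assumption, not a documented API.
function toAffiliateLink(productUrl, affiliateId) {
  const redirect = new URL("https://www.jdoqocy.com/click");
  redirect.searchParams.set("id", affiliateId);
  redirect.searchParams.set("url", productUrl);
  return redirect.toString();
}

console.log(toAffiliateLink("https://www.amazon.com/dp/B000000000", "demo"));
```

To the user the rewritten link still lands on the same product page, which is why this kind of substitution goes unnoticed.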

Browsing profile collection

Some of the extensions will send each page visit to https://7ng6v3lu3c.execute-api.us-east-1.amazonaws.com/EventTrackingStage/prod/rest. According to the extension code, this is an Alooma backend. Alooma is a data integration platform which was acquired by Google a while ago. Data transmitted could look like this:

Screenshot of query string parameters displayed in Developer Tools. The parameters are: token: sBGUbZm3hp, timestamp: 1730137880441, user_id: 90, distinct_id: 7796931211, navigator_language: en-US, referrer: https://www.google.com/, local_time: Mon Oct 28 2024 18:51:20 GMT+0100 (Central European Standard Time), event: page_visit, component: external_extension, external: true, current_url: https://example.com/

Yes, this is sent for each and every page loaded in the browser, at least after you’ve been using the extension for a while. And distinct_id is my immutable user ID here.
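For reference, assembling such a payload takes only a few lines. The field names below mirror the captured request; the helper function and the sample values are illustrative:

```javascript
// Rebuilding the kind of tracking payload shown above. Field names
// mirror the captured request; the function and values are examples.
function buildPageVisitPayload(distinctId, currentUrl, referrer) {
  return new URLSearchParams({
    token: "sBGUbZm3hp",            // access token seen in the capture
    timestamp: String(Date.now()),
    user_id: "90",                  // internal extension ID, per the analysis
    distinct_id: distinctId,        // persistent per-user identifier
    event: "page_visit",
    component: "external_extension",
    external: "true",
    referrer,
    current_url: currentUrl,        // the full address of the visited page
  }).toString();
}

console.log(buildPageVisitPayload(
  "7796931211", "https://example.com/", "https://www.google.com/"));
```

With a persistent distinct_id attached to every request, the server can trivially reassemble each user’s complete browsing history.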

But wait, it’s a bit different for the Karma extension. Here you can opt out! Well, that’s only if you are using Firefox because Mozilla is rather strict about unexpected data collection. And if you manage to understand what “User interactions” means on this options page:

Screenshot of an options page with two switches labeled User interactions and URL address. The former is described with the text: Karma is a community of people who are working together to help each other get a great deal. We collect anonymized data about coupon codes, product pricing, and information about Karma is used to contribute back to the community. This data does not contain any personably identifiable information such as names or email addresses, but may include data supplied by the browser such as url address.

Well, I may disagree with the claim that url addresses do not contain personably identifiable information. And: yes, this is the entire page. There really isn’t any more text.

The data transmitted is also somewhat different:

Screenshot of query string parameters displayed in Developer Tools. The parameters are: referrer: https://www.google.com/, current_url: https://example.com/, browser_version: 130, tab_id: 5bd19785-e18e-48ca-b400-8a74bf1e2f32, event_number: 1, browser: chrome, event: page_visit, source: extension, token: sBGUbZm3hp, version: 10.70.0.21414, timestamp: 1730138671937, user_id: 6372998, distinct_id: 6b23f200-2161-4a1d-9400-98805c17b9e3, navigator_language: en-US, local_time: Mon Oct 28 2024 19:04:31 GMT+0100 (Central European Standard Time), ui_config: old_save, save_logic: rules, show_k_button: true, show_coupon_scanner: true, show_popups: true

The user_id field no longer contains the extension ID but my personal identifier, complementing the identifier in distinct_id. There is a tab_id field adding more context, so that it is not only possible to recognize which page I navigated to and from where but also to distinguish different tabs. And some more information about my system is always useful of course.

Who is behind this?

Eleven extensions on my list are supposedly developed by a person going by the name Rotem Shilop or Roni Shilop or Karen Shilop. This isn’t a very common last name, and if this person really exists, they have managed to leave no traces online. Yes, I also searched in Hebrew. Yet one extension is developed by Karma Shopping Ltd. (formerly Shoptagr Ltd.), a company based in Israel with at least 50 employees. An accidental association?

It doesn’t look like it. I’m not going into the details of shared code and tooling, let’s just say: it’s very obvious that all twelve extensions are being developed by the same people. Of course, there is still the possibility that the eleven malicious extensions are not associated directly with Karma Shopping but with some rogue employee or contractor or business partner.

However, it isn’t only the code. As explained above, five extensions including Karma share the same tracking backend which is found nowhere else. They are even sending the same access token. Maybe this backend isn’t actually run by Karma Shopping and they are only one of the customers of some third party? Yet if you look at the data being sent, clearly the Karma extension is considered first-party. It’s the other extensions which are sending external: true and component: external_extension flags.

Then maybe Karma Shopping is merely buying data from a third party, without actually being affiliated with their extensions? Again, this is possible but unlikely. One indicator is the user_id field in the data sent by these extensions. It’s the same extension ID that they use for internal communication with the kra18.com server. If Karma Shopping were granting a third party access to their server, wouldn’t they assign that third party some IDs of their own?

And those affiliate links produced by the kra18.com server? Some of them clearly mention karmanow.com as the affiliate partner.

Screenshot of JSON data displayed in the browser. url key is a long link pointing to go.skimresources.com. sref query parameter of the link is https://karmanow.com. url query parameter of the link is www.runinrabbit.com.

Finally, if we look at Karma Shopping’s mobile apps, they develop two of them. In addition to the Karma app, the app stores also contain an app called “Sudoku on the Rocks,” developed by Karma Shopping Ltd. Which is a very strange coincidence because an identical “Sudoku on the Rocks” extension also exists in the Chrome Web Store. Here however the developer is Karen Shilop. And Karen Shilop chose to include hidden affiliate fraud functionality in their extension.

By the way, guess who likes the Karma extension a lot and left a five-star review?

Screenshot of a five-star review by Rona Shilop with a generic-looking avatar of woman with a cup of coffee. The review text says: Thanks for making this amazing free extension. There is a reply by Karma Support saying: We’re so happy to hear how much you enjoy shopping with Karma.

I contacted Karma Shopping Ltd. via their public relations address about their relationship to these extensions and the Shilop person but didn’t hear back so far.

Update (2024-10-30): An extension developer told me that they were contacted on multiple independent occasions about selling their Chrome extension to Karma Shopping, each time by C-level executives of the company, from official karmanow.com email addresses. The first outreach was in September 2023, where Karma was supposedly looking into adding extensions to their portfolio as part of their growth strategy. They offered to pay between $0.2 and $1 per weekly active user.

Update (2024-11-11): Another hint pointed me towards this GitHub issue. While the content has been removed here, you can still see the original content in the edit history. It’s the author of the Hide YouTube Shorts extension asking the author of the DarkPDF extension about that Karma company interested in buying their extensions.

What does Karma Shopping want with the data?

It is obvious why Karma Shopping Ltd. would want to add their affiliate functionality to more extensions. After all, affiliate commissions are their line of business. But why collect browsing histories? Only to publish semi-insightful articles on people’s shopping behavior?

Well, let’s have a look at their privacy policy which is actually meaningful for a change. Under 1.3.4 it says:

Browsing Data. In case you a user of our browser extensions we may collect data regarding web browsing data, which includes web pages visited, clicked stream data and information about the content you viewed.

How we Use this Data. We use this Personal Data (1) in order to provide you with the Services and feature of the extension and (2) we will share this data in an aggregated, anonymized manner, for marketing research and commercial use with our business partners.

Legal Basis. (1) We process this Personal Data for the purpose of providing the Services to you, which is considered performance of a contract with you. (2) When we process and share the aggregated and anonymized data we will ask for your consent.

First of all, this tells us that Karma collecting browsing data is official. They also openly state that they are selling it. Good to know and probably good for their business as well.

As to the legal basis: I am no lawyer but I have a strong impression that they don’t deliver on the “we will ask for your consent” promise. No, not even that Firefox options page qualifies as informed consent. And this makes this whole data collection rather doubtful in the light of GDPR.

There is also a difference between anonymized and pseudonymized data. The data collection seen here is pseudonymized: while it doesn’t include my name, there is a persistent user identifier which is still linked to me. It is usually fairly easy to deanonymize pseudonymized browsing histories, e.g. because people tend to visit their social media profiles rather often.

Actually anonymized data would not allow associating it with any single person. This is very hard to achieve, and we’ve seen promises of aggregated and anonymized data go very wrong. While it’s theoretically possible that Karma correctly anonymizes and aggregates data on the server side, this is a rather unlikely outcome for a company that, as we’ve seen above, confuses the lack of names and email addresses with anonymity.

But of course these considerations only apply to the Karma extension itself. Because related extensions like Hide YouTube Shorts just straight out lie:

Screenshot of a Chrome Web Store listing. Text under the heading Privacy: The developer has disclosed that it will not collect or use your data.

Some of these extensions actually used to have a privacy policy before they were bought. Now only three still have an identical and completely bogus privacy policy. Sudoku on the Rocks happens to be among these three, and the same privacy policy is linked by the Sudoku on the Rocks mobile apps which are officially developed by Karma Shopping Ltd.

This Week In RustThis Week in Rust 571

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week’s crate is tower-http-client, a library of middlewares and various utilities for HTTP clients.

Thanks to Aleksey Sidorov for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

447 pull requests were merged in the last week

Rust Compiler Performance Triage

This week saw a lot of activity both on the regressions and improvements side. There was one large regression, which was immediately reverted. Overall, the week ended up being positive, thanks to a rollup PR that caused a tiny improvement to almost all benchmarks.

Triage done by @kobzol. Revision range: 3e33bda0..c8a8c820

Summary:

(instructions:u)              mean    range             count
Regressions ❌ (primary)      0.7%    [0.2%, 2.7%]      15
Regressions ❌ (secondary)    0.8%    [0.1%, 1.6%]      22
Improvements ✅ (primary)     -0.6%   [-1.5%, -0.2%]    153
Improvements ✅ (secondary)   -0.7%   [-1.9%, -0.1%]    80
All ❌✅ (primary)            -0.5%   [-1.5%, 2.7%]     168

6 Regressions, 6 Improvements, 4 Mixed; 6 of them in rollups. 58 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-10-30 and 2024-11-27 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

An earnest effort to pursue [P1179R1] as a Lifetime TS[P3465R0] will compromise on C++’s outdated and unworkable core principles and adopt mechanisms more like Rust’s. In the compiler business this is called carcinization: a tendency of non-crab organisms to evolve crab-like features. – Sean Baxter on circle-lang.org

Thanks to Collin Richards for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Developer Experience: Firefox WebDriver Newsletter 132

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 132 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you have ever wanted to contribute to an open source project used by millions of users, or are interested in gaining some experience in software development, jump in.

We are always grateful to receive external contributions; here are the ones that made it into Firefox 132:

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi.

WebDriver BiDi

Retry commands to avoid AbortError failures

In release 132, one of our primary focus areas was enhancing the reliability of command execution.

Internally, we sometimes need to forward commands to content processes. This can easily fail, particularly when targeting a page which was either newly created or in the middle of a navigation. These failures often result in errors such as "AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved".

<- {
  "type":"error",
  "id":14,
  "error":"unknown error",
  "message":"AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved",
  "stacktrace":""
}

While there are valid technical reasons that prevent command execution in some cases, there are also many instances where retrying the command is a feasible solution.

The browsingContext.setViewport command was updated first to retry an internal command, as it was failing frequently. We then updated our overall implementation to retry commands automatically whenever we detect that the page is navigating or about to navigate. Retrying commands is not entirely new: it is an internal feature we were already using in a few hand-picked commands. The changes in Firefox 132 just make its use much more widespread.
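In outline, the retry logic works something like the following sketch (illustrative Python, not the actual Firefox internals, which are written in JavaScript; the class and function names here are invented):

```python
import time

class AbortError(Exception):
    """Stand-in for the error raised when the target actor is destroyed
    mid-command (e.g. because the page started navigating)."""

def run_with_retries(command, max_attempts=3, delay=0.1):
    # Retry only AbortError; any other failure propagates immediately.
    for attempt in range(1, max_attempts + 1):
        try:
            return command()
        except AbortError:
            if attempt == max_attempts:
                raise
            time.sleep(delay)
```

The key design point is that only the specific "actor destroyed" error is retried; genuine command failures still surface to the client on the first attempt.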

New preference: remote.retry-on-abort

To go one step further, we decided to allow all commands to be retried by default when the remote.retry-on-abort preference is set to true. Note that true is the default value, which means that with Firefox 132, all commands which need to reach the content process might now be retried (documentation). If you were previously relying on or working around the aforementioned AbortError, and notice an unexpected issue with Firefox 132, you can update this preference to make the behavior closer to previous Firefox versions. Please also file a Bug to let us know about the problem.
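For example, if you need the pre-132 behavior back, the preference can be flipped in about:config or, equivalently, in a profile's user.js (shown here assuming the usual prefs-file syntax):

```js
// Disable automatic command retries (the default is true as of Firefox 132).
user_pref("remote.retry-on-abort", false);
```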

Bug fixes

Support.Mozilla.Org: Contributor spotlight – Michele Rodaro

Hi Mozillians,

In today’s edition, I’d like to introduce you all to Michele Rodaro, a locale leader for Italian on the Mozilla Support platform. He is a professional architect, but has found pleasure and meaning in contributing to Mozilla since 2006. I’ve met him on several occasions in the past, and reading his answers feels exactly like talking to him in real life. I’m sure you can sense his warmth and kindness just by reading his responses. Here’s a beautiful analogy from Michele about his contributions to Mozilla as they relate to his background in architecture:

I see my contribution to Mozilla a bit like participating in the realization of a project, the tools change but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority.

Q: Hi Michele, can you tell us more about yourself and what keeps you busy these days?

I live in Gemona del Friuli, a small town in the Friuli Venezia Giulia region, in the north-east of Italy, bordering Austria and Slovenia. I am a freelance architect, having graduated from Venice’s University many years ago. I own a professional studio and I mainly deal with residential planning, renovations, and design. In my free time I like to draw, read history, art, literature, satire and comics, listen to music, take care of my cats and, of course, translate or update SUMO Knowledge Base articles into Italian.

When I was younger, I played many sports (skiing, basketball, rugby, and athletics). When I can, I continue to go skiing in the beautiful mountains of my region. Oh, I also played piano in a jazz rock band I co-founded in the late 70s and early 80s (good times). In this period, from a professional point of view, I am trying to survive the absurd bureaucracy that is increasingly oppressive in my working environment. As for SUMO, I am maintaining the Italian KB at 100% of the translations, and supporting new localizers to help them align with our translation style.

Q: You got started with the Italian local forum in 2006, before expanding your contribution to SUMO in 2008. Can you tell us more about the different types of contributions you’re doing for Mozilla?

I found out about Firefox in November 2005 and discovered the Mozilla Italia community and their support forum. Initially, I used the forum to ask for help from other volunteers and, after a short time, I found myself personally involved in providing online assistance to Italian users in need. Then I became a moderator of the forum and in 2008, with the help of my friend @Underpass, I started contributing to the localization of SUMO KB articles (the KB was born in that year). It all started like that.

Today, I am an Italian locale leader in SUMO. I take care of the localization of KB articles and train new Italian localizers. I continue to provide support to users on the Italian forums and when I manage to solve a problem I am really happy, but my priority is the SUMO KB because it is an essential source to help users who search online for an immediate solution to any problem encountered with Firefox on all platforms and devices or with Thunderbird, and want to learn the various features of Mozilla applications and services. Forum support has also benefited greatly from KB articles because, instead of having to write down all the procedures to solve a user’s problem every time, we can simply provide them with the link to the article that could solve the problem without having to write the same things every time, especially when the topic has already been discussed many times, but users have not searched our forum.

Q: In addition to translating articles on SUMO, you’re also involved in product translation on Pontoon. With your experience across both platforms, what do you think SUMO can learn from Pontoon, and how can we improve our overall localization process?

I honestly don’t know, they are quite different ways of doing things in terms of using translation tools specifically. I started collaborating with Pontoon’s Italian l10n team in 2014… Time flies… The rules, the style guides, and the QA process adopted for the Italian translations on Pontoon are the same ones we adopted for SUMO. I have to say that I am much more comfortable with SUMO’s localization process and tool, maybe because I have seen it start off, grow and evolve over time. Pontoon introduced Pretranslation, which helps a lot in translating strings, although it still needs improvements. A machine translation of strings that are not already in Pontoon’s “Translation Memory” is proposed. Sometimes that works fine, other times we need to correct the proposal and save it after escalating it on GitHub, so that in the future that translation becomes part of the “Translation Memory”. If the translation of a string is not accurate, it can be changed at any time.

I don’t know if it can be a solution for some parts of SUMO articles. We already have templates, maybe we should further implement the creation and use of templates, focusing on this tool, to avoid typing the translation of procedures/steps that are repeated identically in many articles.

Q: What are the biggest challenges you’re currently facing as a SUMO contributor? Are there any specific technical issues you think should be prioritized for fixing?

Being able to better train potential new localizers, and help infuse the same level of passion that I have in managing the Italian KB of SUMO. As for technical issues, staying within the scope of translating support articles, I do not encounter major problems in terms of translating and updating articles, but perhaps it is because I now know the strengths and weaknesses of the platform’s tools and I know how to manage them.

Maybe we could find a way to remedy what is usually the most frustrating thing for a contributor/localizer who, for example, is updating an article directly online: the loss of their changes after clicking the “Preview Content” button. That happens when you click the “Preview Content” button after translating an article, to correct any formatting/typing errors. If you accidentally click a link in the preview and don’t right-click it to select “Open Link in New Tab” from the context menu, the link’s page replaces/overwrites the editing page, and if you try to go back, everything you’ve edited/translated in the input field is gone forever… And you have to start over. A nightmare that has happened to me more than once, often because I was in a hurry. I used to rely on a very good extension that saved all the texts I typed in the input fields and that I could recover whenever I wanted, but it is no longer updated for the newer versions of Firefox. I’ve tried others, but they don’t convince me. So, in my opinion, there should be a way to avoid this issue without installing extensions. I’m not a developer, I don’t know if it’s easy to find a solution, but we have Mozilla developers who are great ;)

Maybe there could be a way to automatically save a draft of the edit every “x” seconds, to recover it in case of errors with the article management. Sometimes even the “Preview Content” button can be dangerous: if you accidentally lost your Internet connection and didn’t notice, clicking that button fails to generate the preview, you lose everything, and goodbye products!

Q: Your background as a freelance architect is fascinating! Could you tell us more about that? Do you see any connections between your architectural work and your contribution to Mozilla, or do you view them as completely separate aspects of your life?

As an architect I can only speak from my personal experience, because I live in a small town, in a beautiful region which presents me with very different realities than those colleagues have to deal with in big cities like Rome or Milan. Here everything is quieter, less frenetic, which is sometimes a good thing, but not always. The needs of those who commission a project are different if you have to carry it out in a big city, the goal is the same but, urban planning, local building regulations, available spaces in terms of square footage, market requests/needs, greatly influence the way an architect works. Professionally I have had many wonderful experiences in terms of design and creativity (houses, residential buildings, hotels, renovations of old rural or mountain buildings, etc.), challenges in which you often had to play with just a centimeter of margin to actually realize your project.

Connection between architecture and contribution to Mozilla? Good question. I see my contribution to Mozilla a bit like participating in the realization of a project; the tools change, but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority. If someone wants our “cookies” (and unfortunately often not only those), they have to knock and ask permission, and if we do not want intrusive guests, that someone has to turn around, go away, and let us do our things without sticking their nose in. This is my idea of Mozilla, this is the reason that pushed me to believe in its values (the user and their privacy first) and to contribute as a volunteer, and this is what I would like to continue to believe, even if someone might say that I am naive, that “they are all the same”.

My duty as an architect is like that of a good parent, when necessary I must always warn my clients about why I would advise against certain solutions that I, from professional experience, already know are difficult to implement or that could lead to future management and functionality problems. In any case I always look for solutions that can satisfy my clients’ desires. Design magazines are beautiful, but it is not always possible to reproduce a furnishing solution in living environments that are completely different from the spaces of a showroom set up to perfection for a photo shoot… Mozilla must continue to do what it has always done, educate and protect users, even those who do not use its browser or its products, from those “design magazines” that could lead them to inadvertently make bad choices that they could regret one day.

Q: Can you tell us more about the Italian locale team in SUMO and how do you collaborate with each other?

First of all, it’s a fantastic team! Everyone does what they do best, there are those who help users in need on the forums, those who translate, those who check the translations and do QA by reporting things that need to be corrected or changed, from punctuation errors to lack of fluency or clarity in the translation, those who help with images for articles because often the translator needs the specific image for an operating system that he does not have.

As for translations, which is my main activity, we usually work together with 4-5 collaborators/friends, and we use a consolidated procedure: translation of an article; opening a specific discussion for the article in the forum section dedicated to translations, with a link to the first translation and a request for QA; intervention by anyone who wants to report or suggest a correction or change; modification, with a link to the new version revised based on the suggestions; rereading; and, if everything is ok, approval and publication. The translation section is public — like all the other sections of the Mozilla Italia forum — and anyone can participate in the discussion.

We are all friends, volunteers, some of us know each other only virtually, others have had the chance to meet in person. The atmosphere is really pleasant and even when a discussion goes on too long, we find a way to lighten the mood with a joke or a tease. No one acts as the professor, we all learn something new. Obviously, there are those like me who are more familiar with the syntax/markup and the tools of the SUMO Wiki and those who are less, but this is absolutely not a problem to achieve the final result which is to provide a valid guide to users.

Q: Looking back on your contribution to SUMO, what was the most memorable experience for you? Anything that you’re most proud of?

It’s hard to say… I’m not a tech geek, I don’t deal with code, scripts or computer language so my contribution is limited to translating everything that can be useful to Italian users of Mozilla products/programs. So I would say: the first time I reached the 100% translation percentage of all the articles in the Italian dashboard. I have always been very active and available over the years with the various Content Managers of SUMO. When I received their requests for collaboration, I did tests, opened bugs related to the platform, and contributed to the developers’ requests by testing the procedures to solve those bugs.

As for the relationship with the Mozilla community, the most memorable experience was undoubtedly my participation in the Europe MozCamp 2009 in Prague, my “first time”, my first meeting with so many people who then became dear friends, not only in the virtual world. I remember being very excited about that invitation and fearful for my English, which was and is certainly not the best. An episode: Prague, the first Mozilla talk I attended. I was trying to understand as much as possible what the speaker was saying in English. I heard this strange word “eltenen… eltenen… eltenen” repeated several times. What did it mean? After a while I couldn’t take it anymore, I turned to an Italian friend who was more expert in the topics discussed and above all who knew the English language well. Q: What the hell does “eltenen” mean? A: “Localization”. Q: “Localization???” A: “l10n… L ten n… L ocalizatio n”. Silence, embarrassment, damn acronyms!

How could I forget my first trip outside of Europe to attend the Mozilla Summit in Whistler, Canada in the summer of 2010? It was awesome, I was much more relaxed, decided not to think about the English language barrier and was able to really contribute to the discussions that we, SUMO localizers and contributors from so many countries around the world, were having to talk about our experience, try to fix the translation platform to make it better for us and discuss all the potential issues that Firefox was having at the time. I really talked a lot and I think the “Mozillians” I interacted with even managed to understand what I was saying in English :)

The subsequent meetings, the other All Hands I attended, were all a great source of enthusiasm and energy! I met some really amazing people!

Q: Lastly, can you share tips for those who are interested in contributing to Italian content localization or contributing to SUMO in general?

Every time a new localizer starts collaborating with us I don’t forget all the help I received years ago! I bend over backwards to put them at ease, to guide them in their first steps and to be able to transmit to them the same passion that was transmitted to me by those who had to review with infinite patience my first efforts as a localizer. So I would say: first of all, you must have passion and a desire to help people. If you came to us it’s probably because you believe in this project, in this way of helping people. You can know the language you are translating from very well, but if you are not driven by enthusiasm everything becomes more difficult and boring. Don’t be afraid to make mistakes, if you don’t understand something ask, you’re among friends, among traveling companions. As long as an article is not published we can correct it whenever we want and even after publication. We were all beginners once and we are all here to learn. Take an article, start translating it and above all keep it updated.

If you are helping on the support forums, be kind and remember that many users are looking for help with a problem and often their problems are frustrating. The best thing to do is to help the user find the answer they are looking for. If a user is rude, don’t start a battle that is already lost. You are not obligated to respond, let the moderators intervene. It is not a question of wanting to be right at all costs but of common sense.


Don Marti: links for 29 Oct 2024

Satire Without Purpose Will Wander In Dark Places Broadly labelling the entirety of Warhammer 40,000 as satire is no longer sufficient to address what the game has become in the almost 40 years since its inception. It also fails to answer the rather awkward question of why, exactly, these fascists who are allegedly too stupid to understand satire are continually showing up in your satirical community in the first place.

Why I’m staying with Firefox for now – Michael Kjörling [T]he most reasonable option is to keep using Firefox, despite the flaws of the organization behind it. So far, at least these things can be disabled through settings (for example, their privacy-preserving ad measurement), and those settings can be prepared in advance.

Google accused of shadow campaigns redirecting antitrust scrutiny to Microsoft, Google’s Shadow Campaigns (so wait a minute, Microsoft won’t let companies use their existing Microsoft Windows licenses for VMs in the Google cloud, and Google is doing a sneaky advocacy campaign? Sounds like content marketing for Amazon Linux®)

Scripting News My friends at Automattic showed me how to turn on ActivityPub on a WordPress site. I wrote a test post in my simple WordPress editor, forgetting that it would be cross-posted to Mastodon. When I just checked in on Masto, there was the freaking post. After I recovered from passing out, I wondered what happens if I update the post in my editor, and save it to the WordPress site that’s hooked up to Masto via ActivityPub. So I made a change and saved it. I waited and waited, nothing happened. I got ready to add a comment saying ahh I guess it doesn’t update, when—it updated. (Like being happy when a new web site opens in a new browser: a good sign that ActivityPub is the connecting point for this kind of connected innovation.) Related: The Web Is a Customer Service Medium (Ftrain.com) by Paul Ford.

China Telecom’s next 150,000 servers will mostly use local processors Among China Telecom’s server buys this year are machines running processors from local champion Loongson, which has developed an architecture that blends elements of RISC-V and MIPS.

Removal of Russian coders spurs debate about Linux kernel’s politics Employees of companies on the Treasury Department’s Office of Foreign Assets Control list of Specially Designated Nationals and Blocked Persons (OFAC SDN), or connected to them, will have their collaborations subject to restrictions, and cannot be in the MAINTAINERS file.

The TikTokification of Social Media May Finally Be Its Undoing by Julia Angwin. If tech platforms are actively shaping our experiences, after all, maybe they should be held liable for creating experiences that damage our bodies, our children, our communities and our democracy.

Cheap Solar Panels Are Changing the World The latest global report from the International Energy Agency (IEA) notes that solar is on track to overtake all other forms of energy by 2033.

Conceptual models of space colonization - Charlie’s Diary (one more: Kurt Vonnegut’s concept for spreading genetic material)

(protip: you can always close your browser tabs with creepy tech news, there will be more in a few minutes… Location tracking of phones is out of control. Here’s how to fight back. LinkedIn fined $335 million in EU for tracking ads privacy breaches Pinterest faces EU privacy complaint over tracking ads Dems want tax prep firms charged for improper data sharing Dow Jones says Perplexity is “freeriding,” sues over copyright infringement You Have a ‘Work Number’ on This Site, and You Should Freeze It Roblox stock falls after Hindenburg blasts the social gaming platform over bots and pedophiles)

It Was Ten Years Ago Today that David Rosenthal predicted that cryptocurrency networks will be dominated by a few, perhaps just one, large participant.

Writing Projects (good start for a checklist before turning in a writing project. Maybe I should write Git hooks for these.)

Word.(s). (Includes some good vintage car ads. Remember when most car ads were about the car, not just buttering up the driver with how successful you must be to afford this thing?)

Social Distance and the Patent System [I]t was clear from our conversation that [Judge Paul] Michel doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them. On a theoretical level, he knew that there was a lot of litigation in the software industry and that a lot of people were upset about it. But like Fed and the unemployment rate, this kind of theoretical knowledge doesn’t always create a sense of urgency. One has to imagine that if people close to Michel—say, a son who was trying to start a software company—were regularly getting hit by frivolous patent lawsuits, he would suddenly take the issue more seriously. But successful software entrepreneurs are a small fraction of the population, and most likely no judges of the Federal Circuit have close relationships with one.

(Rapids is the script that gathers these, and it got a clean bill of health from the feed reader score report after I fixed the Last-Modified/If-Modified-Since and Etag handling. So expect more link dump posts here, I guess.)

Wil Clouser: Mozilla Accounts password hashing upgrades

We’ve recently finished two significant changes to how Mozilla Accounts handles password hashes which will improve security and increase flexibility around changing emails. The changes are entirely transparent to end-users and are applied automatically when someone logs in.

Randomizing Salts

If a system is going to store passwords, best practice is to hash the password with a unique salt per row. When Accounts was first built we used an account’s email address as the unique salt for password hashing. This saved a column in the database and some bandwidth, but overall I think it was a poor idea: it meant people couldn’t re-use their email addresses, and it left PII sitting around unnecessarily.

Instead, a better idea is just to generate a random salt. We’ve now transitioned Mozilla Accounts to random salts.
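As a rough sketch of the difference (illustrative Python using stdlib PBKDF2, not Mozilla’s actual code, which does its key stretching client-side with its own parameters): instead of deriving the salt from the email address, generate a random one and store it alongside the hash.

```python
import hashlib
import hmac
import os

ITERATIONS = 650_000  # the PBKDF2 iteration count mentioned in this post

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random 16-byte salt per account, stored with the hash, replaces
    # the old email-derived salt (no PII kept around, emails can change).
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Constant-time comparison avoids leaking how much of the hash matched.
    return hmac.compare_digest(candidate, expected)
```

Because the salt is random rather than identity-derived, hashing the same password twice yields different rows, and nothing about the stored salt ties back to the user.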

Increasing Key Stretching Iterations

Eight years ago Ryan Kelly filed bug 1320222 to review Mozilla Accounts’ client-side key stretching capabilities and sparked a spirited conversation about iterations and the priority of the bug. Overall, this is routine maintenance - we expect any amount of stretching we do will have to be revisited periodically due to hardware improving and the value we choose is a compromise between security and time to login, particularly on older hardware.

Since we were already generating new hashes for the random salts, we took the opportunity to increase our PBKDF2 iterations from 1,000 to 650,000 – a number we’re seeing others in the industry use. This means logging in on slower hardware (like older mobile phones) may be noticeably slower. Below is an excerpt from the analysis we did, showing that a MacBook from 2007 will take an additional ~3 seconds to log in:

Key Stretch Iterations   Overhead on 2007 MacBook   Overhead on 2021 MacBook Pro M1
100,000                  0.4800024 seconds          0.00000681 seconds
200,000                  0.9581234 seconds          0.00000169 seconds
300,000                  1.4539928 seconds          0.00000277 seconds
400,000                  1.9337903 seconds          0.00029750 seconds
500,000                  2.4146366 seconds          0.00079127 seconds
600,000                  2.9482827 seconds          0.00112186 seconds
700,000                  3.3960513 seconds          0.00117956 seconds
800,000                  3.8675677 seconds          0.00117956 seconds
900,000                  4.3614942 seconds          0.00141616 seconds
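You can get a feel for this trade-off on your own hardware with a quick measurement along these lines (a sketch using Python’s stdlib PBKDF2, which is presumably not the exact client-side primitive the accounts code uses):

```python
import hashlib
import time

def stretch_seconds(iterations: int) -> float:
    # Time one PBKDF2-HMAC-SHA256 derivation at the given iteration count.
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"hunter2", b"0123456789abcdef", iterations)
    return time.perf_counter() - start

if __name__ == "__main__":
    for n in (1_000, 100_000, 650_000):
        print(f"{n:>9,} iterations: {stretch_seconds(n):.4f} s")
```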

Implementation

Dan Schomburg did the heavy lifting to make this a smooth and successful project. He built the v2 system alongside v1, so both hashes are generated simultaneously, and if a v2 hash exists the login system uses it. This let us roll the feature out slowly and gave us control to disable it or roll back if needed.

We tested the code for several months on our staging server before rolling it out in production. When we enabled it in production, we ramped up over the course of several weeks in small percentages while watching for unintended side-effects and bug reports.

I’m pleased to say everything appears to be working smoothly. As always, if you notice any issues please let us know.

Don Marti: typefaces that aren’t on this blog (yet?)

Right now I’m not using these, but they look useful and/or fun.

  • Departure Mono: vintage-looking, pixelated, lo-fi technical vibe.

  • Atkinson Hyperlegible was carefully developed by the Braille Institute to help low-vision readers. It improves legibility and readability through clear, distinctive letters and numbers.

I’m trying to keep this site fairly small and fast, so I’m getting by with Modern Font Stacks as much as possible.

Related

colophon

Bonus links

(these are all web development, editing, and business, more or less. Yes, I’m still working on my SCALE proposal, deadline coming up.)

Before you buy a domain name, first check to see if it’s haunted

Discover Wiped Out MFA Spend By Following These Four Basic Steps (This headline underrates the content. If all web advertisers did these tips, then 90% of the evil stuff on the Internet would be gone—most of the web’s problems are funded by advertisers and agencies who fail to pay attention to the context in which their ads appear.)

Janky remote backups without root on the far end

My solar-powered and self-hosted website

Let’s bring back browsing

Hell Gate NYC doubled its subscription revenue in its second year as a worker-owned news outlet

Is Matt Mullenweg defending WordPress or sabotaging it?

Gosub – An open-source browser engine

Take that

Thunderbird Android client is K-9 Mail reborn, and it’s in solid beta

A Bicycle for the Mind – Prologue

Why I Migrated My Newsletter From Substack to Eleventy and Buttondown - Richard MacManus

My Blog Engine is the Erlang Build Tool

A Developer’s Guide to ActivityPub and the Fediverse

Don Marti: personal AI in the rugpull economy

Doc Searls writes, in Personal Agentic AI,

Wouldn’t it be good for corporate AI agents to have customer hands to shake that are also equipped with agentic AI? Wouldn’t those customers be better than ones whose agency is merely human, and limited to only what corporate AI agents allow?

The obvious answer for business decision-makers today is: lol, no, a locked-in customer is worth more. If, as a person who likes to watch TV, you had an AI agent, then the agent could keep track of sports seasons and the availability of movies and TV shows, and turn your streaming subscriptions on and off. In the streaming business, like many others, the management consensus is to make things as hard and manual as possible on the customer side, and save the automation for the company side. Just keeping up with watching a National Football League team is hard…even for someone who is ON the team. Automation asymmetry, where the seller gets to reduce service costs while the customer has to do more and more manual work, is seen as a big win by the decision-makers on the high-automation side.

Big company decision-makers don’t want to let smaller companies have their own agentic tools, either. Getting a DMCA Exemption to let McDonald’s franchisees fix their ice cream machines was a big deal that required a lengthy process with the US Copyright Office. Many other small businesses are locked in to the manual, low-information side of a business relationship with a larger one. (Web advertising is another example. Google shoots at everyone’s feet, and agencies, smaller firms, and browser extension developers dance.) Google employees and shareholders would be better off if it were split into two companies that could focus on useful projects for independent customers who had real choices.

The first wave of user reactions to AI is happening, and it’s adversarial. Artists on sites like DeviantArt went first, and now Reddit users are deliberately posting fake answers to feed Google’s AI. On the shopping side, avoiding the output of AI and made-for-AI deceptive crap is becoming a must-have mainstream skill, as covered in How to find helpful content in a sea of made-for-Google BS and How Apple and Microsoft’s trusted brands are being used to scam you. As Baldur Bjarnason writes,

The public has for a while now switched to using AI as a negative—using the term artificial much as you do with artificial flavouring or that smile’s artificial. It’s insincere creativity or deceptive intelligence.

Other news is even worse. In today’s global conflict between evil oligarchs and everyone else, AI is firmly aligned with the evil oligarch side.

But today’s Big AI situation won’t last. Small-scale and underground AI has sustainable advantages over the huge but money-losing contenders. And it sounds like Doc is already thinking post-bubble.

Adversarial now, but what about later?

So how do we get from the AI adversarial situation we have now to the win-win that Doc is looking for? Part of the answer will be resolving the legal issues. Today’s Napster-like free-for-all environment won’t persist, so eventually we will have an AI scene in which companies that want to use your work for training have to get permission and disclose provenance.

The other part of the path from today’s situation—where big companies have AI that enables scam culture and chickenization while individuals and small companies are stuck rowing through funnels and pipelines—is personal, aligned AI that balances automation asymmetries. Whether it’s solving CAPTCHAs, getting data out of hard-to-parse formats, or navigating other awkward mazes, automation asymmetries mean that as a customer, you technically have more optionality than you practically have time to use. But AI has a lot more time. If a company gives you user experience grief, with the right tools you can get back to where you would have been if they had applied less obfuscation in the first place. (icymi: Video scraping: extracting JSON data from a 35 second screen capture for less than 1/10th of a cent. Not a deliberate obfuscation example, but an approach that can be applied.)

So we’re going to see something like this AI cartoon by Tom Fishburne (thanks to Doc for the link) for privacy labour. Companies are already getting expensive software-as-a-service to make privacy tasks harder for the customers, which means that customers are going to get AI services to make it easier. Eventually some companies will notice the extra layers, pay attention to the research, and get rid of the excess grief on their end so you can stop running de-obfuscation on your end. That will make it work better for everyone. (GPC all the things! Data Rights Protocol)
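On the “GPC all the things” note, honoring a Global Privacy Control signal server-side is nearly a one-liner. This sketch assumes request headers arrive as a plain dict; per the GPC spec, a `Sec-GPC` header with the value `1` signals the user’s opt-out:

```python
def respects_gpc(headers: dict[str, str]) -> bool:
    """True if the request carries a Global Privacy Control opt-out signal.

    Per the GPC specification, the Sec-GPC request header is "1" when the
    user has turned the signal on; any other value, or its absence, means
    no signal was sent."""
    return headers.get("Sec-GPC", "").strip() == "1"

# e.g. respects_gpc({"Sec-GPC": "1"}) -> True
```

The point is that the honest path is cheap: checking one header costs far less than the SaaS layers built to make opting out harder.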

The biggest win from personal AI will, strangely enough, be in de-personalizing your personal information environment. By doing the privacy labour for you, the agentic AI will limit your addressability and reduce personalization risks. The risks to me from buying the less suitable of two legit brands are much lower than the risk of getting stuck with some awful crap that was personalized to me and not picked up on by norms enforcers like Consumer Reports. Getting more of my privacy labour done for me will not just help me personally do better #mindfulConsumption, but also increase the rewards for win-win moves by sellers. Personalization might be nifty, but filtering out crap and rip-offs is a bigger immediate win (see Sunday Internet optimism). Doc writes, When you limit what customers can bring to markets, you limit what can happen in those markets. As far as I can tell, the real promise for agentic AI isn’t just in enabling existing processes or making them more efficient. It’s in establishing a credible deterrent to enshittification—if you’re trying to rip me off, don’t talk to me, talk to my bot army.

For just a minute, put yourself in the shoes of a product manager with a proposal for some legit project that they’re trying to get approved. If that proposal is up against a quick win for the company, like one based on creepy surveillance, it’s going to lose. But if the customers have the automation power to lower the ROI from creepy growth hacking, the legit project has a chance. And that pushes up the long-term value of the entire company. An individual locked-in customer is more valuable to the brand than an individual independent customer, but a brand with independent customers is more valuable than a brand with an equal number of locked-in customers.

Anyway, hope to see you at VRM Day.

Bonus links

Space is Dead. Why Do We Keep Writing About It?

It’s Time to Build the Exoplanet Telescope

The tech startups shaking up construction in Europe

Support.Mozilla.Org: What’s up with SUMO – Q3 2024

Each quarter, we gather insights on all things SUMO to celebrate our team’s contributions and showcase the impact of our work.

The SUMO community is powered by an ever-growing global network of contributors. We are so grateful for your contributions, which help us improve our product and support experiences, and further Mozilla’s mission to make the internet a better place for everyone.

This quarter we’re modifying our update to highlight key takeaways, outline focus areas for Q4, and share our plans to optimize our tools so we can measure the impact of your contributions more effectively.

Below you’ll find our report organized by the following sections: Q3 Highlights at-a-glance, an overview of our Q4 Priorities & Focus Areas, Contributor Spotlights and Important Dates, with a summary of special events and activities to look forward to! Let’s dive right in:

Q3 Highlights at-a-glance

Forums: We saw over 13,000 questions posted to SUMO in Q3, up 83% from Q2. The increased volume was largely driven by the navigation redesign in July.

  • We were able to respond to over 6,300 forum questions, a 49% increase from Q2!
  • Our response time was ~15 hours, a one-hour improvement over Q2, with a helpfulness rating of 66%.
  • August was our busiest and most productive month this year. We saw more than 4,300 questions shared in the forum, and we were able to respond to 52.7% of total in-bounds.
  • Trends in forum queries included questions about site breakages, account and data recovery concerns, sync issues, and PPA feedback.

Knowledge Base: We saw 473 en-US revisions from 45 contributors, and more than 3,000 localization revisions from 128 contributors which resulted in an overall helpfulness rating of 61%, our highest quarterly average rating YTD!

  • Our top contributor was AliceWyman. We appreciate your eagle eyes and dedication to finding opportunities to improve our resources.
  • For localization efforts, our top contributor was Michele Rodaro. We are grateful for your time, efforts and expert language skills.

Social: On our social channels, we interacted with over 1,100 tweets and saw more than 6,000 app reviews.

  • Our top contributor on Twitter this quarter was Isaac H who responded to over 200 tweets, expertly navigating our channels to share helpful resources, provide troubleshooting support, and help redirect feature requests to Mozilla Connect. Thank you, Isaac!
  • On the play store, our top contributor was Dmitry K who replied to over 400 reviews! Thank you for giving helpful feedback, advice and for providing such a warm and welcoming experience for users.

SUMO platform updates: There were 5 major platform updates in Q3. Our focus this quarter was to improve navigation for users by introducing new standardized topics across products, and update the forum moderation tool to allow our support agents to moderate these topics for forum posts. Categorizing questions more accurately with our new unified topics will provide us with a foundation for better data analysis and reporting.

We also introduced improvements to our messaging features, localized KB display times, fixed a bug affecting pageviews in the KB dashboard, and added a spam tag to make moderation work easier for the forum moderators.

We acknowledge there was a significant increase in spam questions beginning in July, which is now starting to trend downwards. We will continue to monitor the situation closely, and are taking note of moderator recommendations for a future resolution. We appreciate your efforts to help us combat this problem!

Check out SUMO Engineering Board to see what the platform team is cooking up in the engine room. You’re welcome to join our monthly Community Calls to learn more about the latest updates to Firefox and chat with the team.

Firefox Releases: We released Firefox 128, Firefox 129 and Firefox 130 in Q3 and we made significant updates to our wiki template for the Firefox train release.

Q4 Priorities & Focus Areas

  • CX: Enhancing the user experience and streamlining support operations.
  • Kitsune: Improved article helpfulness survey and tagging improvements to help with more granular content categorization.
  • SUMO: For the rest of 2024, we’re working on an internal SUMO Community Report, FOSDEM 2025 preparation, Firefox 20th anniversary celebration, and preparing for an upcoming Community Campaign around QA.

Contributor Spotlights

We have seen 37 new contributors this year, with 10 joining the team this quarter. Among them, ThePillenwerfer, Khalid, Mozilla-assistent, and hotr1pak shared more than 100 contributions between July and September. We appreciate your efforts!

Cheers to our top contributors this quarter:

SUMO top contributors in Q3

Our multi-channel contributors made a significant impact by supporting the community across more than one channel (and in some cases, all three!) 

All in all it was an amazing quarter! Thanks for all you do.

Important dates

  • October 29th: Firefox 132 will be released
  • October 30th: RSVP to join our next Community Call! All are welcome. We do our best to create a safe space for everyone to contribute. You can join on video or audio, at your discretion. You are also welcome to share questions in advance via the contributor forum, or our Matrix channel.
  • November 9th: Firefox’s 20th Birthday!
  • November 14th: Save the date for an AMA with the Firefox leadership team
  • FOSDEM ’25: Stay tuned! We’ll put a call out for volunteers and for talks in early November

Stay connected

Thanks for reading! If you have any feedback or recommendations on future features for this update, please reach out to Kiki and Andrea.

The Mozilla Blog: Celebrating Chicago’s creators and small businesses at Firefox’s ‘Free to Browse’ event

With winter on the horizon, Chicago is ready to show that nothing — not wind, nor snow — can cool the fire of a united community. 

As we toast Firefox’s 20th anniversary, we’re hosting “Free to Browse: Celebrating Chicago’s Creatives,” an IRL browsing experience to amplify the voices of 20 local creators and small businesses. The event will explore how they’re creatively impacting their communities, as well as showcase the innovation that has defined the last 20 years of Firefox’s journey. We’re teaming up with these 20 local small businesses as part of our national campaign “Nothing Personal, Just Browsing,” which highlights that when you choose Firefox, you choose a more private online experience. 

“Free to Browse” is free and open to the public and will take place Nov. 16 from 4:00 p.m. to 10:30 p.m. CT at Inside Town, a local art collective in Chicago that celebrates diverse artists. The three-story space will bring the online world to life through a completely immersive experience. Guests can “browse” the skills of the featured small businesses, explore their services and shop for exclusive items, goods and more. It’ll be an engaging environment featuring musical performances and interactive art while celebrating Firefox’s impactful journey and technological legacy. We’re all about making the web a private and safe open space for everyone, and there’s no better way to cultivate that than with music, art, food and community.

The best parts of the internet are built by the communities that shape them. We’re proud to celebrate these 20 bold and innovative businesses in Chicago that, like Firefox, are community-focused and not afraid to be different and challenge the status quo: 

1. Lon Renzell, music producer/engineer and the founder of Studio SHAPES, a recording studio for musical creativity. | @renzell.wav

2. Kevin Woods, founder of streetwear brand and re-sale store “The Pop Up.” | @ogkwoods

3. Tatum Lynea, executive pastry chef and partner, named Chicago’s 2024 pastry chef of the year. |  @tatumlynea

4. Demir Mujagic, founder of Published Studios, a specialty design/print boutique. | @published.studios 

5. Prosper Bambo, founder of Congruent Space, an interactive platform integrating art, design and fashion. | @prosperbambo

6. Akele Parnell, co-founder of ÜMI Farms, a cannabis ecosystem which includes craft brands and retail dispensaries. | @akele_j 

7. Makafui Searcy, conceptual designer and founding director of the Fourtunehouse Art Center. | @makafuikofisearcy

8. Oluwaseyi Adeleke, creative director and fashion designer, focused on storytelling through a Black lens. | @olu.originals 

9. Manny Mendoza, co-founder and chef of Herbal Notes, a cannabis lifestyle and experience collective. | @chefmanofrom18th

10. Angelica Rivera, founder of Semillas, a Mexican and Puerto Rican-owned floral design, plant, event experiences and coffee shop. | @sincerelyanngee  

11. Kristoffer McAfee, artist/designer/traveler/scholar/business owner. | @km_designhq

12. Damiane Nickles, painter/marketer and founder of “Not A Plant Shop.” | @notaplantshop

13. Danielle Moore, founder and creative director of Semicolon Books. | @danni.aint.write

14. Trevor Holloway, founder of Inside Town art collective. | @trevorholloway

15. Nicole Humphrey, creative consultant and founder of NAHcreate. | @childofgenius

16. Jason Ivy, singer-songwriter, actor and filmmaker. | @thejasonivy

17. Jackson Flores, co-founder of DishRoulette Kitchen, an SMB development center dedicated to addressing economic inequality. | @jacksonsays

18. Andre Muir, visual artist and filmmaker. | @andremuir

19. Diana Pietrzyk, multidimensional creative, designer and artist.  | @dyanapyehchek

20. Preme, interdisciplinary artist, co-founder of Congruent Space and art director for Chicago music collective Goodbye Tomorrow. | @preme___xy 

Here’s a preview of the art these brilliant creators will have on display at the event:

This celebration isn’t just about the past 20 years of Firefox. It’s a stepping stone for the next 20 years of building an open and accessible internet for all. We’re excited to kick it off with an unforgettable experience in Chicago.

See you there!

Get Firefox

Get the browser that protects what’s important

The post Celebrating Chicago’s creators and small businesses at Firefox’s ‘Free to Browse’ event appeared first on The Mozilla Blog.

Mozilla Localization (L10N): L10n report: October 2024 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

New community/locales added

We’re grateful to the Abkhaz community for their initiative in reaching out to localize our products. Thank you for your valuable involvement!

New content and projects

What’s new or coming up in Firefox desktop

Search Mode Switcher

A new feature in development, the Search Mode Switcher, is now available behind a flag in the latest Nightly (version 133). You may have already seen its strings land in Pontoon. The feature lets you enter a search term into the address bar and search it across multiple engines: after entering the term and selecting a provider, the term persists (instead of being replaced by the site’s URL), and you can then select a different provider by clicking an icon on the left of the bar.

Firefox Search Mode Switcher

You can test this now in Nightly 133: enter about:config in the address bar, press Enter, proceed past the warning, and search for the flag browser.urlbar.scotchBonnet.enableOverride. Toggling it to true enables the feature.

New profile selector

Starting in version 134 of Nightly, a new feature for easily selecting, creating, and changing profiles within Firefox will begin rolling out to a small number of users worldwide. Strings are planned to be made available for localization soon.

Sidebar and Vertical Tabs

Finally, as mentioned in the previous L10n Report, a new sidebar with expanded functionality and the ability to change your tab layout from horizontal to vertical are available to test in Nightly through the Firefox Labs feature in your settings. Just go to your Nightly settings, select the Firefox Labs section on the left, and enable the features by clicking the checkboxes. Since these are experimental, there may continue to be occasional string changes or additions. While you check out these features in your languages, if you have thoughts on the features themselves, we welcome you to share feedback through Mozilla Connect.

What’s new or coming up in web projects

AMO and AMO Frontend

To improve user experience, the AMO team plans to implement changes that will enable only locales meeting a specific completion threshold. Locales with very low completion percentages will be disabled in production but will remain available on Pontoon for teams to continue working on them. The exact details and timeline will be communicated once the plan is finalized.

Mozilla Accounts

Mozilla Accounts is currently redesigning the user experience of some of its log-in pages, so we will continue to see small updates here and there for the rest of the year. There is also a planned update to the Mozilla Accounts payment sub-platform. We expect a new file to be added to the project before the end of the year; a large number of the strings will be the same as now, and we will migrate those translations so they don’t need to be translated again, but there will be a number of new strings as well.

Mozilla.org

The Mozilla.org site is undergoing a series of redesigns, starting with updates to the footer and navigation bars. These changes will continue through the rest of the year and beyond. The next update will focus on the About page. Additionally, the team is systematically removing obsolete strings and replacing them with updated or new strings, ensuring you have enough time to catch up while minimizing effort on outdated content.

A few new Welcome pages have been made available to a select few locales. Each of these pages has a different deadline, so make sure to complete them before they are due.

What’s new or coming up in SUMO

The SUMO platform just got a navigation redesign in July to improve navigation for users & contributors. The team also introduced new topics that are standardized across products, which lay the foundation for better data analysis and reporting. Most of the old topics, and their associated articles and questions, have been mapped to the new taxonomy, but a few remain that will be manually mapped to their new topics.

On the community side, we also introduced improvements and fixes to the messaging feature, changed the KB display time to a locale-appropriate format, fixed a bug so that pageview numbers display properly in the KB dashboard, and added a spam tag to the question list for questions marked as spam, to make moderation work easier for the forum moderators.

There will be a community call coming up on Oct 30 at 5pm UTC where we will be talking about Firefox 20th anniversary celebration and Firefox 132 release. Check out the agenda for more detail.

What’s new or coming up in Pontoon

Enhancements to Pontoon Search

We’re excited to announce that Pontoon now allows for more sophisticated searches for strings, thanks to the addition of the new search panel!

When searching for a string, clicking on the magnifying glass icon will open a dropdown, allowing users to select any combination of search options to help refine their search. Please note that the default search behavior has changed, as string identifiers must now be explicitly enabled in search options.

Pontoon Enhanced Search Options

User status banners

As part of the effort to introduce badges/achievements into Pontoon, we’ve added status banners under user avatars in the translation workspace. Status banners reflect the permissions of the user within the respective locale and project, eliminating the need to visit their profile page to view their role.

Namely, team managers will get the ‘MNGR’ tag, translators get the ‘TRNSL’ tag, project managers get the ‘PM’ tag, and those with site-wide admin permissions receive the ‘ADMIN’ tag. Users who have joined within the last three months will get the ‘NEW USER’ tag for their banner. Status banners also appear in comments made under translations.

Screenshot of Pontoon showing the Translate UI, with a user displaying the new banners for Manager and Admin

New Pontoon logo

We hope you love the new Pontoon logo as much as we do! Thanks to all of you who expressed your preference by participating in the survey.

Pontoon New Logo

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Mozilla Privacy Blog: Mozilla Participates in Ofcom’s Draft Transparency Reporting Guidance

On 4th October 2024, Mozilla provided input to Ofcom’s consultation on its draft transparency reporting guidance. Transparency plays a crucial role in promoting accountability and public trust, particularly when it comes to how tech platforms handle harmful or illegal content online, and we were pleased to share our research, insights, and input with Ofcom.

Scope of the Consultation

Ofcom’s proposed guidance aims to improve transparency reporting, allowing the public, researchers, and regulators to better understand how categorized services operate and whether they are doing enough to respect users’ rights and protect users from harm.

We support this effort and believe additional clarifications are needed to ensure that Ofcom’s transparency process fully meets its objectives: holding tech companies accountable, safeguarding users, fostering public trust, and enabling different stakeholders to make effective use of transparency reports.

The Importance of Standardization

One of our key recommendations is the need for greater standardization in transparency elements. Mozilla’s research on public ad repositories developed by many of the largest online platforms finds that there are large discrepancies across these transparency tools, making it difficult for researchers and regulators to compare information across platforms.

Ofcom’s guidance must ensure that transparency reports are clear, systematic, and easy to compare year-to-year. We recommend that Ofcom provide explicit guidelines on the specific data platforms must provide in their transparency reports and the formats in which they should be reported. This will enable platforms to comply uniformly and make it easier for regulators and researchers to monitor patterns over time.

In particular, we encourage Ofcom to distinguish between ‘core’ and ‘thematic’ information in transparency reports. We understand that core information will be required consistently every year, while thematic data will focus on specific regulatory priorities, such as emerging areas of concern. However, it is important that platforms are given enough advance notice to prepare their systems for thematic information to avoid any disproportionate compliance burden. This is particularly important for smaller businesses who have limited resources and may find it challenging to comply with new reporting criteria, compared to big tech companies.

We also recommend that data about content engagement and account growth should be considered ‘core’ information that needs to be collected and reported on a regular basis. This data is essential for monitoring civic discourse and election integrity.

Engaging a Broader Range of Stakeholders

Mozilla also believes that a broad range of stakeholders should be involved in shaping and reviewing transparency reporting. Ofcom’s consultative approach with service providers is commendable.  We encourage further expansion of this engagement to include stakeholders such as researchers, civil society organizations, and end-users.

Based on our extensive research, we recommend “transparency delegates.” Transparency delegates are experts who can act as intermediaries between platforms and the public, by using their expertise to evaluate platforms’ transparency in a particular area (for example, AI) and to convey relevant information to a wider audience. This could help ensure that transparency reports are accessible and useful to a range of audiences, from policymakers to everyday users who may not have the technical expertise to interpret complex data.

Enhancing Data Access for Researchers

Transparency reports alone are not enough to ensure accountability. Mozilla emphasizes the importance of giving independent researchers access to platform data. In our view, data access is not just a tool for academic inquiry but a key component of public accountability. Ofcom should explore mechanisms for providing researchers with access to data in a way that protects user privacy while allowing for independent scrutiny of platform practices.

This access is crucial for understanding how content moderation practices affect civic discourse, public safety, and individual rights online. Without it, we risk relying too heavily on self-reported data, which can be inconsistent or incomplete.  Multiple layers of transparency are needed, in order to build trust in the quality of platform transparency disclosures.

Aligning with Other Regulatory Frameworks

Finally, we encourage Ofcom to align its transparency requirements with those set out in other major regulatory frameworks, particularly the EU’s Digital Services Act (DSA). Harmonization will help reduce the compliance burden on platforms and allow users and researchers to compare transparency reports more easily across jurisdictions.

Mozilla looks forward to continuing our work with Ofcom and other stakeholders to create a more transparent and accountable online ecosystem.

 

The post Mozilla Participates in Ofcom’s Draft Transparency Reporting Guidance appeared first on Open Policy & Advocacy.

This Week In Rust: This Week in Rust 570

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is trait-gen, an attribute macro to generate the trait implementations for several types without needing custom declarative macros, code repetition, or blanket implementations.

Thanks to Luke Peterson for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.
Crates Ecosystem

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

464 pull requests were merged in the last week

Rust Compiler Performance Triage

Some tidy improvements from switching to next generation trait solver (solely for coherence checking) and from simplifying our dataflow analysis framework. There were some binary size regressions associated with PR 126557 (adding #[track_caller] to allocating methods of Vec and VecDeque), which I have handed off to T-libs to choose whether to investigate further.

Triage done by @pnkfelix. Revision range: 5ceb623a..3e33bda0

0 Regressions, 3 Improvements, 6 Mixed; 3 of them in rollups

47 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo Language Team
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Reference Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-10-23 - 2024-11-20 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Your problem is that you’re trying to borrow from the dead.

/u/masklinn on /r/rust

Thanks to Maciej Dziardziel for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdMaximize Your Day: Focus Your Inbox with ‘Grouped by Sort’

For me, staying on top of my inbox has always seemed like an unattainable goal. I’m not an organized person by nature. Periodic and severe email anxiety (thanks, grad school!) often meant my inbox was in the quadruple digits (!).

Lately, something’s shifted. Maybe it’s working here, where people care a lot about making email work for you. These past few months, my inbox has stayed, if not manageable, then pretty close to it. I’ve only been here a year, which has made this an easier goal to reach. Treating my email like laundry is definitely helping!

But how do you get a handle on your inbox when it feels out of control? R.L. Dane, one of our fans on Mastodon, reminded us Thunderbird has a powerful, built-in tool that can help: the ‘Grouped by Sort’ feature!

Email Management for All Brains

For those of us who are neurodiverse, email management can be a challenge. Each message that arrives in your inbox, even without a notification ding or popup, is a potential distraction. An email can contain a new task for your already busy to-do list. Or one email can lead you down a rabbit hole while other emails pile up around it. Eventually, those emails we haven’t archived, replied to, or otherwise processed take on a life of their own.

Staring at an overgrown inbox isn’t fun for anyone. It’s especially overwhelming for those of us who struggle with executive function – the skills that help us focus, plan, and organize. A full or overfull inbox doesn’t seem like a hurdle we can overcome. We feel frozen, unsure where to even begin tackling it, and while we’re stuck trying to figure out what to do, new emails keep coming. Avoiding our inboxes entirely starts to seem like the only option – even if this is the most counterproductive thing we can do.

So, how in the world do people like us dig out of our inboxes?

Feature for Focus: Grouped by Sort

We love seeing R.L. Dane’s regular Thunderbird tips, tricks, and hacks for productivity. In fact, he was the one who brought this feature to our attention in a Mastodon post! We were thrilled when we asked if we could turn it into a productivity post and got an excited “Yes!” in response.

As he pointed out, using Grouped by Sort, you can focus on more recently received emails. Sorting by Date, this feature will group your emails into the following collapsible categories:

  • Today
  • Yesterday
  • Last 7 Days
  • Last 14 Days
  • Older

Turning on Grouped by Sort is easy. Click the message list display options, then click ‘Sort by.’ In the top third, toggle the ‘Date’ option. In the middle third, select your preferred order, Descending or Ascending. Finally, in the bottom third, toggle ‘Grouped by Sort.’

Now you’re ready to whittle your way through an overflowing inbox, one group at a time.

And once you get down to a mostly empty and very manageable inbox, you’ll want to find strategies and habits to keep it there. Treating your email like laundry is a great place to start. We’d love to hear your favorite email management habits in the comments!

Resources

ADDitude Magazine: https://www.additudemag.com/addressing-e-mail/

Dixon Life Coaching: https://www.dixonlifecoaching.com/post/why-high-achievers-with-adhd-love-and-hate-their-email-inbox

The post Maximize Your Day: Focus Your Inbox with ‘Grouped by Sort’ appeared first on The Thunderbird Blog.

Mozilla Privacy BlogMozilla Responds to BIS’ Proposed Rule on Reporting Requirements for the Development of Advanced AI Models and Computing Clusters

Lately, we’ve talked a lot about the importance of ensuring that governments take into account open source, especially when it comes to AI. We submitted comments to NIST on Dual-Use Foundation Models and NTIA on the benefits of openness, and advocated in Congress. As frontier models and big tech continue to dominate the policy discussion, we need to ensure that open source remains top of mind for policymakers and regulators. At Mozilla, we know that open source is a fundamental driver of software that benefits people instead of a few big tech corporations: it helps enable breakthroughs in medicine and science, and it allows smaller companies to compete with tech giants. That’s why we’ll continue to raise the voice of the open source community in regulatory circles whenever we can – and most recently, at the Department of Commerce.

Last month, the Bureau of Industry and Security (BIS) released a proposed rule about reporting requirements for developing advanced AI models and computing clusters. This rule stems from the White House’s 2023 Executive Order on AI, which focuses on the safe and trustworthy development of AI. BIS asked for feedback from industry and stakeholders on topics such as the notification schedule for entities covered by the rule, how information is collected and stored, and what thresholds would trigger reporting requirements for these AI models and clusters.

While BIS’ proposed rule seeks to balance national security with economic concerns, it doesn’t adequately take into account the needs of the open source community or provide clarity as to how the proposed rule may affect them. This is critical, given that some of the most capable and widely used AI models are open source or partially open source. Open source software is a key driver of technological progress in AI and creates tremendous economic and security benefits for the United States. In our full comments, we set out how BIS can further engage with the open source community, and we emphasize the value that open source offers for both the economy and national security. Below are some key points from our feedback to BIS:

1. BIS should clarify how the proposed rules would apply to open-source projects, especially since many don’t have a specific owner, are distributed globally, and are freely available. Ideally BIS could work with organizations like the Open Source Initiative (OSI) to come up with a framework.

2. As BIS updates the technical conditions for collection thresholds in response to technological advancements, we suggest setting a minimum update cycle of six months. This is crucial given the rapid pace of change in the AI landscape. It’s also necessary to maintain BIS’ core focus on the regulation of frontier models and to not unnecessarily stymie innovation across the broader AI ecosystem.

3. BIS should provide additional clarity about what ‘planned applicable activities’ are and when a project is considered ‘planned.’

Mozilla appreciates BIS’ efforts to balance the benefits and risks of AI when it comes to national and economic security. We hope that BIS further considers the potential impact of the proposed rule and future regulatory actions on the open source community, and appropriately weighs the myriad benefits which open source AI, and open source software more broadly, produce for America’s national and economic interests. We look forward to providing views as the US Government continues work on these important issues.

The post Mozilla Responds to BIS’ Proposed Rule on Reporting Requirements for the Development of Advanced AI Models and Computing Clusters appeared first on Open Policy & Advocacy.

Francesco LodoloThe (pre)history of Mozilla’s localization repository infrastructure

With many new faces joining Mozilla, as either staff or volunteer localizers, most are only familiar with the current, more streamlined localization infrastructure.

I thought it might be interesting to take a look back at the technical evolution of Mozilla’s localization systems. Having personally navigated every version — first as a community localizer from 2004 to 2013, and later as staff — I’ll share my perspective. That said, I might not have all the details exactly right (or I may have removed some for the sake of my sanity), so feel free to point out any inaccuracies.

Giovanni (center) and I (left) from Mozilla Italia at a booth to promote Firefox, back in 2007. Probably one of the older photos I have around.<figcaption class="wp-element-caption">Attending one of the earliest events organized by the Italian Community (2007)</figcaption>

Early days: Centralized version control

Back in the early 2000s, smartphones weren’t a thing, Windows XP was an acceptable operating system — especially in comparison to Windows Me — and distributed version control systems weren’t as common. Let’s be honest, centralized version control was not fun: every commit meant interacting directly with the server. You had to remember to update your local copy, commit your changes, and then hope no one else had committed in the meantime — otherwise, you were stuck resolving conflicts.

Given the high technical barriers, localizers at that time were primarily technical users, not discouraged by crappy text editors — encoding issues, BOMs, and other amenities — and command line tools.

To make things more complicated, localizers had to deal with 2 different systems:

  • CVS (Concurrent Versioning System) was used for products like Mozilla Suite, Phoenix/Firefox, etc. To increase confusion, it used branch names that followed the Gecko versions (e.g. MOZILLA_1_8_BRANCH), and those didn’t map at all to product versions. Truth be told, the whole release cadence and cycle felt like complete chaos back then, at least as a volunteer.
  • SVN (Subversion) was used to localize mozilla.org, addons.mozilla.org (AMO), and other web projects.

With time, desktop and web-based applications emerged to support localizers, hiding some of the complexity of version control systems and providing translation management features:

  • Mozilla Translator (a local Java application. Yes kids, Java).
  • Narro.
  • Pootle.
  • Verbatim: a customized Pootle instance run by Mozilla, used to localize web projects like addons.mozilla.org. This was shut down in 2015 and projects transitioned to Pontoon.
  • Pontoon (here’s the first idea and repository, if you’re curious).
  • Aisle, an internal experiment based on C9 that never got past the initial tests.

This proliferation of new tools led to a couple of key principles that are still valid to this day:

  • The repository, not the TMS (Translation Management System), is the source of truth.
  • TMSs need to support bidirectional synchronization between their internal data storage and the repository, i.e. they need to read updated translated content from the repository and store it internally (establishing a conflict resolution policy), not just write updates.

This might look trivial, but it’s an outlier in the localization industry, where the tool is the source of truth, and synchronization only happens in one direction (from the TMS to the repository).
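The bidirectional-sync principle can be sketched as a three-way merge in which the repository wins on conflict. This is a deliberately simplified, hypothetical model (flat dicts mapping string IDs to translations), not Pontoon's actual implementation:

```python
def sync(repo, tms, last_synced):
    """Three-way merge of translations, using the state at the last sync to
    detect which side changed. On a genuine conflict (both sides changed),
    the repository wins: the repository, not the TMS, is the source of truth."""
    merged = {}
    for key in set(repo) | set(tms):
        repo_val = repo.get(key)
        tms_val = tms.get(key)
        base_val = last_synced.get(key)
        if repo_val == tms_val:
            value = repo_val        # both sides agree (or both deleted it)
        elif repo_val == base_val:
            value = tms_val         # only the TMS changed: write it out
        elif tms_val == base_val:
            value = repo_val        # only the repo changed: read it in
        else:
            value = repo_val        # conflict: the repository wins
        if value is not None:       # None means the string was deleted
            merged[key] = value
    return merged

# "hello" was retranslated in the TMS, "title" changed on both sides
# (repo wins), "new" was added in the TMS since the last sync.
repo = {"hello": "Bonjour", "title": "Titre2"}
tms = {"hello": "Salut", "title": "TitreX", "new": "Nouveau"}
base = {"hello": "Bonjour", "title": "Titre"}
assert sync(repo, tms, base) == {"hello": "Salut", "title": "Titre2", "new": "Nouveau"}
```

The interesting case is the final `else` branch: a one-directional TMS would silently overwrite the repository there, which is exactly what these principles were designed to prevent.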

The shift to Mercurial

At the end of 2007, Mozilla made the decision to transition from CVS to Mercurial, this time opting for a distributed version control system. For localization, this meant making the move to Mercurial as well, though it took a few more months of work. This marked the beginning of a new era where the infrastructure quickly started becoming more complex.

As code development was happening in mozilla-central, localization was supposed to be stored in a matching l10n-central repository. But here’s the catch: instead of one repository, the decision was to use one repository per locale, each one including the localization for all shipping projects (Firefox, Firefox for Android, etc.). I’m not sure how many repositories that meant at the time — based on the dependencies of this bug, probably around 30 — but as of today, there are 156 l10n-central repositories, while Firefox Nightly only ships in 111 locales (a few of them added recently).

The next massive change was the adoption of the rapid release cycle in 2011:

  • 3 new sets of repositories had to be created for the corresponding Firefox versions: l10n/mozilla-aurora, l10n/mozilla-beta, l10n/mozilla-release.
  • Localizers working against Nightly in l10n-central would need to manually move their updates to l10n/mozilla-aurora, which was becoming the main target for localization.
  • At the end of the cycle (“merge day”), someone in the localization team would manually move content from Aurora to Beta, overwriting any changes.
  • In order to allow localizers to make small fixes to Beta, 2 separate projects were set up in Pontoon (one working against Aurora, one against Beta), and it was up to localizers to keep them in sync, given that content in Beta would be overwritten on merge day.

If you’re still trying to keep count, we’re now at about 600 Mercurial repositories to localize a project like Firefox (and a few hundred more added later for Firefox OS, one for each locale and version, but that’s a whole different story).

I won’t go into the fine details, but at this point localizers were also supposed to “sign off” on the version of their localization that they wanted to ship. Over time, this was done by:

  • Calling out which changeset you wanted to ship in an email thread.
  • Later, requesting sign-off in a web app called Elmo (because it was hosted on l10n.mozilla.org, (e)l.m.o., got it?). Someone in the localization team had to manually go through each request, check the diff from the previous sign-off to ensure that it would not break Firefox, and either accept or reject it. For context, at the time DTDs were still heavily in use for localization, and a broken translation could easily brick the browser (yellow screen of death). 
  • With the drop of Aurora in 2017, the localization team started reviewing and managing sign-offs in Elmo without waiting for localizers to make a request. Yay for localizers, one less thing to do.
  • In 2020, partly because of the lay-offs that impacted the team, we completely dropped the sign-off process and decommissioned Elmo, automatically taking the latest changeset in each l10n repository.
Sad Elmo sitting on a park bench, in a gloomy weather, with rain puddles on the street.
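As an aside on that fragility: with DTD-based localization, the translated strings were entities pulled into the XML parse itself, so one missing entity invalidated the entire document. A minimal, hypothetical illustration using Python's XML parser (not actual Gecko code):

```python
from xml.dom.minidom import parseString
from xml.parsers.expat import ExpatError

# A translation that still declares the entity the markup references: fine.
ok = (b'<?xml version="1.0"?>'
      b'<!DOCTYPE window [<!ENTITY quitLabel "Quitter">]>'
      b'<window><button label="&quitLabel;"/></window>')
parseString(ok)  # parses without error

# The same markup after the entity declaration was dropped from the
# translation: the parser rejects the whole document, not just one string,
# which is the XML analogue of the yellow screen of death.
broken = (b'<?xml version="1.0"?>'
          b'<!DOCTYPE window []>'
          b'<window><button label="&quitLabel;"/></window>')
try:
    parseString(broken)
except ExpatError as e:
    print("document rejected:", e)
```

That all-or-nothing behavior is why every sign-off diff had to be checked, and why formats like Fluent, where a broken translation degrades gracefully, were such an improvement.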

The new kid on the block: GitHub

In 2015 we started migrating repositories from SVN to GitHub. At the time, that meant mostly web projects, managed by Pascal Chevrel and me, with the notable exception of Firefox for iOS. That part of localization had a whole infrastructure of its own: a web dashboard to track progress, a tool called langchecker to update files and identify errors, and even a file format called dotlang (.lang) that was used for a while to localize mozilla.org (we switched to Fluent in 2019).

The move to GitHub removed a lot of bureaucracy, as the team could create new repositories and grant access to localizers without going through an external team, as was the case for Mercurial. Still today, GitHub is the go-to choice for new projects, although the introduction of SAML single sign-on created a significant hurdle when it comes to adding external contributors to a project.

Introduction of cross-channel for Firefox

Remember the 600 repositories? Still there… Also, the most observant among you might wonder: didn’t Mozilla have another version of Firefox (Extended Support Release, or ESR)? You’re correct, but the compromise there was that ESR would be string-frozen, so we didn’t need another ~150 repositories: we used the content from mozilla-release at the time of launch, and that’s it, no more updates.

In 2017, the Aurora channel was “removed”, leaving Nightly (based on mozilla-central), Developer Edition and Beta (based on mozilla-beta), Release (based on mozilla-release) and ESR. I use quotes, because “aurora” is still technically the internal channel name for Dev Edition.

That was a challenge, as Aurora represented the main target for localization. That change forced us to move all locales to work on Nightly around April 2017. 

Later in the year, Axel Hecht came up with a core concept that still supports how we localize Firefox nowadays: cross-channel. What if, instead of having to extract strings from 4 huge code repositories, we create a tool that generates a superset of the strings shipping in all supported versions (channels) of Firefox, and put them in a nimble, string-only repository? That’s exactly what cross-channel did, allowing us to drop ~300 repositories (plus ~150 already dropped because of the removal of Aurora). It also gave us the opportunity to support localization updates in release and ESR. At this point, localization for any shipping version of Firefox comes out of a single repository for each locale (e.g. l10n-central/fr for French).
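The superset idea behind cross-channel can be sketched as follows. This is a deliberately simplified, hypothetical model operating on flat string tables; the real tool merges actual localization files (.ftl, .properties, etc.) and is far more involved:

```python
def cross_channel(channels):
    """channels: per-channel string tables, ordered oldest to newest
    (e.g. release, beta, nightly). The result is a superset: strings only
    present in an older channel are kept, and when an ID exists in several
    channels, the newest channel's version of the text wins."""
    superset = {}
    for table in channels:  # oldest first, so newer values overwrite
        superset.update(table)
    return superset

# "old-pref" only ships in release, "new-pref" only in beta/nightly,
# and "menu-quit" was reworded on nightly.
release = {"menu-quit": "Quit", "old-pref": "Enable the old preference"}
beta = {"menu-quit": "Quit", "new-pref": "Enable HTTPS-Only Mode"}
nightly = {"menu-quit": "Exit", "new-pref": "Enable HTTPS-Only Mode"}

assert cross_channel([release, beta, nightly]) == {
    "menu-quit": "Exit",
    "old-pref": "Enable the old preference",
    "new-pref": "Enable HTTPS-Only Mode",
}
```

Because every string shipping in any channel appears in the superset, one localization repository per locale can feed Nightly, Beta, Release, and ESR at once.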

Chart representing the flow in the build system with cross-channel.<figcaption class="wp-element-caption">Code repositories are used to generate cross-channel content, which in turn is used to feed Pontoon, storing translations in l10n-central repositories. From the chart, it’s also visible how English (en-US) is treated as a special case, going directly from code repositories to the build system.</figcaption>

In hindsight, cross-channel was overly complex: it would not only create the superset content, but it would also replay the Mercurial history of the commit introducing the change. The content would land in the cross-channel repository with a reference to the original changeset (example), making it possible to annotate the file via Mercurial’s web interface. In order to do that, the code hooked directly into Mercurial internals, and it would break frequently thanks to the complexity of Mozilla’s repositories. In 2021 the code was changed to stop replaying history and only merging content.

At this point, in late 2017, Firefox localization relied on ~150 l10n repositories, and 2 source repositories for cross-channel — one used as a quarantine, the other, called gecko-strings, connected to Pontoon to expose strings for community localization.

Current Firefox infrastructure

Fast-forward to 2024, with Mozilla’s decision to move development to Git, we had an opportunity to simplify things even further, and rethink some of the initial choices:

Thunderbird has adopted a similar structure, with their own 3 repositories.

The team completed the migration to Git in June, ahead of the rest of the organization, and all current versions of Firefox ship from the firefox-l10n repository (including ESR 115 and ESR 128).

Visual timeline of changes described in the article, from CVS to Git.

Conclusions

So, this was the not-so-short story of how Mozilla’s localization infrastructure has evolved over time, with a focus on Firefox. Looking back, it’s remarkable to see how far we’ve come. Today, we’re in a much better place, also considering the constant effort to improve Pontoon and other tools used by the community.

As I approach one of my many anniversaries — I have one for when I started as a volunteer (January 2004), one for when I became a member of staff as a contractor (April 2013), and one “official” for when I became an employee (November 2018) — it’s humbling to think about what a small team has accomplished over the past 22 years. These milestones remind me of the incredible contributions of so many brilliant individuals at Mozilla, whose passion helped build the foundations we stand on today.

It’s also bittersweet to go back and read emails from over 15 years ago, remembering just how pivotal the community was in shaping Firefox into what it is today. The dedication of volunteers and localizers helped make Firefox a truly global browser, and their impact is still felt — and sometimes missed — today.

Picture of Mozilla L10n folks at the first Mozilla Summit (Whistler, 2008), back when Mozilla was still inviting volunteers to Company events.<figcaption class="wp-element-caption">Mozilla L10n Community in Whistler, 2008 (Photo by Tristan Nitot)</figcaption>

Anne van KesterenWebKit and web-platform-tests

Let me state upfront that this strategy of keeping WebKit synchronized with parts of web-platform-tests has worked quite well for me, but I’m not at all an expert in this area so you might want to take advice from someone else.

Once I've identified what tests will be impacted by my changes to WebKit, including what additional coverage might be needed, I create a branch in my local web-platform-tests checkout to make the necessary changes to increase coverage. I try to be a little careful here so it'll result in a nice pull request against web-platform-tests later. I’ve been a web-platform-tests contributor quite a while longer than I’ve been a WebKit contributor so perhaps it’s not surprising that my approach to test development starts with web-platform-tests.

I then run import-w3c-tests web-platform-tests/[testsDir] -s [wptParentDir] on the WebKit side to ensure it has the latest tests, including any changes I made. And then I usually run them and revise, as needed.

This has worked surprisingly well for a number of changes I made to date and hasn’t let me down. Two things to be mindful of:

  • On macOS, don’t put development work, especially WebKit, inside ~/Documents. You might not have a good time.
  • [wptParentDir] above needs to contain a directory named web-platform-tests, not wpt. This is annoyingly different from the default you get when cloning web-platform-tests (the repository was renamed to wpt at some point). Perhaps something to address in import-w3c-tests.

Chris H-CNine-Year Moziversary

On this day (or near it) in 2015, I joined the Mozilla project by starting work as a full-time employee of Mozilla Corporation. I’m two hardware refreshes in (I was bad for doing them on time, leaving my 2017 refresh until 2018 and my 2020 refresh until 2022! (though, admittedly, the 2020 refresh was actually pushed to the end of 2021 by a policy change in early 2020 moving from 2-year to 3-year refreshes)) and facing a third in February. Organizationally, I’m three CEOs and sixty reorgs in.

I’m still working on Data, same as last year. And I’m still trying to move Firefox Desktop to use solely Glean for its data collection system. Some of my predictions from last year’s moziversary post came true: I continued working on client code in Firefox Desktop, I hardly blogged at all, we continue to support collections in all of Legacy Telemetry’s systems (though we’ve excitingly just removed some big APIs), Glean has continued to gain ground in Firefox Desktop (we’re up to 4134 metrics at time of writing), and “FOG Migration” has continued to not happen (I suppose it was one missed prediction that top-down guidance would change — it hasn’t, but interpretations of it sure have), and I’m publishing this moziversary blog post a little ahead of my moziversary instead of after it.

My biggest missed prediction was “We will quietly stop talking about AI so much, in the same way most firms have stopped talking about Web3 this year”. Mozilla, both Corporation and Foundation, seem unable to stop talking about AI (a phrase here meaning “large generative models built on extractive data mining which use chatbot UI”). Which, I mean, fair: it’s consuming basically all the oxygen and money in the industry at the moment. We have to have a position on it, and it’s appropriating “Open” language that Mozilla has a vested interest in protecting (though you’d be excused for forgetting that given how little we’ve tried to work with the FSF and assorted other orgs trying to shepherd the ideas and values of Open Source in the recent past). But we’ve for some reason been building products around these chatbots without interrogating whether that’s a good thing.

And you’d think with all our worry about what a definition of Open Source might mean, we’d make certain to only release products that are Open Source. But no.

I understand why we’re diving into products and trying to release innovative things in product shape… but Mozilla is famously terrible at building products. We’re okay at building services (I’m a fan of both Monitor and Relay). But where we seem to truly excel is in building platforms and infrastructure.

We build Firefox, the only independent browser, a train that runs on the rails of the Web. We build Common Voice, a community and platform for getting underserved languages (where which languages are used is determined by the community) the support they need. We built Rust, a memory-safe systems language that is now succeeding without Mozilla’s help. We built Hubs, a platform for bringing people together in virtual space with nothing but a web browser.

We’re just so much better at platforms and infrastructure. Why we don’t lean more into that, I don’t know.

Well, I _do_ know. Or I can guess. Our golden goose might be cooked.

How can Mozilla make money if our search deal becomes illegal? Maintaining a browser is expensive. Hosting services is expensive. Keeping the tech giants on their toes and compelling them to be better is expensive. We need money, and we’ve learned that there is no world where donations will be enough to fund even just the necessary work let alone any innovations we might try.

How do you monetize a platform? How do you monetize infrastructure?

Governments do it through taxation and funding. But Mozilla Corporation isn’t a government agency. It’s a conventional Silicon Valley private capital corporation (its relationship to Mozilla Foundation is unconventional, true, but I argue that’s irrelevant to how MoCo organizes itself these days). And the only process by which Silicon Valley seems to understand how to extract money to pay off their venture capitalists is products and consumers.

Now, Mozilla Corporation doesn’t have venture capital. You can read in the State of Mozilla that we operate at a profit each and every year with net assets valued at over a billion USD. But the environment in which MoCo operates — the place from which we hire our C-Suite, the place where the people writing the checks live — is saturated in venture capital and the ways of thinking it encourages.

This means Mozilla Corporation acts like its Bay Area peers, even though it’s special. Even though it doesn’t have to.

This means it does layoffs even when it doesn’t need to. Even when there’s no shareholders or fund managers to impress.

This means it increasingly speaks in terms of products and customers instead of projects and users.

This means it quickly loses sight of anything specifically Mozilla-ish about Mozilla (like the community that underpins specific systems crucial to us continuing to exist (support and l10n for two examples) as well as the general systems of word-of-mouth and keeping Mozilla and Firefox relevant enough that tech press keep writing about us and grandpas keep installing us) because it doesn’t fit the patterns of thought that developed while directing leveraged capital.

(( Which I don’t like, if my tone isn’t coming across clearly enough for you to have guessed. ))

Okay, that’s more than enough editorial for a Moziversary post. Let’s get to the predictions for the next year:

  • I still won’t blog as much as I’d like,
  • “FOG Migration” might actually happen! We’ve finally managed to convince Firefox folks just how great Glean is and they might actually commit official resources! I predict that we’re still sending Legacy Telemetry by the end of next year, but only bits and pieces. A weak shadow of what we send today.
  • There’ll be an All Hands, but depending on the result of the US federal election in November I might not attend because its location has been announced as Washington DC and I don’t know if the United States will be in a state next year to be trusted to keep me safe,
  • We will stop putting AI in everything and hoping to accidentally make a product that’ll somehow make money and instead focus on finding problems Mozilla can solve and only then interrogating whether AI will help
  • The search for the new CEO will not have completed by next October so I’ll still be three CEOs in, instead of four
  • I will execute on my hardware refresh on time this February, and maybe also get a new monitor so I’m not using my personal one for work.

Let’s see how it goes! Til next time.

:chutten

The Talospace Project: Running Thunderbird with the OpenPower Baseline JIT

The issues with Ion and Wasm in OpenPower Firefox notwithstanding, the Baseline JIT works well in Firefox ESR128, and many of you use it (including yours truly). Of course, that makes Thunderbird look sluggish without it.

I haven't so far been able to get a full LTO-PGO build of Thunderbird to build properly with gcc (workin' on it), but with the JIT patches for ESR128 an LTO-optimized build will complete and run, and that's good enough for now. The diff for the .mozconfig is more or less the following:

export CC=/usr/bin/gcc
export CXX=/usr/bin/g++

mk_add_options MOZ_MAKE_FLAGS="-j24"

#ac_add_options --enable-application=browser
#ac_add_options MOZ_PGO=1
#
ac_add_options --enable-project=comm/mail
mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/tbobj

ac_add_options --enable-optimize="-O3 -mcpu=power9 -fpermissive"
ac_add_options --enable-release
ac_add_options --enable-linker=bfd
ac_add_options --enable-lto=full
ac_add_options --without-wasm-sandboxed-libraries
ac_add_options --with-libclang-path=/usr/lib64

export GN=/home/censored/bin/gn # if you haz
export RUSTC_OPT_LEVEL=2

You can use a unified .mozconfig like this to handle both the browser and the E-mail client. If you do, then to build the browser, uncomment the commented lines and comment out the two lines below the previously commented section.

You'll need comm-central embedded in your ESR128 tree as per the build instructions, and you may want to create an .hg/hgignore file inside your ESR128 source directory as well to keep changes to the core and Tbird from clashing, something like

^tbobj/
^comm/

which will ignore those directories without requiring a change to .hgignore that you'd have to manually edit out later. Once built, your client will be in tbobj/. If you were using a prebuilt Thunderbird before, you may need to start it with tbobj/dist/bin/thunderbird -p default-release (substitute your profile name if it differs) to make sure you get your old mailbox back, though, as always, back up your profile first.

Firefox Add-on Reviews: YouTube your way — browser extensions put you in charge of your video experience

YouTube wants you to experience YouTube in very prescribed ways. But with the right browser extension, you’re free to alter YouTube to taste. Change the way the site looks, behaves, and delivers your favorite videos. 

Enhancer for YouTube

With dozens of customization features, Enhancer for YouTube has the power to dramatically reorient the way you watch videos. 

While a bunch of customization options may seem overwhelming, Enhancer for YouTube actually makes it very simple to navigate its settings and select just your favorite features. You can even choose which of your preferred features will display in the extension’s easy access interface that appears just beneath the video player.

<figcaption class="wp-element-caption">Enhancer for YouTube offers easy access controls just beneath the video player.</figcaption>

Key features… 

  • Customize video player size 
  • Change YouTube’s look with a dark theme
  • Volume booster
  • Ad blocking (with ability to whitelist channels you OK for ads)
  • Take quick screenshots of videos
  • Change playback speed
  • Set default video quality from low to high def
  • Shortcut configuration

Return YouTube Dislike

Do you like the Dislike? YouTube removed the display that revealed the number of thumbs-down Dislikes a video has, but with Return YouTube Dislike you can bring back the brutal truth. 

“Does exactly what the name suggests. Can’t see myself without this extension. Seriously, bad move on YouTube for removing such a vital tool.”

Firefox user OFG

“i have never smashed 5 stars faster.”

Firefox user 12918016

YouTube High Definition

Though its primary function is to automatically play all YouTube videos in their highest possible resolution, YouTube High Definition has a few other fine features to offer. 

In addition to automatic HD, YouTube High Definition can…

  • Customize video player size
  • HD support for clips embedded on external sites
  • Specify your ideal resolution (4k – 144p)
  • Set a preferred volume level 
  • Also automatically plays the highest quality audio

YouTube NonStop

So simple. So awesome. YouTube NonStop remedies the headache of interrupting your music with that awful “Video paused. Continue watching?” message. 

Works on YouTube and YouTube Music. You’re now free to navigate away from your YouTube tab for as long as you like and not fret that the rock will stop rolling. 

Unhook: Remove YouTube Recommended Videos & Comments

Instant serenity for YouTube! Unhook lets you strip away unwanted distractions like the promotional sidebar, endscreen suggestions, trending tab, and much more. 

More than two dozen customization options make this an essential extension for anyone seeking escape from YouTube rabbit holes. You can even hide notifications and live chat boxes. 

“This is the best extension to control YouTube usage, and not let YouTube control you.”

Firefox user Shubham Mandiya

PocketTube

If you subscribe to a lot of YouTube channels PocketTube is a fantastic way to organize all your subscriptions by themed collections. 

Group your channel collections by subject, like “Sports,” “Cooking,” “Cat videos” or whatever. Other key features include…

  • Add custom icons to easily identify your channel collections
  • Customize your feed so you just see videos you haven’t watched yet, prioritize videos from certain channels, plus other content settings
  • Integrates seamlessly with YouTube homepage 
  • Sync collections across Firefox/Android/iOS using Google Drive and Chrome Profiler
<figcaption class="wp-element-caption">PocketTube keeps your channel collections neatly tucked away to the side. </figcaption>

AdBlocker for YouTube

It’s not just you who’s noticed a lot more ads lately. Regain control with AdBlocker for YouTube.

The extension very simply and effectively removes both video and display ads from YouTube. Period. Enjoy a faster, more focused YouTube. 

SponsorBlock

It’s a terrible experience when you’re enjoying a video or music on YouTube and you’re suddenly interrupted by a blaring ad. SponsorBlock solves this problem in a highly effective and original way. 

Leveraging crowdsourced reports to pinpoint exactly where sponsored segments appear in videos, SponsorBlock automatically skips them using its ever-growing database. You can also participate in the project by reporting sponsored segments whenever you encounter them (it’s easy to report right there on the video page with the extension). 

SponsorBlock can also learn to skip intros, outros, and the non-music portions of music videos. If you’d like a deeper dive into SponsorBlock, we profiled its developer and open source project on Mozilla Distilled.

We hope one of these extensions enhances the way you enjoy YouTube. Feel free to explore more great media extensions on addons.mozilla.org.

 

Firefox Add-on Reviews: How to turn your household pet into a Firefox theme

Themes are a fun way to change the visual appearance of Firefox and give the browser a look that’s all your own. You’re free to explore more than a half-million community-created themes on addons.mozilla.org (AMO), or better yet, create your own custom theme. Best of all — create a theme featuring a beloved pet! Then you can take your little buddy with you wherever you go on the web. 

(You’ll need a Mozilla account to create and publish Firefox themes on AMO.)

Prepare your pet pic for upload

I find it helpful to first size my image properly. For Firefox themes, we recommend images with a height between 100 and 200 pixels. So I might first prepare an image with a couple of sizing options, perhaps one at a height of 100 pixels and another at 200, and see what works best. (Note: as you resize an image, be sure its height and width change in sync so the image keeps its proportions.)

Tootsie strikes a pose to become a Firefox theme.<figcaption class="wp-element-caption">Tootsie strikes a pose to become a Firefox theme. </figcaption>

Depending on what type of image editing software you have on your computer (PC users can resize pics with the standard Photos or Paint apps, while Mac users may be familiar with Preview), find the controls to resize and save your images in the recommended range. Supported file formats are PNG, JPG, APNG, SVG, and GIF (not animated), and files can be up to 6.9MB. 

Upload your pet pic & select custom colors

Go to AMO’s Theme Generator page and…

  • Name your theme
  • Upload your image
  • Select colors for the header background, text and icons
<figcaption class="wp-element-caption">Point-and-click color palettes make it easy to create complementary color combinations. </figcaption>

Once you like the way your new theme looks in the preview display, click Finish Theme and you’re done! All new theme submissions must first pass a review process, but that usually only takes a day or two, after which you’ll receive an email notifying you that your personalized pet theme is ready to install on Firefox. Now Tootsie accompanies me everywhere online, although sometimes she just stares at me. 

For more tips on creating Firefox themes, please see this Theme Generator guide or visit the Extension Workshop.

The Rust Programming Language Blog: Announcing Rust 1.82.0

The Rust team is happy to announce a new version of Rust, 1.82.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.82.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.82.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.82.0 stable

cargo info

Cargo now has an info subcommand to display information about a package in the registry, fulfilling a long-standing request just shy of its tenth anniversary! Several third-party extensions like this have been written over the years, and this implementation was developed as cargo-information before merging into Cargo itself.

For example, here's what you could see for cargo info cc:

cc #build-dependencies
A build-time dependency for Cargo build scripts to assist in invoking the native
C compiler to compile native C code into a static archive to be linked into Rust
code.
version: 1.1.23 (latest 1.1.30)
license: MIT OR Apache-2.0
rust-version: 1.63
documentation: https://docs.rs/cc
homepage: https://github.com/rust-lang/cc-rs
repository: https://github.com/rust-lang/cc-rs
crates.io: https://crates.io/crates/cc/1.1.23
features:
  jobserver = []
  parallel  = [dep:libc, dep:jobserver]
note: to see how you depend on cc, run `cargo tree --invert --package cc@1.1.23`

By default, cargo info describes the package version in the local Cargo.lock, if any. As you can see, it will indicate when there's a newer version too, and cargo info cc@1.1.30 would report on that.

Apple target promotions

macOS on 64-bit ARM is now Tier 1

The Rust target aarch64-apple-darwin for macOS on 64-bit ARM (M1-family or later Apple Silicon CPUs) is now a tier 1 target, indicating our highest guarantee of working properly. As the platform support page describes, every change in the Rust repository must pass full tests on every tier 1 target before it can be merged. This target was introduced as tier 2 back in Rust 1.49, making it available in rustup. This new milestone puts the aarch64-apple-darwin target on par with the 64-bit ARM Linux and the x86 macOS, Linux, and Windows targets.

Mac Catalyst targets are now Tier 2

Mac Catalyst is a technology by Apple that allows running iOS applications natively on the Mac. This is especially useful when testing iOS-specific code, as cargo test --target=aarch64-apple-ios-macabi --target=x86_64-apple-ios-macabi mostly just works (in contrast to the usual iOS targets, which need to be bundled using external tooling before they can be run on a native device or in the simulator).

The targets are now tier 2, and can be downloaded with rustup target add aarch64-apple-ios-macabi x86_64-apple-ios-macabi, so now is an excellent time to update your CI pipeline to test that your code also runs in iOS-like environments.

Precise capturing use<..> syntax

Rust now supports use<..> syntax within certain impl Trait bounds to control which generic lifetime parameters are captured.

Return-position impl Trait (RPIT) types in Rust capture certain generic parameters. Capturing a generic parameter allows that parameter to be used in the hidden type. That in turn affects borrow checking.

In Rust 2021 and earlier editions, lifetime parameters are not captured in opaque types on bare functions and on functions and methods of inherent impls unless those lifetime parameters are mentioned syntactically in the opaque type. E.g., this is an error:

//@ edition: 2021
fn f(x: &()) -> impl Sized { x }
error[E0700]: hidden type for `impl Sized` captures lifetime that does not appear in bounds
 --> src/main.rs:1:30
  |
1 | fn f(x: &()) -> impl Sized { x }
  |         ---     ----------   ^
  |         |       |
  |         |       opaque type defined here
  |         hidden type `&()` captures the anonymous lifetime defined here
  |
help: add a `use<...>` bound to explicitly capture `'_`
  |
1 | fn f(x: &()) -> impl Sized + use<'_> { x }
  |                            +++++++++

With the new use<..> syntax, we can fix this, as suggested in the error, by writing:

fn f(x: &()) -> impl Sized + use<'_> { x }

Previously, correctly fixing this class of error required defining a dummy trait, conventionally called Captures, and using it as follows:

trait Captures<T: ?Sized> {}
impl<T: ?Sized, U: ?Sized> Captures<T> for U {}

fn f(x: &()) -> impl Sized + Captures<&'_ ()> { x }

That was called "the Captures trick", and it was a bit baroque and subtle. It's no longer needed.

There was a less correct but more convenient way to fix this that was often used called "the outlives trick". The compiler even previously suggested doing this. That trick looked like this:

fn f(x: &()) -> impl Sized + '_ { x }

In this simple case, the trick is exactly equivalent to + use<'_> for subtle reasons explained in RFC 3498. However, in real life cases, this overconstrains the bounds on the returned opaque type, leading to problems. For example, consider this code, which is inspired by a real case in the Rust compiler:

struct Ctx<'cx>(&'cx u8);

fn f<'cx, 'a>(
    cx: Ctx<'cx>,
    x: &'a u8,
) -> impl Iterator<Item = &'a u8> + 'cx {
    core::iter::once_with(move || {
        eprintln!("LOG: {}", cx.0);
        x
    })
//~^ ERROR lifetime may not live long enough
}

We can't remove the + 'cx, since the lifetime is used in the hidden type and so must be captured. Neither can we add a bound of 'a: 'cx, since these lifetimes are not actually related and it won't in general be true that 'a outlives 'cx. If we write + use<'cx, 'a> instead, however, this will work and have the correct bounds.

There are some limitations to what we're stabilizing today. The use<..> syntax cannot currently appear within traits or within trait impls (but note that there, in-scope lifetime parameters are already captured by default), and it must list all in-scope generic type and const parameters. We hope to lift these restrictions over time.

Note that in Rust 2024, the examples above will "just work" without needing use<..> syntax (or any tricks). This is because in the new edition, opaque types will automatically capture all lifetime parameters in scope. This is a better default, and we've seen a lot of evidence about how this cleans up code. In Rust 2024, use<..> syntax will serve as an important way of opting-out of that default.

For more details about use<..> syntax, capturing, and how this applies to Rust 2024, see the "RPIT lifetime capture rules" chapter of the edition guide. For details about the overall direction, see our recent blog post, "Changes to impl Trait in Rust 2024".

Native syntax for creating a raw pointer

Unsafe code sometimes has to deal with pointers that may dangle, may be misaligned, or may not point to valid data. A common case where this comes up are repr(packed) structs. In such a case, it is important to avoid creating a reference, as that would cause undefined behavior. This means the usual & and &mut operators cannot be used, as those create a reference -- even if the reference is immediately cast to a raw pointer, it's too late to avoid the undefined behavior.

For several years, the macros std::ptr::addr_of! and std::ptr::addr_of_mut! have served this purpose. Now the time has come to provide a proper native syntax for this operation: addr_of!(expr) becomes &raw const expr, and addr_of_mut!(expr) becomes &raw mut expr. For example:

#[repr(packed)]
struct Packed {
    not_aligned_field: i32,
}

fn main() {
    let p = Packed { not_aligned_field: 1_82 };

    // This would be undefined behavior!
    // It is rejected by the compiler.
    //let ptr = &p.not_aligned_field as *const i32;

    // This is the old way of creating a pointer.
    let ptr = std::ptr::addr_of!(p.not_aligned_field);

    // This is the new way.
    let ptr = &raw const p.not_aligned_field;

    // Accessing the pointer has not changed.
    // Note that `val = *ptr` would be undefined behavior because
    // the pointer is not aligned!
    let val = unsafe { ptr.read_unaligned() };
}

The native syntax makes it more clear that the operand expression of these operators is interpreted as a place expression. It also avoids the term "address-of" when referring to the action of creating a pointer. A pointer is more than just an address, so Rust is moving away from terms like "address-of" that reaffirm a false equivalence of pointers and addresses.

Safe items with unsafe extern

Rust code can use functions and statics from foreign code. The type signatures of these foreign items are provided in extern blocks. Historically, all items within extern blocks have been unsafe to use, but we didn't have to write unsafe anywhere on the extern block itself.

However, if a signature within the extern block is incorrect, then using that item will result in undefined behavior. Would that be the fault of the person who wrote the extern block, or the person who used that item?

We've decided that it's the responsibility of the person writing the extern block to ensure that all signatures contained within it are correct, and so we now allow writing unsafe extern:

unsafe extern {
    pub safe static TAU: f64;
    pub safe fn sqrt(x: f64) -> f64;
    pub unsafe fn strlen(p: *const u8) -> usize;
}

One benefit of this is that items within an unsafe extern block can be marked as safe to use. In the above example, we can call sqrt or read TAU without using unsafe. Items that aren't marked with either safe or unsafe are conservatively assumed to be unsafe.
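To make the call sites concrete, here is a minimal sketch. It assumes the platform C library provides sqrt and strlen (as it does on typical Linux and macOS targets); only the safe/unsafe qualifiers are the point here.

```rust
// Sketch only: assumes the C library's `sqrt` and `strlen` are linked in,
// which holds on typical glibc and macOS targets.
unsafe extern "C" {
    pub safe fn sqrt(x: f64) -> f64;
    pub unsafe fn strlen(p: *const u8) -> usize;
}

fn main() {
    // `safe` foreign items can be called without an `unsafe` block...
    assert_eq!(sqrt(9.0), 3.0);

    // ...while `unsafe` ones still require the caller to uphold invariants
    // (here: the pointer must point to a NUL-terminated string).
    let len = unsafe { strlen(b"hello\0".as_ptr()) };
    assert_eq!(len, 5);
}
```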

In future releases, we'll be encouraging the use of unsafe extern with lints. Starting in Rust 2024, using unsafe extern will be required.

For further details, see RFC 3484 and the "Unsafe extern blocks" chapter of the edition guide.

Unsafe attributes

Some Rust attributes, such as no_mangle, can be used to cause undefined behavior without any unsafe block. If this were regular code we would require them to be placed in an unsafe {} block, but so far attributes have not had comparable syntax. To reflect the fact that these attributes can undermine Rust's safety guarantees, they are now considered "unsafe" and should be written as follows:

#[unsafe(no_mangle)]
pub fn my_global_function() { }

The old form of the attribute (without unsafe) is currently still accepted, but might be linted against at some point in the future, and will be a hard error in Rust 2024.

This affects the following attributes:

  • no_mangle
  • link_section
  • export_name

For further details, see the "Unsafe attributes" chapter of the edition guide.

Omitting empty types in pattern matching

Patterns which match empty (a.k.a. uninhabited) types by value can now be omitted:

use std::convert::Infallible;
pub fn unwrap_without_panic<T>(x: Result<T, Infallible>) -> T {
    let Ok(x) = x; // the `Err` case does not need to appear
    x
}

This works with empty types such as a variant-less enum Void {}, or structs and enums with a visible empty field and no #[non_exhaustive] attribute. It will also be particularly useful in combination with the never type !, although that type is still unstable at this time.
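As an illustration with a user-defined empty type (the Void name below is hypothetical, mirroring the variant-less enum mentioned above):

```rust
// Hypothetical variant-less enum; it has no values, so any code path that
// would need to produce one is unreachable.
enum Void {}

fn unwrap_without_err_arm<T>(x: Result<T, Void>) -> T {
    let Ok(v) = x; // the `Err(Void)` case can be omitted (Rust 1.82+)
    v
}

fn main() {
    assert_eq!(unwrap_without_err_arm::<i32>(Ok(7)), 7);
}
```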

There are some cases where empty patterns must still be written. For reasons related to uninitialized values and unsafe code, omitting patterns is not allowed if the empty type is accessed through a reference, pointer, or union field:

pub fn unwrap_ref_without_panic<T>(x: &Result<T, Infallible>) -> &T {
    match x {
        Ok(x) => x,
        // this arm cannot be omitted because of the reference
        Err(infallible) => match *infallible {},
    }
}

To avoid interfering with crates that wish to support several Rust versions, match arms with empty patterns are not yet reported as “unreachable code” warnings, despite the fact that they can be removed.

Floating-point NaN semantics and const

Operations on floating-point values (of type f32 and f64) are famously subtle. One of the reasons for this is the existence of NaN ("not a number") values which are used to represent e.g. the result of 0.0 / 0.0. What makes NaN values subtle is that more than one possible NaN value exists. A NaN value has a sign (that can be checked with f.is_sign_positive()) and a payload (that can be extracted with f.to_bits()). However, both the sign and payload of NaN values are entirely ignored by == (which always returns false). Despite very successful efforts to standardize the behavior of floating-point operations across hardware architectures, the details of when a NaN is positive or negative and what its exact payload is differ across architectures. To make matters even more complicated, Rust and its LLVM backend apply optimizations to floating-point operations when the exact numeric result is guaranteed not to change, but those optimizations can change which NaN value is produced. For instance, f * 1.0 may be optimized to just f. However, if f is a NaN, this can change the exact bit pattern of the result!

With this release, Rust standardizes on a set of rules for how NaN values behave. This set of rules is not fully deterministic, which means that the result of operations like (0.0 / 0.0).is_sign_positive() can differ depending on the hardware architecture, optimization levels, and the surrounding code. Code that aims to be fully portable should avoid using to_bits and should use f.signum() == 1.0 instead of f.is_sign_positive(). However, the rules are carefully chosen to still allow advanced data representation techniques such as NaN boxing to be implemented in Rust code. For more details on what the exact rules are, check out our documentation.

With the semantics for NaN values settled, this release also permits the use of floating-point operations in const fn. Due to the reasons described above, operations like (0.0 / 0.0).is_sign_positive() (which will be const-stable in Rust 1.83) can produce a different result when executed at compile-time vs at run-time. This is not a bug, and code must not rely on a const fn always producing the exact same result.
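For instance, ordinary floating-point arithmetic can now appear in a const fn. The conversion function below is just an illustrative example; these particular operations are exact in f64, so compile-time and run-time evaluation agree here.

```rust
// Float arithmetic in `const fn` is allowed as of Rust 1.82.
const fn celsius_to_fahrenheit(c: f64) -> f64 {
    c * 9.0 / 5.0 + 32.0
}

// Evaluated at compile time.
const BOILING_F: f64 = celsius_to_fahrenheit(100.0);

fn main() {
    assert_eq!(BOILING_F, 212.0);
    assert_eq!(celsius_to_fahrenheit(0.0), 32.0);
}
```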

Constants as assembly immediates

The const assembly operand now provides a way to use integers as immediates without first storing them in a register. As an example, we implement a syscall to write by hand:

const WRITE_SYSCALL: c_int = 0x01; // syscall 1 is `write`
const STDOUT_HANDLE: c_int = 0x01; // `stdout` has file handle 1
const MSG: &str = "Hello, world!\n";

let written: usize;

// Signature: `ssize_t write(int fd, const void buf[], size_t count)`
unsafe {
    core::arch::asm!(
        "mov rax, {SYSCALL} // rax holds the syscall number",
        "mov rdi, {OUTPUT}  // rdi is `fd` (first argument)",
        "mov rdx, {LEN}     // rdx is `count` (third argument)",
        "syscall            // invoke the syscall",
        "mov {written}, rax // save the return value",
        SYSCALL = const WRITE_SYSCALL,
        OUTPUT = const STDOUT_HANDLE,
        LEN = const MSG.len(),
        in("rsi") MSG.as_ptr(), // rsi is `buf *` (second argument)
        written = out(reg) written,
    );
}

assert_eq!(written, MSG.len());

Output:

Hello, world!

Playground link.

In the above, a statement such as LEN = const MSG.len() populates the format specifier LEN with an immediate that takes the value of MSG.len(). This can be seen in the generated assembly (the value is 14):

lea     rsi, [rip + .L__unnamed_3]
mov     rax, 1    # rax holds the syscall number
mov     rdi, 1    # rdi is `fd` (first argument)
mov     rdx, 14   # rdx is `count` (third argument)
syscall # invoke the syscall
mov     rax, rax  # save the return value

See the reference for more details.

Safely addressing unsafe statics

This code is now allowed:

static mut STATIC_MUT: Type = Type::new();
extern "C" {
    static EXTERN_STATIC: Type;
}
fn main() {
     let static_mut_ptr = &raw mut STATIC_MUT;
     let extern_static_ptr = &raw const EXTERN_STATIC;
}

In an expression context, STATIC_MUT and EXTERN_STATIC are place expressions. Previously, the compiler's safety checks were not aware that the raw ref operator did not actually affect the operand's place, treating it as a possible read or write to a pointer. No unsafety is actually present, however, as it just creates a pointer.

Relaxing this check may cause the unused_unsafe lint to fire on unsafe blocks that are only needed on older versions of Rust, which is a problem if you deny that lint. Annotate these unsafe blocks with #[allow(unused_unsafe)] if you wish to support multiple versions of Rust, as in this example diff:

 static mut STATIC_MUT: Type = Type::new();
 fn main() {
+    #[allow(unused_unsafe)]
     let static_mut_ptr = unsafe { std::ptr::addr_of_mut!(STATIC_MUT) };
 }

A future version of Rust is expected to generalize this to other expressions which would be safe in this position, not just statics.

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.82.0

Many people came together to create Rust 1.82.0. We couldn't have done it without all of you. Thanks!

Mozilla Performance Blog: Announcing PerfCompare: the new comparison tool!

About two years ago, I joined the performance test team to help build PerfCompare, an improved performance tool designed to replace Perfherder’s Compare View. Around that time, we introduced PerfCompare to garner enthusiasm and feedback in creating a new workflow that would reduce the cognitive load and confusion of its predecessor. And, if we’re being honest, a tool that would also be more enjoyable from a design perspective for comparing the results of performance tests. But most importantly, we wanted to add new, relevant features while keeping Firefox engineers foremost in mind.

PerfCompare's first home page

Started from the bottom… PerfCompare’s first home page

Now, after working with Senior Product Designer Dasha Andriyenko to create a sleek, intuitive UI/UX, integrating feedback from engineers and leaders across different teams, and achieving key milestones, we’re excited to announce that PerfCompare is live and ready to use at perf.compare.

PerfCompare's current home page

Now we’re on top! PerfCompare today!

Time to celebrate! 🎉

PerfCompare’s ultimate purpose is to become a tool that empowers developers to make performance testing a core part of their development process.

We are targeting the end of this year to deprecate Compare View and make PerfCompare the primary tool to help Firefox developers analyze the performance impact of their patches.

We are in the process of updating the Firefox source docs, but documentation for PerfCompare can be found at PerfCompare Documentation. It provides details on all the new features currently available on PerfCompare and instructions on how to use the tool.

Some key highlights regarding features include:

  • Allowing comparisons of up to three new revisions/patches versus the base revision of a repository (mozilla-central, autoland, etc.)

Search results with base and new revisions selected

  • Searching revisions by short hash, long hash, or author email
  • A more visible and separate workflow for comparing revisions over time

Compare over time with one revision selected

  • Editing the compared revisions on the results page to compute new comparisons for an updated results table without having to return to the home page
  • Expanded rows in the results table with graphs for the base and new revisions

Expanded row in results table with graph


And there’s much more in the works!

I’d like to extend a huge congratulations to the performance test team, Dasha, and everyone who has contributed feedback and suggestions to our user research, team meetings, and presentations. We owe PerfCompare’s launch and continued improvement to you!

If you have any questions or comments about PerfCompare, you can find us in the #PerfCompare matrix channel or join our #PerfCompareUserResearch channel. If you experience any issues, please report them on Bugzilla.

SpiderMonkey Development Blog: 75x faster: optimizing the Ion compiler backend

In September, machine learning engineers at Mozilla filed a bug report indicating that Firefox was consuming excessive memory and CPU resources while running Microsoft’s ONNX Runtime (a machine learning library) compiled to WebAssembly.

This post describes how we addressed this and some of our longer-term plans for improving WebAssembly performance in the future.

The problem

SpiderMonkey has two compilers for WebAssembly code. First, a Wasm module is compiled with the Wasm Baseline compiler, a compiler that generates decent machine code very quickly. This is good for startup time because we can start executing Wasm code almost immediately after downloading it. Andy Wingo wrote a nice blog post about this Baseline compiler.

When Baseline compilation is finished, we compile the Wasm module with our more advanced Ion compiler. This backend produces faster machine code, but compilation time is a lot higher.

The issue with the ONNX module was that the Ion compiler backend took a long time and used a lot of memory to compile it. On my Linux x64 machine, Ion-compiling this module took about 5 minutes and used more than 4 GB of memory. Even though this work happens on background threads, this was still too much overhead.

Optimizing the Ion backend

When we investigated this, we noticed that this Wasm module had some extremely large functions. For the largest one, Ion’s MIR control flow graph contained 132856 basic blocks. This uncovered some performance cliffs in our compiler backend.

VirtualRegister live ranges

In Ion’s register allocator, each VirtualRegister has a list of LiveRange objects. We were using a linked list for this, sorted by start position. This caused quadratic behavior when allocating registers: the allocator often splits live ranges into smaller ranges and we’d have to iterate over the list for each new range to insert it at the correct position to keep the list sorted. This was very slow for virtual registers with thousands of live ranges.

To address this, I tried a few different data structures. The first attempt was to use an AVL tree instead of a linked list and that was a big improvement, but the performance was still not ideal and we were also worried about memory usage increasing even more.

After this we realized we could store live ranges in a vector (instead of linked list) that’s optionally sorted by decreasing start position. We also made some changes to ensure the initial live ranges are sorted when we create them, so that we could just append ranges to the end of the vector.

The observation here was that the core of the register allocator, where it assigns registers or stack slots to live ranges, doesn’t actually require the live ranges to be sorted. We therefore now just append new ranges to the end of the vector and mark the vector unsorted. Right before the final phase of the allocator, where we again rely on the live ranges being sorted, we do a single std::sort operation on the vector for each virtual register with unsorted live ranges. Debug assertions are used to ensure that functions that require the vector to be sorted are not called when it’s marked unsorted.
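The append-then-sort-on-demand pattern is easy to sketch outside the engine. Below is a minimal Rust sketch of the idea; the names (`LiveRange`, `RangeList`, `ensure_sorted`) are hypothetical illustrations, not SpiderMonkey's actual types:

```rust
// Hypothetical sketch of an "optionally sorted vector": new live ranges
// are appended in O(1) and the vector is sorted only once, right before
// a phase that relies on sorted order.
#[derive(Debug, PartialEq, Eq, Clone, Copy)]
struct LiveRange {
    start: u32,
    end: u32,
}

struct RangeList {
    ranges: Vec<LiveRange>,
    sorted: bool,
}

impl RangeList {
    fn new() -> Self {
        RangeList { ranges: Vec::new(), sorted: true }
    }

    // O(1) amortized: just append and mark the vector unsorted,
    // instead of a linear scan to find the insertion point.
    fn add(&mut self, r: LiveRange) {
        self.ranges.push(r);
        self.sorted = false;
    }

    // Called once before the final phase that needs sorted order.
    fn ensure_sorted(&mut self) {
        if !self.sorted {
            // The post describes sorting by *decreasing* start position.
            self.ranges.sort_by(|a, b| b.start.cmp(&a.start));
            self.sorted = true;
        }
    }

    fn iter_sorted(&self) -> impl Iterator<Item = &LiveRange> {
        // Mirrors the debug assertions mentioned in the post.
        debug_assert!(self.sorted, "must call ensure_sorted first");
        self.ranges.iter()
    }
}
```

The win comes from replacing many O(n) sorted insertions with one O(n log n) sort per virtual register that actually needed it.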

Vectors are also better for cache locality and they let us use binary search in a few places. When I was discussing this with Julian Seward, he pointed out that Chris Fallin also moved away from linked lists to vectors in Cranelift’s port of Ion’s register allocator. It’s always good to see convergent evolution :)

This change from sorted linked lists to optionally-sorted vectors made Ion compilation of this Wasm module about 20 times faster, down to 14 seconds.

Semi-NCA

The next problem that stood out in performance profiles was the Dominator Tree Building compiler pass, in particular a function called ComputeImmediateDominators. This function determines the immediate dominator block for each basic block in the MIR graph.

The algorithm we used for this (based on A Simple, Fast Dominance Algorithm by Cooper et al) is relatively simple but didn’t scale well to very large graphs.
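For reference, the Cooper et al. algorithm is compact enough to sketch. Here is a minimal Rust version (the block numbering scheme and names are my own, not SpiderMonkey's code): blocks are numbered in reverse postorder with 0 as the entry, and the algorithm iterates to a fixed point, intersecting dominator paths:

```rust
// Sketch of the iterative algorithm from "A Simple, Fast Dominance
// Algorithm" (Cooper, Harvey, Kennedy). `preds[b]` lists the
// predecessors of block b; blocks are in reverse postorder, entry = 0.
fn immediate_dominators(preds: &[Vec<usize>]) -> Vec<usize> {
    const UNDEF: usize = usize::MAX;
    let n = preds.len();
    let mut idom = vec![UNDEF; n];
    idom[0] = 0; // the entry block dominates itself

    // Walk two "fingers" up the dominator tree until they meet.
    // Reverse postorder numbering means the deeper node has the
    // larger index.
    fn intersect(idom: &[usize], mut a: usize, mut b: usize) -> usize {
        while a != b {
            while a > b { a = idom[a]; }
            while b > a { b = idom[b]; }
        }
        a
    }

    let mut changed = true;
    while changed {
        changed = false;
        for b in 1..n { // reverse postorder, skipping the entry
            let mut new_idom = UNDEF;
            for &p in &preds[b] {
                if idom[p] == UNDEF { continue; } // not yet processed
                new_idom = if new_idom == UNDEF {
                    p
                } else {
                    intersect(&idom, new_idom, p)
                };
            }
            if new_idom != UNDEF && idom[b] != new_idom {
                idom[b] = new_idom;
                changed = true;
            }
        }
    }
    idom
}
```

On reducible graphs this usually converges in a couple of passes, but the repeated `intersect` walks are what hurt on graphs with 100k+ blocks.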

Semi-NCA (from Linear-Time Algorithms for Dominators and Related Problems by Loukas Georgiadis) is a different algorithm that’s also used by LLVM and the Julia compiler. I prototyped this and was surprised to see how much faster it was: it got our total compilation time down from 14 seconds to less than 8 seconds. For a single-threaded compilation, it reduced the time under ComputeImmediateDominators from 7.1 seconds to 0.15 seconds.

Fortunately it was easy to run both algorithms in debug builds and assert they computed the same immediate dominator for each basic block. After a week of fuzz-testing, no problems were found and we landed a patch that removed the old implementation and enabled the Semi-NCA code.

Sparse BitSets

For each basic block, the register allocator allocated a (dense) bit set with a bit for each virtual register. These bit sets are used to check which virtual registers are live at the start of a block.

For the largest function in the ONNX Wasm module, this used a lot of memory: 199477 virtual registers x 132856 basic blocks is at least 3.1 GB just for these bit sets! Because most virtual registers have short live ranges, these bit sets had relatively few bits set to 1.

We replaced these dense bit sets with a new SparseBitSet data structure that uses a hashmap to store 32 bits per entry. Because most of these hashmaps contain a small number of entries, it uses an InlineMap to optimize for this: it’s a data structure that stores entries either in a small inline array or (when the array is full) in a hashmap. We also optimized InlineMap to use a variant (a union type) for these two representations to save memory.
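The core of a sparse bit set is easy to sketch. A minimal Rust version (names hypothetical; the real SpiderMonkey `SparseBitSet` additionally uses the `InlineMap` small-size optimization described above, which this sketch omits) maps a "word index" to a 32-bit word:

```rust
use std::collections::HashMap;

// Sketch of a sparse bit set: memory is proportional to the number of
// 32-bit words that contain at least one set bit, rather than to the
// total number of virtual registers.
struct SparseBitSet {
    words: HashMap<u32, u32>, // word index -> 32 bits
}

impl SparseBitSet {
    fn new() -> Self {
        SparseBitSet { words: HashMap::new() }
    }

    fn insert(&mut self, bit: u32) {
        // 32 bits per entry: bit 37 lives in word 1, position 5.
        *self.words.entry(bit / 32).or_insert(0) |= 1u32 << (bit % 32);
    }

    fn contains(&self, bit: u32) -> bool {
        self.words
            .get(&(bit / 32))
            .map_or(false, |w| w & (1u32 << (bit % 32)) != 0)
    }
}
```

A block where only a handful of the ~200k virtual registers are live then stores a handful of map entries instead of a 24 KB dense bit vector.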

This saved at least 3 GB of memory but also improved the compilation time for the Wasm module to 5.4 seconds.

Faster move resolution

The last issue that showed up in profiles was a function in the register allocator called createMoveGroupsFromLiveRangeTransitions. After the register allocator assigns a register or stack slot to each live range, this function is responsible for connecting pairs of live ranges by inserting moves.

For example, if a value is stored in a register but is later spilled to memory, there will be two live ranges for its virtual register. This function then inserts a move instruction to copy the value from the register to the stack slot at the start of the second live range.

This function was slow because it had a number of loops with quadratic behavior: for a move’s destination range, it would do a linear lookup to find the best source range. We optimized the main two loops to run in linear time instead of being quadratic, by taking more advantage of the fact that live ranges are sorted.
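The shape of that optimization can be sketched with a two-pointer merge. This is an illustrative Rust sketch, not the actual `createMoveGroupsFromLiveRangeTransitions` code: because both lists are sorted by start position, a single forward-moving cursor replaces a per-range linear search:

```rust
struct Range {
    start: u32,
    end: u32,
}

// For each destination range, find the index of the last source range
// starting at or before it. Because `dests` and `sources` are both
// sorted by start position, the cursor `i` only ever moves forward,
// making the whole pass O(n + m) instead of O(n * m).
fn match_sources(dests: &[Range], sources: &[Range]) -> Vec<usize> {
    let mut out = Vec::with_capacity(dests.len());
    let mut i = 0; // cursor into `sources`, never moves backwards
    for d in dests {
        while i + 1 < sources.len() && sources[i + 1].start <= d.start {
            i += 1;
        }
        out.push(i);
    }
    out
}
```

The same "exploit the sort order" trick is what turned the two quadratic loops in the move-resolution pass into linear ones.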

With these changes, Ion can compile the ONNX Wasm module in less than 3.9 seconds on my machine, more than 75x faster than before these changes.

Adobe Photoshop

These changes not only improved performance for the ONNX Runtime module, but also for a number of other WebAssembly modules. A large Wasm module downloaded from the free online Adobe Photoshop demo can now be Ion-compiled in 14 seconds instead of 4 minutes.

The JetStream 2 benchmark has a HashSet module that was affected by the quadratic move resolution code. Ion compilation time for it improved from 2.8 seconds to 0.2 seconds.

New Wasm compilation pipeline

Even though these are great improvements, spending at least 14 seconds (on a fast machine!) to fully compile Adobe Photoshop on background threads still isn’t an amazing user experience. We expect this to only get worse as more large applications are compiled to WebAssembly.

To address this, our WebAssembly team is making great progress rearchitecting the Wasm compiler pipeline. This work will make it possible to Ion-compile individual Wasm functions as they warm up instead of compiling everything immediately. It will also unlock exciting new capabilities such as (speculative) inlining.

Stay tuned for updates on this as we start rolling out these changes in Firefox.

- Jan de Mooij, engineer on the SpiderMonkey team

Hacks.Mozilla.OrgLlamafile v0.8.14: a new UI, performance gains, and more

We’ve just released Llamafile 0.8.14, the latest version of our popular open source AI tool. A Mozilla Builders project, Llamafile turns model weights into fast, convenient executables that run on most computers, making it easy for anyone to get the most out of open LLMs using the hardware they already have.

New chat interface

The key feature of this new release is our colorful new command line chat interface. When you launch a Llamafile we now automatically open this new chat UI for you, right there in the terminal. This new interface is fast, easy to use, and an all around simpler experience than the Web-based interface we previously launched by default. (That interface, which our project inherits from the upstream llama.cpp project, is still available and supports a range of features, including image uploads. Simply point your browser at port 8080 on localhost).


Other recent improvements

This new chat UI is just the tip of the iceberg. In the months since our last blog post here, lead developer Justine Tunney has been busy shipping a slew of new releases, each of which has moved the project forward in important ways. Here are just a few of the highlights:

Llamafiler: We’re building our own clean sheet OpenAI-compatible API server, called Llamafiler. This new server will be more reliable, stable, and most of all faster than the one it replaces. We’ve already shipped the embeddings endpoint, which runs three times as fast as the one in llama.cpp. Justine is currently working on the completions endpoint, at which point Llamafiler will become the default API server for Llamafile.

Performance improvements: With the help of open source contributors like k-quant inventor @Kawrakow, Llamafile has enjoyed a series of dramatic speed boosts over the last few months. In particular, pre-fill (prompt evaluation) speed has improved dramatically on a variety of architectures:

  • Intel Core i9 went from 100 tokens/second to 400 (4x).
  • AMD Threadripper went from 300 tokens/second to 2,400 (8x).
  • Even the modest Raspberry Pi 5 jumped from 8 tokens/second to 80 (10x!).

When combined with the new high-speed embedding server described above, Llamafile has become one of the fastest ways to run complex local AI applications that use methods like retrieval augmented generation (RAG).

Support for powerful new models: Llamafile continues to keep pace with progress in open LLMs, adding support for dozens of new models and architectures, ranging in size from 405 billion parameters all the way down to 1 billion. Here are just a few of the new Llamafiles available for download on Hugging Face:

  • Llama 3.2 1B and 3B: offering extremely impressive performance and quality for their small size. (Here’s a video from our own Mike Heavers showing it in action.)
  • Llama 3.1 405B: a true “frontier model” that’s possible to run at home with sufficient system RAM.
  • OLMo 7B: from our friends at the Allen Institute, OLMo is one of the first truly open and transparent models available.
  • TriLM: a new “1.58 bit” tiny model that is optimized for CPU inference and points to a near future where matrix multiplication might no longer rule the day.

Whisperfile, speech-to-text in a single file: Thanks to contributions from community member @cjpais, we’ve created Whisperfile, which does for whisper.cpp what Llamafile did for llama.cpp: that is, turns it into a multi-platform executable that runs nearly everywhere. Whisperfile thus makes it easy to use OpenAI’s Whisper technology to efficiently convert speech into text, no matter which kind of hardware you have.

Get involved

Our goal is for Llamafile to become a rock-solid foundation for building sophisticated locally-running AI applications. Justine’s work on the new Llamafiler server is a big part of that equation, but so is the ongoing work of supporting new models and optimizing inference performance for as many users as possible. We’re proud and grateful that some of the project’s biggest breakthroughs in these areas, and others, have come from the community, with contributors like @Kawrakow, @cjpais, @mofosyne, and @Djip007 routinely leaving their mark.

We invite you to join them, and us. We welcome issues and PRs in our GitHub repo. And we welcome you to become a member of Mozilla’s AI Discord server, which has a dedicated channel just for Llamafile where you can get direct access to the project team. Hope to see you there!


The post Llamafile v0.8.14: a new UI, performance gains, and more appeared first on Mozilla Hacks - the Web developer blog.

Don MartiAnother easy-ish state law: the No Second-class Citizenship Act

Tired of Big Tech companies giving consumer protections, fraud protections, and privacy protections to their users in other countries but not to people at home in the USA? Here’s another state law we could use, and I bet it could be a two-page PDF.

If a company has more than 10% of our state’s residents as customers or users, and also does business in 50 or more countries, then if they offer a privacy or consumer protection feature in a non-US location they must also offer it in our state within 90 days.

Have it enforced Texas SB 8 style, by individuals, so it’s harder for Big Tech sockpuppet orgs to challenge.

Reference

Antitrust challenge to Facebook’s ‘superprofiling’ finally wraps in Germany — with Meta agreeing to data limits | TechCrunch We’ve asked Meta to confirm whether changes will be implemented globally — or only inside the German market where the Bundeskartellamt has jurisdiction.

Related

there ought to be a law (Big Tech lobbyists are expensive—instead of grinding out the PDFs they expect, make them fight an unpredictable distributed campaign of random-ish ideas, coded into bills that take the side of local small businesses?)

Bonus links

How the long-gone Habsburg Empire is still visible in Eastern European bureaucracies today The formal institutions of the empire ceased to exist with the collapse of the Habsburg Empire after World War I, breaking up into separate nation states that have seen several waves of drastic institutional changes since. We might therefore wonder whether differences in trust and corruption across areas that belonged to different empires in the past really still survive to this day.

TikTok knows its app is harming kids, new internal documents show : NPR (this kind of stuff is why I’ll never love your brand—if a brand is fine with advertising on surveillance apps with all we know about how they work, then I’m enough opposed to them on fundamental issues that all transactions will be based on lack of trust.)

Cloudflare Destroys Another Patent Troll, Gets Its Patents Released To The Public (time for some game theory)

Conceptual models of space colonization (One that’s missing: Kurt Vonnegut’s concept involving large-scale outward transfer of genetic material. Probably most likely to happen if you add in Von Neumann machines and the systems required to grow live colonists from genetic data—which don’t exist but are not physically or economically impossible…)

Cash incinerator OpenAI secures its $6.6 billion lifeline — ‘in the spirit of a donation’ (fwiw, there are still a bunch of copyright cases out there, too. (AI legal links) Related: The Subprime AI Crisis)

The cheap chocolate system The giant chocolate companies want cocoa beans to be a commodity. They don’t want to worry about origin or yield–they simply want to buy indistinguishable cheap cacao. In fact, the buyers at these companies feel like they have no choice but to push for mediocre beans at cut rate prices, regardless of the human cost. (so it’s like adtech you eat?)

How web bloat impacts users with slow devices CPU performance for web apps hasn’t scaled nearly as quickly as bandwidth so, while more of the web is becoming accessible to people with low-end connections, more of the web is becoming inaccessible to people with low-end devices even if they have high-end connections.

Niko MatsakisThe `Overwrite` trait and `Pin`

In July, boats presented a compelling vision in their post pinned places. With the Overwrite trait that I introduced in my previous post, however, I think we can get somewhere even more compelling, albeit at the cost of a tricky transition. As I will argue in this post, the Overwrite trait effectively becomes a better version of the existing Unpin trait, one that affects not only pinned references but also regular &mut references. Through this it’s able to make Pin fit much more seamlessly with the rest of Rust.

Just show me the dang code

Before I dive into the details, let’s start by reviewing a few examples to show you what we are aiming at (you can also skip to the TL;DR, in the FAQ).

I’m assuming a few changes here:

  • Adding an Overwrite trait and changing most types to be !Overwrite by default.
    • The Option<T> (and maybe others) would opt-in to Overwrite, permitting x.take().
  • Integrating pin into the borrow checker, extending auto-ref to also “auto-pin” and produce a Pin<&mut T>. The borrow checker only permits you to pin values that you own. Once a place has been pinned, you are not permitted to move out from it anymore (unless the value is overwritten).

The first change is “mildly” backwards incompatible. I’m not going to worry about that in this post, but I’ll cover the ways I think we can make the transition in a follow up post.

Example 1: Converting a generator into an iterator

We would really like to add a generator syntax that lets you write an iterator more conveniently.1 For example, given some slice strings: &[String], we should be able to define a generator that iterates over the string lengths like so:

fn do_computation() -> usize {
    let hashes = gen {
        let strings: Vec<String> = compute_input_strings();
        for string in &strings {
            yield compute_hash(&string);
        }
    };
    
    // ...
}

But there is a catch here! To permit the borrow of strings, which is owned by the generator, the generator will have to be pinned.2 That means that generators cannot directly implement Iterator, because generators need a Pin<&mut Self> signature for their next methods. It is possible, however, to implement Iterator for Pin<&mut G> where G is a generator.3

In today’s Rust, that means that using a generator as an iterator would require explicit pinning:

fn do_computation() -> usize {
    let hashes = gen {....};
    let hashes = pin!(hashes); // <-- explicit pin
    if let Some(h) = hashes.next() {
        // process first hash
    };
    // ...
}

With pinned places, this feels more builtin, but it still requires users to actively think about pinning for even the most basic use case:

fn do_computation() -> usize {
    let hashes = gen {....};
    let pinned mut hashes = hashes;
    if let Some(h) = hashes.next() {
        // process first hash
    };
    // ...
}

Under this proposal, users would simply be able to ignore pinning altogether:

fn do_computation() -> usize {
    let mut hashes = gen {....};
    if let Some(h) = hashes.next() {
        // process first hash
    };
    // ...
}

Pinning is still happening: once a user has called next, they would not be able to move hashes after that point. If they tried to do so, the borrow checker (which now understands pinning natively) would give an error like:

error[E0596]: cannot borrow `hashes` as mutable, as it is not declared as mutable
 --> src/lib.rs:4:22
  |
4 |     if let Some(h) = hashes.next() {
  |                      ------ value in `hashes` was pinned here
  |     ...
7 |     move_somewhere_else(hashes);
  |                         ^^^^^^ cannot move a pinned value
help: if you want to move `hashes`, consider using `Box::pin` to allocate a pinned box
  |
3 |     let mut hashes = Box::pin(gen { .... });
  |                      +++++++++            +

As noted, it is possible to move hashes after pinning, but only if you pin it into a heap-allocated box. So we can advise users how to do that.

Example 2: Implementing the MaybeDone future

The pinned places post included an example future called MaybeDone. I’m going to implement that same future in the system I describe here. There are some comments in the example comparing it to the version from the pinned places post.

enum MaybeDone<F: Future> {
    //         ---------
    //         I'm assuming we are in Rust.Next, and so the default
    //         bounds for `F` do not include `Overwrite`.
    //         In other words, `F: ?Overwrite` is the default
    //         (just as it is with every other trait besides `Sized`).
    
    Polling(F),
    //      -
    //      We don't need to declare `pinned F`.
    
    Done(Option<F::Output>),
}

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(self: Pin<&mut Self>, cx: &mut Context<'_>) {
        //        --------------------
        //        I'm not bothering with the `&pinned mut self`
        //        sugar here, though certainly we could still
        //        add it.
        if let MaybeDone::Polling(fut) = self {
            //                    ---
            //       Just as in the original example,
            //       we are able to project from `Pin<&mut Self>`
            //       to a `Pin<&mut F>`.
            //
            //       The key is that we can safely project
            //       from an owner of type `Pin<&mut Self>`
            //       to its field of type `Pin<&mut F>`
            //       so long as the owner type `Self: !Overwrite`
            //       (which is the default for structs in Rust.Next).
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Some(res));
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(&mut self) -> Option<F::Output> {
        //         ---------
        //   In pinned places, this method had to be
        //   `&pinned mut self`, but under this design,
        //   it can be a regular `&mut self`.
        //   
        //   That's because `Pin<&mut Self>` becomes
        //   a subtype of `&mut Self`.
        if let MaybeDone::Done(res) = self {
            res.take()
        } else {
            None
        }
    }
}
Example 3: Implementing the Join combinator

Let’s complete the journey by implementing a Join future:

struct Join<F1: Future, F2: Future> {
    // These fields do not have to be declared `pinned`:
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

impl<F1, F2> Future for Join<F1, F2>
where
    F1: Future,
    F2: Future,
{
    type Output = (F1::Output, F2::Output);

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        //  --------------------
        // Again, I've dropped the sugar here.
        
        // This looks just the same as in the
        // "Pinned Places" example. This again
        // leans on the ability to project
        // from a `Pin<&mut Self>` owner so long as
        // `Self: !Overwrite` (the default for structs
        // in Rust.Next).
        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);
        
        if self.fut1.is_done() && self.fut2.is_done() {
            // This code looks the same as it did with pinned places,
            // but there is an important difference. `take_output`
            // is now an `&mut self` method, not a `Pin<&mut Self>`
            // method. This demonstrates that we can also get
            // a regular `&mut` reference to our fields.
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}

How I think about pin

OK, now that I’ve lured you in with code examples, let me drive you away by diving into the details of Pin. I’m going to cover the way that I think about Pin. It is similar to but different from how Pin is presented in the pinned places post – in particular, I prefer to think about places that pin their values and not pinned places. In any case, Pin is surprisingly subtle, and I recommend that if you want to go deeper, you read boats’ history of Pin post and/or the stdlib documentation for Pin.

The Pin<P> type is a modifier on the pointer P

The Pin<P> type is unusual in Rust. It looks similar to a “smart pointer” type, like Arc<T>, but it functions differently. Pin<P> is not a pointer, it is a modifier on another pointer, so

  • a Pin<&T> represents a pinned reference,
  • a Pin<&mut T> represents a pinned mutable reference,
  • a Pin<Box<T>> represents a pinned box,

and so forth.

You can think of a Pin<P> type as being a pointer of type P that refers to a place (Rust jargon for a location in memory that stores a value) whose value v has been pinned. A pinned value v can never be moved to another place in memory. Moreover, v must be dropped before its place can be reassigned to another value.

Pinning is part of the “lifecycle” of a place

The way I think about it, every place in memory has a lifecycle:

flowchart TD
Uninitialized 
Initialized
Pinned

Uninitialized --
    p = v where v: T
--> Initialized

Initialized -- 
    move out, drop, or forget
--> Uninitialized

Initialized --
    pin value v in p
    (only possible when T is !Unpin)
--> Pinned

Pinned --
    drop value
--> Uninitialized

Pinned --
    move out or forget
--> UB

Uninitialized --
    free the place
--> Freed

UB[💥 Undefined behavior 💥]
  

When first allocated, a place p is uninitialized – that is, p has no value at all.

An uninitialized place can be freed. This corresponds to e.g. popping a stack frame or invoking free.

p may at some point become initialized by an assignment like p = v. At that point, there are three ways to transition back to uninitialized:

  • The value v could be moved somewhere else, e.g. with let p2 = p. At that point, p goes back to being uninitialized.
  • The value v can be forgotten, with std::mem::forget(p). At this point, no destructor runs, but p goes back to being considered uninitialized.
  • The value v can be dropped, which occurs when the place p goes out of scope. At this point, the destructor runs, and p goes back to being considered uninitialized.

Alternatively, the value v can be pinned in place:

  • At this point, v cannot be moved again, and the only way for p to be reused is for v to be dropped.

Once a value is pinned, moving or forgetting the value is not allowed. These actions are “undefined behavior”, and safe Rust must not permit them to occur.
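The three initialized-to-uninitialized transitions can be observed directly with a destructor counter. A small self-contained example (names are mine, for illustration) showing that only dropping runs the destructor, while moving and forgetting do not:

```rust
use std::cell::Cell;
use std::mem;

// A value that counts how many times its destructor runs.
struct Tracked<'a> {
    drops: &'a Cell<u32>,
}

impl Drop for Tracked<'_> {
    fn drop(&mut self) {
        self.drops.set(self.drops.get() + 1);
    }
}

fn lifecycle_demo() -> u32 {
    let drops = Cell::new(0);

    // Move out: the place `a` becomes uninitialized; no destructor runs,
    // the value now lives in `b`.
    let a = Tracked { drops: &drops };
    let b = a; // `a` can no longer be used

    // Drop: the destructor runs and the place becomes uninitialized.
    drop(b);
    assert_eq!(drops.get(), 1);

    // Forget: the place becomes uninitialized *without* running the
    // destructor -- exactly the transition that is UB for pinned values.
    let c = Tracked { drops: &drops };
    mem::forget(c);
    assert_eq!(drops.get(), 1); // still 1: no destructor ran

    drops.get()
}
```

For a pinned place, the move-out and forget arrows are the ones that lead to the "undefined behavior" state in the diagram; only the drop arrow remains legal.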

A digression on forgetting vs other ways to leak

As most folks know, Rust does not guarantee that destructors run. If you have a value v whose destructor never runs, we say that value is leaked. There are however two ways to leak a value, and they are quite different in their impact:

  • Option A: Forgetting. Using std::mem::forget, you can forget the value v. The place p that was storing that value will go from initialized to uninitialized, at which point the place p can be freed.
    • Forgetting a value is undefined behavior if that value has been pinned, however!
  • Option B: Leak the place. When you leak a place, it just stays in the initialized or pinned state forever, so its value is never dropped. This can happen, for example, with a ref-count cycle.
    • This is safe even if the value is pinned!

In retrospect, I wish that Option A did not exist – I wish that we had not added std::mem::forget. We did so as part of working through the impact of ref-count cycles. It seemed equivalent at the time (“the dtor doesn’t run anyway, why not make it easy to do”) but I think this diagram shows why adding forget made things permanently more complicated for relatively little gain.4 Oh well! Can’t win ’em all.

Values of types implementing Unpin cannot be pinned

There is one subtle aspect here: not all values can be pinned. If a type T implements Unpin, then values of type T cannot be pinned. When you have a pinned reference to them, they can still squirm out from under you via swap or other techniques. Another way to say the same thing is to say that values can only be pinned if their type is !Unpin (“does not implement Unpin”).

Types that are !Unpin can be called address sensitive, meaning that once they are pinned, there can be pointers to the internals of that value that will be invalidated if the address changes. Types that implement Unpin would therefore be address insensitive. Traditionally, all Rust types have been address insensitive, and therefore Unpin is an auto trait, implemented by most types by default.
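This distinction is visible in today's std APIs. A short example: for an Unpin type like i32, `Pin::new` and `Pin::get_mut` work in safe code, while adding `PhantomPinned` makes a type !Unpin, so it must instead be pinned via `Box::pin` (or the `pin!` macro):

```rust
use std::marker::PhantomPinned;
use std::pin::Pin;

// Most types are Unpin (address insensitive): Pin::new works, and the
// pinned reference behaves like a plain &mut.
fn pin_unpin_value() -> i32 {
    let mut x = 5;
    let p: Pin<&mut i32> = Pin::new(&mut x); // OK because i32: Unpin
    *p.get_mut() + 1 // get_mut also requires Unpin
}

// PhantomPinned opts a type out of Unpin, marking it address sensitive.
struct AddressSensitive {
    value: i32,
    _pin: PhantomPinned,
}

fn pin_not_unpin() -> i32 {
    // Pin::new(&mut ...) would not compile here: AddressSensitive: !Unpin.
    let boxed: Pin<Box<AddressSensitive>> =
        Box::pin(AddressSensitive { value: 7, _pin: PhantomPinned });
    // Shared access still works through Pin's Deref; getting a plain
    // &mut back out would require unsafe code.
    boxed.value
}
```

In the proposal's terms, Overwrite would take over this role: !Overwrite types are the pinnable, address-sensitive ones.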

Pin<&mut T> is really a “maybe pinned” reference

Looking at the state machine as I describe it here, we can see that possessing a Pin<&mut T> isn’t really a pinned mutable reference, in the sense that it doesn’t always refer to a place that is pinning its value. If T: Unpin, then it’s just a regular reference. But if T: !Unpin, then a pinned reference guarantees that the value it refers to is pinned in place.

This fits with the name Unpin, which I believe was meant to convey the idea that, even if you have a pinned reference to a value of type T: Unpin, that value can become unpinned. I’ve heard the metaphor of “if T: Unpin, you can lift out the pin, swap in a different value, and put the pin back”.

Pin picked a peck of pickled pain

Everyone agrees that Pin is confusing and a pain to use. But what makes it such a pain?

If you are attempting to author a Pin-based API, there are two primary problems:

  1. Pin<&mut Self> methods can’t make use of regular &mut self methods.
  2. Pin<&mut Self> methods can’t access fields by default. Crates like pin-project-lite make this easier but still require learning obscure concepts like structural pinning.

If you are attempting to consume a Pin-based API, the primary annoyance is that getting a pinned reference is hard. You can’t just call Pin<&mut Self> methods normally, you have to remember to use Box::pin or pin! first. (We saw this in Example 1 from this post.)

My proposal in a nutshell

This post is focused on a proposal with two parts:

  1. Making Pin-based APIs easier to author by replacing the Unpin trait with Overwrite.
  2. Making Pin-based APIs easier to call by integrating pinning into the borrow checker.

I’m going to walk through those in turn.

Making Pin-based APIs easier to author

Overwrite as the better Unpin

The first part of my proposal is a change I call s/Unpin/Overwrite/. The idea is to introduce Overwrite and then change the “place lifecycle” to reference Overwrite instead of Unpin:

flowchart TD
Uninitialized 
Initialized
Pinned

Uninitialized --
    p = v where v: T
--> Initialized

Initialized -- 
    move out, drop, or forget
--> Uninitialized

Initialized --
    pin value v in p
    (only possible when T is 👉!Overwrite👈)
--> Pinned

Pinned --
    drop value
--> Uninitialized

Pinned --
    move out or forget
--> UB

Uninitialized --
    free the place
--> Freed

UB[💥 Undefined behavior 💥]

For s/Unpin/Overwrite/ to work well, we have to make all !Unpin types also be !Overwrite. This is not, strictly speaking, backwards compatible, since today !Unpin types (like all types) can be overwritten and swapped. I think eventually we want every type to be !Overwrite by default, but I don’t think we can change that default in a general way without an edition. But for !Unpin types in particular I suspect we can get away with it, because !Unpin types are pretty rare, and the simplification we get from doing so is pretty large. (And, as I argued in the previous post, there is no loss of expressiveness; code today that overwrites or swaps !Unpin values can be locally rewritten.)

Why swaps are bad without s/Unpin/Overwrite/

Today, Pin<&mut T> cannot be converted into an &mut T reference unless T: Unpin.5 This is because it would allow safe Rust code to create Undefined Behavior by swapping the referent of the &mut T reference and hence moving the pinned value. By requiring that T: Unpin, the DerefMut impl is effectively limiting itself to references that are not, in fact, in the “pinned” state, but just in the “initialized” state.

As a result, Pin<&mut T> and &mut T methods don’t interoperate today

This leads directly to our first two pain points. To start, from a Pin<&mut Self> method, you can only invoke &self methods (via the Deref impl) or other Pin<&mut Self> methods. This schism separates out the “regular” methods of a type from its pinned methods; it also means that methods doing field assignments don’t compile:

fn increment_field(self: Pin<&mut Self>) {
    self.field = self.field + 1;
}

This errors because compiling a field assignment requires a DerefMut impl and Pin<&mut Self> doesn’t have one.

With s/Unpin/Overwrite/, Pin<&mut Self> is a subtype of &mut Self

s/Unpin/Overwrite/ allows us to implement DerefMut for all pinned types. This is because, unlike Unpin, Overwrite affects how &mut works, and hence &mut T would preserve the pinned state for the place it references. Consider the two possibilities for the value of type T referred to by the &mut T:

  • If T: Overwrite, then the value is not pinnable, and so the place cannot be in the pinned state.
  • If T: !Overwrite, the value could be pinned, but we also cannot overwrite or swap it, and so pinning is preserved.

This implies that Pin<&mut T> is in fact a generalized version of &mut T. Every &'a mut T keeps the value pinned for the duration of its lifetime 'a, but a Pin<&mut T> ensures the value stays pinned for the lifetime of the underlying storage.

If we have a DerefMut impl, then Pin<&mut Self> methods can freely call &mut self methods. Big win!

Today you must categorize fields as “structurally pinned” or not

The other pain point today with Pin is that we have no native support for “pin projection”6. That is, you cannot safely go from a Pin<&mut Self> reference to a Pin<&mut F> reference referring to some field self.f without relying on unsafe code.

The most common practice today is to use a custom crate like pin-project-lite. Even then, you also have to make a choice for each field between whether you want to be able to get a Pin<&mut F> reference or a normal &mut F reference. Fields for which you can get a pinned reference are called structurally pinned and the criteria for which one you should use is rather subtle. Ultimately this choice is required because Pin<&mut F> and &mut F don’t play nicely together.
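As a rough sketch of what that unsafe code looks like when written by hand (the Wrapper type and its field names are hypothetical; pin-project-lite generates essentially this for you, with inner marked as structurally pinned and polls not):

```rust
use std::pin::Pin;

// A wrapper with one "structurally pinned" field (inner) and one
// ordinary field (polls).
struct Wrapper<F> {
    inner: F,     // we hand out Pin<&mut F> to this one
    polls: usize, // plain &mut access is fine here
}

impl<F> Wrapper<F> {
    // Hand-written pin projection, the kind of unsafe code that
    // projection crates generate for you today.
    fn project(self: Pin<&mut Self>) -> (Pin<&mut F>, &mut usize) {
        // SAFETY: `inner` is never moved out of a pinned Wrapper.
        unsafe {
            let this = self.get_unchecked_mut();
            (Pin::new_unchecked(&mut this.inner), &mut this.polls)
        }
    }
}
```

The subtlety is that nothing in the types forces the SAFETY comment to be true; that discipline is exactly what "structural pinning" names.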

Pin projection is safe from any !Overwrite type

With s/Unpin/Overwrite/, we can scrap the idea of structural pinning. Instead, if we have a field owner self: Pin<&mut Self>, pinned projection is allowed so long as Self: !Overwrite. That is, if Self: !Overwrite, then I can always get a Pin<&mut F> reference to some field self.f of type F. How is that possible?

Actually, the full explanation relies on borrow checker extensions I haven’t introduced yet. But let’s see how far we get without them, so that we can see the gap that the borrow checker has to close.

Assume we are creating a Pin<&'a mut F> reference r to some field self.f, where self: Pin<&mut Self>:

  • We are creating a Pin<&'a mut F> reference to the value in self.f:
    • If F: Overwrite, then the value is not pinnable, so this is equivalent to an ordinary &mut F and we have nothing to prove.
    • Else, if F: !Overwrite, then we have to show that the value in self.f will not move for the remainder of its lifetime.
      • Pin projection from *self is only valid if Self: !Overwrite and self: Pin<&'b mut Self>, so we know that the value in *self is pinned for the remainder of its lifetime by induction.
      • We have to show then that the value v_f in self.f will never be moved until the end of its lifetime.

There are three ways to move a value out of self.f:

  • You can assign a new value to self.f, like self.f = ....
    • This will run the destructor, ending the lifetime of the value v_f.
  • You can create a mutable reference r = &mut self.f and then…
    • assign a new value to *r: but that will be an error because F: !Overwrite.
    • swap the value in *r with another: but that will be an error because F: !Overwrite.

QED. =)

Making Pin-based APIs easier to call

Today, getting a Pin<&mut> requires using the pin! macro, going through Box::pin, or some similar explicit action. This adds “syntactic salt” to calling a Pin<&mut Self> method: there is no built-in way to safely create a pinned reference, only the pin! macro and other abstractions rooted in unsafe (e.g., Box::pin). This is fine but introduces ergonomic hurdles.

We want to make calling a Pin<&mut Self> method as easy as calling an &mut self method. To do this, we need to extend the compiler’s notion of “auto-ref” to include the option of “auto-pin-ref”:

// Instead of this:
let future: Pin<&mut impl Future> = pin!(async { ... });
future.poll(cx);

// We would do this:
let mut future: impl Future = async { ... };
future.poll(cx); // <-- Wowee!

Just as a typical method call like vec.len() expands to Vec::len(&vec), the compiler would be expanding future.poll(cx) to something like so:

Future::poll(&pinned mut future, cx)
//           ^^^^^^^^^^^ but wait, what’s this?

This expansion though includes a new piece of syntax that doesn’t exist today, the &pinned mut operation. (I’m lifting this syntax from boats’ pinned places proposal.)

Whereas &mut var results in an &mut T reference (assuming var: T), &pinned mut var borrow would result in a Pin<&mut T>. It would also make the borrow checker consider the value in future to be pinned. That means that it is illegal to move out from var. The pinned state continues indefinitely until var goes out of scope or is overwritten by an assignment like var = ... (which drops the heretofore pinned value). This is a fairly straightforward extension to the borrow checker’s existing logic.

New syntax not strictly required

It’s worth noting that we don’t actually need the &pinned mut syntax (which means we don’t need the pinned keyword). We could make it so that the only way to get the compiler to do a pinned borrow is via auto-ref. We could even add a silly trait to make it explicit, like so:

trait Pinned {
    fn pinned(self: Pin<&mut Self>) -> Pin<&mut Self>;
}

impl<T: ?Sized> Pinned for T {
    fn pinned(self: Pin<&mut T>) -> Pin<&mut T> {
        self
    }
}

Now you can write var.pinned(), which the compiler would desugar to Pinned::pinned(&rustc#pinned mut var). Here I am using rustc#pinned to denote an “internal keyword” that users can’t type.7

Frequently asked questions

So…there’s a lot here. What’s the key takeaways?

The shortest version of this post I can manage is8

  • Pinning fits smoothly into Rust if we make two changes:
    • Limit the ability to swap types by default, making Pin<&mut T> a subtype of &mut T and enabling uniform pin projection.
    • Integrate pinning in the auto-ref rules and the borrow checker.

Why do you only mention swaps? Doesn’t Overwrite affect other things?

Indeed the Overwrite trait as I defined it is overkill for pinning. To be more precise, we might imagine two special traits that affect how and when we can drop or move values:

trait DropWhileBorrowed: Sized { }
trait Swap: DropWhileBorrowed { }

  • Given a reference r: &mut T, overwriting its referent *r with a new value would require T: DropWhileBorrowed;
  • Swapping two values of type T requires that T: Swap.
    • This is true regardless of whether they are borrowed or not.

Today, every type is Swap. What I argued in the previous post is that we should make the default be that user-defined types implement neither of these two traits (over an edition, etc etc). Instead, you could opt-in to both of them at once by implementing Overwrite.

But we could get all the pin benefits by making a weaker change. Instead of having types opt out from both traits by default, they could only opt out of Swap, but continue to implement DropWhileBorrowed. This is enough to make pinning work smoothly. To see why, recall the pinning state diagram: dropping the value in *r (permitted by DropWhileBorrowed) will exit the “pinned” state and return to the “uninitialized” state. This is valid. Swapping, in contrast, is UB.

Two subtle observations here worth calling out:

  1. Both DropWhileBorrowed and Swap have Sized as a supertrait. Today in Rust you can’t drop a &mut dyn SomeTrait value and replace it with another, for example. I think it’s a bit unclear whether unsafe code could do this if it knows the dynamic type of the value behind the dyn. But under this model, it would only be valid for unsafe code to do that drop if (a) it knew the dynamic type and (b) the dynamic type implemented DropWhileBorrowed. Same applies to Swap.
  2. The Swap trait applies longer than just the duration of a borrow. This is because, once you pin a value to create a Pin<&mut T> reference, the state of being pinned persists even after that reference has ended. I say a bit more about this in another FAQ below.

EDIT: An earlier draft of this post gave this trait a different name. That was wrong, as described in the FAQ on subtle reasoning.

Why then did you propose opting out from both overwrites and swaps?

Opting out of overwrites (i.e., making the default be neither DropWhileBorrowed nor Swap) gives us the additional benefit of truly immutable fields. This will make cross-function borrows less of an issue, as I described in my previous post, and make some other things (e.g., variance) less relevant. Moreover, I don’t think overwriting an entire reference like *r is that common, versus accessing individual fields. And in the cases where people do do it, it is easy to make a dummy struct with a single field, and then overwrite r.value instead of *r. To me, therefore, distinguishing between DropWhileBorrowed and Swap doesn’t obviously carry its weight.

Can you come up with a more semantic name for Overwrite?

All the trait names I’ve given so far (Overwrite, DropWhileBorrowed, Swap) answer the question of “what operation does this trait allow”. That’s pretty common for traits (e.g., Clone or, for that matter, Unpin) but it is sometimes useful to think instead about “what kinds of types should implement this trait” (or not implement it, as the case may be).

My current favorite “semantic style name” is Mobile, which corresponds to implementing Swap. A mobile type is one that, while borrowed, can move to a new place. This name doesn’t convey that it’s also ok to drop the value, but that follows, since if you can swap the value to a new place, you can presumably drop that new place.

I don’t have a “semantic” name for DropWhileBorrowed. As I said, I’m hard pressed to characterize the type that would want to implement DropWhileBorrowed but not Swap.

What do DropWhileBorrowed and Swap have in common?

These traits pertain to whether an owner who lends out a local variable (i.e., executes r = &mut lv) can rely on that local variable lv to store the same value after the borrow completes. Under this model, the answer depends on the type T of the local variable:

  • If T: DropWhileBorrowed (or T: Swap, which implies DropWhileBorrowed), the answer is “no”, the local variable may point at some other value, because it is possible to do *r = /* new value */.
  • But if T: !DropWhileBorrowed, then the owner can be sure that lv still stores the same value (though lv’s fields may have changed).

Let’s use an analogy. Suppose I own a house and I lease it out to someone else to use. I expect that they will make changes on the inside, such as hanging up a new picture. But I don’t expect them to tear down the house and build a new one on the same lot. I also don’t expect them to drive up a flatbed truck, load my house onto it, and move it somewhere else (while providing me with a new one in return). In Rust today, a reference r: &mut T allows all of these things:

  • Mutating a field like r.count += 1 corresponds to hanging up a picture. The values inside r change, but r still refers to the same conceptual value.
  • Overwriting *r = t with a new value t is like tearing down the house and building a new one. The original value that was in r no longer exists.
  • Swapping *r with some other reference *r2 is like moving my house somewhere else and putting a new house in its place.
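All three operations are legal through any r: &mut T in today’s safe Rust; a minimal sketch (the function name is illustrative):

```rust
// Under s/Unpin/Overwrite/, the second line would require
// T: DropWhileBorrowed and the third T: Swap; today both always work.
fn all_three(r: &mut String, r2: &mut String) {
    r.push('!');                  // hang up a picture: mutate in place
    *r = String::from("rebuilt"); // tear down and rebuild: overwrite
    std::mem::swap(r, r2);        // load onto a truck: swap places
}
```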

EDIT: Wording refined based on feedback.

What does it mean to be the “same value”?

One question I received was what it meant for two structs to have the “same value”? Imagine a struct with all public fields – can we make any sense of it having an identity? The way I think of it, every struct has a “ghost” private field $identity (one that doesn’t exist at runtime) that contains its identity. Every StructName { } expression has an implicit $identity: new_value() that assigns the identity a distinct value from every other struct that has been created thus far. If two struct values have the same $identity, then they are the same value.

Admittedly, if a struct has all public fields, then it doesn’t really matter whether its identity is the same, except perhaps to philosophers. But most structs don’t.

An example that can help clarify this is what I call the “scope pattern”. Imagine I have a Scope type that has some private fields and which can be “installed” in some way and later “deinstalled” (perhaps it modifies thread-local values):

pub struct Scope {...}

impl Scope {
    fn new() -> Self { /* install scope */ }
}

impl Drop for Scope {
    fn drop(&mut self) {
        /* deinstall scope */
    }
}

And the only way for users to get their hands on a “scope” is to use with_scope, which ensures it is installed and deinstalled properly:

pub fn with_scope(op: impl FnOnce(&mut Scope)) {
    let mut scope = Scope::new();
    op(&mut scope);
}

It may appear that this code enforces a “stack discipline”, where nested scopes will be installed and deinstalled in a stack-like fashion. But in fact, thanks to std::mem::swap, this is not guaranteed:

with_scope(|s1| {
    with_scope(|s2| {
        std::mem::swap(s1, s2);
    })
})

This could easily cause logic bugs or, if unsafe is involved, something worse. This is why lending out scopes requires some extra step to be safe, such as using a &-reference or adding a “fresh” lifetime parameter of some kind to ensure that each scope has a unique type. In principle you could also use a type like &mut dyn ScopeTrait, because the compiler disallows overwriting or swapping dyn Trait values: but I think it’s ambiguous today whether unsafe code could validly do such a swap.
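A minimal sketch of the “fresh” lifetime technique, in the style of std::thread::scope (the _brand field is hypothetical; Scope’s real fields are elided). Because 'id is invariant and chosen fresh for each call, the two nested scopes get distinct types and std::mem::swap across them no longer type-checks:

```rust
use std::marker::PhantomData;

pub struct Scope<'id> {
    // Invariant in 'id: fn(&'id ()) -> &'id () prevents the compiler
    // from unifying two different brands via variance.
    _brand: PhantomData<fn(&'id ()) -> &'id ()>,
}

// The for<'id> bound forces the closure to work for a caller-invisible,
// freshly minted lifetime, so each scope's type is unique.
pub fn with_scope<R>(op: impl for<'id> FnOnce(&mut Scope<'id>) -> R) -> R {
    let mut scope = Scope { _brand: PhantomData };
    op(&mut scope)
}
```

With this signature, the std::mem::swap(s1, s2) example from above fails to compile, because s1: &mut Scope<'id1> and s2: &mut Scope<'id2> have provably distinct brands.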

EDIT: Question added based on feedback.

There’s a lot of subtle reasoning in this post. Are you sure this is correct?

I am pretty sure! But not 100%. I’m definitely scared that people will point out some obvious flaw in my reasoning. But of course, if there’s a flaw I want to know. To help people analyze, let me recap the two subtle arguments that I made in this post and summarize the reasoning.

Lemma. Given some local variable lv: T where T: !Overwrite mutably borrowed by a reference r: &'a mut T, the value in lv cannot be dropped, moved, or forgotten for the lifetime 'a.

During 'a, the variable lv cannot be accessed directly (per the borrow checker’s usual rules). Therefore, any drops/moves/forgets must take place to *r:

  • Because T: !Overwrite, it is not possible to overwrite or swap *r with a new value; it is only legal to mutate individual fields. Therefore the value cannot be dropped or moved.
  • Forgetting a value (via std::mem::forget) requires ownership, which is not obtainable while lv is borrowed.

Theorem A. If we replace T: Unpin with T: Overwrite, then Pin<&mut T> is a safe subtype of &mut T.

The argument proceeds by cases:

  • If T: Overwrite, then Pin<&mut T> does not refer to a pinned value, and hence it is semantically equivalent to &mut T.
  • If T: !Overwrite, then Pin<&mut T> does refer to a pinned value, so we must show that the pinning guarantee cannot be disturbed by the &mut T. By our lemma, the &mut T cannot move or forget the pinned value, which is the only way to disturb the pinning guarantee.

Theorem B. Given some field owner o: O where O: !Overwrite with a field f: F, it is safe to pin-project from Pin<&mut O> to a Pin<&mut F> reference referring to o.f.

The argument proceeds by cases:

  • If F: Overwrite, then Pin<&mut F> is equivalent to &mut F. We showed in Theorem A that Pin<&mut O> could be upcast to &mut O and it is possible to create an &mut F from &mut O, so this must be safe.
  • If F: !Overwrite, then Pin<&mut F> refers to a pinned value found in o.f. The lemma tells us that the value in o.f will not be disturbed for the duration of the borrow.

EDIT: It was pointed out to me that this last theorem isn’t quite proving what it needs to prove. It shows that o.f will not be disturbed for the duration of the borrow, but to meet the pin rules, we need to ensure that the value is not swapped even after the borrow ends. We can do this by committing to never permit swaps of values unless T: Overwrite, regardless of whether they are borrowed. I meant to clarify this in the post but forgot about it, and then I made a mistake in the trait’s name – but Swap is the right name.

What part of this post are you most proud of?

Geez, I’m so glad you asked! Such a thoughtful question. To be honest, the part of this post that I am happiest with is the state diagram for places, which I’ve found very useful in helping me to understand Pin:

flowchart TD
Uninitialized 
Initialized
Pinned

Uninitialized --
    `p = v` where `v: T`
--> Initialized

Initialized -- 
    move out, drop, or forget
--> Uninitialized

Initialized --
    pin value `v` in `p`
    (only possible when `T` is `!Unpin`)
--> Pinned

Pinned --
    drop value
--> Uninitialized

Pinned --
    move out or forget
--> UB

Uninitialized --
    free the place
--> Freed

UB[💥 Undefined behavior 💥]
  

Obviously this question was just an excuse to reproduce it again. Some of the key insights that it helped me to crystallize:

  • A value that is Unpin cannot be pinned:
    • And hence Pin<&mut Self> really means “reference to a maybe-pinned value” (a value that is pinned if it can be).
  • Forgetting a value is very different from leaking the place that value is stored:
    • In both cases, the value’s Drop never runs, but only one of them can lead to a “freed place”.

In thinking through the stuff I wrote in this post, I’ve found it very useful to go back to this diagram and trace through it with my finger.
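The second insight can be seen in std today: both std::mem::forget and Box::leak skip the destructor, but only the latter leaves the place itself alive afterwards (the function name below is illustrative):

```rust
// Forgetting a value vs. leaking its place: in neither case does Drop
// run, but only Box::leak produces a still-usable "leaked place".
fn forget_vs_leak() -> &'static str {
    // The value's Drop never runs, but its place (the stack slot) is
    // still reclaimed when this function returns.
    std::mem::forget(String::from("forgotten value"));

    // Here the heap place itself outlives the function: a leaked place.
    let leaked: &'static mut String = Box::leak(Box::new(String::from("leaked place")));
    leaked
}
```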

Is this backwards compatible?

Maybe? The question does not have a simple answer. I will address it in a future blog post in this series. Let me say a few points here though:

First, the s/Unpin/Overwrite/ proposal is not backwards compatible as I described. It would mean for example that all futures returned by async fn are no longer Overwrite. It is quite possible we simply can’t get away with it.

That’s not fatal, but it makes things more annoying. It would mean there exist types that are !Unpin but which can be overwritten. This in turn means that Pin<&mut Self> is not a subtype of &mut Self for all types. Pinned mutable references would be a subtype for almost all types, but not those that are !Unpin && Overwrite.

Second, a naive, conservative transition would definitely be rough. My current thinking is that, in older editions, we add T: Overwrite bounds by default on type parameters T and, when you have a T: SomeTrait bound, we would expand that to include a Overwrite bound on associated types in SomeTrait, like T: SomeTrait<AssocType: Overwrite>. When you move to a newer edition I think we would just not add those bounds. This is kind of a mess, though, because if you call code from an older edition, you are still going to need those bounds to be present.

That all sounds painful enough that I think we might have to do something smarter, where we don’t always add Overwrite bounds, but instead use some kind of inference in older editions to avoid it most of the time.

Conclusion

My takeaway from authoring this post is that something like Overwrite has the potential to turn Pin from wizard level Rust into mere “advanced Rust”, somewhat akin to knowing the borrow checker really well. If we had no backwards compatibility constraints to work with, it seems clear that this would be a better design than Unpin as it is today.

Of course, we do have backwards compatibility constraints, so the real question is how we can make the transition. I don’t know the answer yet! I’m planning on thinking more deeply about it (and talking to folks) once this post is out. My hope was first to make the case for the value of Overwrite (and to be sure my reasoning is sound) before I invest too much into thinking how we can make the transition.

Assuming we can make the transition, I’m wondering two things. First, is Overwrite the right name? Second, should we take the time to re-evaluate the default bounds on generic types in a more complete way? For example, to truly have a nice async story, and for myriad other reasons, I think we need “must move” types. How does that fit in?


  1. The precise design of generators is of course an ongoing topic of some controversy. I am not trying to flesh out a true design here or take a position. Mostly I want to show that we can create ergonomic bridges between “must pin” types like generators and “non pin” interfaces like Iterator without explicitly mentioning pinning. ↩︎

  2. Boats has argued that, since no existing iterator can support borrows over a yield point, generators might not need to do so either. I don’t agree. I think supporting borrows over yield points is necessary for ergonomics, just as it was in futures. ↩︎

  3. Actually for Pin<impl DerefMut<Target: Generator>>↩︎

  4. I will say, I use std::mem::forget quite regularly, but mostly to make up for a shortcoming in Drop. I would like it if Drop had a separate method, fn drop_on_unwind(&mut self), and we invoked that method when unwinding. Most of the time, it would be the same as regular drop, but in some cases it’s useful to have cleanup logic that only runs in the case of unwinding. ↩︎

  5. In contrast, a Pin<&mut T> reference can be safely converted into an &T reference, as evidenced by Pin’s Deref impl. This is because, even if T: !Unpin, a &T reference cannot do anything that is invalid for a pinned value: you can’t swap or move the underlying value, only read from it. ↩︎

  6. Projection is the wonky PL term for “accessing a field”. It’s never made much sense to me, but I don’t have a better term to use, so I’m sticking with it. ↩︎

  7. We have a syntax k#foo for explicitly referring to a keyword foo. It is meant to be used only for keywords that will be added in future Rust editions. However, I sometimes think it’d be neat to have internal-ish keywords (like k#pinned) that are used in desugaring but rarely need to be typed explicitly; you would still be able to write k#pinned if for whatever reason you wanted to. And of course we could later opt to stabilize it as pinned (no prefix required) in a future edition. ↩︎

  8. I tried asking ChatGPT to summarize the post but, when I pasted in my post, it replied, “The message you submitted was too long, please reload the conversation and submit something shorter.” Dang ChatGPT, that’s rude! Gemini at least gave it the old college try. Score one for Google. Plus, it called my post “thought-provoking!” Aww, I’m blushing! ↩︎

Don Marti: Convert TTF to WOFF2 on Fedora Linux

If you have a font in TTF (TrueType) format and need WOFF2 for web use, there is a woff2_compress utility packaged for Fedora (though it is still missing a man page and a --help option). The package is woff2-tools.

sudo dnf install woff2-tools
woff2_compress example.ttf

Also packaged for Debian: Details of package woff2 in sid

WOFF

For the older WOFF format (which I needed in order to have the font show up on a really old browser) the tool is sfnt2woff-zopfli.

Install and run with:

sudo dnf install sfnt2woff-zopfli
sfnt2woff-zopfli example.ttf

References

Converting TTF fonts to WOFF2 (and WOFF) - DEV Community (covers cloning and building from source)

How to Convert Font Formats to WOFF under Linux (compares several conversion tools)

Related

colophon (This site mostly uses Modern Font Stacks but has some Inconsolata.)

Bonus links

The AI bill Newsom didn’t veto — AI devs must list models’ training data From 2026, companies that make generative AI models available in California need to list their models’ training sets on their websites — before they release or modify the models. (The California Chamber of Commerce came out against this one, citing the technical difficulty in complying. They’re probably right, especially considering that under the CCPA, businesses are required to disclose inferences about people (PDF) and it’s hard to figure out which inferences are present in a large ML model.)

Antitrust challenge to Facebook’s ‘superprofiling’ finally wraps in Germany — with Meta agreeing to data limits Meta has to offer a setting that allows Facebook and Instagram users to decide whether Meta may combine their data with other information it collects about them — via third-party websites where its tracking technologies are embedded or from apps using its business tools — or must keep it separate. But some of the required privacy+competition fixes may be Germany-only. (imho some US state needs a law that any privacy or consumer protection feature that a large company offers to users outside the US must also be available in that state.)

IAB, Others Urge Court To Reconsider Ruling That Curbed Section 230 10/10/2024 (Some background on this one: TikTok Inspired Child Suicide Prompts a Sound Reading of Section 230 The problem with this case from TikTok’s point of view is that Big Tech wants to keep claiming that its recommendation algorithms are somehow both the company’s own free speech and speech by users. But the Third Circuit is making them pick one. Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, it follows that doing so amounts to first-party speech under § 230, too.)

California Privacy Act Sparks Website Tracking Technology Suits (This is a complicated one. Lawsuit accuses a company of breaking not one, not two, but three California privacy laws. And the California Constitution, too. Motion to dismiss mostly denied (PDF). Including a CCPA claim. Yes, there is a CCPA private right of action. CCPA claims survive a motion to dismiss where a plaintiff alleges that defendants disclosed plaintiff’s personal information without his consent due to the business’s failure to maintain reasonable security practices. In this case, Google Analytics tracking on a therapy site. I have some advice on how to get out in front of this kind of case, will share later.)

Digital Scams More Likely to Hurt Black and Latino Consumers - Consumer Reports Compounding the problem, experts believe, is that Black and Latino consumers are disproportionately targeted by a wide variety of digital scams. (This is a big reason why the I have nothing to hide argument about privacy doesn’t work. When a user who is less likely to be discriminated against chooses to participate in a system with personalization risks, that user’s information helps make user-hostile personalization against others work better. Privacy is a collective problem.)

ClassicPress: WordPress without the block editor [LWN.net] Once installed (or migrated), ClassicPress looks and feels like old-school WordPress.

Google never cared about privacy It was a bit of a tell how the DV360 product team demonstrated zero sense of urgency around making it easier for some buyers to test Privacy Sandbox, let alone releasing test results to prove it worked. The Chrome cookie deprecation delays, the inability of any ad tech expert or observer to convincingly explain how Google could possibly regulate itself — all of these deserve renewed scrutiny, given what we now know. (Google Privacy Sandbox was never offered as an option for YouTube, either. The point of janky in-browser ads is to make the slick YouTube ads, which have better reporting, look better to advertisers who have to allocate budget between open web and YouTube.)

Taylor Swift: Singer, Songwriter, Copyright Innovator [R]ecord companies are now trying to prohibit re-recordings for 20 or 30 years, not just two or three. And this has become a key part of contract negotiations. Will they get 30 years? Probably not, if the lawyer is competent. But they want to make sure that the artist’s vocal cords are not in good shape by the time they get around to re-recording.

Mozilla Security Blog: Behind the Scenes: Fixing an In-the-Wild Firefox Exploit

At Mozilla, browser security is a critical mission, and part of that mission involves responding swiftly to new threats. Tuesday, around 8 AM Eastern time, we received a heads-up from the Anti-Virus company ESET, who alerted us to a Firefox exploit that had been spotted in the wild. We want to give a huge thank you to ESET for sharing their findings with us—it’s collaboration like this that keeps the web a safer place for everyone.

We’ve already released a fix for this particular issue, so when Firefox prompts you to upgrade, click that button. If you’re worried about losing your tabs, Firefox’s Session Restore feature can bring back your previous session when you restart.

The sample ESET sent us contained a full exploit chain that allowed remote code execution on a user’s computer. Within an hour of receiving the sample, we had convened a team of security, browser, compiler, and platform engineers to reverse engineer the exploit, force it to trigger its payload, and understand how it worked.

During exploit contests such as pwn2own, we know ahead of time when we will receive an exploit, can convene the team ahead of time, and receive a detailed explanation of the vulnerabilities and exploit. At pwn2own 2024, we shipped a fix in 21 hours, something that helped us earn an industry award for fastest to patch. This time, with no notice and some heavy reverse engineering required, we were able to ship a fix in 25 hours. (And we’re continually examining the process to help us drive that down further.)

While we take pride in how quickly we respond to these threats, it’s only part of the process. While we have resolved the vulnerability in Firefox, our team will continue to analyze the exploit to find additional hardening measures to make deploying exploits for Firefox harder and rarer. It’s also important to keep in mind that these kinds of exploits aren’t unique to Firefox. Every browser (and operating system) faces security challenges from time to time. That’s why keeping your software up to date is crucial across the board.

As always, we’ll keep doing what we do best—strengthening Firefox’s security and improving its defenses.

The post Behind the Scenes: Fixing an In-the-Wild Firefox Exploit appeared first on Mozilla Security Blog.

Mozilla Privacy Blog: How Lawmakers Can Help People Take Control of Their Privacy

At Mozilla, we’ve long advocated for universal opt-out mechanisms that empower people to easily assert their privacy rights. A prime example of this is Global Privacy Control (GPC), a feature built into Firefox. When enabled, GPC sends a clear signal to websites that the user does not wish to be tracked or have their personal data sold.

California’s landmark privacy law, the CCPA, mandates that tools like GPC must be respected, giving consumers greater control over their data. Encouragingly, similar provisions are emerging in other state laws. Yet, despite this progress, many browsers and operating systems – including the largest ones – still do not offer native support for these mechanisms.

That’s why we were encouraged by the advancement of California AB 3048, a bill that would require browsers and mobile operating systems to include an opt-out setting, allowing consumers to easily communicate their privacy preferences.

Mozilla was disappointed that AB 3048 was not signed into law. The bill was a much-needed step in the right direction.

As policymakers advance similar legislation in the future, there are small changes to the AB 3048 text that we’d propose, to ensure that the bill doesn’t create potential loopholes that undermine its core purpose and weaken existing standards like Global Privacy Control by leaving too much room for interpretation. It’s essential that rules prioritize consumer privacy and meet the expectations that consumers rightly have about treatment of their sensitive personal information.

Mozilla remains committed to working alongside California as the legislature considers its agenda for 2025, as well as other states and ultimately the U.S. Congress, to advance meaningful privacy protections for all people online. We hope to see legislation bolstering this key privacy tool reemerge in California, and advance throughout the US.

The post How Lawmakers Can Help People Take Control of Their Privacy appeared first on Open Policy & Advocacy.

Mozilla Thunderbird: Contributor Highlight: Toad Hall

We’re back with another contributor highlight! We asked our most active contributors to tell us about what they do, why they enjoy it, and themselves. Last time, we talked with Arthur, and for this installment, we’re chatting with Toad Hall.

If you’ve used Support Mozilla (SUMO) to get help with Thunderbird, Toad Hall may have helped you. They are one of our most dedicated contributors, and their answers on SUMO have helped countless people.

How and Why They Use Thunderbird

Thunderbird has been my choice of email client since version 3, so I have witnessed this product evolve and improve over the years. Sometimes a new design can initially derail you. Being of an older generation, I appreciate it is not necessarily so easy to adapt to change, but I’ve always tried to embrace new ideas and found that, generally, the changes are an improvement.

Thunderbird offers everything you expect from handling several email accounts in one location, filtering, address books and calendar, plus many more functionalities too numerous to mention. The built-in Calendar with its Events and Tasks options is ideal for both business and personal use. In addition, you can also connect to online calendars. I find the pop-up reminders so helpful, whether it’s notifying you of an appointment, a birthday or that a TV program starts in 15 minutes! Personally, I’m particularly impressed that Thunderbird offers the ability to modify the view and appearance to suit my needs and preferences.

I use a Windows OS, but Thunderbird offers release versions suitable for Windows, Mac and Linux operating systems, so there is a download which should suit everyone. In addition, I run a beta version so I can have more recent updates, meaning I can contribute by helping to test for bugs and report issues before they reach a release version.

How They Contribute

The Thunderbird Support forum would be my choice as the first place to get help on any topic or query and there is a direct link to it via the ‘Help’ > ‘Get Help’ menu option in Thunderbird. As I have many years of experience using Thunderbird, I volunteer my free time to assist others in the Thunderbird Support Forum which I find a very rewarding experience. I have also helped out writing some Support Forum Help Articles. In more recent years I’ve assisted on the Bugzilla forum helping to triage and report potential bugs. So, people can get involved with Thunderbird in various ways.

Share Your Contributor Highlight (or Get Involved!)

Thanks to Toad Hall and all our contributors who have kept us alive and are helping us thrive!

If you’re a contributor who would like to share your story, get in touch with us at community@thunderbird.net. If you want to get involved with Thunderbird, read our guide to learn about all the ways to contribute.

The post Contributor Highlight: Toad Hall appeared first on The Thunderbird Blog.

Don Martidrinking games with the Devil

Should I get into a drinking game with the Devil? No, for three important reasons unrelated to your skill at the game.

  1. The Devil can out-drink you.

  2. The Devil can drink substances that are toxic to you even in small quantities.

  3. The Devil can cheat in ways that you will not be able to detect, and take advantage of rules loopholes that you might not understand.

What if I am really good at the skills required for the game? Still no. Even if you have an accurate idea of your own skill level, it is hard to estimate the Devil’s skill level. And even if you have roughly equally matched skills, the Devil still has the three advantages above.

What if I’m already in a drinking game with the Devil? I can’t offer a lot of help here, but I have read a fair number of comic books. As far as I can tell, your best hope is to delay playing and to delay taking a drink when required to. It is possible that some more powerful entity could distract the Devil in a way that results in the end of the game.

Bonus links

IAB, Others Urge Court To Reconsider Ruling That Curbed Section 230 (this is why the legit Internet is going to win. The lawyers needed to defend the blackout challenge are expensive, and a lot of state legislators will serve for gas money. As legislators learn to introduce more, and more diverse, laws on Big Tech the cost imbalance will become clearer.)

In the Trenches with State Policymakers Working to Pass Data Privacy Laws Former state representative from Oklahoma, Collin Walke, said that one tech company with an office in his state hired about 30 more lobbyists just to lobby on the privacy bill he was trying to pass.

Risks vs. Harms: Youth & Social Media Of course, there are harms that I do think are product liability issues vis-a-vis social media. For example, I think that many privacy harms can be mitigated with a design approach that is privacy-by-default. I also think that regulations that mandate universal privacy protections would go a long way in helping people out. But the funny thing is that I don’t think that these harms are unique to children. These are harms that are experienced broadly. And I would argue that older folks tend to experience harms associated with privacy much more acutely.

Google Search user interface: A/B testing shows security concerns remain For the past few days, Google has been A/B testing some subtle visual changes to its user interface for the search results page….Despite a more simplified look and feel, threat actors are still able to use the official logo and website of the brand they are abusing. From a user’s point of view, such ads continue to be as misleading.

Ukraine’s new F-16 simulator spotlights a ‘paradigm shift’ led from Europe (Europe isn’t against technology or innovation, they’re mainly just better at focusing on real problems.)

Firefox NightlySearch Improvements Are On Their Way – These Weeks in Firefox: Issue 169

Highlights

  • The search team is planning on enabling a series of improvements to the search experience this week in Nightly! This project is called “Scotch Bonnet”.
    • We would love to hear your feedback via bug reports! We will also create a Connect page shortly.
    • The pref is browser.urlbar.scotchBonnet.enableOverride for anyone who wants a sneak preview.
  • The New Tab team has added a new experimental widget which shows a vertical list of interesting stories across multiple cells of the story grid:
    • The new tab page in Firefox is shown. The grid of stories is shown below the default set of top sites, and includes a new "tall" widget that spans several grid cells. That tall widget lists several stories vertically.

      We’re testing out a vertical list of stories in regions where stories are enabled.

    • You can test this out in Nightly by setting browser.newtabpage.activity-stream.discoverystream.contextualContent.enabled to true in about:config
    • We will be running a small experiment with this new widget, slated for Firefox 132, for regions where stories are enabled.

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Henry Wilkes (they/them) [:henry-x]
  • Meera Murthy

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Fixed mild performance regression in load times when the user browses websites that are registered as default/built-in search engines (fixed in Nightly 132, and uplifted to Beta 131) – Bug 1916240
  • Fixed startup error hit by static themes using MV3 manifest.json files – Bug 1917613
  • The WebExtensions popup notification shown when an extension is hiding Firefox tabs (using the tabs.hide method) is now anchored to the extensions button – Bug 1920706
  • Fixed a browser.search.get regression (initially introduced in ESR 128 through the migration to search-config-v2) that caused faviconUrl to be set to blob URLs, which are not accessible to other extensions. This regression has been fixed in Nightly 132 and then uplifted to Firefox 131 and ESR 128
    • Thanks to Standard8 for fixing the regression!
WebExtension APIs
  • The storage.session API now logs a warning message to make extension developers aware that the storage.session quota is being exceeded on channels where it is not yet enforced (currently only enforced on Nightly >= 131) – Bug 1916276

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Liam DeBeasi renamed the isRoot argument of getBrowsingContextInfo() to includeParentId to make the code easier to understand (bug).
  • Updates:
    • Thanks to jmaher for splitting the marionette job in several chunks (bug).
    • Julian fixed the timings for network events to be in milliseconds instead of microseconds (bug)
    • Henrik and Julian improved the framework used by WebDriver BiDi to avoid failing commands when browsing contexts are loading (bug, bug, bug)
    • Sasha updated the WebDriver BiDi implementation for cookies to use the network.cookie.CHIPS.enabled preference. The related workarounds will be removed in the near future. (bug)

Lint, Docs and Workflow

Migration Improvements

New Tab Page

  • We’re going to be doing a slow, controlled rollout to change the endpoints with which we fetch sponsored top sites and stories. This is part of a larger architectural change to unify the mechanism with which we fetch this sponsored content.

Search and Navigation

  • Scotch Bonnet (search UI update) Related Changes
    • General
      • Daisuke connected Scotch Bonnet to Nimbus 1919813
    • Intuitive Search Keywords
      • Mandy added telemetry for search restrict keywords 1917992
    • Unified Search Button
      • Dale improved the UI of the Unified Search Button by aligning it closer to the design 1908922
      • Daisuke made the Unified Search Button more consistent depending on whether it was in an open/closed state 1913234
    • Persisted Search
      • James changed Persisted Search to use a cleaner design in preparation for its use with the Unified Search Button. It now has a button on the right side to revert the address bar and show the URL, and the Persist feature works with non-default app-provided engines 1919193, 1915273, 1913312
    • HTTPS Trimming
      • Marco changed it so keyboard focus immediately untrims an https address 1898155

Don Martifix Google Search

I can’t quite get Google Search back to pre-enshittification, but this is pretty close.

Remove AI crap

This will probably make the biggest change in the layout. Makes the AI material and various other growth hacking stuff disappear from the top of search results pages so it’s easier to get to the normal links.

Start a blocklist

Some sites are better at SEO than at content and keep showing up in search results. This step doesn’t help the first time that a crap site comes up, but future searches on related topics tend to get better results as I block the over-SEOed sites to let the legit sites rise to the top.

  • Firefox: Personal Blocklist

  • Google Chrome: (There is supposed to be an extension like this for Google Chrome too, but I don’t have the link.)

This one gets better as my blocklist grows. If you try this one, be patient.

Turn off ad tracking

If you use Google Search with a Google Account, go to https://myadcenter.google.com/home and set Personalized Ads to Off. This probably won’t reduce the raw number of ads, but will make it harder for Google to match you with a deceptive ad targeted at you. (The scam ads are even impersonating Google now.)

Fix click tracking

Use ClearURLs to remove tracking redirects. Original Google results were links to the sites; now they’re links back to Google, which redirects to the sites, collecting extra data from you and slowing down browsing by one step. ClearURLs restores the original behavior. (To me it feels faster, but I haven’t done a benchmark.)

Block search ads

This is the next step to try if scam-looking search ads are still getting through.

The FBI recommends Use an ad blocking extension when performing internet searches. (Internet Crime Complaint Center (IC3) | Cyber Criminals Impersonating Brands Using Search Engine Advertisement Services to Defraud Users)

Right now the extension that is best at blocking search ads is uBlock Origin, but it looks like it will take some work to set it up to block search ads without blocking ads on legit sites. I’ll post instructions when I get that working.

Turn off browser advertising features

These are not used much today, but turning them off will probably help you get cleaner (less personalized) search results in the future, so you might as well check them.

Related

How Do I Protect My Privacy If I’m Seeking an Abortion? – The Markup Includes an example of how the AI Overview feature can misinform users about health and legal issues. Turn it off to help stay safe.

Bonus links

Hey Google, What’s The Chrome User Choice Mechanism Going To Look Like? (whatever the defaults are, I’ll figure out the right options and post here)

Smart TVs are like “a digital Trojan Horse” in people’s homes (Browsers are a relief after other devices)

Meta smart glasses can be used to dox anyone in seconds, study finds

Project Analyzing Human Language Usage Shuts Down Because ‘Generative AI Has Polluted the Data’

Adrian GaudebertHow much did Dawnmaker really cost?

About a year ago, I wrote a piece explaining how much we estimated making Dawnmaker would cost. Well, Dawnmaker is finished, so as promised, I'm going to revisit that and show you how much it actually cost to produce our game! Yay, more money talk!

In June 2023, I made a budget for Dawnmaker that projected the game would cost a total of 520k€ to make. A year later, I can announce that the total budget is around 320k€. Why such a big difference? Because we never managed to secure funding, and thus had to cut a lot of what we wanted to do. We did not hire a team for the production of the game, did not even do the production of the game, did not pay ourselves, and reduced our spending to the minimum.

I'm writing that the budget is 320k€, but that does not mean we actually spent that much money. The amount of money that transited through our bank account and was disbursed is about 95k€. The remaining 225k€ are my estimation for how much Arpentor Studio would have spent if Alexis and I had paid ourselves decent salaries for the whole duration of the project. So in a sense you could say that Dawnmaker only cost 95k€, and there's some truth to that, but it's also a lie. Our work has value and needs to be accounted for in budgeting. Because in the end, this is money that we lost by not doing something else that would have paid us.

Where did the money go?

So we spent 95k€ over the course of 2.5 years. Here are the main expense categories we had:

Dawnmaker budget breakdown

Even though we barely paid ourselves — we did for 4 months at a time when we thought we were getting a bunch of money, but ultimately did not — salaries are still the biggest category. If you include contracting, which is also paying people to work on our game, that makes up 60% of the game’s budget. The rest is split between company spending (lawyers, accounting, etc.), events and travel (like going to the Game Camp every year), regular fees for online services (hosting, email, documentation) and a touch of hardware. Plus all the remaining small things that don’t fit the other categories, like an ads campaign.

The financial outcome of Dawnmaker

320k€ is an incredibly big sum for such a small company, especially if you compare that to how much the game made. At the time of writing, about 6k€ made it into our bank account. Our players seem to really enjoy Dawnmaker, according to our 94% positive reviews on Steam, so I guess we can call it a critical success. But financially it's far from one: we need another 314k€ to break even!
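
To make the arithmetic explicit, here are the post’s own figures restated as a quick sanity check (no new numbers, just the ones above):

```javascript
// Figures from the post, reconstructed to check the arithmetic.
const cashSpent = 95_000;        // € actually disbursed from the bank account
const imputedSalaries = 225_000; // € of founder time that was never paid out
const totalBudget = cashSpent + imputedSalaries; // the 320k€ headline figure
const revenue = 6_000;           // € received so far
const shortfall = totalBudget - revenue;         // 314k€ still needed to break even
```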

One metric that I'm thinking about those days, as I prepare the next project, is the revenue per working day. On Dawnmaker, as of writing, Alexis and I made about 6€ per working day. That's less than one tenth of the minimal wage in France, and that's without counting the money that came out of our pockets — otherwise our revenue per day would be negative.

If you're reading this and you're thinking of starting a game studio, here's the biggest advice I can give you: start by making small games. Reduce the risk — the financial cost — by making games that are small, but take them to the finish line. You'll gain experience, you'll make yourself a portfolio that will be helpful to raise funding later, and if you will have a much better chance of having a decent revenue per working day. But I'll discuss this in more details in a future post.

Dawnmaker Characters update is available

Dawnmaker is 20% off!

Yesterday we released a major, free update for Dawnmaker, our solo turn-based strategy game. We've added 3 characters, each with their own deck and roster of buildings, as well as a ton of new content. To celebrate, we're discounting the game, 20% off for the next two weeks. If you want to experience our city building meets deckbuilding game, now is your time to get it!

Buy Dawnmaker on Steam Buy Dawnmaker on itch.io


This piece was initially sent out to the readers of our newsletter. Wanna join in on the fun? Head over to Dawnmaker’s presentation page and fill out the form. You’ll receive regular stories about how we’re making this game and the latest news of its development!

Join our community!

Don Martithere ought to be a law

Do we really need another CCPA-like state privacy law, or can states mix it up a little in 2025?

What if, instead of big boring laws intended to cover everything, legislators did more of a do the simplest thing that could possibly work approach? Big Tech lobbyists are expensive—maybe a better way to beat them is, instead of grinding out the long-ass PDFs they expect, make them fight an unpredictable distributed campaign of random-ish short bills that take the side of local small businesses?

  • Require generative AI companies to offer an opt out that is not tied to any other services such as search. AI legal links

  • Surveillance companies should need to get state surveillance licenses. Big Tech platforms: mall, newspaper, or something else?, surveillance licensing in practice

  • Require blocking of search ads on state-owned and educational computers, because of the 2022 FBI warning (that’s still up) and the threat of fake ads intended to steal people’s passwords for commonly used services such as Slack and Calendly. B L O C K in the U S A

  • Require Global Privacy Control for smart TVs and appliances, and for smart home platforms that support ordering or subscriptions. GPC all the things!. We also need an opt-out preference signal for NFC tap to pay devices. (AB 3048 in California was a good idea, but it got changed to cover browsers and phones only, so would have tended to drive surveillance to devices where it’s harder to avoid, which would be a terrible experience for users. Thank you for browsing our catalog site, use your compatible smart appliance to actually order anything.) Update 31 Oct 2024: possibly combine the GPC mandate with a reform to wiretapping laws to address the CIPA Uncertainty that a lot of companies have been on about recently. Amend CIPA and similar state wiretapping laws to state that data collection from a device or client software that supports GPC is definitely not wiretapping. That way the companies get the legal ambiguity resolved, the users get their opt-outs, sounds like a solution we can all live with.

  • Some kind of a digital tearsheet requirement to make it harder to trick advertisers into sponsoring illegal activities. notes on a California advertiser protection bill

  • Require clear explanations of consumer categories and inferences. OTHER ATTRIBUTES

  • Postal RtK/RtD/opt outs. If a postal backup is available, that sets the floor for how annoying a company can make the online process. The problem with CCPA RtK workflows

  • Add miscellaneous power user time saving improvements to existing privacy laws. State privacy law features from the power user point of view

  • Pigovian tax on databases of PII (calculated as n * log(n) to disincentivize risky centralization) taxing surveillance marketing

  • Require platform ad libraries to be crawlable by image indexers like TinEye and by trademark monitoring firms. some ways that Facebook ads are optimized for deceptive advertising

  • Euroclone law: if a company operates in 50 or more countries, and offers a consumer or privacy protection feature to the residents of some jurisdiction outside the USA, then that feature must also be offered to residents of our state. Another easy-ish state law: the No Second-class Citizenship Act

  • Federal: Keep Section 230 immunity for platforms, but pass liability through to the advertisers. Big Tech would have to clean up their act to keep brands.

  • Update existing wiretapping laws to cover modern surveillance in media where no GPC or analogous opt-out is available. In the Kathleen Vita v. New England Baptist Hospital decision, the court wrote, If the Legislature intends for the wiretap act’s criminal and civil penalties to prohibit the tracking of a person’s browsing of, and interaction with, published information on websites, it must say so expressly.
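
The n * log(n) schedule in the database-tax idea above can be sketched to show why it penalizes centralization; the rate and the function name here are purely illustrative, not from any bill text:

```javascript
// Sketch of a Pigovian tax on PII databases that scales as n * log(n)
// with the number of records n. Because growth is superlinear, holding
// the same records in one big database costs more than splitting them.
function piiTax(records, ratePerRecord = 0.01) {
  if (records <= 1) return 0;
  return ratePerRecord * records * Math.log(records);
}

const centralized = piiTax(1_000_000);   // one database of a million records
const decentralized = 10 * piiTax(100_000); // ten databases of 100k records each
// centralized > decentralized, so risky centralization is disincentivized
```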

Yes, the Big Tech companies will try to get small businesses to come out and advocate for surveillance, but there are a bunch of other small business issues that limitations on surveillance could help address, by shifting the balance of power away from surveillance companies.

  • Are small business owners contending for search rankings and map listings with fake businesses pretending to be competitors in their neighborhood?

  • Is Big Tech placing bogus charges on their advertiser account–or, if they run ads on their own site, are ad companies docking their pay for unexplained “invalid traffic”?

  • Are companies taking their content for “AI” that directly competes with their sites—without letting them opt out, or offering an opt-out that would make their business unable to use other services?

  • Can a small business even get someone from Big Tech on the phone, or are companies putting their dogmatic programs of union-busting and layoffs ahead of service even to advertisers and good business customers?

  • What happens when an account gets compromised or hacked? Do small businesses have any way to get help (without knowing someone who happens to know someone at the big company?)

(Update 9 Nov 2024) Each legal victory for groups like NetChoice reveals to state lawmakers how to craft more resilient laws. (Jess Miers, on Techdirt)

Related

privacy economics sources, an easy experiment to support behavioral advertising Lots of claims about the benefits of personalized advertising, not so much evidence.

Calif. Governor vetoes bill requiring opt-out signals for sale of user data

Bonus links

Meta faces data retention limits on its EU ad business after top court ruling

The more sophisticated AI models get, the more likely they are to lie

As the open social web grows, a new nonprofit looks to expand the ‘fediverse’

Google’s GenAI facing privacy risk assessment scrutiny in Europe

The LLM honeymoon phase is about to end

The Department of Transportation’s Underused Privacy Authority

TikTok Inspired Child Suicide Prompts a Sound Reading of Section 230

DOJ Claims Google ‘Destroyed’ Evidence Before Antitrust Trial

The Billionaire Suing Facebook to Remove His Face From AI Scams - WSJ

Don Martilinks for 6 October 2024

Intent IQ Has Patents For Ad Tech’s Most Basic Functions – And It’s Not Afraid To Use Them (Wait a minute. If Firefox is part of the Open Innovation Network’s Linux System definition, and Firefox has ads now, does that mean OIN covers this?) 🍿

New Map Shows Community Broadband Networks Are Exploding In U.S. Community-owned broadband networks provide faster, cheaper, better service than their larger private-sector counterparts. Staffed by locals, they’re also more directly accountable and responsive to the needs of locals

So It Goes GHQ is a board game invented by Kurt Vonnegut in 1956. GHQ is to WWII what chess is to the Medieval battlefield.

The Other Bubble While SaaS is generally a good deal for small-to-mid-sized companies, the inevitable sprawl of letting SaaS into your organization means that you’re stuck with them.

Oskar Wickström: How I Built “The Monospace Web” (fun with CSS, cool vintage style serious-looking design)

Posse: Reclaiming social media in a fragmented world Rather than publishing a post onto someone else’s servers on Twitter or Mastodon or Bluesky or Threads or whichever microblogging service will inevitably come along next, the posts are published locally to a service you control.

Best practices in practice: Black, the Python code formatter I don’t have to explain what they got wrong and why it matters — they don’t even need to understand what happens when the auto-formatter runs. It just cleans things up and we move on with life.

EPIC Publishes Model Privacy Bill as Practical Solution for States (everyone ready for the 2025 privacy bill season next year? There are still some practical problems with this draft—I can see how opting out of every company that might have your data getting to be a big time suck under this. Needs to be simplified to the point where it’s practical IMHO.)

What Happened After I Outed a Reddit Mod for Affiliate Spam (you know that thing where you add reddit to your web search to find honest reviews?)

Valve Steam Deck as a stepping stone to the Linux desktop Thanks to the technology behind Steam Deck, however, you can now play Windows games on Linux without any fuss or muss. (of course, all the growth hacking on Microsoft® brand Windows might help, too)

A layered approach to content blocking Chromium’s Manifest v3 includes the declarativeNetRequest API, which delegates these functions to the browser rather than the extension. Doing so avoids the timing issues visible in privileged extensions and does not require giving the extension access to the page. While these filters are more reliable and improve privilege separation, they are also substantially weaker. You can say goodbye to more advanced anti-adblock circumvention techniques. (Good info on the tradeoffs in Manifest v3, and a possible way forward, with simpler/more secure and complex/more featureful blocking both available to the user)
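
As a concrete illustration of the delegated model the quote describes, a declarativeNetRequest rule is just declarative data handed to the browser to evaluate; this minimal sketch uses a placeholder hostname, and the registration call is shown only in a comment:

```javascript
// A minimal MV3 declarativeNetRequest rule. The browser, not the
// extension, evaluates the filter, so the extension needs no access to
// page contents; that is the privilege-separation upside noted above.
const blockRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "||ads.example.com^",    // this host and its subdomains (placeholder)
    resourceTypes: ["script", "image"]  // only block these request types
  }
};

// In an extension's background script this would be registered with:
// chrome.declarativeNetRequest.updateDynamicRules({ addRules: [blockRule] });
```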

(If you’re still bored after reading all these, how about trying some effective privacy tips?)

Firefox Developer ExperienceFirefox DevTools Newsletter — 131

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 131 Nightly release cycle.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla.

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Supercharging CSS variables debugging

CSS variables, or CSS custom properties if you’re a spec reader, are fantastic for creating easily reusable values throughout your pages. To make sure they’re as enjoyable to write in your IDE as to debug in the Inspector, all vendors added a way to quickly see the declared value of a variable when hovering over it in the rule view.

DevTools rules view with the following declaration: `height: var(--button-height)`. A tooltip point to the variable and indicates that its value is 20px

This works nicely as long as your CSS variable does not depend on other variables. When it does, the declared value alone might not give you a good indication of what is going on.

DevTools rules view with the following declaration: `height: var(--default-toolbar-height)`. A tooltip point to the variable and indicates that its value is `var(--default-toolbar-height)`<figcaption class="wp-element-caption">Not really useful, what’s --default-toolbar-height value?</figcaption>

You’re then left with either going through the different variable declarations to try to map the intermediary values to the final one, or looking in the Layout panel to check the computed value for the variable. This is not very practical and requires multiple steps, and you might already be frustrated because you’ve been chasing a bug for 3 hours and you just want to go home and relax! That happened to us too many times, so we decided to show the computed value for the variable directly in the tooltip, where it’s easy for you to see (#1626234).
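
For illustration, here’s the kind of variable chain the tooltip now resolves (the names match the screenshots; the stylesheet itself is hypothetical):

```css
:root {
  --border-width: 2px;
  /* This declared value depends on another variable, so reading the
     declaration alone does not tell you the final size. */
  --default-toolbar-height: calc(24px - 2 * var(--border-width));
}

.toolbar {
  height: var(--default-toolbar-height);
}
```

Hovering --default-toolbar-height now shows a computed value section with the inner variables substituted, instead of just the raw declaration.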

DevTools rules view with the following declaration: `height: var(--default-toolbar-height)`. A tooltip point to the variable and indicates that its value is `var(--default-toolbar-height)`. It also show a "computed value" section, into which we can read "calc(24px - 2 * 2px)"


This is even more helpful when you’re using registered custom properties, as the value expression can be properly, well, computed by the CSS engine to give you the final value.
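
Registering the property is a matter of declaring its type with @property; the values here match the screenshot that follows:

```css
/* Registering the custom property tells the engine its type, so the
   calc() expression can be fully resolved to a single length. */
@property --default-toolbar-height {
  syntax: "<length>";
  inherits: true;
  initial-value: 10px;
}
```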

The same declaration as previously, but the tooltip "computed value" section now indicates "20px" There's also a "@property" section with the following:  ```   syntax: '<length>';   inherits: true;   initial-value: 10px; ```


Since we were upgrading the variable tooltip already, we decided to make it look good too: parsing the values the way we already do in the rules view, showing color previews, striking through unused var() and light-dark() parameters, and more (#1912006)!


The variable tooltip with the following value: `var(--border-size, 1px) solid light-dark(hotpink, brown)` The 1px in `var` and `brown` in `light-dark` are struck through, indicating they're not used. The computed value section indicate that the value is `2px solid light-dark(hotpink, brown)`

What’s great about this change is that now that we have the computed value at hand, it’s easy to add a color swatch next to variables relying on other variables, which we weren’t doing before (#1630950).

The following rules:  ``` .btn-primary {   color: var(--button-color); } :root {   --button-color: light-dark(var(--primary), var(--base));   --primary: gold;   --base: tomato; } ```  before `var(--button-color)`, we can see a gold color swatch, since the page is in light theme.

Even better, this allows us to show the computed value of the variable in the autocomplete popup (#1911524)!

A value is being added for the color property. The input has the `var(--` text in it, and an autocomplete popup is displayed with 3 items: - `--base tomato` - `--button-color rgb(255, 215, 0) - `--primary gold`

While doing this work and reading the spec, I learnt that you can declare empty CSS variables, and they are valid.

(…) writing an empty value into a custom property, like --foo: ;, is a valid (empty) value, not the guaranteed-invalid value.

https://www.w3.org/TR/css-variables-1/#guaranteed-invalid

It wasn’t possible to add an empty CSS variable from the Rules view, so we fixed this (#1912263). And then, for such empty values, we show an <empty> string so you’re not just left with an empty space, wondering if there’s a bug in DevTools (#1912267, #1912268).

The following rule is displayed in the rules view:  ``` .btn-primary {   --foo: ;   color: var(--foo); } ```  A tooltip points to `--foo`, and has the following text: `<empty>` The computed panel is also visible, showing `--foo`, which value is also `<empty>`

Enhanced Markup and CSS editing

One of my favorite features in DevTools is the ability to increase or decrease values in the Rules view using the up and down arrow keys. In Firefox 131 you can now use the mouse wheel to do the same thing, and, as with the keyboard, holding Shift will make the increment bigger, and holding Alt (Option on macOS) will make the increment smaller (#1801545). Thanks a lot to Christian Sonne, who started this work!

Editing attributes in the markup view was far from ideal, as the difference between an element attribute being focused and the initial state of attribute inputs was almost invisible, even to me. This wasn’t great, especially with all our work on focus indicators, which aims to bring clarity to users, so we improved the situation by changing the style of the selected node when an attribute is being modified, which should make editing less confusing (#1501959, #1907803, #1912209).

<figcaption class="wp-element-caption">Firefox 130 on the left, and Firefox 131 on the right. On the top, the class attribute being focused with the keyboard; on the bottom, the class attribute being edited via an input, with its content selected. On the left, there’s almost no visible difference between the two states.</figcaption>

Bug fixes


In Firefox 127, we made some changes to improve the performance of the markup view, including how we detect whether to show the event badge on a given element. Unfortunately, we also completely broke the event badge when a page used jQuery and the Array prototype was extended, for example by including Moo.js. This is fixed in Firefox 131 and in ESR 128 as well (#1916881)
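For context, here is a minimal illustration of this class of breakage (not the actual DevTools code): once a library adds an enumerable property to `Array.prototype`, naive `for...in` iteration over arrays starts visiting that property too.

```javascript
// Some libraries add enumerable helpers to Array.prototype.
Array.prototype.contains = function (v) {
  return this.indexOf(v) !== -1;
};

const events = ["click", "keydown"];

// for...in walks enumerable keys, including inherited ones,
// so "contains" unexpectedly shows up alongside the indices.
const forInKeys = [];
for (const key in events) {
  forInKeys.push(key);
}
// forInKeys is ["0", "1", "contains"]

// for...of (or spread) uses the array iterator and is unaffected.
const forOfValues = [...events]; // ["click", "keydown"]
```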

We got a report that enabling the grid highlighter could, under specific conditions, stress the GPU and CPU: we were triggering too many reflows while working around a platform limitation to avoid rendering issues. That limitation is now gone, so we can save cycles and avoid frying your GPU (#1909170).
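A toy model of why repeated reflows hurt (the stub below just counts forced layout flushes; it is a model, not real DOM code): interleaving style writes with layout reads forces one reflow per iteration, while batching all writes before reading needs only one.

```javascript
// Toy stand-in for a DOM element that counts layout flushes.
function makeElement() {
  let dirty = false;
  let reflows = 0;
  return {
    write() { dirty = true; },                           // style mutation invalidates layout
    read() { if (dirty) { reflows++; dirty = false; } }, // reading layout forces a flush
    get reflows() { return reflows; },
  };
}

const thrashing = makeElement();
// Interleaved write/read: one forced reflow per iteration.
for (let i = 0; i < 5; i++) {
  thrashing.write();
  thrashing.read();
}
// thrashing.reflows is 5

const batched = makeElement();
// All writes first, then a single read: one reflow total.
for (let i = 0; i < 5; i++) {
  batched.write();
}
batched.read();
// batched.reflows is 1
```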

Finally, we made selecting a <video> element using the node picker not play/pause said video (#1913263).

And that’s it for this month, folks. Thank you for reading and for using our tools; see you in a few weeks for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 131 release:

Mozilla Thunderbird – Thunderbird Monthly Development Digest: September 2024

Hello Thunderbird Community! I’m Toby Pilling, a new team member. I’ve spent the last couple of months getting up to speed and have really enjoyed meeting the team and members of the community, virtually and some in person! September is now over (and so is the summer for many on our team), and we’re excited to share the latest adventures underway in the Thunderbird world. If you missed our previous update, go ahead and catch up! Here’s a quick summary of what’s been happening across the different teams:

Exchange

Progress continues on implementing move/copy operations, with the ongoing re-architecture aimed at making the protocol ecosystem more generic. Work has also started on error handling, protocol logging, and a testing framework. A Rust starter pack has been provided to ease the onboarding of new team members, with automated type generation as the first step in reducing friction.

Account Hub

Development of a refreshed account hub is moving forward, with design work complete and a critical path broken down into sprints. Project milestones and tasks have been established with additional members joining the development team in October. Meta bug & progress tracking.

Global Database & Conversation View

The team is focused on breaking down the work into smaller tasks and setting feature deliverables. Initial work on integrating a unique IMAP ID is being rolled out, while the conversation view feature is being fast-tracked by a focused team, allowing core refactoring to continue in parallel.

In-App Notification

This initiative will provide a mechanism to notify users of important security updates and feature releases “in-app”, in a subtle and unobtrusive manner. It has advanced at breakneck speed, with impressive collaboration across each discipline. Despite some last-minute scope creep, the team has moved swiftly into the testing phase with an October release in mind. Meta Bug & progress tracking.

Source Docs Clean-up

Work continues on source documentation clean-up, with support from the release management team, who had to reshape some of our documentation toolset. The completion of this project will move much of the developer documentation closer to the actual code, which will make things much easier to maintain moving forward. Stay tuned for updates in the coming week and follow progress here.

Account Cross-Device Import

As the launch date for Thunderbird for Android gets closer, we’re preparing a feature in the desktop client that will provide a simple and secure account transfer mechanism, so that account settings don’t have to be re-entered by new users of the Android client. A functional prototype was delivered quickly, and now that design work is complete, the project entered its final two sprints this week. Keep track here.

Battling OAuth Changes

As both Microsoft and Google update their OAuth support and URLs, the team has been working hard to minimize the effect of these changes on our users. Extended logging in Daily will allow for better monitoring and issue resolution as these updates roll out.

New Features Landing Soon

Several requested features are expected to debut this month or very soon:

As usual, if you want to see things as they land you can check the pushlog and try running daily. This would be immensely helpful for catching bugs early.

See ya next month.

Toby Pilling
Sr. Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: September 2024 appeared first on The Thunderbird Blog.