Mozilla Open Policy & Advocacy Blog: Mozilla Joins Amicus Brief Supporting Software Interoperability

UPDATE – December 20, 2024

We won!

Earlier this week the Ninth Circuit issued an opinion that thoroughly rejects the district court’s dangerous interpretation of copyright law. Recall that, under the district court’s ruling, interoperability alone could be enough for new software to be an infringing derivative work of some prior software. If upheld, this would have threatened a wide range of open source development and other software.

The Ninth Circuit corrected this mistake. It wrote that “neither the text of the Copyright Act nor our precedent supports” the district court’s “interoperability test for derivative works.” It concluded that “mere interoperability isn’t enough to make a work derivative,” adding that “the text of the Copyright Act and our case law teach that derivative status does not turn on interoperability, even exclusive interoperability, if the work doesn’t substantially incorporate the preexisting work’s copyrighted material.”

Original post, March 11, 2024

In modern technology, interoperability between programs is crucial to the usability of applications, user choice, and healthy competition. Today Mozilla has joined an amicus brief at the Ninth Circuit, to ensure that copyright law does not undermine the ability of developers to build interoperable software.

This amicus brief comes in the latest appeal in a multi-year courtroom saga between Oracle and Rimini Street. The sprawling litigation has lasted more than a decade and has already been up to the Supreme Court on a procedural question about court costs. Our amicus brief addresses a single issue: should the fact that a software program is built to be interoperable with another program be treated, on its own, as establishing copyright infringement?

We believe that most software developers would answer this question with: “Of course not!” But the district court found otherwise. The lower court concluded that even if Rimini’s software does not include any Oracle code, Rimini’s programs could be infringing derivative works simply “because they do not work with any other programs.” This is a mistake.

The classic example of a derivative work is something like a sequel to a book or movie. For example, The Empire Strikes Back is a derivative work of the original Star Wars movie. Our amicus brief explains that it makes no sense to apply this concept to software that is built to interoperate with another program. Not only that, interoperability of software promotes competition and user choice. It should be celebrated, not punished.

This case raises similar themes to another high profile software copyright case, Google v. Oracle, which considered whether it was copyright infringement to re-implement an API. Mozilla submitted an amicus brief there also, where we argued that copyright law should support interoperability. Fortunately, the Supreme Court reached the right conclusion and ruled that re-implementing an API was fair use. That ruling and other important fair use decisions would be undermined if a copyright plaintiff could use interoperability as evidence that software is an infringing derivative work.

In today’s brief Mozilla joins a broad coalition of advocates for openness and competition, including the Electronic Frontier Foundation, Creative Commons, Public Knowledge, iFixit, and the Digital Right to Repair Coalition. We hope the Ninth Circuit will fix the lower court’s mistake and hold that interoperability is not evidence of infringement.

The post Mozilla Joins Amicus Brief Supporting Software Interoperability appeared first on Open Policy & Advocacy.

The Mozilla Blog: A different take on AI safety: A research agenda from the Columbia Convening on AI openness and safety

On Nov. 19, 2024, Mozilla and Columbia University’s Institute of Global Politics held the Columbia Convening on AI Openness and Safety in San Francisco. The Convening, an official event on the road to the AI Action Summit to be held in France in February 2025, took place on the eve of the convening of the International Network of AI Safety Institutes. There, we brought together over 45 experts and practitioners in AI to advance practical approaches to AI safety that embody the values of openness, transparency, community-centeredness and pragmatism.

Prior to the event on Nov. 19, twelve of these experts formed our working group and collaborated over six weeks on a thorough, 40-page “backgrounder” document that helped frame and focus our in-person discussions and design tracks for participants to engage with throughout the Convening.

The Convening explored the intersection of Open Source AI and Safety, recognizing two key dynamics. First, while the open source AI ecosystem continues to gain unprecedented momentum among practitioners, it seeks more open and interoperable tools to ensure responsible and trustworthy AI deployments. Second, this community is approaching safety systems and tools differently, favoring open source values that are decentralized, pluralistic, culturally and linguistically diverse, and emphasizing transparency and auditability. Our discussions resulted in a concrete, collective and collaborative output: “A Research Agenda for a Different AI Safety,” which is organized around five working tracks.

We’re grateful to the French Government’s AI Action Summit for co-sponsoring our event as a critical milestone on the “Road to the AI Action Summit” in February, and to the French Minister for Artificial Intelligence who joined us to give closing remarks at the end of the day. 

In the coming months, we will publish the proceedings of the conference. In the meantime, a summarized readout of the discussions from the Convening is provided below.

[Photo: attendees at the Columbia Convening on AI Openness and Safety.]

Readout from the Convening

What’s missing from taxonomies of harm and safety definitions?

Participants grappled with the premise that there is no such thing as a universally ‘aligned’ or ‘safe’ model. We explored the ways that collective input can support better-functioning AI systems across use cases, help prevent harmful uses of AI systems, and further develop levers of accountability. Most AI safety challenges involve complex sociotechnical systems where critical information is distributed across stakeholders and key actors often have conflicts of interest, but participants noted that open and participatory approaches can help build trust and advance human agency amidst these interconnected and often exclusionary systems.

Participants examined limitations in existing taxonomies of harms and explored what notions of safety put forth by governments and big tech companies can fail to capture. Companies and developers often define AI-related harms narrowly for practical reasons, overlooking or de-emphasizing broader systemic and societal impacts on the path to product launches. The Convening’s discussions emphasized that safety cannot be adequately addressed without considering domain-specific contexts, use cases, assumptions, and stakeholders. From automated inequality in public benefits systems to algorithmic warfare, discussions highlighted how the safety conversations accompanying AI systems’ deployments can become too abstract and fail to center diverse voices and the individuals and communities who are actually harmed by AI systems. A key takeaway was to continue to ensure AI safety frameworks center human and environmental welfare, rather than predominantly corporate risk reduction. Participants also emphasized that we cannot credibly talk about AI safety without acknowledging the use of AI in warfare and critical systems, especially as there are present-day harms playing out in various parts of the world.

Drawing inspiration from other safety-critical fields like bioengineering, healthcare, and public health, and from lessons learned in the adjacent discipline of Trust and Safety, the workshop proposed targeted approaches to expand AI safety research. Recommendations included developing use-case-specific frameworks to identify relevant hazards, defining stricter accountability standards, and creating clearer mechanisms for redress of harms.

Safety tooling in open AI stacks

As the ecosystem of open source tools for AI safety continues to grow, developers need better ways to navigate it. Participants mapped current technical interventions and related tooling, and helped identify gaps to be filled for safer system deployments. We discussed the need for reliable safety tools, especially as post-training and reinforcement learning techniques continue to evolve. Participants noted that high deployment costs, lack of safety tooling and methods expertise, and fragmented benchmarks can also hinder safety progress in the open AI space. Resources envisioned included dynamic, standardized evaluations, ensemble evaluations, and readily available open datasets that could help ensure that safety tools and infrastructure remain relevant, useful, and accessible for developers. A shared aspiration emerged: to expand access to AI evaluations while also building trust through transparency and open-source practices.

Regulatory and incentive structures also featured prominently, as participants emphasized the need for clearer guidelines, policies, and cross-sector alignment on safety standards. The conversation noted that startups and larger corporations often approach AI safety differently due to contrasting risk exposures and resourcing realities, yet both groups need effective monitoring tools and ecosystem support. The participants explored how insufficient taxonomical standards, lack of tooling for data collection, and haphazard assessment frameworks for AI systems can hinder progress and proposed collaborative efforts between governments, companies, and non-profits to foster a robust AI safety culture. Collectively, participants envisioned a future where AI safety systems compete on quality as much as AI models themselves.

The future of content safety classifiers

AI systems developers often have a hard time finding the right content safety classifier for their specific use case and modality, especially when developers need to also fulfill other requirements around desired model behaviors, latency, performance needs, and other considerations. Developers need a better approach for standardizing reporting about classifier efficacy, and for facilitating comparisons to best suit their needs. The current lack of an open and standardized evaluation mechanism across various types of content or languages can also lead to unknown performance issues, requiring developers to perform a series of time-consuming evaluations themselves — adding additional friction to incorporating safety practices into their AI use cases.
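To make that evaluation burden concrete, here is a small, entirely hypothetical sketch (in Rust) of the kind of ad-hoc harness a developer ends up writing today: score a classifier’s predictions against a handful of labeled examples and report precision and recall per language. The keyword-matching “classifier,” the example texts, and the labels are all invented for illustration; a real evaluation would use an actual model and proper benchmark datasets.

```rust
use std::collections::HashMap;

/// A labeled example for a hypothetical content safety benchmark.
struct Example {
    lang: &'static str,
    text: &'static str,
    is_harmful: bool,
}

/// Stand-in for a real content safety classifier: flags text containing a
/// placeholder English keyword. Real classifiers are models, not keyword lists.
fn classify(text: &str) -> bool {
    text.contains("badword")
}

fn main() {
    let examples = [
        Example { lang: "en", text: "have a nice day", is_harmful: false },
        Example { lang: "en", text: "badword aimed at a group", is_harmful: true },
        Example { lang: "sw", text: "habari njema", is_harmful: false },
        Example { lang: "sw", text: "matusi makali sana", is_harmful: true },
    ];

    // Per-language counts of (true positives, false positives, false negatives).
    let mut counts: HashMap<&str, (u32, u32, u32)> = HashMap::new();
    for ex in &examples {
        let predicted = classify(ex.text);
        let entry = counts.entry(ex.lang).or_insert((0, 0, 0));
        match (predicted, ex.is_harmful) {
            (true, true) => entry.0 += 1,  // caught harmful content
            (true, false) => entry.1 += 1, // false alarm
            (false, true) => entry.2 += 1, // missed harmful content
            _ => {}
        }
    }

    for (lang, (tp, fp, fn_)) in &counts {
        let precision = *tp as f64 / (tp + fp).max(1) as f64;
        let recall = *tp as f64 / (tp + fn_).max(1) as f64;
        println!("{lang}: precision {precision:.2}, recall {recall:.2}");
    }
}
```

Even this toy run shows the problem: the English keyword filter catches nothing in the Swahili examples, exactly the kind of blind spot a standardized, multilingual evaluation would surface automatically.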

Participants charted a future roadmap for open safety systems based on open source content safety classifiers, defining key questions, estimating necessary resources, and articulating research agenda requirements while drawing insights from past and current classifier system deployments. We explored gaps in the content safety filtering ecosystem, considering both developer needs and future technological developments. Participants paid special attention to the challenges of combating child sexual abuse material and identifying other harmful content. We also noted the limiting factors and frequently Western-centric nature of current tools and datasets for this purpose, emphasizing the need for multilingual, flexible, and open-source solutions. Discussions also called for resources that are accessible to developers across diverse skill levels, such as a “cookbook” offering practical steps for implementing and evaluating classifiers based on specific safety priorities, including child safety and compliance with international regulations.

The workshop underscored the importance of inclusive data practices, urging a shift from rigid frameworks to adaptable systems that cater to various cultural and contextual needs and realities. Proposals included a central hub for open-source resources, best practices, and evaluation metrics, alongside tools for policymakers to develop feasible guidelines. Participants showed how AI innovation and safety could be advanced together, prioritizing a global approach to AI development that works in underrepresented languages and regions.

Agentic risk

With growing interest in “agentic applications,” participants discussed how to craft meaningful working definitions and mappings of the specific needs of AI-system developers in developing safe agentic systems. When considering agentic AI systems, many of the usual risk mitigation approaches for generative AI systems — such as content filtering or model tuning —  run into limitations. In particular, such approaches are often focused on non-agentic systems that only generate text or images, whereas agentic AI systems take real-world actions that carry potentially significant downstream consequences. For example, an agent might autonomously book travel, file pull requests on complex code bases, or even take arbitrary actions on the web, introducing new layers of safety complexity. Agent safety can present a fundamentally different challenge as agents perform actions that may appear benign on their own while potentially leading to unintended or harmful consequences when combined.

Discussions began with a foundational question: how much trust should humans place in agents capable of decision-making and action? Through case studies that included AI agents being used to select a babysitter and book a vacation, participants analyzed risks including privacy leaks, financial mismanagement, and misalignment of objectives. A clear distinction emerged between safety and reliability; while reliability errors in traditional AI might be inconveniences, errors in autonomous agents could cause more direct, tangible, and irreversible harm. Conversations highlighted the complexity of mitigating risks such as data misuse, systemic bias, and unanticipated agent interactions, underscoring the need for robust safeguards and frameworks.

Participants proposed actionable solutions focusing on building transparent systems, defining liability, and ensuring human oversight. Guardrails for both general-purpose and specialized agents, including context-sensitive human intervention thresholds and enhanced user preference elicitation, were also discussed. The group emphasized the importance of centralized safety standards and a taxonomy of agent actions to prevent misuse and ensure ethical behavior. With the increasing presence of AI agents in sectors like customer service, cybersecurity, and administration, Convening members stressed the urgency of this work.
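As one way to picture the “context-sensitive human intervention thresholds” mentioned above, here is a minimal, hypothetical sketch of a guardrail that lets an agent run reversible steps on its own but pauses for explicit human confirmation before anything irreversible, such as spending money. The action types, the policy, and the trip-booking scenario are invented for illustration; real agent frameworks would need far richer policies, logging, and taxonomies of actions.

```rust
/// Hypothetical actions an agent might request while booking a trip.
enum AgentAction {
    SearchFlights { query: String },
    HoldReservation { id: String },
    ChargeCard { amount_usd: f64 },
}

/// Illustrative policy: reversible actions run automatically, while
/// irreversible ones (here, any charge) require a human in the loop.
fn requires_human_approval(action: &AgentAction) -> bool {
    match action {
        AgentAction::SearchFlights { .. } => false,
        AgentAction::HoldReservation { .. } => false,
        AgentAction::ChargeCard { amount_usd } => *amount_usd > 0.0,
    }
}

fn main() {
    let planned = vec![
        AgentAction::SearchFlights { query: "SFO to CDG in February".to_string() },
        AgentAction::HoldReservation { id: "res-123".to_string() },
        AgentAction::ChargeCard { amount_usd: 842.50 },
    ];

    for action in &planned {
        let label = match action {
            AgentAction::SearchFlights { query } => format!("search flights ({query})"),
            AgentAction::HoldReservation { id } => format!("hold reservation {id}"),
            AgentAction::ChargeCard { amount_usd } => format!("charge card ${amount_usd:.2}"),
        };
        if requires_human_approval(action) {
            println!("PAUSE for human confirmation: {label}");
        } else {
            println!("auto-run: {label}");
        }
    }
}
```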

Participatory inputs

Participants examined how participatory inputs and democratic engagement can support safety tools and systems throughout development and deployment pipelines, making them more pluralistic and better adapted to specific communities and contexts. Key concepts included creating sustainable structures for data contribution, incentivizing safety in AI development, and integrating underrepresented voices, such as communities in the Global Majority. Participants highlighted the importance of dynamic models and annotation systems that balance intrinsic motivation with tangible rewards. The discussions also emphasized the need for common standards in data provenance, informed consent, and participatory research, while addressing global and local harms throughout AI systems’ lifecycles.

Actionable interventions such as fostering community-driven AI initiatives, improving tools for consent management, and creating adaptive evaluations to measure AI robustness were identified. The conversation called for focusing on democratizing data governance by involving public stakeholders and neglected communities, ensuring data transparency, and avoiding “golden paths” that favor select entities. The workshop also underscored the importance of regulatory frameworks, standardized metrics, and collaborative efforts for AI safety.

Additional discussion

Some participants discussed the tradeoffs and false narratives embedded in the conversations around open source AI and national security. A particular emphasis was placed on the present harms and risks from AI’s use in military applications, where participants stressed that these AI applications cannot be viewed solely as policy or national security issues, but must also be treated as technical issues, given key challenges and uncertainties around safety thresholds and system performance.

Conclusion

Overall, the Convening advanced discussions in a manner that showed that a pluralistic, collaborative approach to AI safety is not only possible, but also necessary. It showed that leading AI experts and practitioners can bring much needed perspectives to a debate dominated by large corporate and government actors, and demonstrated the importance of a broader range of expertise and incentives. This framing will help ground a more extensive report on AI safety that will follow from this Convening in the coming months.

We are immensely grateful to the participants in the Columbia Convening on AI Safety and Openness, as well as to our incredible facilitator Alix Dunn from Computer Says Maybe, who continues to support our community in finding alignment around important socio-technical topics at the intersection of AI and Openness.

The list of participants at the Columbia Convening is below; individuals with an asterisk were members of the working group.

  • Guillaume Avrin – National Coordinator for Artificial Intelligence, Direction Générale des Entreprises
  • Adrien Basdevant – Tech Lawyer, Entropy
  • Ayah Bdeir* – Senior Advisor, Mozilla
  • Brian Behlendorf – Chief AI Strategist, The Linux Foundation 
  • Stella Biderman – Executive Director, EleutherAI
  • Abeba Birhane – Adjunct assistant professor, Trinity College Dublin 
  • Rishi Bommasani – Society Lead, Stanford CRFM
  • Herbie Bradley – PhD Student, University of Cambridge
  • Joel Burke – Senior Policy Analyst, Mozilla 
  • Eli Chen – CTO & Co-Founder, Credo AI
  • Julia DeCook, PhD – Senior Policy Specialist, Mozilla 
  • Leon Derczynski – Principal research scientist, NVIDIA Corp & Associate professor, IT University of Copenhagen
  • Chris DiBona – Advisor, Unaffiliated
  • Jennifer Ding – Senior researcher, The Alan Turing Institute 
  • Bonaventure F. P. Dossou – PhD Student, McGill University/Mila Quebec AI Institute 
  • Alix Dunn – Facilitator, Computer Says Maybe 
  • Nouha Dziri* – Head of AI Safety, Allen Institute for AI 
  • Camille François* – Associate Professor, Columbia University’s School of International and Public Affairs
  • Krishna Gade – Founder & CEO, Fiddler AI 
  • Will Hawkins* – PM Lead for Responsible AI, Google DeepMind 
  • Ariel Herbert-Voss – Founder and CEO, RunSybil 
  • Sara Hooker – VP Research, Head of C4AI, Cohere
  • Yacine Jernite* – Head of ML and Society, HuggingFace 
  • Sayash Kapoor* – Ph.D. candidate, Princeton Center for Information Technology Policy
  • Heidy Khlaaf* – Chief AI Scientist, AI Now Institute 
  • Kevin Klyman – AI Policy Researcher, Stanford HAI 
  • David Krueger – Assistant Professor, University of Montreal / Mila 
  • Greg Lindahl – CTO, Common Crawl Foundation
  • Yifan Mai – Research Engineer, Stanford Center for Research on Foundation Models (CRFM)
  • Nik Marda* – Technical Lead, AI Governance, Mozilla
  • Peter Mattson – President, MLCommons
  • Huu Nguyen – Co-founder, Partnership Advocate, Ontocord.ai 
  • Mahesh Pasupuleti – Engineering Manager, Gen AI, Meta 
  • Marie Pellat* – Lead Applied Science & Safety, Mistral 
  • Ludovic Péran* – AI Product Manager
  • Deb Raji* – Mozilla Fellow 
  • Robert Reich – Senior Advisor, U.S. Artificial Intelligence Safety Institute
  • Sarah Schwettmann – Co-Founder, Transluce & Research Scientist, MIT
  • Mohamed El Amine Seddik – Lead Researcher, Technology Innovation Institute 
  • Juliet Shen – Product Lead, Columbia University SIPA
  • Divya Siddarth* – Co-Founder & Executive Director, Collective Intelligence Project
  • Aviya Skowron* – Head of Policy and Ethics, EleutherAI 
  • Dawn Song – Professor, Department of Electrical Engineering and Computer Science at UC Berkeley
  • Joseph Spisak* – Product Director, Generative AI @Meta 
  • Madhu Srikumar* – Head of AI Safety Governance, Partnership on AI
  • Victor Storchan – ML Engineer 
  • Mark Surman – President, Mozilla
  • Audrey Tang* – Cyber Ambassador-at-Large, Taiwan
  • Jen Weedon – Lecturer and Researcher, Columbia University 
  • Dave Willner – Fellow, Stanford University 
  • Amy Winecoff – Senior Technologist, Center for Democracy & Technology 

The post A different take on AI safety: A research agenda from the Columbia Convening on AI openness and safety appeared first on The Mozilla Blog.

The Mozilla Blog: Building trust through transparency: A deep dive into the Anonym Transparency Portal

Continuing our series on Anonym’s technology, this post focuses on the Transparency Portal, a critical tool designed to give our partners comprehensive visibility into the processes and algorithms that handle their data. As a reminder, Mozilla acquired Anonym over the summer of 2024, as a key pillar in its effort to raise the standards of privacy in the advertising industry. These privacy concerns are well documented, as described in the US Federal Trade Commission’s recent report. Separate from Mozilla surfaces like Firefox, which work to protect users from invasive data collection, Anonym is ad tech infrastructure that focuses on improving privacy measures for data commonly shared between advertisers and ad networks.

Anonym uses Trusted Execution Environments (TEEs), which provide additional security to users through attestation processes. As discussed in our last post, attestation guarantees that only approved code can be run. Anonym wanted our customers to be able to participate in this process without the burden of an overly complicated technical integration. For this reason, Anonym developed the Transparency Portal and a process we call binary review. The Transparency Portal gives partners comprehensive review capabilities and operational control over data processing.
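To make the binary review idea concrete, here is a minimal, hypothetical sketch — not Anonym’s actual implementation — of the kind of allowlist check an attested environment can enforce before running code: hash the binary and compare the digest against the set of digests partners have approved. The sha2 and hex crates, the file name, and the digest value are assumptions for illustration only.

```rust
use sha2::{Digest, Sha256};
use std::collections::HashSet;
use std::fs;

/// Hypothetical check: only binaries whose SHA-256 digest appears on the
/// partner-approved allowlist are allowed to process partner data.
fn is_approved(binary_path: &str, approved_digests: &HashSet<String>) -> std::io::Result<bool> {
    let bytes = fs::read(binary_path)?;
    let digest = hex::encode(Sha256::digest(&bytes));
    Ok(approved_digests.contains(&digest))
}

fn main() -> std::io::Result<()> {
    // Placeholder digest; in practice this set would hold the digests a
    // partner approved through the portal.
    let approved: HashSet<String> =
        ["0000000000000000000000000000000000000000000000000000000000000000".to_string()]
            .into_iter()
            .collect();

    if is_approved("lift_binary_2.21.0", &approved)? {
        println!("binary approved: proceeding");
    } else {
        println!("binary not on the allowlist: refusing to run");
    }
    Ok(())
}
```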

[Screenshot: the Anonym Transparency Portal homepage, with a sidebar for Home, Getting Started, Your Binaries, API Integrations, Job Activity, Anonym Public Key, Data Upload, Knowledge Base, and Account Settings, plus feature tiles for Knowledge Base, Binary Approval, System Overview, and Job Activity.]

The Transparency Portal: Core features

The Transparency Portal is designed to offer clear, actionable insights into how data is processed while enabling partners to maintain strict control over the use of their data. The platform’s key components include:

  • Knowledge Base
    Anonym provides comprehensive documentation of all aspects of our system, including: 1) the architecture and security practices of the trusted execution environment Anonym uses for data processing; 2) details on the methodology behind each application, such as our measurement solutions (Private Lift, Private Attribution); and 3) how Anonym uses differential privacy to help preserve the anonymity of individuals (a toy illustration of this idea appears after this list).
  • Binary Review and Approval
    Partners can review and approve each solution Anonym offers, a process we call Binary Review. On the Your Binaries tab, partners can download source code, inspect cryptographic metadata, and approve or revoke binaries (i.e. the code behind the solutions) as needed. This ensures that only vetted and authorized code can process partner data.
Screenshot of the "Your Binaries" page in the Anonym Transparency Portal. The header displays the Anonym logo, navigation links, and Graham Mudd's profile. The sidebar menu includes options like Home, Getting Started, Your Binaries, API Integrations, and more.  The main section features a detailed view of a binary labeled "Lift Binary," with a release date of 11/15/2024, 01:39 PM. It shows the binary state as "Active," version as 2.21.0, and approval state as "Approved." Below are sections with:      A binary description explaining how the solution measures the causal impact of advertising using experiments and private t-tests.     Release notes (version 2.21.0) detailing changes like adding seeded_random_generator.py, upgrading dependencies, converting timestamps, and making advertiser record ID deduplication optional.  An approval timestamp shows the binary was approved by graham@anonymdemo.com on 11/19/2024, 09:58 AM. There are buttons for "Revoke Approval" and a green "Approved" badge.  Below the detailed view, a list of other binaries is shown, including another "Lift Binary" and two "Attribution Binary" entries, with states, versions, and approval statuses displayed.
  • Code Comparison Tool
    For partners managing updates or changes to binaries, the portal includes a source code comparison tool. This tool provides line-by-line visibility into changes (aka ‘diffs’) between binary versions, highlighting additions, deletions, and modifications. Combined with detailed release notes, this feature enables partners to quickly assess updates and make informed decisions.
Screenshot of the "Lift Binary Diff" page in the Anonym Transparency Portal, comparing versions 2.20.0 and 2.21.0 of the Lift Binary. The header includes the Anonym logo, navigation links, and Graham Mudd's profile.  The page shows a binary description explaining how the solution measures the causal impact of advertising. Below it, a message indicates that only modified files are displayed in the diff, with unchanged files listed but omitted from the view.  The diff view compares the file src/main/pipelines/lib/formatter/data_cleaners.py between the two versions. Changes are highlighted:      Additions are shown in green, such as the introduction of enabled as a parameter in the __init__ method and new logic to check self.enabled.     Deletions are marked in red, such as lines without enabled logic in the earlier version.     Updates include added functionality for hashing columns and generating a new record ID with clearer documentation.  This structured side-by-side comparison makes it easy to identify code changes between the binary versions.
  • Job History Logs
    A complete log of all data processing jobs enables tracing of all data operations. Each entry details the algorithm used, the data processed, and the associated binary version, creating an immutable audit trail for operational oversight and to help support regulatory compliance.
  • Access and Role Management
    The portal allows partners to manage their internal access rights. Administrative tools enable the designation of users who can review documentation, approve binaries, and monitor processing activities.
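The differential privacy mentioned in the Knowledge Base item is easiest to see with a toy example. The sketch below is not Anonym’s method; it only illustrates the classic Laplace mechanism — adding calibrated noise to an aggregate count so that any single person’s presence or absence has a strictly bounded effect on the reported number. The rand crate, the epsilon value, and the count are assumptions.

```rust
use rand::Rng;

/// Draw one sample from a Laplace(0, scale) distribution via inverse transform.
fn laplace_noise(scale: f64, rng: &mut impl Rng) -> f64 {
    let u: f64 = rng.gen_range(-0.5..0.5);
    -scale * u.signum() * (1.0 - 2.0 * u.abs()).ln()
}

fn main() {
    // Suppose the true count of converting users in a measurement job is 1,042.
    let true_count = 1_042.0;

    // Laplace mechanism: for a counting query with sensitivity 1 and privacy
    // budget epsilon, the noise scale is sensitivity / epsilon.
    let epsilon = 1.0;
    let mut rng = rand::thread_rng();
    let noisy_count = true_count + laplace_noise(1.0 / epsilon, &mut rng);

    println!("reported (noisy) count: {noisy_count:.1}");
}
```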

Bridging security, transparency and control

We believe visibility and accountability are foundational requirements of any technology, and especially for systems that process consumer data, such as digital advertising. By integrating comprehensive review, approval, and audit capabilities, the Transparency Portal ensures that our partners have full visibility into how their data is used for advertising purposes while maintaining strict data security and helping to support compliance efforts.  

In our next post, we’ll delve into the role of encryption and secure data transfer in Anonym’s platform, explaining how these mechanisms work alongside the Transparency Portal and the TEE to protect sensitive data at every stage of processing.

The post Building trust through transparency: A deep dive into the Anonym Transparency Portal appeared first on The Mozilla Blog.

Mozilla Thunderbird: Open Source, Open Data: Visualizing Our Community with Bitergia

Thunderbird’s rich history comes with a complex community of contributors. We care deeply about them and want to support them in the best way possible. But how does a project effectively do just that? This article will cover a project and partnership we’ve had for most of a year with a company called Bitergia. It helps inform the Thunderbird team on the health of our community by gathering and organizing publicly available contribution data.


In order to better understand what our contributors need to be supported and successful, we sought the ability to gather and analyze data that would help us characterize contributions across several aspects of Thunderbird. We also needed data experts who understood open source communities to help us with this endeavor. From our relationship with Mozilla projects, we recalled a past partnership between Mozilla and Bitergia, which helped Mozilla achieve a similar goal. Given Bitergia’s fantastic previous work, we explored how Thunderbird could leverage their expertise to answer questions about our community. You can also read Bitergia’s complementary blog post on our partnership.

Thunderbird and Bitergia Join Forces

Thunderbird and Bitergia started comparing our data sources with their capabilities. We found a promising path forward on gathering data and presenting it in a consumable manner. The Bitergia platform could already gather information from some data sources that we needed, and we identified functionality that had to be added for some other sources. 

We now have contribution data sets gathered and organized to represent these key areas where the community is active:

  • Thunderbird Codebase Contributions – Most code changes take place in the Mercurial codebase with Phabricator as the code reviewing tool.  This Mercurial codebase is mirrored in GitHub which is more friendly and accessible to contributors. There are other important Thunderbird repositories in GitHub such as Thunderbird for Android, the developer documentation, the Thunderbird website, etc.
  • Bug Activity – Bugzilla is our issue tracker and an important piece of the contribution story.
  • Translations – Mozilla Pontoon is where users can submit translations for various languages.
  • User Support Forums – Thunderbird’s page on support.mozilla.org is where users can request support and provide answers to help other users.
  • Email List Discussions – Topicbox is where mailing lists exist for various areas of Thunderbird. Users and developers alike can watch for upcoming changes and participate in ongoing conversations.

Diving into the Dashboards

Once we identified the various data sets that made sense to visualize, Bitergia put together some dashboards for us. One of the key features that we liked about Bitergia’s solution is the interactive dashboard. Anyone can see the public dashboards, without even needing an account!

All of our dashboards can be found here: https://thunderbird.biterg.io/

All of the data gathered for our dashboards was already publicly available. Now it’s well organized for understanding too! Let’s take a deeper look at what this data represents and see what insights it gives us on our community’s health.

Thunderbird Codebase Contributions

As stated earlier, the code contributions happen on our Mercurial repository, via the Phabricator reviewing tool. However, the Bitergia dashboard gathers all of its data from GitHub: the Mercurial mirror plus our other GitHub repositories. You can see a complete list of the GitHub repositories that are considered at the bottom of the Git tab.

One of the most interesting things about the codebase contributions, across all of our GitHub repositories, is the breakdown of which organizations contribute. Naturally, most of the commits will come from people who are associated with Thunderbird or Mozilla. There are also many contributors who are not associated with any particular organization (the Unknown category).

One thing we hope to see, and will be watching for, is for the number of contributors outside of the Thunderbird and Mozilla organizations to increase over time. Once the Firefox and Thunderbird codebases migrate from Mercurial to git, this will likely attract new contributors and it will be interesting to see how those new contributions are spread across various organizations.

Another insightful dashboard is the graph that displays our incoming newcomers (seen from the Attracted Committers subtab). We can see that over the last year we’ve seen a steady increase in the number of people that have committed to our GitHub repositories for the first time. This is great news and a trend we hope to continue to observe!

Bug Activity

All codebases have bugs. Monitoring discovered and reported issues can help us determine not only the stability of the project itself, but also uncover who is contributing  their time to report the issues they’ve seen. Perhaps we can even run some developer-requested test cases that help us further solve the user’s issue. Bug reporting is incredibly important and valuable, so it is obviously an area we were interested in. You can view these relevant dashboards on the Bugzilla tab.

Translations

Many newcomers’ first contribution to an open source project is through translations. For the Firefox and Thunderbird projects, Pontoon is the translation management system, and you can find translation contribution information on the Pontoon tab.

Naturally, any area of the project will see an oscillating contribution pattern for several reasons, and translations are no different. If we look at the last five years of translation contribution data, there are several insights we can take away. It appears that the number of contributors drops off after an ESR release and increases in a few chunks in the months prior to the release of the next ESR. In other words, we know that historically translations tend to happen toward the end of the ESR development cycle. Given this trend, if we compare the 115 ESR cycle (which started in earnest around January 2023) to the recent 128 ESR cycle (which started around December 2023), we see far more new contributors, indicating a healthier contributor community in 128 than in 115.

User Support Forums

Thus far we have talked about various code contributions that usually come from developers, but users supporting users is also incredibly important. We aim to foster a community that happily helps one another when they can, so let’s take a look at what the activity on our user support forums looks like in the Support Forums tab.

For more context, the data range for these screenshots of the user support forum dashboards has been set to the last 2 years instead of just the last year.

The good news is that we are getting faster at providing the first response to new questions. The first response is often the most important because it helps set the tone of the conversation.

The bad news is that we are getting slower at actually solving new questions, i.e. marking them as “Solved.” In the graph below, we see that over the last two years a smaller percentage of our total questions have ended up marked as “Solved.”

The general takeaway is that we need help in answering user support questions. If you are a knowledgeable Thunderbird user, please consider helping out your fellow users when you can.

Email List Discussions

Many open source projects use public mailing lists that anyone can participate in, and Thunderbird is no different. We use Topicbox as our mailing list platform to manage several topic-specific lists. The Thunderbird Topicbox is where you can find information on planned changes to the UI and codebase, beta testing, announcements and more. To view the Topicbox contributor data dashboard, head over to the Topicbox tab.

With our dashboards, we can see the experience level of discussion participants. As you might expect, there are more seasoned participants in conversations. Thankfully, less experienced people feel comfortable enough to chime in as well. We want to foster these newer contributors to keep providing their valuable input in these discussions!

Takeaways

Having collated public contributor data has helped Thunderbird identify areas where we’re succeeding. It has also indicated areas that need improvement to best support our contributor community. Through this educational partnership with Bitergia, we will be seeking to lower the barriers to contribution and enhance the overall contribution experience.

If you are an active or potential contributor and have thoughts on specific ways we can best support you, please let us know in the comments. We value your input!

If you are a leader in an open source project and wish to gather similar data on your community, please contact Bitergia for an excellent partnership experience. Tell them that Thunderbird sent you!

The post Open Source, Open Data: Visualizing Our Community with Bitergia appeared first on The Thunderbird Blog.

Mozilla Open Policy & Advocacy Blog: Mozilla Welcomes the Bipartisan House Task Force Report on AI

On December 17, the bipartisan House AI Task Force, led by Representatives Jay Obernolte and Ted Lieu, along with a number of other technology policy leaders, released their long-awaited report on AI.

The House Task Force Report on Artificial Intelligence provides in-depth analysis and recommendations on a range of policy issues related to AI, including the use of AI in government agencies, data privacy, research and development, civil rights, and more. The report is the culmination of nearly a year’s worth of research and discussions between the Task Force and a broad range of stakeholders, including Nik Marda of Mozilla, who provided his insights to the Task Force on the benefits and risks of open-source and closed-source models. We thank the members of the House AI Task Force and their staff for their diligent work in developing a robust report and for their willingness to consult a broad range of stakeholders from across industry, civil society, and government. We look forward to working with the Task Force on next steps, and we hope to see legislation advanced to tackle these important issues.

See Mozilla’s December 17, 2024 statement below:

Mozilla commends the House AI Task Force for their diligent work over the past year and welcomes their report detailing AI policy findings and recommendations for Congress. We were grateful for the opportunity to engage with the Task Force throughout this process, and to contribute our perspective on our key priorities, including open source, protecting people from AI-related harms, and Public AI. It’s encouraging to see these critical topics addressed in the final report.

In particular, Mozilla agrees with the Task Force findings that there is insufficient evidence to justify the restriction of open source models, and that today’s open AI models actually “encourage innovation and competition.” This finding echoes NTIA’s July 2024 report which acknowledged the benefits of open models to promote AI innovation. We’re also gratified to see the report address other vital issues like data privacy as it pertains to AI, including the use of Privacy Enhancing Technologies (PETs). We’re pleased with the continued emphasis on making foundational progress towards Public AI as well, including recommendations to monitor the current National AI Research Resource Pilot in preparation for potentially scaling the program, which Mozilla hopes to see expanded, and investing in AI-related R&D and education.

In large part due to its great breadth and depth, the House AI Task Force report represents a much-needed step forward in the development of concrete AI policy legislation and will help inform the agenda for the next Congress. We look forward to continuing to work with AI leaders to advance meaningful AI legislation that promotes accountability, innovation, and competition.

The post Mozilla Welcomes the Bipartisan House Task Force Report on AI appeared first on Open Policy & Advocacy.

The Mozilla Blog: Proposed contractual remedies in United States v. Google threaten vital role of independent browsers

Giving people the ability to shape the internet and their experiences on it is at the heart of Mozilla’s manifesto. This includes empowering people to choose how they search.

On Nov. 20, the United States Department of Justice (DOJ) filed proposed remedies in the antitrust case against Google. The proposed judgment outlines the behavioral and structural remedies the government seeks in order to restore search engine competition.

Mozilla is a long-time champion of competition and an advocate for reforms that create a level playing field in digital markets. We recognize the DOJ’s efforts to improve search competition for U.S. consumers. It is important to understand, however, that the outcomes of this case will have impacts that go far beyond any one company or market. 

As written, the proposed remedies will force smaller and independent browsers like Firefox to fundamentally reexamine their entire operating model. By jeopardizing the revenue streams of critical browser competitors, these remedies risk unintentionally strengthening the positions of a handful of powerful players, and doing so without delivering meaningful improvements to search competition. And this isn’t just about impacting the future of one browser company — it’s about the future of the open and interoperable web. 

Firefox and search

Since the launch of Firefox 1.0 in 2004, we have shipped with a default search engine, thinking deeply about search and how to provide meaningful choice for people. This has always meant refusing any exclusivity; instead, we preinstall multiple search options and we make it easy for people to change their search engine — whether setting a general default or customizing it for individual searches.

We have always worked to provide easily accessible search alternatives alongside territory-specific options — an approach we continue today. For example, in 2005, our U.S. search options included Yahoo, eBay, Creative Commons and Amazon, alongside Google. 

Today, Firefox users in the U.S. can choose between Google, Bing, DuckDuckGo, Amazon, eBay and Wikipedia directly in the address bar. They can easily add other search engines and they can also benefit from Mozilla innovations, like Firefox Suggest.

For the past seven years, Google search has been the default in Firefox in the U.S. because it provides the best search experience for our users. We can say this because we have tried other search defaults and supported competitors in search: in 2014, we switched from Google to Yahoo in the U.S. as they sought to reinvigorate their search product. There were certainly business risks, but we felt the risk was worth it to further our mission of promoting a better internet ecosystem. However, that decision proved to be unsuccessful. 

Firefox users — who demonstrated a strong preference for having Google as the default search engine — did not find Yahoo’s product up to their expectations. When we renewed our search partnership in 2017, we did so with Google. We again made certain that the agreement was non-exclusive and allowed us to promote a range of search choices to people. 

The connection between browsers and search that existed in 2004 is just as important today. Independent browsers like Firefox remain a place where search engines can compete and users can choose freely between them. And the search revenue Firefox generates is used to advance our manifesto, through the work of the Mozilla Foundation and via our products — including Gecko, Mozilla’s browser engine. 

Browsers, browser engines and the open web

Since launching Firefox in 2004, Mozilla has pioneered groundbreaking technologies, championing open-source principles and setting critical standards in online security and privacy. We also created or contributed to many developments for the wider ecosystem, some of which (like Rust and Let’s Encrypt) have continued to flourish outside of Mozilla. Much of this is made possible by developing and maintaining the Gecko browser engine.

Browser engines (not to be confused with search engines) are little-known but they are the technology powering your web browser. They determine much of the speed and functionality of browsers, including many of the privacy and security properties.  

In 2013, there were five major browser engines. In 2024, due to the great expense and expertise needed to run a browser engine, there are only three left: Apple’s WebKit, Google’s Blink and Mozilla’s Gecko — which powers Firefox. 

Apple’s WebKit primarily runs on Apple devices, leaving Google and Mozilla as the main cross-platform browser engine developers. Even Microsoft, a company with a three trillion dollar market cap, abandoned its own browser engine in 2019. Today, its Edge browser is built on top of Google’s Blink engine.

[Figure caption: There are only three major browser engines left — Apple’s WebKit, Google’s Blink and Gecko from Mozilla. Apple’s WebKit mainly runs on Apple devices, making Gecko the only cross-platform challenger to Blink.]

Remedies in the U.S. v Google search case

So how do browser engines tie into the search litigation? A key concern centers on proposed contractual remedies put forward by the DOJ that could harm the ability of independent browsers to fund their operations. Such remedies risk inadvertently harming browser and browser engine competition without meaningfully advancing search engine competition. 

Firefox and other independent browsers represent a small proportion of U.S. search queries, but they play an outsized role in providing consumers with meaningful choices and protecting user privacy. These browsers are not just alternatives — they are critical champions of consumer interests and technological innovation.

Rather than a world where market share is moved from one trillion dollar tech company to another, we would like to see actions which will truly improve competition — and not sacrifice people’s privacy to achieve it. True change requires addressing the barriers to competition and facilitating a marketplace that promotes competition, innovation and consumer choice — in search engines, browsers, browser engines and beyond. 

We urge the court to consider remedies that achieve its goals without harming independent browsers, browser engines and ultimately without harming the web.

We’ll be sharing updates as this matter proceeds.

The post Proposed contractual remedies in United States v. Google threaten vital role of independent browsers appeared first on The Mozilla Blog.

This Week In Rust: This Week in Rust 578

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is cmd_lib, a library of command-line macros and utilities to write shell-script like tasks easily in Rust.
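For a quick taste of the style the crate enables, here is a minimal sketch based on its documented run_cmd! and run_fun! macros: run_cmd! executes a command, run_fun! captures its output, and $variables are interpolated into the command line. The directory path is just an example.

```rust
use cmd_lib::*;

fn main() -> CmdResult {
    let dir = "/tmp";
    // Run a command, propagating any failure as an error.
    run_cmd!(echo "listing $dir")?;
    // Capture a command's stdout into a String.
    let listing = run_fun!(ls $dir)?;
    println!("{listing}");
    Ok(())
}
```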

Thanks to Remo Senekowitsch for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR!

Updates from the Rust Project

437 pull requests were merged in the last week

Rust Compiler Performance Triage
Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
Language Team
Language Reference
Unsafe Code Guidelines
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-12-18 - 2025-01-15 🦀

Virtual
Asia
Europe
North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

She said yes!! (And so did I!)

Amos on Mastodon proving that Rustaceans do have a life outside of Rust. Congratulations, Amos!

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla Blog: How to get started on open-source development

Open-source technology isn’t just about building software — it’s about creating solutions collaboratively, making them freely available for anyone to use and adapt. This approach lowers barriers of access and allows solutions to be tailored to varying nuanced contexts rather than applying a copy-paste approach. 

I come from a family with a heavy engineering background. Both my parents are engineers, so I always knew I wanted to pursue an engineering-related career. My dad sparked my interest in tech when he let me tinker on his work laptop at a young age. That early exposure fueled my curiosity, leading me to study computer science at Strathmore University in Kenya.

After graduating, I joined Nairobi’s iHub — the city’s first innovation hub. That’s where I met the founders of Ushahidi and began volunteering with their organization. This was my introduction to open source, and it showed me how powerful community-driven projects can be.

If you’re curious about how to get started in open-source development, here’s what I’ve learned along the way.

What is open source, and why does it matter?

Open-source technology is especially powerful for creating inclusive solutions because it allows people to adapt them to specific needs. By making it freely available, it ensures that anyone can benefit, regardless of their circumstances. This adaptability ensures that the technology can be inclusive and relevant to different cultural, economic and social settings.

One major criticism of AI systems today is the lack of visibility into how they are built and the underlying data they are trained on, especially because AI systems perpetuate biases against disenfranchised communities. Building AI tools in open-source environments fosters trust and collaborative improvement. This ensures that the tools are transparent, accessible and relevant, reducing the risk of further alienating people and communities that have historically been left out. As I see it, this practice fosters innovation by making it possible to design tools that serve everyone better.

Finding the right project

Be open to exploration. Join community channels, observe discussions and read user feedback. Don’t be afraid to ask questions — curiosity is welcomed in open-source communities. Even small contributions like fixing minor bugs or improving documentation are highly valued and can build your confidence to take on more complex tasks.

To find projects aligned with your values, immerse yourself in the right spaces. It starts with attending physical or virtual meetings focused on ethical AI, data equity or humanitarian tech. Events like All Things Open, FOSS4G and the Creative Commons Summit are excellent starting points. I also recommend following organizations like Mozilla, Datakind and Ushahidi that focus on these issues. Engaging in these communities will help you identify opportunities that align with your values and skills.

The role of community in open source

There’s no open source without community. Collaboration, inclusivity and shared ownership are essential to every successful project. For example, Ushahidi’s global community of users and contributors has driven innovations that benefit people in more than 160 countries. One of our core features, the custom forms functionality, was built by a community member and integrated into the main platform for others to use.

People are more likely to stay engaged when they feel part of something larger than a technical endeavor — when they know their work is helping to create tangible, positive change. It’s this sense of connection and shared responsibility that makes open source so powerful. To make communities more inclusive, we must actively welcome diverse voices, use inclusive language and create mentorship opportunities for underrepresented contributors.

[Photo caption: Angela Lungati is a technologist, community builder and executive director of Ushahidi, a global nonprofit that helps communities share information to drive change.]

Learning by doing

Open-source communities are fantastic environments for learning. In these spaces, you don’t just read about issues like AI bias or data equity — you actively work on them. Contributing to projects allows you to experiment with code, test ideas and get feedback from people with different perspectives and skill sets. This hands-on experience deepens your understanding of how technology impacts various communities and helps you develop solutions that are equitable and inclusive.

Final advice

Don’t overthink it. Start with small contributions, ask questions and immerse yourself in the community. Open source is about collaboration and persistence. The more you engage, the more you’ll learn, and over time, your contributions will grow in impact. Open source is a chance to make a real difference — to shape tools that reflect the needs and values of people everywhere. 


Angela Lungati is a technologist, community builder and executive director of Ushahidi, a global nonprofit that helps communities share information to drive change. She also serves on the boards of Creative Commons and Humanitarian OpenStreetMap Team. Angela cofounded AkiraChix and champions using technology to empower marginalized groups. A Rise25 honoree, she recently delivered the keynote at MozFest House Zambia. She also shared her views on inclusive AI in an op-ed for Context by the Thomson Reuters Foundation. You can read it here.

The post How to get started on open-source development appeared first on The Mozilla Blog.

The Mozilla Blog: Mozilla partners with Ecosia for a better web

Illustration of overlapping browser windows with Ecosia's logo, a tree graphic, Firefox's logo, and the text "Together for a better web," alongside a search bar with a green cursor.

Your tech choices matter more than ever. That’s why at Mozilla, we believe in empowering users to make informed decisions that align with their values. In that spirit, we’re excited to announce we’re growing our partnership with Ecosia, a search engine that prioritizes sustainability and social impact. After Germany, we are now offering the option to choose the climate-first search engine in Austria, Belgium, Italy, the Netherlands, Spain, Sweden and Switzerland.

Did you know you can pick your preferred search engine right from the Firefox URL bar? Whether you prioritize privacy, climate protection, or simply want a search experience tailored to your preferences, we’ve got you covered.

Ecosia goes beyond data protection by addressing environmental concerns. Every search made through the search engine contributes to tree-planting projects worldwide, helping to combat deforestation and regenerate the planet. Ecosia has planted over 215 million trees across the planet’s biodiversity hotspots, making a tangible difference in the fight against climate change. Just like Mozilla, they are committed to creating a better internet, and world, for everyone.

Together, Mozilla, Firefox and Ecosia are contributing to a web that is more open and inclusive, but above all — one where you can make an informed choice about what tech you use and why. Your tech choices make a difference.

As Firefox and Mozilla continue to champion user empowerment and innovation, we invite you to join us in shaping a web that makes the world better. Together, let’s make a positive impact — one search at a time.

Get Firefox

Get the browser that protects what’s important

The post Mozilla partners with Ecosia for a better web appeared first on The Mozilla Blog.

About:Community: Contributor spotlight – Mayank Bansal

In the open source world, there’s a saying that “given enough eyeballs, all bugs are shallow.” At Bugzilla, we’ve taken this principle to heart with our belief that “bugs are cheap” — a philosophy that transforms challenges into opportunities for collaborative problem-solving.

In this post, you will learn more about Mayank Bansal, whose journey embodies the true spirit of open source collaboration. For over a decade, Mayank has contributed across multiple aspects of Firefox development, including web performance. He’s known for his exceptional skill in identifying the culprits of performance regressions, and he has even outpaced our automated alerting system! He was also recently appointed as the first official Community Performance Sheriff. Read on to uncover his insider tips and best practices for meaningful open source contributions.

Q: You’ve been a part of the Mozilla community since 2012. What initially inspired you to start contributing?

I have always been interested in software performance. I started using Firefox in 2009. Sometime in 2010-2011, Firefox announced it was working on graphics hardware acceleration, which was a novel technique then. That really piqued my interest. A developer who worked on the graphics backend for Firefox wrote a blog about the progress. I tested the Firefox beta builds on some graphic intensive websites and posted my findings on their blog. The developer responded to my comments and then filed a bug on Bugzilla to track it.

That was the moment when I realized that Mozilla is not your average faceless technology company. It had real developers, fixing real issues faced by real users.

I created my Bugzilla ID and commented on the bug the dev had filed. The devs responded there and fixed the bug. I could immediately test and perceive the improvement on the previously problematic webpage.

That was the positive feedback loop that got me hooked – I file performance bugs, the devs fix them (and thank me for filing the bug!)

Q: You’ve contributed across so many components: from JavaScript and Graphics to WebGPU and the DOM. How do you manage to stay on top of such a wide range of areas?

There are a few things I do:

  1. I go through all the bugs filed in the last 24 hours in the Core component, which gives me a sense of issues reported by other Firefox users, and bugs filed by the Mozilla devs to track work on either a new feature or performance improvement.
  2. I read through the bug review comments, which gives me an idea if a particular patch is expected to improve performance.
  3. I go through the try pushes from the developers, which gives me an idea of upcoming patches and changes.
  4. I have joined some of the chat rooms on Matrix that Mozilla developers use as team chats. These are generally open to the public (for responsible participation).

A good place to start is cc’ing yourself on large meta bugs (which are like placeholders for other bugs). As new bugs get filed, they will get associated with the meta bug, and you will get an email notification. And then you can go through the new bug and follow that too.

Q: How do you approach bug triaging, and what are some of the challenges you face?

From the description of the bug by the reporter, I try to guess the component where it would sit (DOM, Style, Graphics, JS, etc.). Then I see if I can reproduce that bug. If I can, I will immediately perform a bisection using the wonderful mozregression tool. If I cannot reproduce it, I try to put it in the right component and cc a developer who works in that component.  All bugs get triaged as part of Mozilla’s regular process. But cc’ing a developer does cut short some of the lag associated with any process.

I have also been testing the fuzzing bugs created by Mozilla’s fuzzing team. Wherever I can reproduce a crash from the fuzzing testcase, I will perform a bisection and inform the developer. Again, all fuzz bugs get auto-bisected and triaged. But doing it manually cuts some of the time lag.

I also regularly test old bugs and close them if the original issue is fixed now. It feels right to close an old bug and declutter Bugzilla.

Challenges I face are when the details in the bug are not sufficient to reproduce, or when the issue is platform/setting specific, or when the testcase is private and the reporter cannot share it. I will ask the reporter for extra information that will help the developers, and most of the time the reporters respond!

Q: You’ve been known to find the culprit of performance regressions faster than the automated alerting system. What strategies do you use to efficiently track down regressions?

I use AWFY to track performance of Firefox on important metrics and benchmarks. This is a real-time dashboard maintained by the Perf-sheriffing team. As soon as a regression lands, the numbers change on the dashboard. The automated alert system needs a minimum of 12 datapoints before an alert is generated, which may take a few hours. In this interval, I identify the regression visually, zero in on the potential range of bugs that could have caused the regression, and then based on my understanding identify a bug that caused the regression. I can then confirm my suspicion by triggering a build with only that bug and running the benchmark that regressed.

Note that the "bisect, build, run benchmark, create graph, generate perf alert" process is fully automated. I only need to press the right buttons, which makes my life very easy!

Q: With over a decade of contributions, how do you see Mozilla’s tools and technologies evolving, and what role do you hope to play in that future?

Tooling continues to evolve in Mozilla. For example, when I started, there wasn’t much source-code analysis. Now, multiple linters are run on each commit to the main repository. Mozilla as a company puts users at the forefront – and those users also include its internal development teams! There is a continuous push to improve tooling to make the developers more efficient and spend less time in mundane activities. The tooling around performance/regression monitoring, Crash Reporting, Telemetry, Build, Fuzzing is ever evolving. In the last few years, tooling around the use of machine learning has also increased.

I see my role as complementary to tools – filling gaps where the system cannot easily make a judgement, or connecting seemingly different bugs with little context.

Q: Through your testing, you’ve discovered bugs on the web where Firefox underperforms compared to other browsers. Can you share how you approach this type of testing?

I follow all the graphics related bugs. As soon as something lands in Nightly, I immediately start stress-testing websites. I also go to sites like Codepen.io and test literally hundreds of relevant demos.  Check out some of the bugs I filed for WebGPU and Canvas. With graphics, the issues usually are mis-rendering or crashes.

With JavaScript, the issues I found tend to be where we are slower than other browsers, or where the JavaScript engine (SpiderMonkey) has some hidden quadratic behaviour. Crashes in JavaScript are mostly from fuzzing testcases.

I also modify existing testcases or Codepen demos to make them intentionally unrealistic for the browser to process and then report issues. Kudos to the Mozilla devs who try to fix as much as they can and are always happy to analyse my testcases.

In general, if anything feels slow, file a bug. If any website looks weird, file a bug. The tenet in Bugzilla is “Bugs are cheap”.

Q: What advice would you give to new contributors who want to dive in?

Start with following bugs, reading Planet Mozilla, using Firefox Nightly, and installing the Firefox Profiler. Profiler is like an X-ray – you immediately get insight into what is slow in Firefox and where exactly. I spend a lot of time profiling webpages, demos, testcases. I profile anything and everything I find.

Q: What keeps you motivated to continue to contribute to Mozilla?

Couple of motivators:  The openness and transparency of development, extremely responsive and friendly developers, feeling of contributing to a piece of software that I use day in and out, belief that Mozilla is important to the openness and democratization of the Web, and finally that my bugs get analysed and fixed.

Q: Outside of your work on Mozilla, what do you enjoy doing in your free time?

Outside of Mozilla, I work within the Investment Banking industry as a transformation consultant in areas like risk, regulatory reporting, and capital markets.

In my free time, I like to read, cook, watch Netflix, and go on long drives with my friends and family.


Interested in contributing to performance tools like Mayank? Check out our wiki to learn more.

The Rust Programming Language Blog: November project goals update

The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as Flagship Goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Async closure stabilization has been approved, though the stabilization has not yet landed! The lang team ultimately opted to stabilize the trait name AsyncFn rather than the keyword-based async Fn syntax that was originally proposed. This decision came after discussion on the Flavors RFC, which made it clear we had not reached a consensus on whether the async Trait keyword syntax would be used more generally or not. Given that, the team felt that the AsyncFn syntax was a fine "next step". If we do ultimately adopt some form of async Trait keyword syntax, then AsyncFn can become a trait alias.
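As a rough sketch of what this enables once the stabilization lands (the helper call_twice and the numbers below are purely illustrative, not part of the update), an AsyncFn bound accepts an async closure much like an ordinary Fn bound accepts a closure:

    // A minimal sketch assuming the stabilized AsyncFn trait bound described
    // above; everything named here is illustrative.
    async fn call_twice<F>(f: F) -> (u32, u32)
    where
        F: AsyncFn(u32) -> u32,
    {
        // Each call returns a future that borrows `f`; awaiting immediately
        // keeps the borrows from overlapping.
        (f(1).await, f(2).await)
    }

    async fn demo() -> (u32, u32) {
        // Async closures capture their environment like regular closures do.
        call_twice(async |x| x + 1).await
    }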

Regarding return-type notation, an extension covering Self::foo(..): Send has landed, and we landed #132047, which fixes a known ICE. The stabilization PR is now unblocked.

No major progress towards async drop reviews or team reorganization.

This month saw steady progress on our checklist. dingxiangfei2009's PR renaming derive(SmartPointer) to derive(CoercePointee) was merged, and he began the work to port the RFL codebase to use the new name. Alice Ryhl opened RFC #3716 proposing a way to manage compiler flags that alter the ABI, and discussion (and some implementation work) has ensued. We also landed PR #119364 making target blocks in asm-goto safe by default; this was based directly on experience from RFL, which showed that safe-by-default would be more useful. We are still working to finalize another extension to asm-goto that arose from RFL requirements, allowing const to support embedded pointers. Finally, we prepared reference PR #1610 describing the change to permit Pointers to Statics in Constants that was stabilized last month.

Rust 2024 has now entered the nightly beta and is expected to stabilize as part of Rust 1.85 on 2025-02-20. It has a great many improvements that make the language more consistent and ergonomic, that further our relentless commitment to safety, and that will open the door to long-awaited features such as gen blocks, let chains, and the never type !. For more on the changes, see the nightly Edition Guide. The call for testing blog post contains more information and instructions on how you can try it yourself.
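As a hedged illustration of one of those features, this is roughly what a let chain looks like; the feature was not yet stable when this update was written, and the function below is invented purely for illustration:

    // A sketch of a let chain on edition 2024; not yet stable at the time of
    // this update, and the function here is illustrative only.
    fn first_small_even(values: &[Option<i32>]) -> Option<i32> {
        for value in values {
            // Combining `if let` with `&&` in one condition is the let-chain
            // form that previously required nested `if` statements.
            if let Some(n) = *value
                && n % 2 == 0
                && n < 100
            {
                return Some(n);
            }
        }
        None
    }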

Goals with updates

  • min_generic_const_args now exists as a feature gate. It has no functionality yet, only some gated refactorings, but it shouldn't be long before actual functionality lands behind it.
  • The refactoring to remove all the eval_x methods on ty::Const has been completed, making it possible to correctly implement normalization for constants.
  • Posted the October update.
  • Created more automated infrastructure to prepare the October update, making use of an LLM to summarize updates into one or two sentences for a concise table.
  • Support for cargo manifest linting is now merged, making it possible to catch breakage caused by manifest (Cargo.toml) changes, not just source code changes. An example of such breakage is the removal of a package feature: any crates that enabled the removed feature will no longer build.
  • Partial schema design and implementation of type information in lints, enabling the creation of breaking-change lints and improving diagnostic quality for a subset of type-related breaking changes.
  • Resolved multi-team questions that were blocking cross-crate checking, with the compiler team MCP merged and rustdoc improvements discussed and agreed upon.
  • The way const traits are desugared was completely restructured, making the design easier to understand and more robust against current unit tests.
  • Significant development and cleanup for the feature has been done, with several pull requests merged and two still open, bringing the feature closer to being able to dogfood on the standard library and closer to stabilization.
  • @joshtriplett opened https://github.com/rust-lang/rfcs/pull/3680. The @rust-lang/lang team has not yet truly discussed or reached a decision on that RFC.
  • @spastorino began implementation work on a prototype.
  • The sandboxed build scripts exploration is complete. We are unlikely to continue this work next year, but the research may be useful in other areas, such as the possible addition of POSIX process support to WASI or a declarative system dependency configuration in Cargo.
  • The re-design of the autodiff middle/backend was implemented, reducing the remaining LoC to be upstreamed from 2.5k to 1.1k, split into two PRs (1 and 2), which received initial feedback and are expected to land in early December.
  • The preprint of the first paper utilizing std::autodiff is available on Arxiv, with code available at ChemAI-Lab/molpipx, showcasing significantly faster compilation times in Rust compared to JAX.
  • The core data structures of PubGrub have been published as a separate version-ranges crate, enabling multiple projects to share this core abstraction and benefit from improvements without waiting for the rest of the project.
  • This is one of many steps required to publish a new 0.3.0 version of the PubGrub crate.
  • Rustdoc will now show type signatures in the search results page, and the boxing transform behaves more like Hoogle's does.
  • Improvements to matching behavior have been made to fit user expectations.
  • We stabilized -Znext-solver=coherence again in https://github.com/rust-lang/rust/pull/130654. It's looking like the stabilization will actually go through this time.
  • We're currently refactoring the way the current "typing mode" is tracked, working to fix trait-system-refactoring#106. An FCP was started to clean up the way we merge candidates when proving trait goals.
  • rust-lang/rust#125116 has been merged, marking half of the goal as formally completed.
  • Discussions on using cargo cache on CI are beginning to take form.
  • The results of rust-lang/rust#125116 may be contested: the impact may not be as large as expected, even on Clippy.
  • We've been experimenting with Clippy using rustc_driver as a static library, instead of dynamic linking. This would give us both a way to check the performance impact of rustc_driver as a shared library, and a way to profile Clippy without filtering between dl_* calls.
  • The never patterns RFC was posted.
  • Feedback on the RFC suggests that the question of "which arms can be omitted" isn't as orthogonal as hoped, so the focus will switch to that.
  • The PR https://github.com/rust-lang/crates.io/pull/9423 has been merged.
  • Work is ongoing on the frontend feature.
  • Amanda's EuroRust talk on polonius from last month is also now available on YouTube.
  • Implementation work continues, mostly on a branch. Major developments include a new debugger which has accelerated progress. There are about 70 test failures left to be analyzed.
  • rust-lang/cargo#14670 and rust-lang/cargo#14749 have been posted and merged.
  • rust-lang/cargo#14792 has been posted.
  • Still in the process of determining the cause of the deadlock through local testing and compiler code analysis.
  • Help wanted: Try to reproduce deadlocks described in the issue list.
  • We decided to close this goal as we have not been making steady progress. We are evaluating what to propose for the 2025h1 round of goals.

Goals without updates

The following goals have not received updates in the last month:

Cameron Kaiser: CHRP removal shouldn't affect Linux Power Macs

A recent patch removed support for the PowerPC Common Hardware Reference Platform from the Linux kernel. [UPDATE: Looks like this has been retracted.] However, Power Macs, even New World systems, were never "pure" CHRP, and there were very few true CHRP systems ever made (Amiga users may encounter the Pegasos and Pegasos II, but few others existed, even from IBM). While Mac OS 8 had some support for CHRP, New World Macs are a combination of CHRP and PReP (the earlier standard), and the patch specifically states that it should not regress Apple hardware. That said, if you're not running MacOS or Mac OS X, you may be better served by one of the BSDs — I always recommend NetBSD, my personal preference — or maybe even think about MorphOS, if you're willing to buy a license and have supported hardware.

Frederik Braun: Home assistant can not be secured for internet access

The Goal: Smart Heating Control

Home automation is a cool toy but also allows my household to be more energy efficient: My aim was to configure my home's heating to switch off when my family is away and turn back on when we return. This is achieved with home …

Don Marti: web development (and related) links

When IBM Built a War Room for Executives Engelbart’s Mother of All Demos showed how advanced computing could create a shared, collaborative environment of allied individuals, all direct users of the same system, befitting of a laboratory of computer enthusiasts in Menlo Park, Calif. Dunlop’s Executive Terminal demo showed how many of these same advanced technologies could be directed along another path, that of a strictly hierarchical organization, highly attuned to rank and defined roles and specialties. (Related: What Was The ‘Dowding System’?, CIC [Combat Information Center] Yesterday and Today. A lot of people in decision-making roles in 1960s corporations were WWII veterans.)

“Rules” that terminal programs follow Programs behave surprisingly consistently.

Pluralistic: Tech’s benevolent-dictator-for-life to authoritarian pipeline (10 Dec 2024) [I]f progressives in your circle never bothered you about your commercial affairs, perhaps that’s because those affairs didn’t matter when you were grinding out code in your hacker house, but they matter a lot now that you have millions of users and thousands of employees. (There is also a long established connection between the direct mail/database/surveillance marketing business and cultural conservative politics—the more that the tech industry focuses on surveillance advertising, the more that the political decisions of tech employers feel unfamiliar and adversarial to employees whose assumptions weren’t shaped by the culture of direct marketing/right-wing organizations.)

Nodriver: A Game-Changer in Web Automation Despite the existence of multiple plugins like puppeteer-stealth, rebrowser, real-browser and many more, they have been quite detectable by WAFs like Cloudflare, Imperva, and Datadome….Nodriver takes a different approach by getting in at the framework level itself. By minimizing the affected footprint and communicating directly over the Chrome Devtool Protocol itself, Nodriver leaves very little marks of its presence, if any at all. A side effect of this is that Nodriver is also one of the fastest scraping frameworks available. (The scraper bot will always get through?)

One Tiny Mod Makes A Cheap Mic Sound A Lot Like A Neumann - Aftermath A tiny, easy to solder mod discovered on forums makes the AKG Perception sound much closer to the legendary Neumann U 87.

“Modern Work Fucking Sucks.” Your company doesn’t just use one app; it uses all of them. Slack for chatting, Zoom for meetings, Notion for brainstorming, Trello for project tracking, Asana for workflows, and Jira for… something vaguely technical that no one fully understands. The end result isn’t streamlined productivity, it’s a Byzantine ecosystem of software where every app exists to talk to every other app while you stand in the middle, trying to make sense of the chaos. (Adam Smith would facepalm. Specialization of labor is a thing, especially for administrative and organizational tasks. Remember the ideal software development team in The Mythical Man-Month had two secretaries and a program clerk? I guess the good news here is that Simple Sabotage for the 21st Century is almost undetectable in the presence of normal IT friction.)

Consumer Solar Surge: Pakistan Shows you Don’t Need Government Programs to Green the Grid While no one was looking, the Pakistani public took matters into their own hands, adding 17 gigawatts of solar power this year. These installations are mostly in the form of Chinese panels for rooftop or ground level solar in towns and villages. (Yes, the grid power generally goes off when it’s sunny, and yes, there are a lot of people who are good at electrical work and in importing stuff from China.)

Whither CockroachDB? and RFD 508: what happens when an open-source dependency changes license?

Kill Oracle’s ‘JavaScript’ trademark, Deno asks USPTO (If this works, then what happens to twitter and tweet?)

What To Use Instead of PGP This section contains specific tools to solve the same problems that PGP tries to solve, but better.

Smarter than ‘Ctrl+F’: Linking Directly to Web Page Content Text fragments are a powerful feature of the modern web platform that allows for precise linking to specific text within a web page without the need to add an anchor! (Related: Text fragments on MDN)

PAAPI Could Be As Effective For Retargeting As Third-Parties Cookies, Study Finds (The headline doesn’t include the interesting math here. In-browser ad auctions are 81.8% as effective as old-fashioned cookie tracking in conversions per dollar, but 49.8% as effective in conversions per ad. So if you multiply it out with the units and cancel conversions, dollars per ad comes out to about 61%, which is only a little above where you get with no tracking at all, and the real-world privacy risks and computing resource costs are higher. Stop putting advertising features in web browsers) Related: The Kids Aren’t Playing In The Privacy Sandbox | AdExchanger

Mozilla Addons Blog: Developer Spotlight: Adaptive Tab Bar Color

A few years ago software developer Yixin Wang (aka Eason) decided he wanted to “de-Google” his digital life. After switching from Chrome to Firefox, Eason created macOS Monterey Safari Dark theme to mimic the look of Safari while experimenting with themes.

“During this process,” Eason explains, “I discovered that Firefox’s theme colors can be changed programmatically. That’s when it struck me — I could make Firefox dynamically adapt its theme color based on the web page it’s displaying, imitating Safari’s tab bar tinting behavior.”

This revelation led Eason to develop Adaptive Tab Bar Color, an extension that dynamically changes the color of Firefox’s tab bar to match the look of any website.

Upcoming v2.2 will feature a revamped Options page with modern HTML and CSS for a cleaner design. Users will also gain the ability to set a minimum contrast ratio for better UI readability.

While the concept may be simple, Adaptive Tab Bar Color’s development presented unique challenges. Eason understands that users expect his extension to seamlessly integrate colors of any web page they visit, but there are often unforeseeable edge cases. “What happens if a user always prefers dark mode, but the page has a bright color palette?” Eason wonders. “Or if a web page specifies a theme color that’s purely branding related and unrelated to content? What about pages with transparent backgrounds? Balancing these nuances to ensure a consistent and visually appealing experience has been both challenging and rewarding.”

Creating a cool extension like Adaptive Tab Bar Color can lead to unexpected benefits. After Eason put it on his resume, job recruiters came calling. This led to “… an incredible opportunity to write my Bachelor thesis at a company I’d always dreamed of working for. I’m so grateful for the support and enthusiasm of the Firefox community — it’s been an amazing journey.”


Do you have an intriguing extension development story? Do tell! Maybe your story should appear on this blog. Contact us at amo-featured [at] mozilla [dot] org and let us know a bit about your extension development journey.

The post Developer Spotlight: Adaptive Tab Bar Color appeared first on Mozilla Add-ons Community Blog.

Firefox Developer Experience: Firefox DevTools Newsletter — 133

I’m writing these lines on a high-speed train to Paris, where the French Mozilla employees are gathering today to celebrate the end of the year. As always, I’m a bit late writing this post (Firefox 133 was released a couple of weeks ago already). Since this is my last day before going on holiday, I hope you’ll be fine with a bullet-point list of the notable things that happened in this version.

Firefox being an open source project, we are grateful to get contributions from people outside of Mozilla, like Abhijeet Chawla, who’s helping us get rid of deprecated React lifecycle methods (#1810429, #1810480, #1810482, #1810483, #1810485, #1810486). They also migrated some of our docs’ ASCII diagrams to MermaidJS so they’re easier to maintain (#1855165, #1855168).

Want to help? DevTools are written in HTML, CSS and JS, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.


  • We made opening files in the Debugger much faster (up to 60% faster on very large files!) by delaying some computation we were doing to retrieve information about the script (#1919570). Those computations are now done only when the Debugger pauses, so you only pay the performance cost when it would actually be useful to you
  • Still on the performance side, console API calls are now 5% faster thanks to some refactoring (#1921175)
  • If you wanted to debug or see console messages of WebExtension content scripts, you had to go to the Settings panel and toggle the “Enable browser chrome and add-on debugging toolboxes” checkbox. This was a bit cryptic, so we exposed a new “Show content script” setting right in the Debugger Sources panel for easier access (#1698068)
  • Since we’re talking about the Debugger, we improved accessibility by making the Breakpoints panel fully functional using only the keyboard (#1870062)
  • We fixed an issue that could make the Debugger unusable (#1921571)
  • Some of the work we did in the inspector introduced a regression which could prevent editing an element tag when double-clicking on it (#1925913)

And that’s it for this month, and this year. Thank you for reading these updates and using our tools; see you at the beginning of 2025 for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 133 release:

The Mozilla Blog: Mozilla Builders: Celebrating community-driven innovation in AI

This year, we celebrated a major milestone: the first Mozilla Builders demo day! More than just a tech event, it was a celebration of creativity, community and bold thinking. With nearly 200 applicants from more than 40 countries, 14 projects were selected for the Builders accelerator, showcasing the diversity and talent shaping the future of AI. Their presentations at demo day demonstrated their innovative visions and impactful ideas. The projects on display weren’t just about what’s next in AI; they showed us what’s possible when people come together to create technology that truly works for everyone – inclusive, responsible and built with trust at its core.

Mozilla’s approach to innovation has always focused on giving people more agency in navigating the digital world. From standing up to tech monopolies to empowering developers and everyday users, to building in public, learning through collaboration, and iterating in community, we’ve consistently prioritized openness, user choice, and community. Now, as we navigate a new era of technological disruption, we aim to bring those same values to AI.

Mozilla Builders is all about supporting the next wave of AI pioneers – creators building tools that anyone can use to shape AI in ways we can all trust. This year’s accelerator theme was local AI: technology that runs directly on devices like phones or computers, empowering users with transparent systems they control. These specialized models and applications preserve privacy, reduce costs and inspire creative solutions.

As we reflect on this year and look to the future, we’re inspired by what these creators are building and the values they bring to their work.

Real-world AI solutions that help everyday people

AI doesn’t have to be abstract or overwhelming. The projects we’re supporting through Mozilla Builders prove that AI can make life better for all of us in practical and tangible ways. Take Pleias, Ersilia and Sartify, for example.

Pleias, with its latest research assistant Scholastic AI, is making waves with its commitment to open data in France. This mission-driven approach not only aligns with Mozilla’s values but also highlights the global impact of responsible AI. At demo day, Pleias announced the release of Pleias 1.0, a groundbreaking suite of models trained entirely on open data — including Pleias-3b, Pleias-1b and Pleias-350m — built on a 2 trillion-token dataset, Common Corpus. Ersilia is another standout, bringing AI models and tools for early-stage drug discovery to scientific communities studying infectious diseases in the Global South. Sartify has demonstrated the critical importance of compute access for innovators in the Global Majority with PAWA, its Swahili-language assistant built on its own Swahili-language models.

These projects show what it looks like when AI is built to help people. And that’s what we’re all about at Mozilla – creating technology that empowers.

Empowering developers to build tools that inspire and innovate 

AI isn’t just for end-users – it’s for the people building our tech, too. That’s why we’re excited about projects like Theia IDE, Transformer Lab and Open WebUI.

Theia IDE gives developers full control of their AI copilots, enabling local AI solutions like Mozilla’s llamafile version of Starcoder2 to be used for various programming tasks, while Transformer Lab is creating flexible tools for machine learning experimentation. Together, these projects highlight the power of open-source tools to advance the field of computer programming, while also making advanced capabilities more seamlessly integrated into development workflows.

Open WebUI further simplifies the development process for AI applications, demonstrating the immense potential of AI tools driven by community and technical excellence.

The future of AI creativity that bridges art, science and beyond

Some of the projects from this year’s cohort are looking even further ahead, exploring how AI can open new doors in data and simulation. Two standouts are Latent Scope and Tölvera. Latent Scope has a unique approach to make unstructured data – like survey responses and customer feedback – more understandable. It offers a fresh perspective on how data can be visualized and used to find hidden insights in information.

Tölvera, on the other hand, is bridging disciplines like art and science to redefine how we think about AI, and even artificial life forms. With this multidisciplinary perspective, the creator behind Tölvera has developed visually stunning simulations that explore alternative models of intelligence – a key area for next-generation AI. Based in Iceland, Tölvera brings a global perspective that highlights the intersectional vision of Mozilla Builders.

We also created a zine called “What We Make It,” which captures this pivotal moment in computing history. Taking inspiration from seminal works like Ted Nelson‘s “Computer Lib / Dream Machines,” it weaves together analysis, philosophical reflection, and original artwork to explore fundamental questions about the purpose of technology and the diverse community of creators shaping its future.

Mozilla Builders’ role in open-source AI innovation starts with community

One of the things that makes Mozilla special is our community-centered approach to AI. This year, collaborations like Llamafile and Mozilla Ventures companies Plastic Labs and Themis AI also joined the accelerator cohort members at demo day, showcasing the broad range of perspectives across Mozilla’s investments in open, local AI. Transformer Lab’s integration with the new Llamafile API highlights how these tools complement one another to create something even greater. Llamafile runs on devices of all sizes and costs, as demonstrated at the demo day science fair. Attendees loved playing with our open-source AI technology on an Apple II.

Photos: Mozilla Builders demo day, December 5, 2024 in San Francisco.

And let’s not forget the Mozilla AI Discord community, which has become a place for thousands of developers and technologists working with open-source AI. This year, we hosted over 30 online events on the Mozilla AI stage, attracting around 400 live attendees. What started as an online hub for creators to share ideas evolved into in-person connections at demo day. Seeing those relationships come to life was a highlight of the year and a reminder of what’s possible when we work together.

Follow the Mozilla Builders leading the way in AI 

We’re thrilled to introduce the new Builders brand and website. We deeply believe that the new brand not only communicates what we build but also shapes how we build and who builds with us. We hope you find it similarly inspiring! On the site, you’ll find technical analyses, perspective pieces, and walkthroughs, with much more to come in the next month. 

Mozilla has a long history of empowering individuals and communities through open technology. The projects from this year’s cohort – and the vision driving them – stand as a testament to what’s possible when community, responsibility and innovation intersect. Together, we’re shaping an AI future that empowers everyone, and we can’t wait to see what’s next in 2025 and beyond.

Discover the future with Mozilla Builders

Dive in and join the conversation today

The post Mozilla Builders: Celebrating community-driven innovation in AI appeared first on The Mozilla Blog.

Mozilla Performance Blog: Introducing the Chrome Extension for the Firefox Profiler

What is the Firefox Profiler?

The Firefox Profiler is a performance analysis tool designed to help developers understand and optimize the performance of websites and Firefox itself. It allows you to capture detailed performance profiles and analyze them in the profiler.firefox.com analysis view. If you haven’t used it yet, head over to profiler.firefox.com to enable it and learn more about its capabilities!

A New Way to Import Chrome Traces

 

Previously, if you wanted to analyze Chrome traces in the Firefox Profiler, the process was a bit tedious. You had to manually download the trace as a JSON file, then drag and drop it into the profiler to load it up. While this worked, it wasn’t ideal, especially if you needed to repeat this process multiple times. To solve this, we’ve developed a Chrome extension that streamlines the entire workflow. You can download the extension from the Chrome Web Store.

With this new extension, capturing and importing Chrome traces is simple and quick. Click on the profiler icon in the toolbar to start and stop Chrome’s internal profiler and capture a profile, or use the shortcut Ctrl+Shift+1 to start and Ctrl+Shift+2 to stop and capture. Once the trace is captured, it automatically opens in Firefox Profiler’s analysis view, ready for you to investigate. No more downloading files or dragging and dropping!

Collaboration Made Easy

One of the best features of the Firefox Profiler is its ability to make collaboration effortless. Once you’ve captured and analyzed a profile, it remains completely offline and is not uploaded to any server until you decide to share it. You can share it with your teammates by clicking the upload button in the top-right corner. This lets you remove any personal information before uploading. Once uploaded, the profiler generates a permalink that preserves the exact view you were analyzing. This means the person you share it with can see exactly what you’re seeing, making debugging and performance discussions much simpler.

Why This Extension Matters

This extension isn’t just about convenience, it opens up new possibilities for cross-browser performance comparisons. By making it easy to capture and analyze Chrome traces in the Firefox Profiler, developers can now compare performance across browsers side by side. This is especially useful for ensuring a consistent user experience across different platforms. Whether you’re optimizing rendering performance or debugging a specific issue, having a unified way to analyze performance is incredibly helpful.

What’s Next?

We’re excited to see how this extension helps you in your workflows. While it offers significant benefits like its collaboration features and different data visualizations, it’s worth noting that some features, such as network markers, are not fully supported yet. We’re committed to improving it further, and we hope the extension becomes a helpful tool for you.

Download the extension today from the Chrome Web Store, and let us know what you think! If you have any feedback or encounter any issues, feel free to reach out in the Firefox Profiler Matrix channel (#profiler:mozilla.org) or file a bug on our GitHub repository. We’d also love to hear how you’re using the profiler for cross-browser performance comparisons!

Thanks for reading, and happy profiling!

Martin Thompson: C2PA Is Not Going To Fix Our Misinformation Problem

A lot of people are deeply concerned about misinformation.

People often come to believe in falsehoods as part of how they identify with a social group. Once established, false beliefs are hard to overcome. Beliefs are a shorthand we use in trying to make sense of the world.

Misinformation is often propagated in order to engender delusion, or a firmly-held belief that does not correspond with reality. Prominent examples of delusions include belief in a flat earth, the claim that vaccines cause autism, or that the moon landing was staged.

Delusions – if sufficiently widespread or if promoted aggressively enough – can have a significant effect on the operation of our society, particularly when it comes to involvement in democratic processes.

Misinformation campaigns seek to drive these effects. For instance, promoting a false belief that immigrants are eating household pets might motivate the implementation of laws that lead to unjustifiable treatment of immigrants.

For some, the idea that technology might help with this sort of problem is appealing. If misinformation is the cause of harmful delusions, maybe having less misinformation would help.

The explosion in popularity and efficacy of generative AI has made the creation of content that carries misinformation far easier. This has sharpened a desire to build tools to help separate truth and falsehood.

A Security Mechanism

Preventing the promotion of misinformation can be formulated as a security goal. We might set out one of two complementary goals:

  1. It must be possible to identify fake content as fake.
  2. It must be possible to distinguish genuine content.

Our adversary might seek to pass off fake content as genuine. However, a weaker goal might be easier to achieve: the adversary only needs to avoid having their fake content identified as a fabrication.

Note that we assume that once a story is established as fake, most people will cease to believe it. That’s a big assumption, but we can at least pretend that this will happen for the purposes of this analysis.

In terms of capabilities, any adversary can be assumed to be capable of using generative AI and other tools to produce fake content. We also allow the adversary access to any mechanism used to distinguish between real and fake content[1].

Technical Options

Determining what is – or is not – truthful is not easy. Given an arbitrary piece of content, it is not trivial to determine whether it contains fact or fabrication. After all, if it were that simple, misinformation would not be that big a problem.

Technical proposals in this space generally aim for a less ambitious goal. One of two approaches is typically considered:

  1. Mark fake content as fake.
  2. Mark genuine content as genuine.

Both rely on the system that creates content knowing which of the two applies. The creator can therefore apply the requisite mark. As long as that mark survives to be read by the consumer of the content, what the creator knew about whether the content was “true” can be conveyed.

Evaluating these options against the goals of our adversary – who seeks to pass off fake content as “real” – is interesting. Each approach requires high levels of adoption to be successful:

  • If an adversary seeks to pass off fake content as real, virtually all fake content needs to be marked as such. Otherwise, people seeking to promote fake content can simply use any means of production that don’t add markings. Markings also need to be very hard to remove.

  • In comparison, genuine content markings might still need to be universally applied, but it might be possible to realize benefits when limited to specific outlets.

That makes markings on genuine content more appealing as a way to help counteract misinformation.

Attesting to Fakeness

If content (text, image, audio, or video) is produced with generative AI, it might include some way to check that it is fake. The output of many popular LLMs often includes both metadata and a small watermark.

These indications are pretty useless if someone is seeking to promote a falsehood. It is trivial to edit content to remove metadata. Similarly, visible watermarks can be edited out of images.

The response to that is a form of watermarking that is supposed to be impossible to remove. Either the generator embeds markings in the content as it is generated, or the marking is applied to the output content by a specialized process.

A separate system is then provided that can take any content and determine whether it was marked.

The question then becomes whether it is possible to generate a watermark that cannot be removed. This paper makes a strong case for the negative by demonstrating that the removal – and re-application – of arbitrary watermarks is possible, requiring only access to the system that rules on whether the watermark is present.

Various generative AI vendors have implemented systems of markings, including metadata, removable watermarks, and watermarking that is supposed to be resistant to removal.

Furthermore, generative AI models have to be controlled so that people can’t generate their own content without markings. That is clearly not feasible, as much as some would like to retain control.

Even if model access could be controlled, it seems likely that watermarks will be removable. At best, this places the systems that apply markings in an escalating competition with adversaries that seek to remove (or falsify) markings.

Content Provenance

There’s a case to be made for the use of metadata in establishing where content came from, namely provenance. If the goal is to positively show that content was generated in a particular way, then metadata might be sufficient.

Provenance could work to label content as either fake or real. However, it is most interesting as a means of tracing real content to its source because that might be more feasible.

The most widely adopted system is C2PA. This system has received a lot of attention and is often presented as the answer to online misinformation.

An unpublished opinion piece that I wrote in 2023 about C2PA is highly critical. This blog is a longer examination of what C2PA might offer and its shortcomings.

How C2PA Works

The C2PA specification is long and somewhat complicated[2], but the basics are pretty simple:

Content is digitally signed by the entity that produced it. C2PA defines a bunch of claims that all relate to how the content was created.

C2PA binds attributes to content in one of two ways. A “hard” binding uses a cryptographic hash, which ensures that any modification to the content invalidates the signature. A “soft” binding binds to a perceptual hash or a watermark (more on that below).
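As a minimal sketch of the hard-binding idea only (this is not the real C2PA manifest format, and the Manifest struct below is invented for illustration), the signed metadata carries a cryptographic hash of the content bytes, so a single changed byte breaks the binding:

    // A minimal sketch of a hard binding, assuming the sha2 crate; the
    // Manifest struct is invented for illustration and is not the real C2PA
    // data model.
    use sha2::{Digest, Sha256};

    struct Manifest {
        // In real C2PA the manifest carries many signed assertions; here we
        // keep only the content hash that forms the hard binding.
        content_sha256: [u8; 32],
    }

    fn hard_binding_holds(manifest: &Manifest, content: &[u8]) -> bool {
        // Recompute the hash over the content as received; any edit to the
        // bytes produces a different digest and the binding no longer holds.
        let digest = Sha256::digest(content);
        digest.as_slice() == manifest.content_sha256.as_slice()
    }

A soft binding replaces that exact-match hash with a perceptual hash or watermark check, which is what lets edited content keep its assertions.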

The C2PA metadata includes a bunch of attributes, including a means of binding to the content, all of which are digitally signed.

An important type of attribute in C2PA is one that points to source material used in producing derivative content. For instance, if an image is edited, an attribute might refer to the original image. This is supposed to enable the tracing of:

  • the original work, when the present work contains edits, or
  • the components that comprise a derivative work.

What Might Work in C2PA

Cryptographic assertions that come from secured hardware might be able to help identify “real” content.

A camera or similar capture device could use C2PA to sign the content it captures. Provided that the keys used cannot be extracted from the hardware[3], an assertion by the manufacturer might make a good case for the image being genuine.

Metadata that includes URLs for source material – “ingredients” in C2PA-speak[4] – might also be useful in finding content that contains a manufacturer signature. That depends on the metadata including accessible URLs. As any assertion in C2PA is optional, this is not guaranteed.

Where C2PA Does Not Deliver

The weaknesses in C2PA are somewhat more numerous.

This section looks in more detail at some aspects of C2PA that require greater skepticism. These are the high-level items only; there are other aspects of the design that seem poorly specified or problematic[5], but the goal of this post is to focus on the primary problem.

C2PA Soft Bindings

A soft binding in C2PA allows for modifications of the content. The idea is that the content might be edited, but the assertions would still apply.

As mentioned, two options are considered in the specification:

  1. Perceptual hashing, which produces non-cryptographic digests of content that are intended to remain stable when the content is edited (a generic sketch of this kind of comparison follows this list).

  2. Watermarking, which binds to a watermark that is embedded in the content.
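To make the first option concrete, here is a generic sketch of the tolerance-based comparison a perceptual hash enables; it illustrates the general technique only, not the actual soft-binding algorithms that C2PA recognizes:

    // A generic sketch of a perceptual-hash comparison: rather than requiring
    // an exact match, two 64-bit fingerprints are treated as "the same
    // content" when their Hamming distance is under a chosen threshold.
    fn perceptually_similar(hash_a: u64, hash_b: u64, max_distance: u32) -> bool {
        // Hamming distance: count the bit positions where the fingerprints differ.
        (hash_a ^ hash_b).count_ones() <= max_distance
    }

    fn example() -> bool {
        // Hypothetical fingerprints for an original image and a lightly edited copy.
        let original: u64 = 0xA5A5_F0F0_1234_5678;
        let edited = original ^ 0b0110; // a few bits flipped by small edits
        perceptually_similar(original, edited, 10)
    }

It is exactly this tolerance that the collision and second preimage attacks described below exploit.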

In an adversarial setting, the use of perceptual hashes is well-studied, with numerous results that show exploitable weaknesses.

Perceptual hashes are not cryptographic hashes, so they are often vulnerable to cryptanalytic attack. Collision and second preimage attacks are most relevant here:

  • Collision attacks – such as this one – give an adversary the ability to generate two pieces of content with the same fingerprint.

  • Second preimage attacks – such as implemented with this code – allow an adversary to take content that produces one output and then modify completely different content so that it results in the same fingerprint.

Either attack allows an adversary to substitute one piece of content for another, though the preimage attack is more flexible.

Binding to a watermark appears to be easier to exploit. It appears to be possible to extract a watermark from one piece of content and apply it to another. Watermarks are often able to be removed – such as the TrustMark-RM mode of TrustMark[6] – and re-applied. That makes it possible to extract a watermark from one piece of content and copy it – along with any C2PA assertions – to entirely different content.

C2PA Traceability and Provenance

One idea that C2PA promotes is that source material might be traced. When content is edited in a tool that supports C2PA, the tool embeds information about the edits, especially any source material. In theory, this makes it possible to trace the provenance of C2PA-annotated content.

In practice, tracing provenance is unlikely to be a casual process. Some publisher sites might aid the discovery of source material but content that is redistributed in other places could be quite hard to trace[7].

Consider photographs that are published online. Professional images are captured in formats like RAW that are unsuitable for publication. Images are typically transcoded and edited for publication.

To trace provenance, editing software needs to embed its own metadata about changes[8], including a means of locating the original[9].

Any connection between the published and original content cannot be verified automatically in a reliable fashion. A hard, or cryptographic, binding is immediately invalidated by any edit.

The relationship between edited and original content therefore cannot be validated by a machine. Something like a perceptual hash might be used to automate this connection. However, as we’ve already established, perceptual hashes are vulnerable to attack. Any automated process based on a perceptual hash is therefore unreliable.

At best, a human might be able to look at images and reach their own conclusions. That supports the view that provenance information is unlikely to be able to take advantage of the scaling that might come from machine validation.

C2PA and DRM

With a published specification, anyone can generate a valid assertion. That means that C2PA verifiers need some means of deciding which assertions to believe.

For hardware capture of content (images, audio, and video), there are relatively few manufacturers. For the claims of a hardware manufacturer to be credible, they have to ensure that the keys they use to sign assertions can only be used with unmodified versions of their hardware.

That depends on having a degree of control. Control over access to secret keys in specialized hardware modules means that it might be possible to maintain the integrity of this part of the system.

There is some risk of this motivating anti-consumer actions on the part of manufacturers. For example, cameras could refuse to produce assertions when used with aftermarket lenses. Or, cameras that stop producing assertions if they are repaired.

As long as modifying hardware only results in a loss of assertions, that seems unlikely to be a serious concern for many people. Very few people seek to modify hardware[10].

The need to restrict editing software is far more serious. In order for edits to be considered trustworthy, strict controls are necessary.

The need for controls would make it impossible for open source software to generate trustworthy assertions. Assertions could only be generated by cloud-based – or maybe DRM-laden – software.

Completely New Trust Infrastructure

The idea of creating trust infrastructure for authenticating capture device manufacturers and editing software vendors is somewhat daunting.

Experience with the Web PKI shows that this is a non-trivial undertaking. A governance structure needs to be put in place to set rules for how inclusions – and exclusions – are decided. Systems need to be put in place for distributing keys and for managing revocation.

This is not a small undertaking. However, for this particular structure, it is not unreasonable to expect this to work out. With a smaller set of participants than the Web PKI, along with somewhat lower stakes, this seems possible.

Alternative Trust Infrastructure Options

In discussions about C2PA, when I raised concerns about DRM, Jeffrey Yasskin mentioned a possible alternative direction.

In that alternative, attestations are not made by device or software vendors. Content authors (or editors or a publisher) would be the ones to make any assertions. Assertions might be tied to an existing identity, such as a website domain name, avoiding any need to build an entirely new PKI.

A simple method would be to have content signed[11] by a site that claims it. That immediately helps with the problem of people attempting to pass off fake information as coming from a particular source.

The most intriguing version of this idea relies on building a reputation system for content. If content can then be traced to its source, the reputation associated with that source can in some way be built up over time.

The key challenge is that this latter form changes from a definitive sort of statement – under C2PA, content is either real or not – to a more subjective one. That’s potentially valuable in that it encourages more active engagement with the material.

The idea of building new reputational systems is fascinating but a lot more work is needed before anything more could be said.

A Simpler Provenance

The difficulty of tracing, along with the problems associated with editing, suggests a simpler approach.

The benefits of C2PA might be realized by a combination of hardware-backed cryptographic assertions and simple pointers (that is, without digital signatures) from edited content to original content.

Even then, an adversary still has a few options.

Trickery

When facial recognition systems were originally built, researchers found that some of these could be defeated by showing the camera a photo[12].

Generating a fake image with a valid assertion could be as simple as showing a C2PA camera a photograph[13]. The use of trick photography to create a false impression is also possible.

No Expectations

It is probably fair to say that – despite some uptake of C2PA – most content in existence does not include C2PA assertions.

Limited availability seriously undermines the value of any provenance system in countering misinformation. An attacker can remove metadata if people do not expect it to be present.

This might be different for media outlets that implement policies that result in universal – or at least near-universal – use of something like C2PA. Then, people can expect content produced by that outlet will contain provenance information.

Articles on social media can still claim to be from that outlet. However, it might become easier to refute that sort of false claim.

That might be reason enough for a media outlet to insist on implementing something like C2PA. After all, the primary currency in which journalistic institutions trade is their reputation. Having a technical mechanism that can support refutation of falsified articles has some value in terms of being able to defend their reputation.

The cost might be significant, if the benefits are not realized until nearly all content is traceable. That might entail replacing every camera used by journalists and outside contributors. Given the interconnected nature of news media, with many outlets publishing content that is sourced from partners, that’s likely a big ask.

A Lack of Respect for the Truth

For any system like this to be effective, people need to care about whether something is real or not.

It is not just about expectations: people have to be motivated to interrogate claims and seek the truth. That’s not a problem that can be solved by technical means.

Conclusion

The narrow applicability of the assertions for capture hardware suggests that a simpler approach might be better and more feasible. Some applications – such as marking generated content – are probably ineffectual as a means of countering misinformation. The DRM aspect is pretty ugly, without really adding any value.

All of which is to say that the technical aspects of provenance systems like C2PA are not particularly compelling.


  1. We have to assume that people will need to be able to ask whether content is real or fake for the system to work. ↩︎

  2. And – it pains me to say – it is not very good. I write specifications for a living, so I appreciate how hard it is to produce something on this scale. Unfortunately, this specification needs far more rigor. I suspect that the only way to implement C2PA successfully would be to look at one of the implementations. ↩︎

  3. That’s a big “if”, though not implausible. Hardware keys used in consumer hardware have been extracted before, but the techniques used to protect those secrets mean that extraction requires considerable resources. Even then, a successful extraction would only invalidate the signatures from a single manufacturer or limited product lines, so attacking C2PA this way might not be worth the effort. ↩︎

  4. C2PA can also indicate generative AI ingredients such as the text prompt used and the details of the generative model. That’s not much use in terms of protecting against use of content for misinformation, but it might have other uses. ↩︎

  5. For instance, the method by which assertions can be redacted is pretty questionable. See my post on selective disclosure for more on what that sort of system might need to do. ↩︎ ↩︎

  6. TrustMark is one of the soft binding mechanisms that C2PA recognizes. It’s also the first one I looked into. I have no reason to believe that other systems are better. ↩︎

  7. C2PA does not use standard locators (such as https://), defining a new URI scheme. That suggests that the means of locating source material is likely not straightforward. ↩︎

  8. I did not look into how much detail about edits is recorded. Some of the supporting material for C2PA suggests that this could be quite detailed, but that seems impractical and the specification only includes a limited set of edit attributes. ↩︎

  9. C2PA also defines metadata for an image thumbnail. Nothing prevents this from including a false representation. ↩︎

  10. This might be more feasible for images and video than for audio. Image and video capture equipment is often integrated into a single unit. Audio often features analog interconnections between components, which makes it harder to detect falsified inputs. ↩︎

  11. Yes, we’ve been here before. Sort of. ↩︎

  12. Modern systems use infrared or depth cameras that are harder to spoof so trivially, though not completely impossible: hardware spoofing and depth spoofing both appear to be feasible. ↩︎

  13. C2PA has the means to attest to depth information, but who would expect that? Especially when you can redact any clues that might lead someone to expect it to be present[5:1]. ↩︎

Mozilla Localization (L10N)Celebrating Pontoon contributors with achievement badges

At the heart of Mozilla’s localization efforts lies Pontoon, our in-house translation management system. Powered by our vibrant volunteer community, Pontoon thrives on their commitments to submit and review translations across all our products.

As part of our ongoing attempts to further recognize the contributions of Pontoon’s volunteers, the localization team has been exploring new ways to celebrate their achievements. We know that the success of localization at Mozilla hinges on the dedication of our community, and it’s important to not only acknowledge this effort but to also create an environment that encourages even greater participation.

That’s why we’re excited to introduce achievement badges in Pontoon! Whether you’re new to Pontoon or a seasoned contributor, achievement badges not only recognize your contribution but also encourage participation and promote good habits amongst our community.

With achievement badges, we aim to make contributing to Pontoon more rewarding and fun while reinforcing Mozilla’s mission of building an open and accessible web for everyone, everywhere.

What are achievement badges?

Achievement badges are a symbol recognizing your hard work in keeping the internet accessible and open, no matter where users are located. These badges are displayed on your Pontoon profile page.

In collaboration with Mozillian designer Céline Villaneau, we’ve created three distinct badges to promote different behaviors within Pontoon:

  • Translation Champion, awarded for submitting translations.
  • Review Master, awarded for reviewing translations.
  • Community Builder, awarded for promoting users to higher roles.

Screenshot of the 3 types of badges displayed in the Pontoon profile.

Receiving a badge

When the threshold required to receive a badge is crossed, you’ll receive a notification along with a pop-up tooltip (complete with confetti!). The tooltip will display details about the badge you’ve just earned.

Screencast of animation displayed when the user achieves the Translation Champion badge.

To give you more of a challenge, each badge comes with multiple levels, encouraging continued contributions to Pontoon. You’ll receive similar notifications and celebratory tooltips whenever you unlock a new badge level.

Start collecting!

Badges are more than just icons — they’re a celebration of your dedication to keeping the web accessible to all. Ready to make your mark? All users will begin with a blank slate, so start contributing and begin your badge collection today!

This Week In RustThis Week in Rust 577

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is include-utils, a more powerful replacement for the standard library's include_str macro.
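
For context, the standard library macro embeds a file into the binary at compile time as a &'static str; include-utils builds on that idea (its own macros are not shown here):

// Standard library version: the path is resolved relative to this source file
// and the file contents become a &'static str baked into the binary.
const USAGE: &str = include_str!("../README.md");

fn main() {
    println!("embedded {} bytes of documentation", USAGE.len());
}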

Thanks to Aleksey Sidorov for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.


Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

462 pull requests were merged in the last week

Rust Compiler Performance Triage

A pretty quiet week, with both few PRs landed and no large changes in performance.

Triage done by @simulacrum. Revision range: 490b2cc0..1b3fb316

0 Regressions, 0 Improvements, 7 Mixed; 4 of them in rollups. 25 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust
Cargo
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-12-11 - 2025-01-08 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Memory-safe implementations of PNG (png, zune-png, wuffs) now dramatically outperform memory-unsafe ones (libpng, spng, stb_image) when decoding images.

Rust png crate that tops our benchmark shows 1.8x improvement over libpng on x86 and 1.5x improvement on ARM.

Shnatsel on /r/rust

Thanks to Anton Fetisov for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Mozilla BlogMozilla welcomes new executive team members

I am excited to announce that three exceptional leaders are joining Mozilla to help drive the continued growth of Firefox and increase our systems and infrastructure capabilities. 

For Firefox, Anthony Enzor-DeMeo will serve as Senior Vice President of Firefox, and Ajit Varma will take on the role of our new Vice President of Firefox Product. Both bring with them a wealth of experience and expertise in building product organizations, which is critical to our ongoing efforts to expand the impact and influence of Firefox. 

The addition of these pivotal roles comes on the heels of a year full of changes, successes and celebrations for Firefox — leadership transitions, mobile growth, impactful marketing campaigns in both North America and Europe, and the marking of 20 years of being the browser that prioritizes privacy and that millions of people choose daily. 

As Firefox Senior Vice President, Anthony will oversee the entire Firefox organization and drive overall business growth. This includes supporting our back-end engineering efforts and setting the overall direction for Firefox. In his most recent role as Chief Product and Technology Officer at Roofstock, Anthony led the organization through a strategic acquisition that greatly enhanced the product offering. He also served as Chief Product Officer at Better, and as General Manager, Product, Engineering & Design at Wayfair. Anthony is a graduate of Champlain College in Vermont, and has an MBA from the Sloan School at MIT. 

In his role as Vice President of Firefox Product, Ajit will lead the development of the Firefox strategy, ensuring it continues to meet the evolving needs of current users, as well as those of the future. Ajit has years of product management experience from Square, Google, and most recently, Meta, where he was responsible for monetization of WhatsApp and overseeing Meta’s business messaging platform. Earlier in his career, he was a co-founder and CEO of Adku, a venture-funded recommendation platform that was acquired by Groupon. Ajit has a BS from the University of Texas at Austin. 

We are also adding to our infrastructure leadership. As Senior Vice President of Infrastructure, Girish Rao is responsible for Platform Services, AI/ML Data Platform, Core Services & SRE, IT Services and Security, spanning Corporate and Product technology and services. His focus is on streamlining tools and services that enable teams to deliver products efficiently and securely. 

Previously, Girish led the Platform Engineering and Operations team at Warner Bros Discovery for their flagship streaming product Max. Prior to that, he led various digital transformation initiatives at Electronic Arts, Equinix Inc and Cisco. Girish’s professional journey spans various market domains (OTT streaming, gaming, blockchain, hybrid cloud data center, etc) where he leveraged technology to solve large scale complex problems to meet customer and business outcomes.  

We are thrilled to add to our team leaders who share our passion for Mozilla, and belief in the principles of our Manifesto — that the internet is a vital public resource that must remain open, accessible, and secure, enriching individuals’ lives and prioritizing their privacy.

The post Mozilla welcomes new executive team members appeared first on The Mozilla Blog.

The Mozilla BlogJay-Ann Lopez, founder of Black Girl Gamers, on creating safe spaces in gaming

Jay-Ann Lopez, founder of Black Girl Gamers, a group of 10,000+ black women around the world with a shared passion for gaming.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and what reclaiming the internet really looks like.

This month, we caught up with Jay-Ann Lopez, founder of Black Girl Gamers, a group of 10,000+ black women around the world with a shared passion for gaming. We talked to her about the internet rabbit holes she loves diving into (octopus hunting, anyone?), her vision for more inclusive digital spaces, and what it means to shape a positive online community in a complex industry.

What is your favorite corner of the internet? 

Definitely Black Girl Gamers! It’s a community-focused company and agency housing the largest network of Black women gamers. We host regular streams on Twitch, community game nights, and workshops that are both fun and educational—like making games without code or improving presentation skills. We’ve also established clear community guidelines to make it a positive, safe space, even for me as a founder. Some days, I’m just there as another member, playing and relaxing.

Why did you start Black Girl Gamers?

In 2005, I was gaming on my own and wondered where the other Black women gamers were. I created a gaming channel but felt isolated. So I decided to start a group, initially inviting others as moderators on Facebook. We’ve since grown into a platform that centers Black women and non-binary gamers, aiming not only to build a safe community but to impact the gaming industry to be more inclusive and recognize diverse gamers as a core part of the audience.

What is an internet deep dive that you can’t wait to jump back into?

I stumbled upon this video on octopuses hunting with fish, and it’s stayed on my mind! Animal documentaries are a favorite of mine, and I often dive into deep rabbit holes about ecosystems and how human activity affects wildlife. I’ll be back in the octopus rabbit hole soon, probably watching a mix of YouTube and TikTok videos, or wherever the next related article takes me.

What is the one tab you always regret closing?

Not really! I regret how long I keep tabs open more than closing them. They stick around until they’ve done their job, so there’s no regret when they’re finally gone.

What can you not stop talking about on the internet right now?

Lately, I’ve been talking about sustainable fashion—specifically how the fashion industry disposes of clothes by dumping them in other countries. I think of places like Ghana where heaps of our waste end up on beaches. Our consumer habits drive this, but we’re rarely mindful of what happens to clothes once we’re done with them. I’m also deeply interested in the intersection of fashion, sustainability, and representation in gaming.

What was the first online community you engaged with?

Black Girl Gamers was my first real community in the sense of regular interaction and support. I had a platform before that called ‘Culture’ for natural hair, which gained a following, but it was more about sharing content rather than having a true community feel. Black Girl Gamers feels like a true community where people chat daily, play together, and share experiences.

If you could create your own corner of the internet, what would it look like?

I’d want a space that combines community, education, and events with opportunities for growth. It would blend fun and connection with a mission to improve and equalize the gaming industry, allowing gamers of all backgrounds to feel valued and supported.

What articles and/or videos are you waiting to read/watch right now?

There’s a Vogue documentary that’s been on my watchlist for a while! Fashion and beauty are big passions of mine, so I’m looking forward to finding time to dive into it.

How has building a community for Black women gamers shaped your experience online as both a creator and a user?

Building Black Girl Gamers has shown me the internet’s positive side, especially in sharing culture and interests. But being in a leadership role in an industry that has been historically sexist and racist also means facing targeted harassment from people who think we don’t belong. The work I do brings empowerment, but there’s also a constant pushback, especially in the gaming space, which can make it challenging. It’s a dual experience—immensely rewarding but sometimes exhausting.


Jay-Ann Lopez is the award-winning founder of Black Girl Gamers, a community-powered platform advocating for diversity and inclusion while amplifying the voices of Black women. She is also an honorary professor at Norwich University of the Arts, a member and judge for BAFTA, and a sought-after speaker and entrepreneur.

In 2023, Jay-Ann was featured in British Vogue as a key player in reshaping the gaming industry and recognized by the Institute of Digital Fashion as a Top 100 Innovator. She speaks widely on diversity in entertainment, tech, fashion and beauty and has presented at major events like Adweek, Cannes Lion, E3, PAX East and more. Jay-Ann also curates content for notable brands including Sofar Sounds x Adidas, WarnerBros, SEGA, Microsoft, Playstation, Maybelline, and YouTube, and co-produces Gamer Girls Night In, the first women and non-Binary focused event that combines gaming, beauty and fashion.

The post Jay-Ann Lopez, founder of Black Girl Gamers, on creating safe spaces in gaming appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird for Android November 2024 Progress Report

The title reads “Thunderbird for Android November 2024 Progress Report” and has both the Thunderbird and K-9 Mail logos beneath it.

It’s been a while since our last update in August, and we’re glad to be back to share what’s been happening. Over the past few months, we’ve been fully focused on the Thunderbird for Android release, and now it’s time to catch you up. In this update, we’ll talk about how the launch went, the improvements we’ve made since then, and what’s next for the project.

A Milestone Achieved

Launching Thunderbird for Android has been an important step in extending the Thunderbird ecosystem to mobile users. The release went smoothly, with no hiccups during the Play Store review process, allowing us to deliver the app to you right on schedule.

Since its launch a month ago, the response has been incredible. Hundreds of thousands of users have downloaded Thunderbird for Android, offering encouragement and thoughtful feedback. We’ve also seen an influx of contributors stepping up to make their mark on the project, with around twenty people making their first contribution to the Thunderbird for Android and K-9 Mail repository since 8.0b1. Their efforts, along with your support, continue to inspire us every day.

Listening to Feedback

When we launched, we knew there were areas for improvement. Since our updates apply to both K-9 Mail and Thunderbird for Android, not every bug can be fixed overnight with a single new release. We’ve been grateful for the feedback from the beta testing group and the reviews, and we especially appreciate those of you who took a moment to leave a positive review. Your feedback has helped us focus on key issues like account selection, notifications, and app stability.

For account selection, the initial design used two-letter abbreviations from domain names, which worked for many users but caused confusion for users managing many similar accounts. A community contributor updated this to use letters from account names instead. We’re now working on adding custom icons for more personalization while keeping simple options available. Additionally, we resolved the confusing dynamic reordering of accounts, keeping them fixed while clearly indicating the active one.

Notifications have been another priority. Gmail users on K-9 faced issues due to new requirements from Google, which we’re working on. As a stopgap, we’ve added a support article, which will also be linked from the login flow from 8.2 onwards. Others have had trouble setting up push notifications or with emails not arriving immediately, which you can read more about as well. Missed system error alerts have also been a problem, so we’re planning to bring notifications into the app itself in 2025, providing a clearer way to act on them.

There are many smaller issues we’ve been looking at, also with the help of our community, and we look forward to making them available to you.

Addressing Stability

App stability is foundational to any good experience, and we regularly look at the data Google provides to us. When Thunderbird for Android launched, the perceived crash rate was alarmingly high at 4.5%. We found that many crashes occurred during the first-time user experience. With the release of version 8.1, we implemented fixes that dramatically reduced the crash rate to around 0.4%. The upcoming 8.2 update will bring that number down further.

The Year Ahead

The mobile team at MZLA is heading into well deserved holidays a bit early this year, but next year we’ll be back with a few projects to keep you productive while reading email on the go. Our mission is for you to fiddle less with your phone. If we can reduce the time you need between reading emails and give you ways to focus on specific aspects of your email, we can help you stay organized and make the most of your time. We’ll be sharing more details on this next year.

While we’re excited about these plans, the success of Thunderbird for Android wouldn’t be possible without you. Whether you’re using the app, contributing code, or sharing your feedback, your involvement is the lifeblood of this project.

If K-9 Mail or Thunderbird for Android has been valuable to you, please consider supporting our work with a financial contribution. Thunderbird for Android relies entirely on user funding, and your support is essential to ensure the sustainability of open-source development. Together, we can continue improving the app and building a better experience for everyone.

The post Thunderbird for Android November 2024 Progress Report appeared first on The Thunderbird Blog.

Don Martirun a command in a tab with gnome-terminal

To start a command in a new tab, use the --tab command-line option to gnome-terminal, along with -- to separate the gnome-terminal options from the command being run and its options.

The script for previewing this site locally uses separate tabs for the devd process and for the script that re-runs make when a file changes.

#!/usr/bin/bash

set -e
trap popd EXIT
pushd $PWD
cd $(dirname "$0")

run_in_tab () {
    gnome-terminal --tab -- $*
}

make cleanhome # remove indexes, home page, feeds
make -j
run_in_tab devd --port 8088 public
run_in_tab code/makewatch -j pages

More: colophon

Bonus links

Deepfake YouTube Ads of Celebrities Promise to Get You ‘Rock Hard’ YouTube is running hundreds of ads featuring deepfaked celebrities like Arnold Schwarzenegger and Sylvester Stallone hawking supplements that promise to help men with erectile dysfunction. Related LinkedIn post from Jérôme Segura at Malwarebytes: In the screenshot below, we see an ad for eBay showing the https website for the real eBay site. Yet, this ad is a fake.

How DraftKings, FanDuel, Legal Sports Betting Changed the U.S., The App Always Wins (Not just a Google thing. Win-lose deals are becoming more common as a percentage of total interactions in the market. More: personal AI in the rugpull economy)

I can now run a GPT-4 class model on my laptop I’m so excited by the continual efficiency improvements we’re seeing in running these impressively capable models. In the proprietary hosted world it’s giving us incredibly cheap and fast models like Gemini 1.5 Flash, GPT-4o mini and Amazon Nova. In the openly licensed world it’s giving us increasingly powerful models we can run directly on our own devices. (Openly licensed in this context means, in comparison to API access, you get predictable pricing and no surprise nerfing. More: generative ai antimoats)

$700bn delusion: Does using data to target specific audiences make advertising more effective? Latest studies suggest not We can improve the quality of our targeting much better by just buying ads that appear in the right context, than we can by using my massive first party database to drive the buy, and it’s way cheaper to do that. Putting ads in contextually relevant places beats any form of targeting to individual characteristics. Even using your own data. (This makes sense—if the targeting data did increase return on ad spend, then the price of the data and targeting-related services would tend to go up to capture any extra value.)

Defining AI I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.

U.S. Officials Urge Americans to Use Encrypted Apps, for Texting and Calls, in Wake of Chinese Infiltration of Our Unencryped Telecom Network (Switch from SMS to Signal is fairly common advice—the surprising part here is the source.)

Talking shit Why are people not developing a resistance to bullshit artists?

The Servo BlogThis month in Servo: :is(), :where(), grid layout, parallel flexbox, and more!

Servo nightly showing new support for CSS grid layout, when enabled via `layout.grid.enabled`

Servo now supports :is() and :where() selectors (@mrobinson, #34066), parallel layout for flexbox (@mrobinson, #34132), and experimentally, CSS grid layout (@nicoburns, @taniishkaa, #32619, #34352, #34421)! To try our new grid layout support, run Servo with --pref layout.grid.enabled.

We’ve added support for two key Shadow DOM interfaces, the shadowRoot property on Element (@simonwuelker, #34306) and the innerHTML property on ShadowRoot (@simonwuelker, #34335).

We’ve also landed ‘justify-self’ on positioned elements (@chickenleaf, #34235), form submission with <input type=image> (@shanehandley, #34203), DataTransfer (@Gae24, #34205), the close() method on ImageBitmap (@simonwuelker, #34124), plus several new SubtleCrypto API features:

On OpenHarmony, we’ve landed keyboard input and the IME (@jschwe, @jdm, @mukilan, #34188), touch fling gestures (@jschwe, @mrobinson, #33219), and additional CJK fallback fonts (@jschwe, #34410). You can now build for OpenHarmony on a Windows machine (@jschwe, #34113), and build errors have been improved (@jschwe, #34267).

More engine changes

You can now scroll the viewport and scrollable elements with your pointer anywhere in the area, not just when hovering over actual content (@mrobinson, @mukilan, #34347). --unminify-js, a very useful feature for diagnosing Servo bugs in real websites, now supports module scripts (@jdm, #34206).

We’ve fixed the behaviour of offsetLeft and offsetTop relative to <body> with ‘position: static’ (@nicoburns, @Loirooriol, #32761), which also required spec changes (@nicoburns, @Loirooriol, w3c/csswg-drafts#10549). We’ve also fixed several layout bugs around:

The getClientRects() method on Element now correctly returns a DOMRectList (@chickenleaf, #34025).

Stylo has been updated to 2024-11-01 (@Loirooriol, #34322), and we’ve landed some changes to prepare our fork of Stylo for publishing releases on crates.io (@mrobinson, @nicoburns, #34332, #34353). We’ve also made more progress towards splitting up our massive script crate (@jdm, @sagudev, #34357, #34356, #34163), which will eventually allow Servo to be built (and rebuilt) much faster.

Performance improvements

In addition to parallel layout for flexbox (@mrobinson, #34132), we’ve landed several other performance improvements:

We’ve also landed some changes to reduce Servo’s binary size:

Servo’s tracing-based profiling support (--features tracing-perfetto or tracing-hitrace) now supports filtering events via an environment variable (@delan, #34236, #34256), and no longer includes events from non-Servo crates by default (@delan, #34209). Note that when the filter matches some span or event, it will also match all of its descendants for now, but this is a limitation we intend to fix.

Most of the events supported by the old interval profiler have been ported to tracing (@delan, #34238, #34337). ScriptParseHTML and ScriptParseXML events no longer count the time spent doing layout and script while parsing, reducing them to more realistic times (@delan, #34273), while ScriptEvaluate events now count the time spent running scripts in timers, DOM event listeners, and many other situations (@delan, #34286), increasing them to more realistic times.

We’ve added new tracing events for display list building (@atbrakhi, #34392), flex layout, inline layout, and font loading (@delan, #34392). This will help us diagnose performance issues around things like caching and relayout for ‘stretch’ in flex layout, shaping text runs, and font template creation.

For developers

Hacking on Servo is now easier, with our new --profile medium build mode in Cargo (@jschwe, #34035). medium is more optimised than debug, but unlike release, it supports debuggers, line numbers in backtraces, and incremental builds.

Servo now uses CODEOWNERS to list reviewers that are experts in parts of our main repo. This should make it much easier to find reviewers that know how to review your code, and helps us maximise the quality of our code reviews by allowing reviewers to specialise.

Donations

Thanks again for your generous support! We are now receiving 4291 USD/month (+2.1% over October) in recurring donations. We are no longer accepting donations on LFX — if you were donating there, please move your recurring donations to GitHub or Open Collective.

Servo is also on thanks.dev, and already fifteen GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


With this money, we’ve been able to cover our web hosting and self-hosted CI runners for Windows and Linux builds. When the time comes, we’ll also be able to afford macOS runners and perf bots, as well as additional Outreachy interns next year! As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conferences and blogs

Mozilla ThunderbirdCelebrating 20 Years of Thunderbird: Independence, Innovation and Community

Thunderbird turns 20 today. Such a huge milestone invites reflection on the past and excitement for the future. For two decades, Thunderbird has been more than just an email application – it has been a steadfast companion to millions of users, offering communication, productivity, and privacy.

20 Years Ago Today…

Thunderbird’s journey began in 2003, but version 1.0 was officially released on December 7, 2004. It started as an offshoot of the Mozilla project and was built to challenge the status quo – providing an open-source, secure and customizable alternative to proprietary email clients. What began as a small, humble project soon became the go-to email solution for individuals and organizations who valued control over their data. Thunderbird was seen as the app for those in the ‘know’ and carved a unique space in the digital world.

Two Decades of Ups and Downs and Ups

The path hasn’t always been smooth. Over the years, Thunderbird faced its share of challenges – from the shifting tides of technology and billion-dollar competitors coming on the scene to trouble funding the project. In 2012, Mozilla announced that support for Thunderbird would end, leaving the project largely to fend for itself. Incredibly, a passionate group of developers, users, and supporters stepped up and refused to let it fade away. Twenty million people continued to rely on Thunderbird, believing in its potential, rallying behind it, and transforming it into a project fueled by its users, for its users.

In 2017, the Mozilla Foundation, which oversaw Thunderbird along with a group of volunteers in the Thunderbird Council, once again hired a small 3 person team to work on the project, breathing new life into its development. This team decided to take matters into their own hands and let the users know through donation appeals that Thunderbird needed their support. The project began to regain strength and momentum and Thunderbird once again came back to life. (More on this story can be found in our previous post, “The History of Thunderbird.”)

The past few years, in particular, have been pivotal. Thunderbird’s user interface got a brand new facelift with the release of Supernova 115 in 2023.  The 2024 Nebula release fixed a lot of the back-end code and technical debt that was plaguing faster innovation and development.  The first-ever Android app launched, extending Thunderbird to mobile users and opening a new chapter in its story. The introduction of Thunderbird Pro Services, including tools like file sharing and appointment booking, signals how the project is expanding to become a comprehensive productivity suite. And with that, Thunderbird is gearing up for the next era of growth and relevance.

Thank You for 20 Amazing Years

As we celebrate this milestone, we want to thank you. Whether you’ve been with Thunderbird since its earliest days or just discovered it recently, you’re part of a global movement that values privacy, independence, and open-source innovation. Thunderbird exists because of your support, and with your continued help, it will thrive for another 20 years and beyond.

Here’s to Thunderbird: past, present, and future. Thank you for being part of the journey. Together, let’s build what’s next.

Happy 20th, Thunderbird!

20 Years of Thunderbird Trivia!

It Almost Had a Different Name

Before Thunderbird was finalized, the project was briefly referred to as “Minotaur.” However, that name didn’t stick, and the team opted for something more dynamic and fitting for its vision.

Beloved By Power Users

Thunderbird has been a favorite among tech enthusiasts, system administrators, and privacy advocates because of its extensibility. With add-ons and customizations, users can tweak Thunderbird to do pretty much anything.

Supports Over 50 Languages

Thunderbird is loved world-wide! The software is available in more than 50 languages, making it accessible to users all across the globe.

Launched same year as Gmail

Thunderbird and Gmail both launched in 2004. While Gmail revolutionized web-based email, Thunderbird was empowering users to manage their email locally with full control and customization.

Donation-Driven Independence

Thunderbird relies entirely on user donations to fund its development. Remarkably, less than 3% of users donate, but their generosity is what keeps the project alive and independent for the other 97% of users.

Robot Dog Regeneration

The newly launched Thunderbird for Android is actually the evolution of the K-9 Mail project, which was acquired by Thunderbird in 2022. It was smarter to work with an existing client who shared the same values of open source, respecting the user, and offering customization and rich feature options.

The post Celebrating 20 Years of Thunderbird: Independence, Innovation and Community  appeared first on The Thunderbird Blog.

Data@MozillaHow do we preserve the integrity of business metrics while safeguarding our users’ privacy choice?

Abstract. Respecting our users’ privacy choices is at the top of our priorities, and it also involves the deletion of their data from our Data Warehouse (DWH) when they request us to do so. For Analytics Engineering, this deletion presents the challenge of keeping business metrics reliable and stable along with the evolution of business analyses. This blog describes our approach to breaking through this challenge. Reading time: ~5 minutes.


Mozilla has a strong commitment to protecting user privacy and giving each user control over the information that they share with us. When a user chooses to opt out of sending telemetry data, the browser sends a request that results in the deletion of the user’s records from our Data Warehouse. We call this process Shredder.

The impact of Shredder is problematic when the reported key performance indicators (KPIs) and forecasts change after a reprocess or “backfill” of data. This limits our analytics capabilities and the evolution of our products. Yet running a backfill is a common process that remains essential to expanding our business understanding, so the question becomes: how do we rise to this challenge? Shredder Mitigation is a strategy that breaks through this problem and resolves the impact on business metrics.

Let’s see how it works with a simplified example. A table “installs” in the DWH contains telemetry data including the install id, browser and channel utilized on given dates.

installs

date install_id browser channel
2021-01-01 install-1 Firefox Release
2021-01-01 install-2 Fenix Release
2021-01-01 install-3 Focus Release
2021-01-01 install-4 Firefox Beta
2021-01-01 install-5 Fenix Release

Derived from this installs table, there is an aggregate that stores the metric “kpi_installs”, which allows us to understand the usage per browser over time and improve accordingly, and that doesn’t contain any ID or channel information.

installs_aggregates_v1

date browser kpi_installs
2021-01-01 Firefox 2
2021-01-01 Fenix 2
2021-01-01 Focus 1
Total   5

What happens when install-3 and install-5 opt out of sending telemetry data and we need to backfill? This event results in the browser sending a deletion request, which Mozilla’s Shredder process addresses by deleting the existing records of these installs across the DWH. After this deletion, the business asks us whether it is possible to calculate kpi_installs split by channel, to evaluate beta, nightly and release separately. This means that the channel needs to be added to the aggregate and the data backfilled to recalculate the KPI. With install-3 and install-5 deleted, the backfill will report a reduced – and thus unstable – value for kpi_installs due to Shredder’s impact.

installs_aggregates (without shredder mitigation)

date browser channel kpi_installs
2021-01-01 Firefox Release 2
2021-01-01 Fenix Release 1
Total     3

How do we solve this problem? The Shredder Mitigation process safely executes the backfill of the aggregate by recalculating the KPI using only the combination of previous and new aggregate data and queries, identifying the difference in metrics due to Shredder’s deletions and storing this difference under a NULL dimension value. The process runs efficiently over terabytes of data, ensuring 100% stability in reported metrics and avoiding unnecessary costs by running automated data checks for each subset backfilled. Every version of our aggregates that uses Shredder Mitigation is reviewed to ensure it does not contain any dimensions that could be used to identify previously deleted records. The result of a backfill with shredder mitigation in our example is a new version of the aggregate that incorporates the requested dimension “channel” and matches the reported version of the KPI (a sketch of the recalculation follows the table):

installs_aggregates_v2

browser channel kpi_installs
Firefox Release 1
Firefox Beta 1
Fenix Release 1
Fenix NULL 1
Focus NULL 1
Total   5
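
To make the recalculation concrete, here is a minimal sketch of the mitigation step using the example numbers above (illustrative Rust, not Mozilla’s actual implementation, which works over DWH tables and queries rather than in-memory maps):

use std::collections::HashMap;

fn main() {
    // KPI per browser from the previous aggregate, computed before the
    // deletion requests were processed.
    let previous: HashMap<&str, i64> =
        HashMap::from([("Firefox", 2), ("Fenix", 2), ("Focus", 1)]);

    // Aggregate recomputed from the raw table after Shredder deletions,
    // now split by the newly requested "channel" dimension.
    let recomputed: Vec<(&str, &str, i64)> = vec![
        ("Firefox", "Release", 1),
        ("Firefox", "Beta", 1),
        ("Fenix", "Release", 1),
    ];

    // Keep the recomputed rows, then attribute the per-browser difference
    // versus the previous KPI to a NULL channel so the totals stay stable.
    let mut new_aggregate: Vec<(&str, Option<&str>, i64)> = Vec::new();
    let mut recomputed_totals: HashMap<&str, i64> = HashMap::new();
    for &(browser, channel, count) in &recomputed {
        new_aggregate.push((browser, Some(channel), count));
        *recomputed_totals.entry(browser).or_insert(0) += count;
    }
    for (&browser, &kpi) in &previous {
        let missing = kpi - recomputed_totals.get(browser).copied().unwrap_or(0);
        if missing > 0 {
            new_aggregate.push((browser, None, missing)); // channel = NULL
        }
    }

    // Prints rows equivalent to installs_aggregates_v2; the total is still 5.
    for (browser, channel, count) in &new_aggregate {
        println!("{browser} {channel:?} {count}");
    }
}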

With the reported metrics stable and consistent, the shredder mitigation process enables the business to safely evolve, generating knowledge in alignment with our data protection policies and safeguarding our users’ privacy choice. Want to learn more? Head over to the shredder process technical documentation for a detailed implementation guide and hands-on insights.

Firefox NightlyLearning and Improving Every Day – These Weeks in Firefox: Issue 173

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Abhijeet Chawla[:ff2400t]

New contributors (🌟 = first patch)

 

Project Updates

Add-ons / Web Extensions

WebExtension APIs
WebExtensions Framework
    • Fixed a tabs events regression on extensions-created tabs with a tab url that uses an unknown protocol (e.g. extension-registered protocol handler) – Bug 1921426
  • Thanks to John Bieling for reporting and fixing this regression
Addon Manager & about:addons
  • In the extensions panel, a new messagebar has been introduced to let users know when an extension has been disabled through the blocklist (for add-ons of type extensions disabled by either a hard or soft block) – Bug 1917848

DevTools

DevTools Toolbox

Fluent

Lint, Docs and Workflow

  • The test-manifest-toml linter has now been added to CI. It may show up in code reviews, and typically reports issues such as strings not using double quotes, skip-if conditions not being split across multiple lines, and tests being out of order within a file.

Migration Improvements

 

Picture-in-Picture

  • Thanks to florian for removing an unused call to Services.telemetry.keyedScalarAdd (bug 1932090), as a part of the effort to remove legacy telemetry scalar APIs (bug 1931901)
  • Also thanks to emilio for updating the PiP window to use outerHeight and outerWidth (bug 1931747), providing better compatibility for rounded PiP window corners and shadows on Windows

Search and Navigation

  • Address bar revamp (aka Scotch Bonnet project)
    • Dale disabled “interventions” results in address bar when new Quick Actions are enabled Bug 1794092
    • Dale re-enabled the Contextual Search feature Bug 1930547
    • Yazan changed Search Mode to not stick unless search terms are persisted, to avoid accidentally searching for URLs Bug 1923686
    • Daisuke fixed a problem where confirming an autofilled search keyword did not enable Search Mode Bug 1925532 
    • Daisuke made the Unified Search Button panel pick theme colors Bug 1930190
    • Daisuke improved keyboard navigation in and out of the Unified Search Button Bug 1930492, Bug 1931765
    • Emilio fixed regressions in the Address Bar alignment when the browser is full-screen Bug 1930499, and when the window is not focused Bug 1932652 
  • Search Service
  • Suggest

The Rust Programming Language BlogLaunching the 2024 State of Rust Survey

It’s time for the 2024 State of Rust Survey!

Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.

Like last year, the 2024 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, December 23rd, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible.

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.

Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Simplified Chinese
  • French
  • German
  • Japanese
  • Russian
  • Spanish

Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the translations, we would be glad if you could send us a pull request to improve the quality of the translations!

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):

  • @albertlarsan68
  • @GuillaumeGomez
  • @Urgau
  • @Jieyou Xu
  • @llogiq
  • @avrong
  • @YohDeadfall
  • @tanakakz
  • @ZuseZ4
  • @igaray

Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.

The Mozilla BlogReclaim the internet: Mozilla’s rebrand for the next era of tech

A stylized green flag on a black background, with the flag represented by a vertical line and a partial rectangle, and the "3" depicted with angular, geometric shapes.

Mozilla isn’t just another tech company — we’re a global crew of activists, technologists and builders, all working to keep the internet free, open and accessible. For over 25 years, we’ve championed the idea that the web should be for everyone, no matter who you are or where you’re from. Now, with a brand refresh, we’re looking ahead to the next 25 years (and beyond), building on our work and developing new tools to give more people the control to shape their online experiences. 

“As our personal relationships with the internet have evolved, so has Mozilla’s, developing a unique ability to meet this moment and help people regain control over their digital lives,” said Mark Surman, president of Mozilla. “Since open-sourcing our browser code over 25 years ago, Mozilla’s mission has been the same – build and support technology in the public interest, and spark more innovation, more competition and more choice online along the way. Even though we’ve been at the forefront of privacy and open source, people weren’t getting the full picture of what we do. We were missing opportunities to connect with both new and existing users. This rebrand isn’t just a facelift — we’re laying the foundation for the next 25 years.”

We teamed up with global branding powerhouse Jones Knowles Ritchie (JKR) to revamp our brand and revitalize our intentions across our entire ecosystem. At the heart of this transformation is making sure people know Mozilla for its broader impact, as well as Firefox. Our new brand strategy and expression embody our role as a leader in digital rights and innovation, putting people over profits through privacy-preserving products, open-source developer tools, and community-building efforts.

The Mozilla brand was developed with this in mind, incorporating insights from employees and the wider Mozilla community, involving diverse voices as well as working with specialists to ensure the brand truly represented Mozilla’s values while bringing in fresh, objective perspectives.

We back people and projects that move technology, the internet and AI in the right direction. In a time of privacy breaches, AI challenges and misinformation, this transformation is all about rallying people to take back control of their time, individual expression, privacy, community and sense of wonder. With our “Reclaim the Internet” promise,  a strategy built with DesignStudio in 2023,  the new brand empowers people to speak up, come together and build a happier, healthier internet — one where we can all shape how our lives, online and off, unfold. 

A close-up of a black hoodie with "Mozilla" printed in vibrant green, showcasing a modern and bold typeface.
A set of three ID badges with minimalist designs, each featuring a stylized black flag logo, a name, title, and Mozilla branding on green, black, or white backgrounds. The lanyards have "Mozilla" printed in bold text.

“The new brand system, crafted in collaboration with JKR’s U.S. and UK studios, now tells a cohesive story that supports Mozilla’s mission,” said Amy Bebbington, global head of brand at Mozilla. “We intentionally designed a system, aptly named ‘Grassroots to Government,’ that ensures the brand resonates with our breadth of audiences, from builders to advocates, changemakers to activists. It speaks to grassroots coders developing tools to empower users, government officials advocating for better internet safety laws, and everyday consumers looking to reclaim control of their digital lives.”

A large stage presentation with a bold black backdrop featuring oversized white typography and vibrant portraits of diverse individuals set against colorful blocks. The Mozilla flag logo is displayed in the top left, with "©2025 Mozilla Corporation" on the right. A presenter stands on the stage, emphasizing a modern, inclusive, and impactful design aesthetic.
A dynamic collage of Mozilla-branded presentation slides and visuals, showcasing a mix of graphs, headlines, diverse portraits, and key messaging. Themes include "Diversity & Inclusion," "Trustworthy AI," and "Sustainability," with bold typography, a structured grid layout, green accents, and the stylized flag logo prominently featured.

This brand refresh pulls together our expanding offerings, driving growth and helping us connect with new audiences in meaningful ways. It also funnels resources back into the  research and advocacy that fuel our mission.

  • The flag symbol highlights our activist spirit, signifying a commitment to ‘Reclaim the Internet.’ A symbol of belief, peace, unity, pride, celebration and team spirit—built from the ‘M’ for Mozilla and a pixel that is conveniently displaced to reveal a wink to its iconic Tyrannosaurus rex symbol designed by Shepard Fairey. The flag can transform into a more literal interpretation as its new mascot in ASCII art style, and serve as a rallying cry for our cause.
  • The bespoke wordmark is born of its semi-slab innovative typeface with its own custom characters. It complements its symbol and is completely true to Mozilla.
  • The colors start with black and white — a no-nonsense, sturdy base, with a wider green palette that is quintessential with nature and nonprofits that make it their mission to better the world, this is a nod to making the internet a better place for all.
  • The custom typefaces are bespoke and an evolution of its Mozilla slab serif today. It stands out in a sea of tech sans. The new interpretation is more innovative and built for its tech platforms. The sans brings character to something that was once hard working but generic. These fonts are interchangeable and allow for a greater degree of expression across its brand experience, connecting everything together.
  • Our new unified brand voice makes its expertise accessible and culturally relevant, using humor to drive action.
  • Icons inspired by the flag symbol connect to the broader identity system. Simplified layouts use a modular system underpinned by a square pixel grid.

“Mozilla isn’t your typical tech brand; it’s a trailblazing, activist organization in both its mission and its approach,” said Lisa Smith, global executive creative director at JKR. “The new brand presence captures this uniqueness, reflecting Mozilla’s refreshed strategy to ‘reclaim the internet.’ The modern, digital-first identity system is all about building real brand equity that drives innovation, acquisition and stands out in a crowded market.”

Our transition to the new brand is already underway, but we’re not done yet. We see this brand effort as an evolving process that we will continue to build and iterate on over time, with all our new efforts now aligned to this refreshed identity. This evolution brings advancements in AI, product growth and support for groundbreaking ventures. Stay tuned for upcoming campaigns and find out more at www.mozilla.org/en-US/

Curious to learn more about this project or JKR? Head over to www.jkrglobal.com

The word "Mozilla" displayed in bold, modern black typography on a white background, aligned with a precise grid system that emphasizes balance and structure.

The post Reclaim the internet: Mozilla’s rebrand for the next era of tech appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgIntroducing Uniffi for React Native: Rust-Powered Turbo Modules

Today Mozilla and Filament are releasing Uniffi for React Native, a new tool we’ve been using to build React Native Turbo Modules in Rust, under an open source license. This allows millions of developers writing cross-platform React Native apps to use Rust – a modern programming language known for its safety and performance benefits – to build single implementations of their app’s core logic that work seamlessly across iOS and Android. 

This is a big win for us and for Filament who co-developed the library with Mozilla and James Hugman, the lead developer. We think it will be awesome for many other developers too. Less code is good. Memory safety is good. Performance is good. We get all three, plus the joy of using a language we love in more places.

For those familiar with React Native, it’s a great framework for creating cross-platform apps, but it has its challenges. React Native apps rely on a single JavaScript thread, which can slow things down when handling complex tasks. Developers have traditionally worked around this by writing code twice – once for iOS and once for Android – or by using C++, which can be difficult to manage. Uniffi for React Native offers a better solution by enabling developers to offload heavy tasks to Rust, which is now easy to integrate with React Native. As a result, you’ve got faster, smoother apps and a streamlined development process.

How Uniffi for React Native works

Uniffi for React Native is a UniFFI bindings generator for using Rust from React Native via Turbo Modules. It lets us work at an abstraction level high enough to stay focused on our application’s needs rather than getting lost in the gory technical details of bespoke native cross-platform development. It provides tooling to generate:

  • Typescript and JSI C++ to call Rust from Typescript and back again
  • A Turbo-Module that installs the bindings into a running React Native library.
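To give a flavor of what this enables, here is a minimal sketch (an illustration built on assumptions, not an example from the project’s repo) of core logic written in Rust and exported with UniFFI’s proc macros; the generator then produces the Typescript and JSI glue so React Native code can call it directly:

// Sketch only: Rust core logic exposed through UniFFI proc macros (assumes the `uniffi` crate).
uniffi::setup_scaffolding!();

/// A plain Rust function that the generated Typescript bindings expose to React Native.
#[uniffi::export]
pub fn add(a: u32, b: u32) -> u32 {
    a + b
}

On the Typescript side, the generated Turbo Module would expose an equivalent add() function, so application code never has to touch the C++ layer by hand.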

We’re stoked about this work continuing. In 2020, we started with Uniffi as a modern day ‘write once; run anywhere’ toolset for Rust. Uniffi has come a long way since we developed the technology as a bit of a hack to get us a single implementation of Firefox Sync’s core (in Rust) that we could then deploy to both our Android and iOS apps! Since then Mozilla has used uniffi-rs to successfully deploy Rust in mobile and desktop products used by hundreds of millions of users. This Rust code runs important subsystems such as bookmarks and history sync, Firefox Suggest, telemetry and experimentation. Beyond Mozilla, Uniffi is used in Android (in AOSP), high-profile security products and some complex libraries familiar to the community.

Currently the Uniffi for React Native project is an early release. We don’t have a cool landing page or examples in the repo (coming!), but open source contributor Johannes Marbach has already been sponsored by Unomed to use Uniffi for React Native to create a React Native library for the Matrix SDK.

Need an idea on how you might give it a whirl? I’ve got two uses that we’re very excited about:

1) Use Rust to offload computationally heavy code to a multi-threaded/memory-safe subsystem to escape single-threaded JS performance bottlenecks in React Native. If you know, you know.

2) Leverage the incredible library of Rust crates in your React Native app. One of the Filament devs recently showed how powerful this is. With a rudimentary knowledge of Rust, they were able to find a fast blurhashing library on crates.io to replace a slow Typescript implementation and get it running the same day. We’re hoping we can improve the tooling even more to make this kind of optimization as easy as possible.
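As a sketch of that second pattern (the crate name and call signature below are assumptions for illustration, not Filament’s actual code), wrapping an existing crate can be as small as re-exporting one function behind a UniFFI annotation:

// Hypothetical wrapper around a crates.io blurhash encoder; the exact crate API may differ.
#[uniffi::export]
pub fn encode_blurhash(width: u32, height: u32, rgba_pixels: Vec<u8>) -> String {
    // Heavy pixel math runs in Rust; only the resulting string crosses back to Typescript.
    blurhash::encode(4, 3, width, height, &rgba_pixels).unwrap_or_default()
}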

Uniffi represents a step forward in cross-platform development, combining the power of Rust with the flexibility of React Native to unlock new possibilities for app developers. 

We’re excited to have the community explore what’s possible. Please check out the library on GitHub and jump into the conversation on Matrix.

Disclosure: in addition to this collaboration, Mozilla Ventures is an investor in Filament. 

 

The post Introducing Uniffi for React Native: Rust-Powered Turbo Modules appeared first on Mozilla Hacks - the Web developer blog.

This Week In RustThis Week in Rust 576

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is augurs, a time-series toolkit for Rust with bindings to JS & Python.

Thanks to Ben Sully for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • RustWeek 2025 | Closes 2025-01-12 | Utrecht, The Netherlands | Event date: 2025-05-13

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

488 pull requests were merged in the last week

Rust Compiler Performance Triage

Busy week with more PRs impacting performance than is typical. Luckily performance improvements outweighed regressions in real world benchmarks with the largest single performance gain coming from a change to no longer unconditionally do LLVM IR verification in debug builds which was just wasted work.

Triage done by @rylev. Revision range: 7db7489f..490b2cc0

Summary:

(instructions:u)             mean     range             count
Regressions ❌ (primary)      0.5%     [0.2%, 1.9%]      58
Regressions ❌ (secondary)    1.1%     [0.2%, 5.1%]      85
Improvements ✅ (primary)    -2.3%     [-8.2%, -0.2%]    116
Improvements ✅ (secondary)  -2.5%     [-8.9%, -0.1%]    55
All ❌✅ (primary)            -1.4%     [-8.2%, 1.9%]     174

6 Regressions, 6 Improvements, 5 Mixed; 5 of them in rollups
49 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-12-04 - 2025-01-01 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

"self own" sounds like a rust thing

ionchy on Mastodon

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Tiger OakesHow to fix Storybook screenshot testing

As an alternative to Chromatic, I’ve been using Storybook’s Test Runner to power screenshot tests for Microsoft Loop. We configure the test runner to run in CI and take a screenshot of every story. However, the initial implementation based on the official Storybook docs was very flaky due to inconsistent screenshots of the same story. Here are some tips to reduce flakiness in your Storybook screenshot tests.

The Storybook Test Runner configuration

.storybook/test-runner.js

import * as path from 'node:path';
import { getStoryContext, waitForPageReady } from '@storybook/test-runner';

/**
 * @type {import('@storybook/test-runner').TestRunnerConfig}
 */
const config = {
  async preVisit(page) {
    await page.emulateMedia({ reducedMotion: 'reduce' });
  },
  async postVisit(page, context) {
    const { tags, title, name } = await getStoryContext(page, context);
    if (!tags.includes('no-screenshot')) {
      // Wait for page idle
      await waitForPageReady(page);
      await page.evaluate(
        () => new Promise((resolve) => window.requestIdleCallback(resolve))
      );
      // Wait for images to load
      await page.waitForFunction(() =>
        Array.from(document.images).every((i) => i.complete)
      );
      // INFO: '/' or "\\" in screenshot name creates a folder in screenshot location.
      // Replacing with '-'
      const ssNamePrefix = `${title}.${name}`
        .replaceAll(path.posix.sep, '-')
        .replaceAll(path.win32.sep, '-');
      await page.screenshot({
        path: path.join(
          process.cwd(),
          'dist/screenshots',
          `${ssNamePrefix}.png`
        ),
        animations: 'disabled',
        caret: 'hide',
        mask: [
          page.locator('css=img[src^="https://res.cdn.office.net/files"]'),
        ],
      });
    }
  },
};

export default config;

This configuration essentially tells Storybook to run page.screenshot after each story loads, using the postVisit hook. As the Test Runner is based on Playwright, we can use Playwright’s screenshot function to take pictures and save them to disk.

Disable animations

One source of inconsistency in screenshot tests is animation, as the screenshot will be taken at slightly different times. Luckily, Playwright has a built-in option to disable animations.

await page.screenshot({
  animations: 'disabled',
  caret: 'hide',
});

Additionally, we can use the prefers-reduced-motion media query to use CSS designed for no motion. (You are writing CSS for reduced motion, right?) This can be configured when the page is loaded in the preVisit hook.

async function preVisit(page) {
  await page.emulateMedia({ reducedMotion: 'reduce' });
}

Wait for images to load

Since images are a separate network request, they might not be loaded when the screenshot is taken. We can get a list of all the image elements on the page and wait for them to complete.

// waitForFunction waits for the function to return a truthy value
await page.waitForFunction(() =>
  // Get list of images on the page
  Array.from(document.images)
    // return true if .complete is true for all images
    .every((i) => i.complete)
);

However, we still ended up with some issues for images that load over the internet instead of from the disk. To fix this, we can mask out specific elements from the screenshot using the mask option. I wrote a CSS selector for images loaded from the Office CDN.

await page.screenshot({
  mask: [page.locator('css=img[src^="https://res.cdn.office.net/files"]')],
});

Try to figure out if the page is idle

Storybook Test Runner includes a helper waitForPageReady function that waits for the page to be loaded. We also wait for the browser to be in an idle state using requestIdleCallback.

import { waitForPageReady } from '@storybook/test-runner';

await waitForPageReady(page);
await page.evaluate(
  () => new Promise((resolve) => window.requestIdleCallback(resolve))
);

Both of these feel more like vibes than guarantees, but they can help reduce flakiness.

Custom assertions in stories

The above configuration gives a good baseline, but you’ll likely end up with one-off issues in specific stories (especially if React Suspense or lazy loading is involved). In these cases, you can add custom assertions to the story itself! Storybook Test Runner waits until the play function in the story is resolved, so you can add assertions there.

Component.stories.js

import { expect, within } from '@storybook/test';

export const SomeStory = {
  async play({ canvasElement }) {
    const canvas = within(canvasElement);
    await expect(
      await canvas.findByText('Lazy loaded string')
    ).toBeInTheDocument();
  },
};

Future Vitest support

Storybook is coming out with a brand-new Test addon based on Vitest. This isn’t supported by Webpack loaders so we can’t use it for Microsoft Loop yet, but it’s something to keep an eye on. Vitest will run in browser mode on top of Playwright, so the page object will still be available.

import { page } from '@vitest/browser/context';

The Mozilla BlogUsing trusted execution environments for advertising use cases

This article is the next in a series of posts we’ll be doing to provide more information on how Anonym’s technology works.  We started with a high level overview, which you can read here.

Mozilla acquired Anonym over the summer of 2024, as a key pillar to raise the standards of privacy in the advertising industry. These privacy concerns are well documented, as described in the US Federal Trade Commission’s recent report. Separate from Mozilla surfaces like Firefox, which work to protect users from invasive data collection, Anonym is ad tech infrastructure that focuses on improving privacy measures for data commonly shared between advertisers and ad networks. A key part of this process is where that data is sent and stored. Instead of advertisers and ad networks sharing personal user data with each other, they encrypt it and send it to Anonym’s Trusted Execution Environment.  The goal of this approach is to unlock insights and value from data without enabling the development of cross-site behavioral profiles based on user-level data.

A trusted execution environment (TEE) is a technology for securely processing sensitive information in a way that protects code and data from unauthorized access and modification. A TEE can be thought of as a locked down environment for processing confidential information. The term enclave refers to the secure memory portion of the trusted execution environment.

Why TEEs?

TEEs improve on standard compute infrastructure due to:

  • Confidentiality – Data within the TEE is encrypted and inaccessible outside the TEE, even if the underlying system is compromised. This ensures that sensitive information remains protected.
  • Attestation – TEEs can provide cryptographic proof of their identity and the code they intend to execute. This allows other parts of the system to verify that the TEE is trustworthy before interacting with it and ensures only authorized code will process sensitive information.

Because humans can’t access TEEs to manipulate the code, Anonym’s system requires that all the operations that must be performed on the data be programmed in advance. We do not support arbitrary queries or real-time data manipulation. While that may sound like a drawback, it offers two material benefits. First, it ensures that there are no surprises. Our partners know with certainty how their data will be processed. Anonym and its partners cannot inadvertently access or share user data. Second, this hardened approach also lends itself to highly repeatable use cases. In our case, for example, this means ad platforms can run a measurement methodology repeatedly with many advertisers without needing to approve the code each time knowing that by design, the method and the underlying data are safe.

TEEs in Practice

Today, Anonym uses hardware-based Trusted Execution Environments (TEEs) based on Intel SGX offered by Microsoft Azure. We believe Intel SGX is the most researched and widely deployed approach to TEEs available today.

When working with our ad platform partners, Anonym develops the algorithm for the specific advertising application. For example, if an advertiser is seeking to understand whether and which ads are driving the highest business value, we will customize our attribution algorithm to align with the ad platform’s standard approach to attribution. This includes creating differentially private output to protect data subjects from reidentification. 

Prior to running any algorithm on partner data, we provide our partners with documentation and source code access through our Transparency Portal, a process we refer to as binary review. Once our partners have reviewed a binary, they can approve it using the Transparency Portal. If, at any time, our partners want to disable Anonym’s ability to process data, they can revoke approval.

Each ‘job’ processed by Anonym starts with an ephemeral TEE being spun up. Encrypted data from our partners is pulled into the TEE’s encrypted memory. Before the data can be decrypted, the TEE must verify its identity and integrity. This process is referred to as attestation. Attestation starts with the TEE creating cryptographic evidence of its identity and the code it intends to run (similar to a hash). The system will compare that evidence to what has been approved for each partner contributing data. Only if this attestation process is successful will the TEE be able to decrypt the data. If the cryptographic signature of the binary does not match the approved binary, the TEE will not get access to the keys to decrypt and will not be able to process the data. 
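To make that gate concrete, here is a purely illustrative Rust sketch of the decision just described. It is not Anonym’s code: the types and helper are invented, and a plain hash comparison stands in for real SGX quote verification:

use sha2::{Digest, Sha256};

/// Simplified stand-in for the cryptographic evidence a TEE produces about
/// the code it intends to run.
struct AttestationEvidence {
    binary_hash: Vec<u8>,
}

/// Release the decryption key only if the enclave's measurement matches the
/// binary the partner approved; otherwise the data stays encrypted.
fn release_key_if_approved(
    evidence: &AttestationEvidence,
    approved_binary: &[u8],
    data_key: &[u8],
) -> Option<Vec<u8>> {
    let approved_hash = Sha256::digest(approved_binary);
    if evidence.binary_hash.as_slice() == approved_hash.as_slice() {
        Some(data_key.to_vec()) // attestation succeeded: the TEE may decrypt the partner data
    } else {
        None // mismatch: the key is withheld and the data stays encrypted
    }
}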

Attestation ensures our partners have control of their data, and can revoke access at any point in time. It also ensures Anonym enclaves never have access to sensitive data without customer visibility.  We do this by providing customers with a log that records an entry any time a customer’s data is processed.

Once the job is complete and the anonymized data is written to storage, the TEE is spun down and the data within it is destroyed. The aggregated and differentially private output is then shared with our partners. 

We hope this overview has been helpful. Our next blog post will walk through Anonym’s approach to transparency and control through our Transparency Portal.

The post Using trusted execution environments for advertising use cases appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird Monthly Development Digest – November 2024

Hello Thunderbird Community! Another adventurous month is behind us, and the team has emerged victorious from a number of battles with code, quirks, bugs and performance issues. Here’s a quick summary of what’s been happening across the front and back end teams as some of the team heads into US Thanksgiving:

Exchange Web Services support in Rust

November saw an increase in the number of team members contributing to the project and in the number of features shipped! Users on our Daily release channel can help test newly released features such as copying and moving messages from EWS to another protocol, marking a message as read/unread, and local storage functionality. Keep track of feature delivery here.

If you aren’t already using Daily or Beta, please consider downloading to get early access to new features and fixes, and to help us uncover issues early.

Account Hub

Development of a refreshed account hub has reached the end of an important initial stage, so it is entering QA review next week while we spin up tasks for phase 2 – taking place in the last few weeks of the year. Meta bug & progress tracking.

Global Database & Conversation View

Work to implement a long term database replacement is moving ahead despite some team members being held up in firefighting mode on regressions from patches which landed almost a year ago. Preliminary patches on this large-scale project are regularly pumped into the development ecosystem for discussion and review, with the team aiming to be back to full capacity before the December break.

In-App Notifications

With phase 1 of this project now complete, we’ve uplifted the feature to 134.0 Beta and notification tests will be activated this week. Phase 2 of the project is well underway, with some features accelerated and uplifted to form part of our phase 1 testing plan.  Meta Bug & progress tracking.

Folder & Message Corruption

Some of the code we manage is now 20 years old and efforts are constantly under way to modernize, standardize and make things easier to maintain in the future. While this process is very rewarding, it often comes with unforeseen consequences which only come to light when changes are exposed to the vast number of users on our “ESR” channel who have edge cases and ways of using Thunderbird that are hard to recreate in our limited test environments.

The past few months have been difficult for our development team as they have responded to a wide range of issues related to message corruption. After a focused team effort, and help from a handful of dedicated users and saintly contributors, we feel that we have not only corrected any issues that were introduced during our recent refactoring, but also uncovered and solved problems that have been plaguing our users for years. And long may that continue! We’re here to improve things!

New Features Landing Soon

Several requested features have reached our Daily users and include…

If you want to see things as they land, and help squash early bugs, you can check the pushlog and try running daily. This would be immensely helpful for catching things early.

See you next month.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – November 2024 appeared first on The Thunderbird Blog.

Firefox NightlyAnnouncing Faster, Lighter Firefox Downloads for Linux with .tar.xz Packaging!

We’re excited to announce an improvement for our Linux users that enhances both performance and compatibility with various Linux distributions.

Switching to .tar.xz Packaging for Linux Builds

In our ongoing effort to optimize Firefox for all users, we are transitioning the packaging format of Firefox for Linux from .tar.bz2 to .tar.xz (utilizing the LZMA compression algorithm). This change results in smaller download sizes and faster decompression times, making your experience smoother and more efficient.

What This Means for You

  • Smaller Downloads: The Firefox .tar.xz packages are, on average, 25% smaller than their .tar.bz2 counterparts. This means quicker downloads, saving you time and bandwidth.
  • Faster Installation: With improved decompression speeds, installing Firefox on Linux will be faster than ever. The .tar.xz format decompresses more than twice as fast as .tar.bz2, allowing you to get up and running in no time.
  • Enhanced Compatibility: Modern Linux distributions support the .tar.xz format. This switch aligns Firefox with the standards of the Linux community, ensuring better integration and compatibility.
  • No Action Required for Current Users: If you already have Firefox installed on your computer, there’s nothing you need to do. Firefox will continue to operate and update as usual.

Accessing the New Packages

(Re)installing Firefox? Just curious about testing out the compression?

Starting today, November 27th, 2024 you can find the new .tar.xz archives on our downloads page. Simply select the Firefox Nightly for Linux that you desire, and you’ll receive the new packaging format.

Maintaining Firefox on your favorite Linux distribution?

For package maintainers or scripts that reference our download links, please note that this packaging change is currently implemented in Firefox Nightly and will eventually roll out to the Beta and Release channels in the weeks to come.

To maintain uninterrupted updates now and in the future, we recommend updating your scripts to handle both .tar.bz2 and .tar.xz extensions, or switching to .tar.xz format when it becomes available in your preferred channel.

Why does Firefox use .tar.xz instead of Zstandard (.zst) for Linux releases?

While Zstandard is slightly faster to decompress, we chose .tar.xz because it offers better compression, reducing download sizes and saving bandwidth. Additionally, .tar.xz is widely supported across Linux systems, ensuring compatibility without extra dependencies.

For more details on how the decision was made, please refer to bug 1710599.

We Value Your Feedback

Your input is crucial to us. We encourage you to download the new .tar.xz packaged builds, try them out, and let us know about your experience.

  • Report Issues: If you encounter any bugs or problems, please report them through Bugzilla.
  • Stay Connected: Join the discussion and share your thoughts with the Firefox Nightly community. Your feedback helps us improve and tailor Firefox to better meet your needs.

Thank You for Your Support

We appreciate your continued participation in the Firefox Nightly community. Together, we’re making Firefox better every day. Stay tuned for more updates, and happy browsing!

Tiger Oakes2024 JS Rap Up

To open JSNation US 2024, Daphne asked me to help write a rap to recap the year in JavaScript news, parodying mrgrandeofficial. Here’s what I came up with (with info from Frontend Focus, TC39 meetings, and lots of web searches)!

Thanks to rappers CJ Reynolds, Daphne Oakes, Henri Helvetica, and Beau Carnes - aka Hip Hop Array!


The Script

11 months into 2024…
let’s recap Javascript once more

January

iOS gets new browser engines
Apple creates PWA tension

February

React Labs drops a big update
Transferable buffers come out the gate

March

JSR comes alive
World Wide Web turns 35

April

Node 22 gives us module require()
ESLint 9 sets configs on fire

May

React 19 enters RC
SolidStart 1 adds simplicity

June

This year’s spec is ratified
JSNation on the EU side

July

Ladybird browser enters the race
Node tries type stripping whitespace

August

rspack 1 hits 1.0
telling webpack you’re too slow

September

Tell Oracle: drop JS trademark
So we can leave ECMAScript in the dark

October

Here comes NextJS 15
Deno 2, Svelte 5 - so fresh so clean

November

Bluesky rising, Twitter’s outcast
CSS gets a logo at last

JSNation will be a blast!

The Rust Programming Language BlogAnnouncing Rust 1.83.0

The Rust team is happy to announce a new version of Rust, 1.83.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.83.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.83.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.83.0 stable

New const capabilities

This release includes several large extensions to what code running in const contexts can do. This refers to all code that the compiler has to evaluate at compile-time: the initial value of const and static items, array lengths, enum discriminant values, const generic arguments, and functions callable from such contexts (const fn).

References to statics. So far, const contexts except for the initializer expression of a static item were forbidden from referencing static items. This limitation has now been lifted:

static S: i32 = 25;
const C: &i32 = &S;

Note, however, that reading the value of a mutable or interior mutable static is still not permitted in const contexts. Furthermore, the final value of a constant may not reference any mutable or interior mutable statics:

static mut S: i32 = 0;

const C1: i32 = unsafe { S };
// error: constant accesses mutable global memory

const C2: &i32 = unsafe { &S };
// error: encountered reference to mutable memory in `const`

These limitations ensure that constants are still "constant": the value they evaluate to, and their meaning as a pattern (which can involve dereferencing references), will be the same throughout the entire program execution.

That said, a constant is permitted to evaluate to a raw pointer that points to a mutable or interior mutable static:

static mut S: i32 = 64;
const C: *mut i32 = &raw mut S;

Mutable references and pointers. It is now possible to use mutable references in const contexts:

const fn inc(x: &mut i32) {
    *x += 1;
}

const C: i32 = {
    let mut c = 41;
    inc(&mut c);
    c
};

Mutable raw pointers and interior mutability are also supported:

use std::cell::UnsafeCell;

const C: i32 = {
    let c = UnsafeCell::new(41);
    unsafe { *c.get() += 1 };
    c.into_inner()
};

However, mutable references and pointers can only be used inside the computation of a constant, they cannot become a part of the final value of the constant:

const C: &mut i32 = &mut 4;
// error[E0764]: mutable references are not allowed in the final value of constants

This release also ships with a whole bag of new functions that are now stable in const contexts (see the end of the "Stabilized APIs" section).

These new capabilities and stabilized APIs unblock an entire new category of code to be executed inside const contexts, and we are excited to see how the Rust ecosystem will make use of this!

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.83.0

Many people came together to create Rust 1.83.0. We couldn't have done it without all of you. Thanks!

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 132-134)

Hello! Welcome to another episode of the SpiderMonkey Newsletter. I’m your host, Matthew Gaudet.

In the spirit of the upcoming season, let’s talk turkey. I mean, monkeys. I mean SpiderMonkey.

Today we’ll cover a little more ground than the normal newsletter.

If you haven’t already read Jan’s wonderful blog about how he managed to improve Wasm compilation speed by 75x on large modules, please take a peek. It’s a great story of how O(n^2) is the worst complexity – fast enough to seem OK in small cases, and slow enough to blow up horrendously when things get big.

🚀 Performance

👷🏽‍♀️ New features & In Progress Standards Work

🚉 SpiderMonkey Platform Improvements

This Week In RustThis Week in Rust 575

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is postcard, a battle-tested, well-documented #[no_std] compatible serializer/deserializer geared towards use in embedded devices.

Thanks to Reto Trappitsch for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

405 pull requests were merged in the last week

Rust Compiler Performance Triage

This week saw more regressions than improvements, mostly due to three PRs that performed internal refactorings that are necessary for further development and modification of the compiler.

Triage done by @kobzol. Revision range: 7d40450b..7db7489f

Summary:

(instructions:u)             mean     range             count
Regressions ❌ (primary)      0.6%     [0.1%, 3.6%]      57
Regressions ❌ (secondary)    0.6%     [0.0%, 2.7%]      100
Improvements ✅ (primary)    -0.5%     [-1.5%, -0.2%]    11
Improvements ✅ (secondary)  -0.4%     [-0.5%, -0.3%]    7
All ❌✅ (primary)            0.4%      [-1.5%, 3.6%]     68

4 Regressions, 2 Improvements, 3 Mixed; 3 of them in rollups
40 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-11-27 - 2024-12-25 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Will never stop being positively surprised by clippy

error: hypothenuse can be computed more accurately
   --> src/main.rs:835:5
    |
835 |     (width * width + height * height).sqrt() / diag
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: consider using `width.hypot(height)`
    |
    = help: for further information, visit https://rust-lang.github.io/rust-clippy/master/index.html#imprecise_flops

llogiq is quite self-appreciative regarding his suggestion.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language BlogRust 2024 call for testing

Rust 2024 call for testing

We've been hard at work on Rust 2024. We're thrilled about how it has turned out. It's going to be the largest edition since Rust 2015. It has a great many improvements that make the language more consistent and ergonomic, that further our relentless commitment to safety, and that will open the door to long-awaited features such as gen blocks, let chains, and the never (!) type. For more on the changes, see the nightly Edition Guide.

As planned, we recently merged the feature-complete Rust 2024 edition to the release train for Rust 1.85. It has now entered nightly beta[1].

You can help right now to make this edition a success by testing Rust 2024 on your own projects using nightly Rust. Migrating your projects to the new edition is straightforward and mostly automated. Here's how:

  1. Install the most recent nightly with rustup update nightly.
  2. In your project, run cargo +nightly fix --edition.
  3. Edit Cargo.toml and change the edition field to say edition = "2024" and, if you have a rust-version specified, set rust-version = "1.85".
  4. Run cargo +nightly check to verify your project now works in the new edition.
  5. Run some tests, and try out the new features!

(More details on how to migrate can be found here and within each of the chapters describing the changes in Rust 2024.)
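As a concrete illustration of step 3, a migrated manifest could end up looking like this (the package name and version are placeholders):

[package]
name = "my-crate"       # placeholder
version = "0.1.0"       # placeholder
edition = "2024"
rust-version = "1.85"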

If you encounter any problems or see areas where we could make the experience better, tell us about it by filing an issue.

Coming next

Rust 2024 will enter the beta channel on 2025-01-09, and will be released to stable Rust with Rust 1.85 on 2025-02-20.

  1. That is, it's still in nightly (not in the beta channel), but the edition items are frozen in a way similar to it being in the beta channel, and as with any beta, we'd like wide testing.

Firefox Developer ExperienceFirefox WebDriver Newsletter 133

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 133 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you ever wanted to contribute to an open source project used by millions of users, or are interested in some experience in software development, jump in.

We are always grateful to receive external contributions, here are the ones which made it in Firefox 133:

  • Liam (ldebeasi) added an internal helper to make it easier to call commands from the parent process to content processes
  • Dan (temidayoazeez032) updated the error thrown by the browsingContext.print command for invalid dimensions

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi.

WebDriver BiDi

Support for url argument of network.continueRequest

We just added support for the "url" argument of the network.continueRequest command. This parameter, which should be a string representing a URL, allows a request blocked in the beforeRequestSent phase to be transparently redirected to another URL. The content page will not be aware of the redirect, and will treat the response as if it came from the originally targeted URL.

In terms of BiDi network events, note that this transparent redirect will also not lead to additional network.beforeRequestSent events. The redirect count for this request/response will not be increased by this command either. It can be useful if clients want to redirect a specific call to a test API, without having to update the implementation of the website or web application.

-> {
  "method": "network.continueRequest",
  "params": {
    "request": "12",
    "url": "https://bugzilla.allizom.org/show_bug.cgi?id=1234567"
  },
  "id": 2
}

<- { "type": "success", "id": 2, "result": {} }

As with other network interception features, using this command and this parameter relies on the fact that the client is monitoring network events and has setup appropriate intercepts in order to catch specific requests. For more details, you can check out the Firefox WebDriver 124 newsletter where we introduced network interception.

Bug fixes

Marionette

Bug fixes

Don Martiopt out of Google Page Annotations

Ever wish Google would have one button for opt me out of all Google growth hacking schemes that you could click once and be done with it? Me too. But that’s not how it works.

Anyway, the new one is Google Page Annotations: Google app for iOS now injects links back to Search on websites. I really don’t want this site showing up with links to stuff I didn’t link to. The choices of links on here are my own free expression.

This opt-out has two parts and you do need to have a Google Account to do it.

  1. Either set up Google Search Console and add your site(s) as web properties on there, or go to your existing Google Search Console account and get a list of your web properties.

  2. Visit the form: Opt out from Page Annotation in Google App browser for iOS and add your web properties as a comma-separated list. You have to be the Google Search Console owner of the site(s) to do the opt out.

Hopefully this awkward form thing is just temporary and there will be a more normal opt-out with a meta tag or something at some point. I’ll update this page if they make one.

IMHO the IT business had a peak some time in the mid-2000s. You didn’t have to dink with vintage PC stuff like DIP switches and partition tables, but the Internet companies were still in create more value than you capture mode and you didn’t have to work around too many dark patterns either. If I recall correctly, Microsoft did something like this link-adding scheme in Internet Explorer at one point, but they backed off on it before it really became a thing and the opt-out was easier. Welcome to the return of the power user. Oh well, writing up all the individual opt outs is good for getting clicks. The Google Search algorithm loves tips on how to turn Google stuff off.

Related (more stuff to turn off)

fix Google Search: get rid of most of the AI and other annoying features

Google Chrome ad features checklist: turn off tracking and built-in ads in Google Chrome

Block AI training on a web site Right now you can’t block Google from taking your content for AI without also blocking your site from Google Search, but that’s likely to change.

Bonus links

Why the DOJ’s Google Ad Tech Case Matters to You In 2020, as the UK report cited above showed, publishers received only 51% of the money spent by advertisers to reach readers, and about 15% of advertisers’ money seems to just… disappear.

MFA is Programmatic’s Dark Mirror The failure of MFA is not MFA websites. The failure of MFA is that we built an incentive system in programmatic that essentially necessitated their existence. Related: I was invited to Google HQ to talk about my failing website. Here’s how that went.

The Rust Programming Language BlogThe wasm32-wasip2 Target Has Reached Tier 2 Support

Introduction

In April of this year we posted an update about Rust's WASI targets to the main Rust blog. In it we covered the rename of the wasm32-wasi target to wasm32-wasip1, and the introduction of the new wasm32-wasip2 target as a "tier 3" target. This meant that while the target was available as part of rust-lang/rustc, it was not guaranteed to build. We're pleased to announce that this has changed in Rust 1.82.

For those unfamiliar with WebAssembly (Wasm) components and WASI 0.2, here is a quick, simplified primer:

  • Wasm is a (virtual) instruction format for programs to be compiled into (think: x86).
  • Wasm Components are a container format and type system that wrap Core Wasm instructions into typed, hermetic binaries and libraries (think: ELF).
  • WASI is a reserved namespace for a collection of standardized Wasm component interfaces (think: POSIX header files).

For a more detailed explanation see the WASI 0.2 announcement post on the Bytecode Alliance blog.

What's new?

Starting with Rust 1.82 (2024-10-17), the wasm32-wasip2 (WASI 0.2) target has reached tier 2 platform support in the Rust compiler. Among other things this now means it is guaranteed to build, and is now available to install via Rustup using the following command:

rustup target add wasm32-wasip2

Up until now, Rust users writing Wasm Components would always have to rely on tools (such as cargo-component) which target the WASI 0.1 target (wasm32-wasip1) and package it into a WASI 0.2 Component via a post-processing step. Now that wasm32-wasip2 is available to everyone via Rustup, tooling can begin to directly target WASI 0.2 without the need for additional post-processing.

What this also means is that ecosystem crates can begin targeting WASI 0.2 directly for platform-specific code. WASI 0.1 did not have support for sockets. Now that we have a stable tier 2 platform available, crate authors should be able to finally start writing WASI-compatible network code. To target WASI 0.2 from Rust, authors can use the following cfg attribute:

#[cfg(all(target_os = "wasi", target_env = "p2"))]
mod wasip2 {
    // items go here
}

To target the older WASI 0.1 target, Rust also accepts target_env = "p1".

Standard Library Support

The WASI 0.2 Rust target reaching tier 2 platform support is in a way just the beginning. While the platform itself is now stable and guaranteed to build, support in the stdlib for WASI 0.2 APIs is still limited. The WASI 0.2 specification covers APIs for timers, files, and sockets, for example, but if you try to use the stdlib APIs for these today, you'll find they don't yet work.

We expect to gradually extend the Rust stdlib with support for WASI 0.2 APIs throughout the remainder of this year into the next. That work has already started, with rust-lang/rust#129638 adding native support for std::net in Rust 1.83. We expect more of these PRs to land through the remainder of the year.
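As a sketch of where this is heading (and only a sketch: it assumes the std::net support from that PR and a WASI 0.2 host that grants socket capabilities), ordinary std code should simply compile for the target:

use std::io::Write;
use std::net::TcpStream;

// Build with: cargo build --target wasm32-wasip2
// This only works once stdlib socket support has landed and the host runtime permits networking.
fn main() -> std::io::Result<()> {
    let mut stream = TcpStream::connect("127.0.0.1:8080")?;
    stream.write_all(b"hello from wasm32-wasip2\n")?;
    Ok(())
}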

This doesn't need to stop users from using WASI 0.2 today, though. The stdlib is great because it provides portable abstractions, usually built on top of an operating system's libc or equivalent. If you want to use WASI 0.2 APIs directly today, you can either use the wasi crate, or generate your own WASI bindings from the WASI specification's interface types using wit-bindgen.

Conclusion

The wasm32-wasip2 target is now installable via Rustup. This makes it possible for the Rust compiler to directly compile to the Wasm Components format targeting the WASI 0.2 interfaces. There is now also a way for crates to add WASI 0.2 platform-specific support by writing:

#[cfg(all(target_os = "wasi", target_env = "p2"))]
mod wasip2 {}

We're excited for Wasm Components and WASI 0.2 to have reached this milestone within the Rust project, and are excited to see what folks in the community will be building with it!

Frederik BraunModern solutions against cross-site attacks

NB: This is the text/html version of my talk from the German OWASP Day 2024 in Leipzig earlier this month. If you prefer, there is also a video from the event.

Title slide. Firefox logo in the top right. Headline: "Dealing with Cross-Site Attacks". Presentation from Frederik Braun held at German OWASP Day 2024 in Leipzig.

This article is about cross-site leak attacks and what recent defenses have been introduced to counter them. I …

Don MartiUse an ad blocking extension when performing Internet searches

The FBI seems to have taken down the public service announcement covered in Even the FBI says you should use an ad blocker | TechCrunch.

Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.

This is still good advice. Search ads are full of scams, and you can block ads on search without blocking the ads on legit sites. I made a local copy of the FBI alert.

Why did they take the web version down? Maybe we’ll find out. I sent the FBI a FOIA request for any correspondence about this alert and the decision to remove it.

The Malwarebytes site has more good info on ongoing problems with search ads. Google Search user interface: A/B testing shows security concerns remain

Related

effective privacy tips

SingleFile is a convenient extension for saving copies of pages. (I got the FBI page from the Internet Archive. It’s a US government work so make all the copies you want.)

Bonus links

“Interpreting the Ambiguities of Section 230” by Alan Rozenshtein (Section 230 covers publisher liability, but not distributor liability.)

Confidential OCR (How to install and use Tesseract locally on Linux)

The Great Bluesky Migration: I Answer (Some) Of Your Questions Bluesky also offers a remedy for quote-dunking. If someone quotes your post to make a nasty comment on it, you can detach the quoted post entirely. (And then you should block the jerk). Related: Bluesky’s success is a rejection of big tech’s operating system

Designing a push life in a pull world Everything in our online world is designed to push through our boundaries, usually because it’s in someone else’s financial best interest. And we’ve all just accepted that this is the way the world works now.

Killer Robots About to Fill Skies… (this kind of thing is why the EU doesn’t care about AI innovation in creepy tracking and copyright infringement—they need those developers to get jobs in the defense industry, which isn’t held back by the AI Act.)

Inside the Bitter Battle Between Starbucks and Its Workers (More news from management putting dogmatic union-busting ahead of customers and shareholders, should be a familiar story to anyone dealing with inadequate ad review or search quality ratings.)

National Public Data saga illustrates little-regulated US data broker industry National Public Data appears to have been a home-based operation run by Verini himself. The enterprise maintains no dedicated physical offices. The owner/operator maintains the operations of company from his home office, and all infrastructure is housed in independent data centers, Verini said in his bankruptcy filing.

Don Martiprediction markets and the 2024 election link dump

Eric Neyman writes, in Seven lessons I didn’t learn from election day, Many people saw the WSJ report as a vindication of prediction markets. But the neighbor method of polling hasn’t worked elsewhere. More: Polling by asking people about their neighbors: When does this work? Should people be doing more of it? And the connection to that French dude who bet on Trump

The money is flooding in, but what are prediction markets truly telling us? If we look back further, predicted election markets were actually legal in the US from the 1800s to 1924, and historical data shows that they were accurate. There’s a New York Times story of Andrew Carnegie noting how surprisingly accurate the election betting markets were at predicting outcomes. They were actually more accurate before the introduction of polling as a concept, which implies that the introduction of polling diluted the accuracy of the market, rather than the opposite.

Was the Polymarket Trump whale smart or lucky? Whether one trader’s private polling tapped sentiment more accurately than the publicly available surveys, or whether statistical noise just happened to reinforce his confidence to buy a dollar for 40c, can’t be known without seeing the data.

Koleman Strumpf Interview - Prediction Markets & More 2024 was a huge vindication for the markets. I don’t know how else to say it, but all the polls and prognosticators were left in the dust. Nobody came close to the markets. They weren’t perfect, but they were an awful lot better than anything else, to say the least.

FBI raids Polymarket CEO Shayne Coplan’s apartment, seizes phone: source Though U.S. election betting is newly legal in some circumstances, Polymarket is not supposed to allow U.S. users after the Commodity Futures Trading Commission halted its operations in 2022, but its user base largely operates through cryptocurrency, which allows for easy anonymity.

Polymarket Explained: How Blockchain Prediction Markets Are Shaping the Future of Forecasting (Details of how Polymarket works including tokens and smart contracts.)

Betting odds called the 2024 election better than polls did. What does this mean for the future of prediction markets?

Prediction Markets for the Win

Just betting on an election every few years is not the interesting part, though. Info Finance is a broader concept. [I]nfo finance is a discipline where you (i) start from a fact that you want to know, and then (ii) deliberately design a market to optimally elicit that information from market participants.

Bonus links

The rise and fall of peer review - by Adam Mastroianni

The Great Redbox Cleanup: One Company is Hauling Away America’s Last DVD Kiosks

Both Democrats and Republicans can pass the Ideological Turing Test

The Verge Editor-In-Chief Nilay Patel breathes fire on Elon Musk and Donald Trump’s Big Tech enablers

2024-11-09 iron mountain atomic storage

How Upside-Down Models Revolutionized Architecture, Making Possible St. Paul’s Cathedral, Sagrada Família & More

Firefox Developer ExperienceFirefox DevTools Newsletter — 132

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 132 Nightly release cycle.

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues

Firefox 133 is around the corner and I’m late to tell you about what was done in 132! This release does not offer any new features as the team is working on bigger tasks that are not yet visible to users. But it still contains a handful of important bug fixes, so let’s jump right in.

Offline mode and cached requests

When enabling Offline mode from the Network panel, cached requests would fail, which doesn’t match the actual behavior of the browser when there is no network (#1907304). This is fixed now and cached requests will succeed as you’d expect.

Inactive CSS and pseudo elements

You might be familiar with what we call Inactive CSS in the Inspector: small hints on declarations that don’t have any impact on the selected element because the property requires other properties to be set (for example, setting top on a non-positioned element). Sometimes we would show invalid hints on pseudo-element rules displayed in their binding elements (i.e. the ones that we show under the “Pseudo element” section), so we fixed this to avoid any confusion (#1583641).

Stable device detection on about:debugging

In order to debug Firefox for Android, you can go to about:debugging , plug your phone through USB and inspect the tabs you have opened on your phone. Unfortunately the device detection was a bit flaky and it could happen that the device wouldn’t show up in the list of connected phones. After some investigation, we found out the culprit (adb is now grouping device status notifications in a single message), and device detection should be more stable (#1899330).

Service Workers console logs

Still in about:debugging, we introduced a regression a couple of releases ago which would prevent any Service Worker console logs from being displayed in the console. The issue was fixed and we added automated tests to prevent regressing such an important feature (#1921384, #1923648).

Keyboard navigation

We tackled a few accessibility problems: in the Network panel, “Raw” toggles couldn’t be checked with the keyboard (#1917296), and the inspector filter input clear button couldn’t be focused with the keyboard (#1921001).

Misc

Finally, we fixed an issue where you couldn’t use the element picker after a canceled navigation from about:newtab (#1914863), as well as a pretty nasty Debugger crash that could happen when debugging userscript code (#1916086).

And that’s it for this month, folks. Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 132 release:

Mozilla Open Policy & Advocacy BlogMozilla Responds to DOE’s RFI on the Frontiers in AI for Science, Security, and Technology (FASST)

This month, the US Department of Energy (DOE) released a Request for Information (RFI) on its Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative. Mozilla was eager to provide feedback, particularly given our recent focus on the emerging conversation around Public AI.

DOE’s FASST initiative has the potential to create the foundation for Public AI infrastructure. It would not only help increase access to critical technologies within the government, which can be leveraged to create more efficient and useful services, but could also catalyze non-governmental innovation.

In addressing DOE’s questions outlined in the RFI, Mozilla focused on key themes including the myriad benefits of open source, the need to keep competition across the whole AI stack top of mind, and the opportunity for FASST to help lead the development of Public AI by creating the program as “public” by default.


Below, we set out ideas in more depth. Mozilla’s response to DOE in full can be found here.

  • Benefits of Open Source: Given Mozilla’s long standing support of the open source community, a clear through line in Mozilla’s responses to DOE’s questions is the importance of open source in advancing key government objectives. Below are four key themes related to the benefits of open source:
    • Economic Security: Open source by its nature enables the more rapid proliferation of a technology. According to NTIA’s report on Dual-Use Foundation Models with Widely Available Model Weights, “They diversify and expand the array of actors, including less resourced actors, that participate in AI research and development.” For the United States, whose competitive advantage in global competition is its innovative private sector, the rapid proliferation of newly accessible technologies means that new businesses can be created on the back of a new technology, speeding innovation. Existing businesses, whether a hospital or a factory, can more easily adopt new technologies as well, helping to increase efficiency.
    • Expanding the Market for AI: While costs are rapidly decreasing, cutting-edge AI products purchased from major labs and big tech companies are not cheap. Many small businesses, research institutions, and nonprofits would be unable to benefit from the AI boom if they did not have the option to use freely available open source AI models. This means that more people around the world get access to American-built open source technologies, furthering the use of American technology tools and standards, while forging deeper economic and technological ties.
    • Security & Safety: Open source has had demonstrable security and safety benefits. Rather than a model of “security through obscurity,” open source AI thrives from having many eyes examining code bases and models for exploits by harnessing the wisdom of the crowd to find issues, whether related to discriminatory outputs from LLMs or security vulnerabilities.
    • Resource Optimization: Open source in AI means more than freely downloadable model weights – it means considering how to make the entire AI stack more open and transparent, from the energy cost of training to data on the resources used to develop the chips necessary to train and operate AI models. By making more information on AI’s resource usage open and transparent, we can collectively work to optimize the efficiency of AI, ensuring that the benefits truly outweigh the costs.
  • Keep Competition Top of Mind: The U.S. government wields outsized influence in shaping markets, not just through its role as a promulgator of standards and regulations but also through its purchasing power. We urge the DOE to consider broader competitive concerns when determining potential vendors and partnerships for products and services, ranging from cloud resources to semiconductors. This would foster a more competitive AI ecosystem, as noted in OMB’s guidance to Advance the Responsible Acquisition of AI in Government, which highlights the importance of promoting competition in procurement of AI. The DOE should make an effort to work with a range of partners and civil society organizations rather than defaulting to standard government partners and big tech companies.
  • Making FASST “Public” By Default: It is critical that, as FASST engages in the development of new models, datasets, and other tools and resources, it makes its work public by default. This may mean directly open sourcing datasets and models, or working with partners, civil society, academia, and beyond to advance access to AI assets which can provide public value.

We applaud DOE’s commitment to advancing open, public-focused AI, and we’re excited about the potential of the FASST program. Mozilla is eager to work alongside DOE and other partners to make sure FASST supports the development of technology that serves the public good. Here’s to a future where AI is open, accessible, and beneficial for everyone.

The post Mozilla Responds to DOE’s RFI on the Frontiers in AI for Science, Security, and Technology (FASST) appeared first on Open Policy & Advocacy.

Martin ThompsonEverything you need to know about selective disclosure

Why does this matter?

A lot of governments are engaging with projects to build “Digital Public Infrastructure”. That term covers a range of projects, but one of the common and integral pieces relates to government-backed identity services. While some places have had some form of digital identity system for years — hi Estonia! — there are many more governments looking to roll out some sort of digital identity wallet for their citizens. Notably, the European Union recently passed a major update to their European Digital Identity Regulation, which seeks to have a union-wide digital identity system for all European citizens. India’s Aadhaar is still the largest such project with well over a billion people enrolled.

There are a few ways that these systems end up being implemented, but most take the same basic shape. A government agency will be charged with issuing people with credentials. That might be tied to driver licensing, medical services, passports, or it could be a new identity agency. That agency issues digital credentials that are destined for wallets in phones. Then, services can request that people present these credentials at certain points, as necessary.

The basic model that is generally used looks something like this:

Three boxes with arrows between each in series, in turn labeled: Issuer, Holder, Verifier

The government agency is the “issuer”, your wallet app is a “holder”, and the service that wants your identity information is a “verifier”.

This is a model for digital credentials that is useful in describing a lot of different interactions. A key piece of that model is the difference between a credential, which is the thing that ends up in a wallet, and a presentation, which is what you show a verifier.

This document focuses on online use cases. That is, where you might be asked to present information about your identity to a website. Though there are many other uses for identity systems, online presentation of identity is becoming more common. How we use identity online is likely to shape how identity is used more broadly.

The goal of this post is to provide information and maybe a fresh perspective on the topic. This piece also has a conclusion that suggests that the truly hard problems in online identity are not technical in nature, so do not necessarily benefit from the use of selective disclosure. As much as selective disclosure is useful in some contexts, there are significant challenges in deploying it on the Web.

What is selective disclosure?

A presentation might be a reduced form of the credential. Let’s say that you have a driver license, like the following:

A photo of a (fake) Hawaii driver license

One way of thinking about selective disclosure is to think of it as redacting those parts of the credential that you don’t want to share.

Let’s say that you want to show that you are old enough to buy alcohol. You might imagine doing something like this:

A photo of a (fake) Hawaii driver license with some fields covered with black boxes

That is, if you were presenting that credential to a store in person, you would want to show that the card truly belongs to you and that you are old enough.

If you aren’t turning up in person, the photo and physical description are not that helpful, so you might cover those as well.

You don’t need to share your exact birth date to show that you are old enough. You might be able to cover the month and day of your birth date too. That is still too much information, but it’s the best you can easily manage with a black highlighter.

If there was a “can buy alcohol” field on the license, that might be even better. But the age at which you can legally buy alcohol varies quite a bit across the world. And laws apply to the location, not the person. A 19 year old from Canada can’t buy alcohol in the US just because they can buy alcohol at home[1]. Most digital credential systems have special fields to allow for this sort of rule, so that a US[2] liquor store could use an “over_21” property, whereas a purchase in Canada might check for “over_18” or “over_19” depending on the province.

Simple digital credentials

The simplest form of digital credential is a bag of attributes, covered by a digital signature from a recognized authority. For instance, this might be a JSON Web Token, which is basically just a digitally-signed chunk of JSON.

For our purposes, let’s run with the example, which we’d form into something like this:

{
  "number": "01-47-87441",
  "name": "McLOVIN",
  "address": "892 MOMONA ST, HONOLULU, HI 96820",
  "iss": "1998-06-18",
  "exp": "2008-06-03",
  "dob": "1981-06-03",
  "over_18": true,
  "over_21": true,
  "over_55": false,
  "ht": "5'10",
  ...
}

That could then be wrapped up and signed by whatever Hawaiian DMV issues the license. Something like this:

Two nested boxes, the inner containing text "McLOVIN's Details"; the outer containing text "Digital Signature"
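
To make that concrete, here is a minimal sketch in Python of an issuer signing a bag of attributes. It assumes the third-party cryptography package; the key handling and field names are illustrative, and a real deployment would use a standardized format such as a JSON Web Token rather than ad hoc JSON.

# Minimal sketch: an issuer signs a JSON bag of attributes (illustrative only;
# assumes the third-party "cryptography" package).
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # held by the Hawaii DMV

credential = {"name": "McLOVIN", "dob": "1981-06-03", "over_21": True}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)  # the outer "Digital Signature" box

# Anyone with the issuer's public key can check the signature...
issuer_key.public_key().verify(signature, payload)  # raises on tampering
# ...but the signed blob itself can be copied and replayed by anyone who holds it.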

That isn’t perfect, because a blob of bytes like that can just be copied around: anyone who receives the credential could “impersonate” our poor friend.

The way that problem is addressed is through the use of a digital wallet. The issuer requires that the wallet hold a second signing key. The wallet provides the issuer with an attestation, which is just evidence from the wallet maker (which is often the maker of your phone) that they are holding a private key in a place where it can’t be moved or copied[3]. That attestation includes the public key that matches that private key.

Once the issuer is sure that the private key is tied to the device, the issuer produces a credential that lists the public key from the wallet.

In order to use the credential, the wallet signs the credential along with some other stuff, like the current time and maybe the identity of the verifier[4], as follows:

Nested boxes, the outer containing text "Digital signature using the Private Key from McLOVIN's Wallet"; two at the next level the first containing text "Verifier Identity, Date and Time, etc...", the other containing text "Digital Signature using the Private Key of the Hawaii DMV"; the latter box contains two further boxes containing text "McLOVIN's Details" and "McLOVIN's Wallet Public Key"

With something like this, unless someone is able to use the signing key that is in the wallet, they can’t generate a presentation that a verifier will accept. It also ensures that the wallet can use a biometric or password check to ensure that a presentation is only created when the person allows it.
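
Continuing the same illustrative sketch, holder binding adds a wallet key: the issuer signs over the wallet’s public key, and the wallet then signs each presentation together with the verifier identity and the current time. The structure and field names here are assumptions, not a standard format.

# Sketch continued: binding the credential to a wallet key (illustrative only).
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

issuer_key = Ed25519PrivateKey.generate()
wallet_key = Ed25519PrivateKey.generate()  # stays on the device, never copied
wallet_pub = wallet_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)

# The issuer includes the wallet's public key in the signed credential.
credential = {"name": "McLOVIN", "over_21": True, "wallet_pub": wallet_pub.hex()}
cred_bytes = json.dumps(credential, sort_keys=True).encode()
issuer_sig = issuer_key.sign(cred_bytes)

# The wallet signs the credential plus the verifier identity and time,
# so a verifier can't replay the presentation somewhere else later.
presentation = {
    "credential": credential,
    "issuer_sig": issuer_sig.hex(),
    "verifier": "liquor-store.example",
    "time": int(time.time()),
}
wallet_sig = wallet_key.sign(json.dumps(presentation, sort_keys=True).encode())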

That is a basic presentation that includes all the information that the issuer knows about. The problem is that this is probably more than you might be comfortable with sharing with a liquor store. After all, while you might be able to rely on the fact that the cashier in a store isn’t copying down your license details, you just know that any digital information you present is going to be saved, stored, and sold. That’s where selective disclosure is supposed to help.

Salted hash selective disclosure

One basic idea behind selective disclosure is to replace all of the data elements in a credential — or at least the ones that someone might want to keep to themselves — with placeholders. Those placeholders are replaced with a commitment to the actual values. Any values that someone wants to reveal are then included in the presentation. A verifier can validate that the revealed value matches the commitment.

The most basic sort of commitment is a hash commitment. That uses a hash function, which is really anything where it is hard to produce two inputs that result in the same output. The commitment to a value of X is H(X).

That is, you might replace the (“name”, “McLOVIN”) with a commitment like H(“name” || “McLOVIN”). The hash function ensures that it is easy to validate that the underlying values match the commitment, because the verifier can compute the hash for themselves. But it is basically impossible to recover the original values from the hash. And it is similarly difficult to find another set of values that hash to the same value, so you can’t easily substitute false information.

A key problem is that a simple hash commitment only protects the value of the input if that input is hard to guess in the first place. But most of the stuff on a license is pretty easy to guess in one way or another. For simple stuff like “over_21”, there are just two values: “true” or “false”. If you want to know the original value, you can just check each of the values and see which matches.

Even for fields that have more values, it is possible to build a big table of hash values for every possible (or likely) value. This is called a “rainbow table”[5].

A diagram showing mappings from hashes to values
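
To see why a plain hash commitment over a guessable field leaks, consider this small sketch: the attacker simply recomputes the commitment for every plausible value. The encoding used here is an assumption; any agreed encoding has the same problem.

# Sketch: recovering an unsalted commitment by trying every likely value.
import hashlib

def commit(name, value):
    return hashlib.sha256(f"{name}|{value}".encode()).hexdigest()

observed = commit("over_21", True)  # the commitment a verifier sees

# With only two possible values, the "hidden" field falls out immediately.
for guess in (True, False):
    if commit("over_21", guess) == observed:
        print("over_21 is", guess)  # prints: over_21 is True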

Rainbow tables don’t work if the committed value is very hard to guess. So, in addition to the value of the field, a large random number is added to the hidden value. This number is called “salt” and a different value needs to be generated for every field that can be hidden, with different values for every new credential. As long as there are many more values for the salt than can reasonably be stored in a rainbow table, there is no easy way to work out which commitment corresponds to which value.

So for each field, the issuer generates a random number and replaces all fields in the credential with H(salt || name || value), using some agreed encoding. The issuer then signs over those commitments and provides the wallet with a credential that is full of commitments, plus the full set of values that were committed to, including the associated salt.

A credential containing commitments to values, with the value and associated salt alongside

The wallet can then use the salt and the credential to reveal a value and prove that it was included in the credential, creating a presentation something like this:

A presentation using the credential, with selected values and their salt alongside

The verifier then gets a bunch of fields with the key information replaced with commitments. All of the commitments are then signed by the issuer. The verifier also gets some number of unsigned tuples of (salt, name, value). The verifier can then check that H(salt || name || value) matches one of the commitments.
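
Here is a minimal sketch of that whole flow, assuming a simple “|”-separated encoding; real formats, like the IETF salted hash formats mentioned later in this post, define their own canonical encodings and wrap the commitments in a signed token.

# Sketch of the salted-hash flow end to end (encoding is illustrative).
import hashlib
import secrets

def commit(salt, name, value):
    return hashlib.sha256(f"{salt}|{name}|{value}".encode()).hexdigest()

fields = {"name": "McLOVIN", "dob": "1981-06-03", "over_21": True}

# Issuer: a fresh random salt per field; the credential carries only commitments.
salts = {name: secrets.token_hex(16) for name in fields}
commitments = sorted(commit(salts[n], n, v) for n, v in fields.items())
# ...the issuer signs `commitments` and gives the wallet the salts and values.

# Wallet: reveal only the "over_21" field in a presentation.
disclosure = (salts["over_21"], "over_21", True)

# Verifier: recompute the commitment and check it appears among the signed ones.
assert commit(*disclosure) in commitments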

This is the basic design that underpins a number of selective disclosure designs. Salted hash selective disclosure is pretty simple to build because it doesn’t require any fancy cryptography. However, salted hash designs have some limitations that can be a little surprising.

Other selective disclosure approaches

There are other approaches that might be used to solve this problem. Imagine that you had a set of credentials, each of which contained a single attribute. You might imagine sharing each of those credentials separately, choosing which ones you show based on what the situation demanded.

That might look something like this:

A presentation that includes multiple separate credentials, each with a single attribute

Having multiple signatures can be inefficient, but this basic idea is approximately sound[7]. There are a lot of signatures, which would make a presentation pretty unwieldy if there were lots of properties. There are digital signature schemes that make this more efficient though, like the BLS scheme, which allows multiple signatures to be folded into one.

That is the basic idea behind SD-BLS. SD-BLS doesn’t make it cheaper for an issuer. An issuer still needs to sign a whole bunch of separate attributes. But combining signatures means that it can make presentations smaller and easier to verify. SD-BLS has some privacy advantages over salted hashes, but the primary problem that the SD-BLS proposal aims to solve is revocation, which is covered in more detail below.

Problems with salted hashes

Going back to the original example, the effect of the salted hash is that you probably get something like this:

A Hawaii driver license with all the fields covered with gray rectangles, except the expiry date

Imagine that every field on the license is covered with the gray stuff you get on scratch lottery tickets. You can choose which to scratch off before you hand it to someone else[8]. Here’s what they learn:

  1. That this is a valid Hawaii driver license. That is, they learn who issued the credential.
  2. When the license expires.
  3. The value of the fields that you decided to reveal.
  4. How many fields you decided not to reveal.
  5. Any other places that you present that same credential, as discussed below.

On the plus side, and contrary to what is shown for a physical credential, the size and position of fields are not revealed for a digital credential.

Still, that is likely a bit more information than might be expected. If you only wanted to reveal the “over_21” field so that you could buy some booze, having to reveal all those other things isn’t exactly ideal.

Revealing who issued the credential seems like it might be harmless, but for a digital credential, that’s revealing a lot more than your eligibility to obtain liquor. Potentially a lot more. Maybe in Hawaii, holding a Hawaii driver license isn’t notable, but it might be distinguishing — or even disqualifying — in other places. A Hawaii driver license reveals that you likely live in Hawaii, which is not exactly relevant to your alcohol purchase. It might not even be recognized as valid in some places.

If the Hawaiian DMV uses multiple keys to issue credentials, you’ll also reveal which of those keys was used. That’s unlikely to be a big deal, but worth keeping in mind as we look at alternative approaches.

Revealing the number of fields is a relatively minor information leak. This constrains the design a little, but not in a serious way. Basically, it means that you should probably have the same set of fields for everyone.

For instance, you can’t include only the “over_XX” age fields that are true; you have to include the false ones as well or the number of fields would reveal an approximate age. That is, avoid:

{ ..., "older_than": [16, 18], ... }

Note: Some formats allow individual items in lists like this to be committed separately. The name of the list is generally revealed in that case, but the specific values are hidden. These usually just use H(salt || value) as the commitment.

And instead use:

{ ..., "over_16": true, "over_18": true, "over_21": false, "over_55": false, ... }

Expiration dates are tricky. For some purposes, like verifying that someone is allowed to drive, the verifier will need to know if the credential is not expired.

On the other hand, expiry is probably not very useful for something like age verification. After all, it’s not like you get younger once your license expires.

The exact choice of expiration date might also carry surprising information. Imagine that only one person was able to get a license one day because the office had to close or the machine broke down. If the expiry date is a fixed time after issuance, the expiry date on their license would then be unique to them, which means that revealing that expiration date would effectively be identifying them.

The final challenge here is the least obvious and most serious shortcoming of this approach: linkability.

Linkability and selective disclosure

A salted hash credential carries several things that make the credential itself identifiable. This includes the following:

  • The value of each commitment is unique and distinctive.
  • The public key for the wallet.
  • The signature that the issuer attaches to the credential.

Each of these is unique, so if the same credential is used in two places, it will clearly indicate that this is the same person, even if the information that is revealed is very limited.

For example, you might present an “over_21” to purchase alcohol in one place, then use the full credential somewhere else. If those two presentations use the same credential, those two sites will be able to match up the presentations. The entity that obtains the full credential can then share all that knowledge with the one that only knows you are over 21, without your involvement.

A version of the issuer-holder-verifier diagram with multiple verifiers

Even if the two sites only receive limited information, they can still combine the information they obtain — that you are over 21 and what you did on each site — into a profile. The building of that sort of profile online is known as unsanctioned tracking and generally regarded as a bad thing.
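
As a toy illustration of how cheap that matching is, the sketch below joins two hypothetical verifier logs on the wallet public key; any of the other stable values (a commitment or the issuer’s signature) would work just as well.

# Sketch: two verifiers linking their records via a stable credential value.
liquor_store_log = [
    {"wallet_pub": "ab12...", "learned": {"over_21": True, "bought": "whisky"}},
]
full_id_site_log = [
    {"wallet_pub": "ab12...", "learned": {"name": "McLOVIN", "dob": "1981-06-03"}},
]

profiles = {row["wallet_pub"]: dict(row["learned"]) for row in full_id_site_log}
for row in liquor_store_log:
    profiles.setdefault(row["wallet_pub"], {}).update(row["learned"])

print(profiles)  # one merged profile: name, date of birth, and purchase history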

This sort of matching is technically called verifier-verifier linkability. The way that it can be prevented is to ensure that a completely fresh credential is used for every presentation. That includes a fresh set of commitments, a new public key from the wallet, and a new signature from the issuer (naturally, the thing that is being signed is new). At the same time, ensuring that the presentation doesn’t include any extraneous information, like expiry dates, helps.

A system like this means that wallets need to be able to handle a whole lot of credentials, including fresh public keys for each. The wallet also needs to be able to handle cases where its store of credentials runs out, especially when the wallet is unable to contact the issuer.

Issuers generally need to be able to issue larger batches of credentials to avoid that happening. That involves a lot of computationally intensive work for the issuer. This makes wallets quite a bit more complex. It also increases the cost of running issuance services because they need better availability, not just because they need more issuance capacity.

In this case, SD-BLS has a small advantage over salted hashes because its “unregroupability” property means that presentations with differing sets of attributes are not linkable by verifiers. That’s a weaker guarantee than verifier-verifier unlinkability, because presentations with the same set of attributes can still be linked by a verifier; for that, fresh credentials are necessary.

Using a completely fresh credential is a fairly effective way to protect against linkability for different verifiers, but it does nothing to prevent verifier-issuer linkability. An issuer can remember the values they saw when they issued the credential. A verifier can take any one of the values from a presentation they receive (commitments, public key, or signature) and ask the issuer to fill in the blanks. The issuer and verifier can then share anything that they know about the person, not limited to what is included in the credential.

A version of the issuer-holder-verifier diagram with a bidirectional arrow between issuer and verifier

What the issuer and verifier can share isn’t limited to the credential. Maybe McLovin needed to show a passport and a utility bill in order to get a license and the DMV kept a copy. The issuer could give that information to the verifier. The verifier can also share what they have learned about the person, like what sort of alcohol they purchased.

Useful linkability

In some cases, linkability might be a useful or essential feature. Imagine that selective disclosure is used to authorize access to a system that might be misused. Selective disclosure avoids exposing the system to information that is not essential. Maybe the system is not well suited to safeguarding private information. The system only logs access attempts and the presentation that was used.

In the event that the access results in some abuse, the abuse could be investigated using verifier-issuer linkability. For example, the access could be matched to information available to the issuer to find out who was responsible for the abuse.

The IETF is developing a couple of salted hash formats (in JSON and CBOR) that should be well suited to a number of applications where linkability is a desirable property.

All of this is a pretty serious problem for uses like online age verification. Having issuers, which are often government agencies, in a position to trace activity might have an undesirable chilling effect. This is something that legislators generally recognize and laws often include provisions that require unlinkability[9].

In short, salted hash based systems only work if you trust the issuer.

Linkable attributes

There is not much point in avoiding linkability when the disclosed information is directly linkable. For instance, if you selectively disclose your name and date of birth, that information is probably unique or highly identifying. Revealing identifying information to a verifier makes verifier-issuer linkability easy; just like revealing the same information to two verifiers makes verifier-verifier linkability simple.

This makes linkability for selective disclosure less concerning when it comes to revealing information that might be identifying.

Unlinkability therefore tends to be most useful for non-identifying attributes. Simple attributes — like whether someone meets a minimum age requirement, holds a particular qualification, or has authorization — are less likely to be inherently linkable, so are best suited to being selectively disclosed.

Privacy Pass

If the goal is to provide a simple signal, such as whether a person is older than a target age, Privacy Pass is specifically designed to prevent verifier-issuer linkability.

Privacy Pass also includes options that split the issuer into two separate functions — an issuer and an attester — where the attester is responsible for determining if a holder (or client) has the traits required for token issuance and the issuer only creates the tokens. This might be used to provide additional privacy protection.

The four entities of the Privacy Pass architecture: Issuer, Attester, Holder/Client, and Verifier/Service

A Privacy Pass issuer could produce a token that signifies possession of a given trait. Only those with the trait would receive the token. For age verification, the token might signify that a person is at a selected age or older.

Token formats for Privacy Pass that include limited public information are also defined, which might be used to support selective disclosure. This is far less flexible than the salted hash approach as a fresh token needs to be minted with the set of traits that will be public. That requires that the issuer is more actively involved or that the different sets of public traits are known ahead of time.

Privacy Pass does not naturally provide verifier-verifier unlinkability, but a fresh token could be used for each usage, just like for the salted hash design. Some of the Privacy Pass modes can issue a batch of tokens for this reason.

In order to provide tokens for different age thresholds or traits, an issuer would need to use different public keys, each corresponding to a different trait.

Privacy Pass is therefore a credible alternative to the use of salted hash selective disclosure for very narrow cases. It is somewhat inflexible in terms of what can be expressed, but that could mean more deliberate additions of capabilities. The strong verifier-issuer unlinkability is definitely a plus, but it isn’t without shortcomings.

Key consistency

One weakness of Privacy Pass is that it depends on the issuer using the same key for everyone. The ideal privacy is provided when there is a single issuer with just one key for each trait. With more keys or more issuers, the key that is used to generate a token carries information, revealing who issued the token. This is just like the salted hash example where the verifier needs to learn that the Hawaiian DMV issued the credential.

The privacy of the system breaks down if every person receives tokens that are generated using a key that is unique to them. This risk can be limited through the use of key consistency schemes. This makes the system a little bit harder to deploy and operate.

As foreshadowed earlier, the same key switching concern also applies to a salted hash design if you don’t trust the issuer. Of course, we’ve already established that a salted hash design basically only works if you trust the issuer. Salted hash presentations are linkable based on commitments, keys, or signatures, so there is no real need to play games with keys.

Anonymous credentials

A zero knowledge proof enables the construction of evidence that a prover knows something, without revealing that information. For an identity system, it allows a holder to make assertions about a credential without revealing that credential. That creates what is called an anonymous credential.

Anonymous credentials are appealing as the basis for a credential system because the proofs themselves contain no information that might link them to the original credential.

Verifier-issuer unlinkability is a natural consequence of using a zero knowledge proof. Verifier-verifier unlinkability would be guaranteed by providing a fresh proof for each verifier, which is possible without obtaining a fresh credential. The result is that anonymous credentials provide excellent privacy characteristics.

Zero knowledge proofs trace back to systems of provable computation, which means that they are potentially very flexible. A proof can be used to prove any property that can be computed. The primary cost is in the amount of computation it takes to produce and validate the proof[10]. If the underlying credential can be adjusted to support the zero knowledge system, these costs can be reduced, which is what the BBS signature scheme does. Unmodified credentials can be used if necessary.

Thus, a proof statement for use in age verification might be a machine translation of the following compound statement:

  • this holder has a credential signed by the Hawaiian DMV;
  • the expiration date on the credential is later than the current date;
  • the person is 21 or older (or the date of birth plus 21 years is earlier than the current date);
  • the holder knows the secret key associated with the public key mentioned in the credential; and,
  • the credential has not been used with the current verifier more than once on this day[11].

A statement in that form should be sufficient to establish that someone is old enough to purchase alcohol, while providing assurances that the credential was not stolen or reused. The only information that is revealed is that this is a valid Hawaiian license. We’ll see below how hiding that last bit is also possible and probably a good idea.
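
For clarity, here is the compound statement written as the plaintext predicate that the proof would attest to. This is not a zero knowledge implementation, only the condition that would be compiled into one; the field names and the 21-year arithmetic are illustrative.

# Sketch: the plaintext predicate a zero knowledge proof would attest to,
# without revealing the credential itself (not a ZK implementation).
from datetime import date

def age_check_predicate(credential, holder_has_wallet_key, uses_with_verifier_today):
    today = date.today()
    dob = date.fromisoformat(credential["dob"])
    return (
        credential["issuer"] == "Hawaii DMV"                 # trusted issuer
        and date.fromisoformat(credential["exp"]) > today    # not expired
        and dob.replace(year=dob.year + 21) <= today         # 21 or older (leap days ignored)
        and holder_has_wallet_key                            # holder binding
        and uses_with_verifier_today <= 1                    # reuse limit
    )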

Reuse protections

The last statement from the set of statements above provides evidence that the credential has not been shared with others. This condition, or something like it, is a necessary piece of building a zero-knowledge system. Otherwise, the same credential can be used and reused many times by multiple people.

Limiting the number of uses doesn’t guarantee that a credential isn’t shared, but it limits the number of times that it can be reused. If the credential can only be used once per day, then that is how many times the credential can be misused by someone other than the person it was issued to.

How many uses a credential allows will depend on the exact circumstances. For instance, it might not be necessary to have the same person present proof of age to an alcohol vendor multiple times per day. Maybe it would be reasonable for the store to remember them if they come back to make multiple purchases on any given day. One use per day might be reasonable on that assumption.

In practice, multiple rate limits might be used. This can make the system more flexible over short periods (to allow for people making multiple alcohol purchases in a day) but also stricter over the long term (because people rarely need to make multiple purchases every day). For example, age checks for the purchase of alcohol might combine a three per day limit with a weekly limit of seven. Multiple conditions can be easily added to the proof, with a modest cost.
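
A holder-side check for combined limits might look like the sketch below, using the three-per-day and seven-per-week numbers from the example above; the class and method names are hypothetical.

# Sketch: a holder-side limiter combining a daily and a weekly cap
# before it agrees to create another presentation for a verifier.
from datetime import datetime, timedelta

class UsageLimiter:
    def __init__(self, per_day=3, per_week=7):
        self.per_day = per_day
        self.per_week = per_week
        self.uses = []  # timestamps of past presentations for this verifier

    def allow(self, now=None):
        now = now or datetime.now()
        last_day = [t for t in self.uses if now - t < timedelta(days=1)]
        last_week = [t for t in self.uses if now - t < timedelta(days=7)]
        if len(last_day) >= self.per_day or len(last_week) >= self.per_week:
            return False
        self.uses.append(now)
        return True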

It is also possible for each verifier to specify their own rate limits according to their own conditions. A single holder would then limit the use of credentials according to those limits.

Tracking usage is easy for a single holder. An actor looking to abuse credentials by sharing and reusing them has more difficulty. A bad actor would need to carefully coordinate their reuse of a credential so that any rate limits were not exceeded.

Hiding the issuer of credentials

People often do not get to choose who issues them a credential. Revealing the identity of an issuer might be more identifying than is ideal. This is especially true for people who have credentials issued by an atypical issuer.

Consider that Europe is building a union-wide system of identity. That means that verifiers will be required to accept credentials from any country in the EU. Someone accessing a service in Portugal with an Estonian credential might be unusual if most people use a Portuguese credential. Even if the presentation is limited to something like age verification, the choice of issuer becomes identifying.

This could also mean that a credential that should be valid is not recognized as such by a verifier, simply because the verifier chose not to accept that issuer. Businesses in Greece might be required by law to recognize other EU credentials, but what about a credential issued by Türkiye?

Zero knowledge proofs can also hide the issuer, only revealing that a credential was issued by one of a set of issuers. This means that a verifier is unable to discriminate on the basis of issuer. For a system that operates at scale, that creates positive outcomes for those who hold credentials from atypical issuers.

Credential revocation

Perhaps the hardest problem in any system that involves the issuance of credentials is what to do when the credential suddenly becomes invalid. For instance, if a holder is a phone, what do you do if the phone is lost or stolen?

That is the role of revocation. On the Web, certificate authorities are required to have revocation systems to deal with lost keys, attacks, change of ownership, and a range of other problems. For wallets, the risk of loss or compromise of wallets might also be addressed with revocation.

Revocation typically involves the verifier confirming with the issuer that the credential issued to the holder (or the holder itself) has not been revoked. That produces a tweak to our original three-entity system as follows:

Issuer-holder-verifier model with an arrow looping back from verifier to issuer

Revocation is often the most operationally challenging aspect of running identity infrastructure. While issuance might have real-time components — particularly if the issuer needs to ensure a constant supply of credentials to maintain unlinkability — credentials might be issued ahead of time. However, revocation often requires a real-time response or something close to it. That makes a system with revocation much more difficult to design and operate.

Revoking full presentations

When a full credential or more substantive information is compromised, lack of revocation creates a serious impersonation risk. The inability to validate biometrics online means that a wallet might be exploited to perform identity theft or similarly serious crimes. Being able to revoke a wallet could be a necessary component of such a system.

The situation with a complete credential presentation, or presentations that include identifying information, is therefore fairly simple. When the presentation contains identifying information, like names and addresses, preventing linkability provides no benefit. So providing a direct means of revocation checking is easy.

With verifier-issuer linkability, the verifier can just directly ask the issuer whether the credential was revoked. This is not possible if there is a need to perform offline verification, but it might be possible to postpone such checks or rely on batched revocations (CRLite is a great example of a batched revocation system). Straightforward or not, providing adequate scale and availability makes the implementation of a reliable revocation system a difficult task.

Revoking anonymous credentials

When you have anonymous credentials, which protect against verifier-issuer linkability, revocation is very challenging. A zero-knowledge assertion that the credential has not been revoked is theoretically possible, but there are a number of serious challenges. One issue is that proof of non-revocation depends on providing real-time or near-real-time information about the underlying credential. Research into solving the problem is still active.

It is possible that revocation for some selective disclosure cases is unnecessary, especially those cases where zero-knowledge proofs are used. We have already accepted some baseline amount of abuse of credentials, by virtue of permitting non-identifying and unlinkable presentations. Access to a stolen credential is roughly equivalent to sharing or borrowing a credential. So, as long as the overall availability of stolen credentials is not too high relative to the availability of borrowed credentials, the value of revocation is low. In other words, if we accept some risk that credentials will be borrowed, then we can also tolerate some use of stolen credentials.

Revocation complications

Even with linkability, revocation is not entirely trivial. Revocation effectively creates a remote kill switch for every credential that exists. The safeguards around that switch are therefore crucial in determining how the system behaves.

For example, if any person can ask for revocation, that might be used to deny a person the use of a perfectly valid credential. There are well documented cases where organized crime has deprived people of access to identification documents in order to limit their ability to travel or access services.

These problems are more tied to the processes that are used, rather than the technical design. However, technical measures might be used to improve the situation. For instance, SD-BLS suggests that threshold revocation be used, where multiple actors need to agree before a credential can be revoked.

All told, if dealing with revocation on the Web has taught us anything, it might not be worth the effort to add revocation. It might be easier — and no less safe — to frequently update credentials.

Authorizing Verifiers

Selective disclosure systems can fail to achieve their goals if there is a power imbalance between verifiers and holders. For instance, a verifier might withhold services unless a person agrees to provide more information than the verifier genuinely requires. That is, the verifier might effectively extort people into providing non-essential information. A system that lets people withhold information to improve privacy is pointless if people cannot exercise that choice in practice.

One way to work around this is to require that verifiers be certified before they can request certain information. For instance, EU digital identity laws require that it be possible to restrict who can request a presentation. This might involve the certification of verifiers, so that verifiers would be required to provide holders with evidence that they are authorized to receive certain attributes.

A system of verifier authorization could limit overreach, but it might also render credentials ineffective in unanticipated situations, including for interactions in foreign jurisdictions.

Authorizations also need monitoring for compliance. Businesses — particularly larger businesses that engage in many activities — might gain authorization for many different purposes. Abuse might occur if a broad authorization is used where a narrower authorization is needed. That means more than a system of authorization: it requires a way to ensure that businesses or agencies are accountable for their use of credentials.

Quantum computers

Some of these systems depend on cryptography that is only classically secure. That is, a sufficiently powerful quantum computer might be able to attack the system.

Salted hash selective disclosure relies only on digital signatures and hash functions, which makes it the most resilient to attacks that use a quantum computer. However, many of the other systems described rely on some version of the discrete logarithm problem being difficult, which can make them vulnerable. Predicting when a cryptographically-relevant quantum computer might be created is as hard as any other attempt to look into the future, but we can understand some of the risks.

Quantum computers present two potential threats to any system that relies on classical cryptographic algorithms: forgery and linkability.

A sufficiently powerful quantum computer might use something like Shor’s algorithm to recover the secret key used to issue credentials. Once that key has been obtained, new credentials could be easily forged. Of course, forgeries are only a threat after the key is recovered.

Some schemes that rely on classical algorithms could be vulnerable to linking by a quantum computer, which could present a very serious privacy risk. This sort of linkability is a serious problem because it potentially affects presentations that are made before the quantum computer exists. Presentations that were saved by verifiers could later be linked.

Some of the potential mechanisms, such as the BBS algorithm, are still able to provide privacy even if the underlying cryptography is broken by a quantum computer. The quantum computer would be able to create forgeries, but not break privacy by linking presentations.

If we don’t need to worry about forgery until a quantum computer exists and privacy is maintained even then, we are therefore largely concerned with how long we might be able to use these systems. That gets back to the problem of predictions and balancing the cost of deploying a system against how long the system is going to remain secure. Credential systems take a long time to deploy, so — while they are not vulnerable to a future advance in the same way as encryption — planning for that future is likely necessary.

The limitations of technical solutions

If there is a single conclusion to this article, it is that the problems that exist in identity systems are not primarily technical. There are several very difficult problems to consider when establishing a system. Those problems only start with the selection of technology.

Any technological choice presents its own problems. Selective disclosure is a powerful tool, but with limited applicability. Properties like linkability need to be understood or managed. Otherwise, the actual privacy properties of the system might not meet expectations. The same goes for any rate limits or revocation that might be integrated.

How different actors might participate in the system needs further consideration. Decisions about who might act as an issuer in the system need a governance structure. Otherwise, some people might be unjustly denied the ability to participate.

For verifiers, their incentives need to be examined. A selective disclosure system might be built to be flexible, which might seem to empower people with choice about what they disclose; however, that flexibility might be abused by powerful verifiers to extort additional information from people.

All of which is to say: better technology does not always help as much as you might hope. Many of the problems are people problems, social problems, and governance problems, not technical problems. Technical mechanisms tend to only change the shape of non-technical problems. That is only helpful if the new shape of the problem is something that people are better able to deal with.


  1. This is different from licensing to drive, where most countries recognize driving permits from other jurisdictions. That’s probably because buying alcohol is a simple check based on an objective measure, whereas driving a car is somewhat more involved. ↩︎

  2. Well, most of the US. It has to do with highways. ↩︎

  3. The issuer might want some additional assurances, like some controls over how the credential can be accessed, controls over what happens if a device is lost, stolen, or sold, but they all basically reduce to this basic idea. ↩︎

  4. If the presentation didn’t include information about the verifier and time of use, one verifier could copy the presentation they receive and impersonate the person. ↩︎

  5. Rainbow tables can handle relatively large numbers of values without too much difficulty. Even some of the richer fields can probably be put in a rainbow table. For example, there are about 1.4 million people in Hawaii. All the values for some fields are known, such as the complete set of possible addresses. Even if every person has a unique value, a very simple rainbow table for a field would take a few seconds to build and around 100Mb to store, likely a lot less. A century of birthdays would take much less storage[6]. ↩︎

  6. In practice, a century of birthdays (40k values) will have no collisions with even a short hash. You don’t need much more than 32 bits for that many values. Furthermore, if you are willing to have a small number of values associated with each hash, you can save even more space. 40k values can be indexed with a 16-bit value and a 32-bit hash will produce very few collisions. A small number of collisions are easy to resolve by hashing a few times, so maybe this could be stored in about 320kB with no real loss of utility. ↩︎

  7. There are a few things that need care, like whether different attributes can be bound to a different wallet key and whether the attributes need to show common provenance. With different keys, the holder might mix and match attributes from different people into a single presentation. ↩︎

  8. To continue the tortured analogy, imagine that you take a photo of the credential to present, so that the recipient can’t just scratch off the stuff that you didn’t. Or maybe you add a clear coat of enamel. ↩︎

  9. For example, Article 5a, 16 of the EU Digital Identity Framework requires that wallets “not allow providers of electronic attestations of attributes or any other party, after the issuance of the attestation of attributes, to obtain data that allows transactions or user behaviour to be tracked, linked or correlated, or knowledge of transactions or user behaviour to be otherwise obtained, unless explicitly authorised by the user”. ↩︎

  10. A proof can be arbitrarily complex, so this isn’t always cheap, but most of the things we imagine here are probably very manageable. ↩︎

  11. This isn’t quite accurate. The typical approach involves the use of tokens that repeat if the credential is reused too often. That makes it possible to catch reuse, not prevent it. ↩︎

Firefox NightlyNew Address Bar Updates are Here – These Weeks in Firefox: Issue 172

Highlights

  • Our newly updated address bar, also known as “Scotch Bonnet”, is available in Nightly builds! 🎉
  • Weather suggestions have also been enabled in Nightly. The feature is US only at this time, as part of Firefox Suggest. 🌧️
  • robwu fixed a regression introduced in Firefox 132 that was triggering the default built-in theme to be re-enabled on every browser startup – Bug 1928082
  • Love Firefox Profiler and DevTools? Check out the latest DevTools updates and see how they can better help you track down issues.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • abhijeetchawla[:ff2400t]
  • Collin Richards
  • John Bieling (:TbSync)
  • kernp25

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As part of Bug 1928082, the new test_default_theme.js xpcshell test will now fail if the default theme manifest version is out of sync between the manifest and the XPIProvider startup call to maybeInstallBuiltinAddon
WebExtensions Framework
  • Fixed a leak in ext-theme hit when an extension was setting a per-window theme using the theme WebExtensions API – Bug 1579943
  • ExtensionPolicyService content scripts helper methods have been tweaked to fix a low frequency crash hit by ExtensionPolicyService::ExecuteContentScripts – Bug 1916569
  • Fixed an unexpected issue with loading moz-extension url as subframe of the background page for extensions loaded temporarily from a directory – Bug 1926106
  • Prevented window.close() calls originating from a WebExtensions-registered devtools panel from closing the browser chrome window (when there is only a single tab open) – Bug 1926373
    • Thanks to Becca King for contributing this fix 🎉
  • Native messaging support for snap-packaged Firefox (default on Ubuntu):
    • Thanks to Alexandre Lissy for working on finalizing the patches from Bug 1661935
    • Fixed a regression hit by the snap-packaged Firefox 133 build – Bug 1930119
WebExtension APIs
  • Fixed a bug preventing declarativeNetRequest API dynamic rules from working correctly after a browser restart for extensions that have no static rules registered – Bug 1921353

DevTools

DevTools Toolbox

DevTools debugger log points being marked in a profiler instance

Lint, Docs and Workflow

  • A change to the mozilla/reject-addtask-only rule has just landed on Autoland.
    • This makes it so that when the rule is raising an issue with .only() in tests, only the .only() is highlighted, not the whole test:

a before screenshot of the Firefox code linter highlighting a whole test

an after screenshot of the Firefox code linter highlighting the ".only" part of a test

Migration Improvements

New Tab Page

  • The team is working on some new section layout and organization variations – specifically, we’re testing whether or not recommended stories should be grouped into various configurable topic sections. Stay tuned!

Picture-in-Picture

  • Thanks to contributor kern25 for:
    • Updating our Dailymotion site-specific wrapper (bug), which also happens to fix broken PiP captions (bug).
    • Updating our videojs site-specific wrapper (bug) to recognize multiple cue elements. This fixes PiP captions rendering incorrectly on Windows for some sites.

Search and Navigation

Firefox NightlyCelebrating 20 years of Firefox – These Weeks in Firefox: Issue 171

Highlights

  • Firefox is turning 20 years old! Here’s a sneak peek of what’s to come for the browser.
  • We completed work on the new messaging surface for the AppMenu / FxA avatar menu. There’s a new FXA_ACCOUNTS_APPMENU_PROTECT_BROWSING_DATA entry in about:asrouter for people who’d like to try it. Here’s another variation:

a message with an illustration of a cute fox sitting on a cloud, as well as a sign-up button, encouraging users to create a Mozilla account

  • The experiment will also test new copy for the state of the sign-in button when this message is dismissed:

  • Alexandre Poirot added an option in the Debugger Sources panel to control the visibility of WebExtension content scripts (#1698068)

  • Hubert Boma Manilla improved the Debugger by adding the paused line location in the “paused” section, and making it a live region so it’s announced to screen readers when pausing/stepping (#1843320)

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • abhijeetchawla[:ff2400t]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • In Firefox >= 133, WebExtensions sidebar panels can close themselves using window.close() (Bug 1921631)
    • Thanks to Becca King for contributing this enhancement to the WebExtensions sidebar panels 🎉
WebExtension APIs
  • A new telemetry probe related to the storage.sync quota has been introduced in Firefox 133 (Bug 1915183). The new probe is meant to help plan replacement of the deprecated Kinto-based backend with a rust-based storage.sync implementation in Firefox for Android (similar to the one introduced in Firefox for desktop v79).

DevTools

DevTools Toolbox

Lint, Docs and Workflow

  • The source documentation generate and upload tasks on CI will now output specific TEST-UNEXPECTED-FAILURE lines for new warnings/errors.
    • Running ./mach doc locally should generally do the same.
    • The previous “max n warnings” has been replaced by an allow list of current warnings/errors.
  • Flat config and ESLint v9 support has now been added to eslint-plugin-mozilla.
    • This is a big step in preparing to switch mozilla-central over to the new flat configuration & then v9.
  • hjones upgraded stylelint to the latest version and swapped its plugins to use ES modules.

New Tab Page

  • The New Tab team is analyzing the results from an experiment that tried different layouts, to see how it impacted usage. Our Data Scientists are poring over the data to help inform design directions moving forward.
  • Another experiment is primed to run once Firefox 132 fully ships to release – the new “big rectangle” vertical widget will be tested to see whether or not users find this new affordance useful.
  • Work completed on the Fakespot experiment that we’re going to be running for Firefox 133 in December. We’ll be using the vertical widget to display products identified as high-quality, with reliable reviews.

Search and Navigation

  • 2024 Address Bar Scotch Bonnet Project
    • Various bugs were fixed by Mandy, Dale, and Yazan
      • quick actions search mode preview was formatted incorrectly (1923550)
      • dedicated Search button was getting stuck after clicking twice (1913193)
      • about chiclets not showing up when scotch bonnet is enabled (1925643)
      • tab to search not shown when scotch bonnet is enabled (1925129)
      • searchmode switcher works when Search Services fails (1906541)
      • localize strings for search mode switcher button (1924228)
      • secondary actions UX updated to be shown between heuristic and first search suggestion. (1922570)
    • To try out these scotch bonnet features, use the PREF browser.urlbar.scotchBonnet.enableOverride
  • Address Bar
    • Moritz deduplicated bookmark and history results with the same URL, but different references. (1924968) browser.urlbar.deduplication.enabled
    • Daisuke fixed overlapping remote tab text in compact mode (1924911)
    • Richardscollin, a volunteer contributor, fixed an issue so that pressing Esc while the address bar is selected now returns focus to the window. (1086524)
    • Daisuke fixed the “Not Secure” label being illegible when the width is too small (1925332)
  • Suggest
    • adw has been working on City-based weather suggestions (1921126, 1925734, 1925735, 1927010)
    • adw is working on integrating machine learning (MLSuggest) with UrlbarProviderQuickSuggest (1926381)
  • Search
    • Moritz landed a patch to localize the keyword for the Wikipedia search engine (1687153, 1925735)
  • Places
    • Yazan landed a favicon improvement to how Firefox picks the best favicon for page-icon URLs without a path (1664001)
    • Mak landed a patch that significantly improves performance and memory usage when checking for visited URIs, by executing a single query for the entire batch of URIs instead of running one query per URI (1594368)

Firefox NightlyExperimental address bar deduplication, better auto-open Picture-in-Picture, and more – These Weeks in Firefox: Issue 170

Highlights

  • A new messaging surface for the AppMenu and PXI menu is landing imminently so that we can experiment with some messages to help users understand the value of signing up for / signing into a Mozilla account

a message with a cute fox illustration and a sign-up button in Firefox's app menu encouraging users to create a Mozilla account

  • mconley landed a patch to make the heuristics for the automatic Picture-in-Picture feature a bit smarter. This should make it less likely to auto-pip silent or small videos.
  • Moritz fixed an older bug for the address bar where duplicate Google Docs results had been appearing in the address bar dropdown. This fix is currently behind a disabled pref – people are free to test the behavior by flipping browser.urlbar.deduplication.enabled to true, and feedback is welcome. We’re still investigating UI treatments to eventually show the duplicates. (1389229)

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
WebExtensions Framework
  • Thanks to Florian for moving WebExtensions and AddonsManager telemetry probes away from the legacy telemetry API (Bug 1920073, Bug 1923015)
WebExtension APIs
  • The cookies API will be sorting cookies according to RFC 6265 (Bug 1818968), fixing a small Chrome incompatibility issue
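As a rough sketch of how an extension might observe that ordering, assuming the "cookies" permission, the promise-based browser.* namespace, and a placeholder domain:

```ts
// background.ts — requires the "cookies" permission in manifest.json.
async function logCookiesByPrecedence(domain: string): Promise<void> {
  // With Bug 1818968, getAll() results are expected to follow RFC 6265
  // ordering: longer paths first, then earlier creation time first.
  const cookies = await browser.cookies.getAll({ domain });
  for (const cookie of cookies) {
    console.log(cookie.path, cookie.name);
  }
}

logCookiesByPrecedence("example.com");
```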

Migration Improvements

New Tab Page

  • We will be running an experiment in December featuring a Fakespot feed in the vertical list on newtab. This list will show products that have been identified as high-quality, and with reliable product reviews. They will link to more detailed Fakespot product pages that will give a breakdown of the product analysis. The test is not being monetized.
    • Note: A previous version of this post featured a mockup image that predated the feature being built.

a list of products identified by Fakespot as having reliable reviews for a Holiday Gift Guide, displayed in New Tab.

Picture-in-Picture

  • Special shout-out to volunteer contributor def00111 who has been helping out with our site-specific wrappers!

Search and Navigation

  • 2024 Address Bar Updates (previously known as “Project Scotch Bonnet”)
    • Intuitive Search Keywords
      • Mandy added new telemetry related to intuitive search keywords (1919180)
      • Mandy also landed a patch to list the keywords in the results panel when a user types `@` (1921549)
    • Unified Search Button
      • Daisuke refined our telemetry so that user interactions with the unified search button are differentiated from user interactions with the original one-off search button row (1919857)
    • Persisted Search
      • James fixed a bug related to persisting search terms for non-default search engines (1921092)
    • Search Config v2
      • Moritz landed a patch that streamlines how we handle search parameter names for search engine URLs (1895934)
    • Search & Suggest
      • Nan landed a patch that allows us to integrate a user-interest-based relevance ranking into the address bar suggestions we receive from our Merino server (1923187)
    • Places Database
      • Daisuke landed a series of patches so that the Places database no longer fetches any icons over the network. Icon fetching is now delegated to consumers, which have better knowledge about how to do it in a safer way. (1894633)
    • Favicons
      • Yazan landed several patches related to favicons that improve the way we pick the best favicon, avoiding excessive downscaling of large favicons that could make them unrecognizable. (1494016, 1556396, 1923175)

Mozilla ThunderbirdMaximize Your Day: Make Important Messages Stand Out with Filters

For the past two decades, I’ve been trying to get on Jeopardy. This is harder than answering a Final Jeopardy question in your toughest subject. Roughly a tenth of people who take the exam get invited to auditions, and only a tenth of those who make it to auditions make it to the Contestant Pool and into the show. During this time, there are two emails you DON’T want to miss: the first saying you made it to auditions, and the second that you’re in the Contestant Pool. (This second email comes with your contestant form, and yes, I have my short, fun anecdotes to share with host Ken Jennings ready to go.)

The next time I audition, reader, I won’t be refreshing my inbox every five minutes. Instead, I’ll use Thunderbird Filters to make any emails from the Jeopardy Contestant department STAND OUT.

Whether you’re hoping to be called up for a game show, waiting on important life news, or otherwise needing to be alert, Thunderbird is here to help you out.

Make Important Messages Stand Out with Filters

Most of our previous posts have focused on cleaning out your inbox. Now, in addition to showing you how Thunderbird can clear visual and mental clutter out of the way, we’re using filters to make important messages stand out.

  1. Click the Application menu button, then Tools, followed by Message Filters.
  2. Click New. A Filter Rules dialog box will appear.
  3. In the “Filter Name” field, type a name for your filter.
  4. Under “Apply filter when”, check one or both of the options. (You probably won’t want to change from the default “Getting New Mail” and “Manually Run” options.)
  5. In the “Getting New Mail:” dropdown menu, choose either Filter before Junk Classification or Filter after Junk Classification. (As for me, I’m choosing Filter before Junk Classification. Just in case.)
  6. Choose a property, a test, and a value for each rule you want to apply:
    • A property is a message element or characteristic such as “Subject” or “From”
    • A test is a check on the property, such as “contains” or “is in my address book”
    • A value completes the test with a specific detail, such as an email address or keyword
  7. Choose one or more actions for messages that meet those criteria. (For extra caution, I put THREE actions on my sample filter. You might only need one!)
(Note – not the actual Jeopardy addresses!)

Find (and Filter) Your Important Messages

Thunderbird also lets you create a filter directly from a message. Say you’re organizing your inbox and you see a message you don’t want to miss in the future. Highlight the email, and click on the Message menu button. Scroll down to and click on ‘Create Filter from Message.’ This will open a New Filter window, automatically filled with the sender’s address. Add any other properties, tests, or values, as above. Choose your actions, name your filter, and ta-da! Your new filter will help you know when that next important email arrives.

Resources

As with last month’s article, this post was inspired by a Mastodon post (sadly, this one was deleted, but thank you, original poster!). Many thanks to our amazing Knowledge Base writers at Mozilla Support who wrote our guide to filters. Also, thanks to Martin Brinkmann and his ghacks website for this and many other helpful Thunderbird guides!

Getting Started with Filters Mozilla Support article: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters

How to Make Important Messages Stick Out in Thunderbird: https://www.ghacks.net/2022/12/02/how-to-make-important-emails-stick-out-in-thunderbird/

The post Maximize Your Day: Make Important Messages Stand Out with Filters appeared first on The Thunderbird Blog.

About:CommunityA tribute to Dian Ina Mahendra

It is with a heavy heart that I share the passing of my dear friend, Dian Ina Mahendra, who left us after a long battle with illness. Dian Ina was a remarkable woman whose warmth, kindness, and ever-present support touched everyone around her. Her ability to offer solutions to even the most challenging problems was truly a gift, and she had an uncanny knack for finding a way out of every situation.

Dian Ina’s contribution to Mozilla spanned back to the launch of Firefox 4 in 2011. She had also been heavily involved during the days of Firefox OS, the Webmaker campaign, FoxYeah, and most recently, Firefox Rocket (later renamed Firefox Lite) when it first launched in Indonesia. Additionally, she had been a dedicated contributor to localization through Pontoon.

Those who knew Dian Ina were constantly drawn to her, not just for her brilliant ideas, but for her open heart and listening ear. She was the person people turned to when they needed advice or simply someone to talk to. No matter how big or small the problem, she always knew just what to say, offering guidance with grace and clarity.

Beyond her wisdom, Dian Ina was a source of light and laughter. Her fun-loving nature and infectious energy made her the key person everyone turned to when they were looking for recommendations, whether it was for the best restaurant in town, a great book, or even advice on life itself. Her opinions were trusted, not only for their insight but also for the care she took in considering what would truly benefit others.

Her impact on those around her was immeasurable. She leaves behind a legacy of warmth, wisdom, and a deep sense of trust from everyone who had the privilege of knowing her. We will miss her dearly, but her spirit and the lessons she shared will live on in the hearts of all who knew her.

Here are some of the memories that people shared about Dian Ina:

  • Franc: Ina was a funny person, always with a smile. We shared many events like All Hands, Leadership Summit and more. Que la tierra te sea leve (may the earth rest lightly on you).

  • Rosana Ardila: Dian Ina was a wonderful human being. I remember her warm smile, when she was supporting the community, talking about art or food. She was independent and principled and so incredibly fun to be around. I was looking forward to seeing her again, touring her museum in Jakarta, discovering more food together, talking about art and digital life, the little things you do with people you like. She was so multifaceted, so smart and passionate. She left a mark on me and I will remember her, I’ll keep the memory of her big smile with me.
  • Delphine: I am deeply saddened to hear of Dian Ina’s passing. She was a truly kind and gentle soul, always willing to lend a hand. I will cherish the memories of our conversations and her dedication to her work as a localizer and valued member of the Mozilla community. Her presence will be profoundly missed.
  • Fauzan: For me, Ina is the best mentor in conflict resolution, design, art, and L10n. She is totally irreplaceable in the Indonesian community. We already miss her a lot.
  • William: I will never forget that smile and that contagious laughter of yours. I have such fond memories of my many trips to Jakarta, in large part thanks to you. May you rest in peace dearest Dian Ina.

  • Amira Dhalla: I’m going to remember Ina as the thoughtful, kind, and warm person she always was to everyone around her. We have many memories together but I specifically remember us giggling and jumping around together on the grounds of a castle in Scotland. We had so many fun memories together talking technology, art, and Indonesia. I’m saddened by the news of her passing but comforted by the Mozilla community honoring her in a special way and know we will keep her legacy alive.

  • Kiki: Mbak Ina was one of the female leaders I looked up to within the Mozilla Indonesia Community. She embodied the very definition of a smart and capable woman. The kind who was brave, assertive and above all, so fun to be around. I like that she could keep things real by not being afraid of sharing the hard truth, which is truly appreciated within a community setting. I always thought about her and her partner (Mas Mahen) as a fun and intelligent couple. Deep condolences to Mas Mahen and her entire family in Malang and Bandung. She left a huge mark on the Mozilla Indonesia Community, and she’ll be deeply missed.

  • Joe Cheng: I am deeply saddened to hear of Dian Ina’s passing. As the Product Manager for Firefox Lite, I had the privilege of witnessing her invaluable contributions firsthand. Dian was not only a crucial part of Mozilla’s community in Indonesia but also a driving force behind the success of Firefox Lite and other Mozilla projects. Her enthusiasm, unwavering support, and kindness left an indelible mark on everyone who met her. I fondly remember the time my team and I spent with her during our visit to Jakarta, where her vibrant spirit and warm smiles brought joy to our interactions. Dian’s positive energy and dedication will be remembered always, and her legacy will live on in the Mozilla community and beyond. She will be dearly missed.

About:CommunityContributor spotlight – MyeongJun Go

The beauty of open source software lies in the collaborative spirit of its contributors. In this post, we’re highlighting the story of MyeongJun Go (Jun), who has been a dedicated contributor to the Performance Tools team. His contributions have made a remarkable impact on performance testing and tooling, from local tools like Mach Try Perf and Raptor to web-based tools such as Treeherder. Thanks to Jun, developers are even more empowered to improve the performance of our products.

Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews, writing test cases, to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high quality code.

Q: Can you tell us a little about how you first got involved with Mozilla?

I felt a constant thirst for development while working on company services. I wanted to create something that could benefit the world and collaborate with developers globally. That’s when I decided to dive into open source development.

Around that time, I was already using Firefox as my primary browser, and I frequently referenced MDN for work, naturally familiarizing myself with Mozilla’s services. One day, I thought, how amazing would it be to contribute to a Mozilla open source project used by people worldwide? So, I joined an open source challenge.

At first, I wondered, can I really contribute to Firefox? But thanks to the supportive Mozilla staff, I was able to tackle one issue at a time and gradually build my experience.

Q: Your contributions have had a major impact on performance testing and tooling. What has been your favourite or most rewarding project to work on so far?

I’ve genuinely found every project and task rewarding—and enjoyable too. Each time I completed a task, I felt a strong sense of accomplishment.

If I had to pick one particularly memorable project, it would be the Perfdocs tool. It was my first significant project when I started contributing more actively, and its purpose is to automate documentation for the various performance tools scattered across the ecosystem. With every code push, Perfdocs automatically generates documentation in “Firefox Source Docs”.

Working on this tool gave me the chance to familiarize myself with various performance tools one by one, while also building confidence in contributing. It was rewarding to enhance the features and see the resulting documentation instantly, making the impact very tangible. Hearing from other developers about how much it simplified their work was incredibly motivating and made the experience even more fulfilling.

Q: Performance tools are critical for developers. Can you walk us through how your work helps improve the overall performance of Mozilla products?

I’ve applied various patches across multiple areas, but updates to tools like Mach Try Perf and Perfherder, which many users rely on, have had a particularly strong impact.

With Mach Try Perf, developers can easily perform performance tests by platform and category, comparing results between the base commit (before changes) and the head commit (after changes). However, since each test can take considerable time, I developed a caching feature that stores test results from previous runs when the base commit is the same. This allows us to reuse existing results instead of re-running tests, significantly reducing the time needed for performance testing.

I also developed several convenient flags to enhance testing efficiency. For instance, when an alert occurs in Perfherder, developers can now re-run tests simply by using the “--alert” flag with the alert ID in the Mach Try Perf command.

Additionally, I recently integrated Perfherder with Bugzilla to automatically file bugs. Now, with just a click of the ‘file bug’ button, related bugs are filed automatically, reducing the need for manual follow-up.

These patches, I believe, have collectively helped improve the productivity of Mozilla’s developers and contributors, saving a lot of time in the development process.

Q: How much of a challenge do you find being in a different time zone to the rest of the team? How do you manage this?

I currently live in South Korea (GMT+9), and most team meetings are scheduled from 10 PM to midnight my time. During the day, I focus on my job, and in the evening, I contribute to the project. This setup actually helps me use my time more efficiently. In fact, I sometimes feel that if we were in the same time zone, balancing both my work and attending team meetings might be even more challenging.

Q: What are some tools or methodologies you rely on?

When developing Firefox, I mainly rely on two tools: Visual Studio Code (VSC) on Linux and SearchFox. SearchFox is incredibly useful for navigating Mozilla’s vast codebase, especially as it’s web-based and makes sharing code with teammates easy.

Since Mozilla’s code is open source, it’s accessible for the world to see and contribute to. This openness encourages me to seek feedback from mentors regularly and to focus on refactoring through detailed code reviews, with the goal of continually improving code quality.

I’ve learned so much in this process, especially about reducing code complexity and enhancing quality. I’m always grateful for the detailed reviews and constructive feedback that help me improve.

Q: Are there any exciting projects you’d like to work on?

I’m currently finding plenty of challenge and growth working with testing components, so rather than seeking new projects, I’m focused on my current tasks. I’m also interested in learning Rust and exploring trends like AI and blockchain.

Recently, I’ve considered ways to improve user convenience in tools like Mach Try Perf and Perfherder, such as making test results clearer and easier to review. I’m happy with my work and growth here, but I keep an open mind toward new opportunities. After all, one thing I’ve learned in open source is to never say, ‘I can’t do this.’

Q: What advice would you give to someone new to contributing?

If you’re starting as a contributor to the codebase, building it alone might feel challenging. You might wonder, “Can I really do this?” But remember, you absolutely can. There’s one thing you’ll need: persistence. Hold on to a single issue and keep challenging yourself. As you solve each issue, you’ll find your skills growing over time. It’s a meaningful challenge, knowing that your contributions can make a difference. Contributing will make you more resilient and help you grow into a better developer.

Q: What’s something you’ve learned during your time working on performance tools?

Working with performance tools has given me valuable experience across a variety of tools, from local ones like Mach Try Perf, Raptor, and Perfdocs to web based tools such as Treeherder and Perfherder. Not only have I deepened my technical skills, but I also became comfortable using Python, which wasn’t my primary language before.

Since Firefox runs across diverse environments, I learned how to execute individual tests for different conditions and manage and visualize performance test results efficiently. This experience taught me the full extent of automation’s capabilities and inspired me to explore how far we can push it.

Through this large scale project, I’ve learned how to approach development from scratch, analyze requirements, and carry out development while considering the impact of changes. My skills in impact analysis and debugging have grown significantly.

Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews, writing test cases, to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high quality code.

Q: What do you enjoy doing in your spare time when you’re not contributing to Mozilla?

I really enjoy reading and learning new things in my spare time. Books offer me a chance to grow, and I find it exciting to dive into new subjects. I also prioritize staying active with running and swimming to keep both my body and mind healthy. It’s a great balance that keeps me feeling refreshed and engaged.


Interested in contributing to performance tools like Jun? Check out our wiki to learn more.

The Servo BlogBehind the code: an interview with msub2

Behind the Code is a new series of interviews with the contributors who help propel Servo forward. Ever wondered why people choose to work on web browsers, or how they get started? We invite you to look beyond the project’s pull requests and issue reports, and get to know the humans who make it happen.


msub2

Some representative contributions:

Tell us about yourself!

My name is Daniel, though I more commonly go by my online handle “msub2”. I’m something of a generalist, but my primary interests are developing for the web, XR, and games. I created and run the WebXR Discord, which has members from both the Immersive Web Working Group and the Meta Browser team, among others. In my free time (when I’m not working, doing Servo things, or tending to my other programming projects) I’m typically watching videos from YouTube/Dropout/Nebula/etc and playing video games.

Why did you start contributing to Servo?

A confluence of interests, to put it simply. I was just starting to really get into Rust, having built a CHIP-8 emulator and an NES emulator to get my hands dirty, but I also had prior experience contributing to other browser projects like Chromium and Gecko. I was also eyeing Servo’s WebXR implementation (which I had submitted a couple small fixes for last year) as I could see there was still plenty of work that could be done there. To get started though, I looked for an adjacent area that I could work on to get familiar with the main Servo codebase, which led to my first contribution being support for non-XR gamepads!

What was challenging about your first contribution?

I’d say the most challenging part of my first contribution was twofold: the first was just getting oriented with how data flows in and out of Servo via the embedding API and the second was understanding how DOM structs, methods, and codegen all worked together in the script crate. Servo is a big project, but luckily I got lots of good help and feedback as I was working through it, which definitely made things easier. Looking at existing examples in the codebase of the things I was trying to do got me the rest of the way there I’d say.

What do you like about contributing to the project? What do you get out of it?

The thing I like most about Servo (and perhaps the web platform as an extension) is the amount of interesting problems that there are to solve when it comes to implementing/supporting all of its different features. While most of my contributions so far have been focused around Gamepad and WebXR, recently I’ve been working to help implement SubtleCrypto alongside another community member, which has been really interesting! In addition to the satisfaction I get just from being able to solve interesting problems, I also rather enjoy the feeling of contributing to a large, communal, open-source project.

Any final thoughts you’d like to share?

I’d encourage anyone who’s intrigued by the idea of contributing to Servo to give it a shot! The recent waves of attention for projects like Verso and Ladybird have shown that there is an appetite for new browsers and browser engines, and with Servo’s history it just feels right that it should finally be able to rise to a more prominent status in the ecosystem.

Don MartiLinks for 10 November 2024

Signal Is Now a Great Encrypted Alternative to Zoom and Google Meet These updates mean that Signal is now a free, robust, and secure video conferencing service that can hang with the best of them. It lets you add up to 50 people to a group call and there is no time limit on each call.

The New Alt Media and the Future of Publishing - Anil Dash

I’m a neuroscientist who taught rats to drive − their joy suggests how anticipating fun can enrich human life

Ecosia and Qwant, two European search engines, join forces

What can McCain’s Grand Prix win teach us? Nothing new Ever since Byron Sharp decided he was going for red for his book cover, marketing thinkers have assembled a quite extraordinary disciplinary playbook. And it’s one that looks nothing like the existing stuff that it replaced. Of course, the majority of marketers know nothing about any of it. They inhabit the murkier corners of marketing, where training is rejected because change is held up as a circuit-breaker for learning anything from the past. AI and the ‘new consumer’ mean everything we once knew is pointless now. Better to be ignorant and untrained than waste time on irrelevant historical stuff. But for those who know that is bullshit, who study, who respect marketing knowledge, who know the foundations do not change, the McCain case is a jewel sparkling with everything we have learned in these very fruitful 15 years.

The Counterculture Switch: creating in a hostile environment

Why Right-Wing Media Thrives While The Left Gets Left Behind

The Rogue Emperor, And What To Do About Them Anywhere there is an organisation or group that is centred around an individual, from the smallest organisation upwards, it’s possible for it to enter an almost cult-like state in which the leader both accumulates too much power, and loses track of some of the responsibilities which go with it. If it’s a tech company or a bowls club we can shrug our shoulders and move to something else, but when it occurs in an open source project and a benevolent dictator figure goes rogue it has landed directly on our own doorstep as the open-source community.

We need a Wirecutter for groceries

Historic calculators invented in Nazi concentration camp will be on exhibit at Seattle Holocaust center

One Company A/B Tested Hybrid Work. Here’s What They Found. According to the Society of Human Resource Management, each quit costs companies at least 50% of the employees’ annual salary, which for Trip.com would mean $30,000 for each quit. In Trip.com’s experiment, employees liked hybrid so much that their quit rates fell by more than a third — and saved the company millions of dollars a year.

Mozilla ThunderbirdVIDEO: Q&A with Mark Surman

Last month we had a great chat with two members of the Thunderbird Council, our community governance body. This month, we’re looking at the relationship between Thunderbird and our parent organization, MZLA, and the broader Mozilla Foundation. We couldn’t think of a better way to do this than sitting down for a Q&A with Mark Surman, president of the Mozilla Foundation.

We’d love to hear your suggestions for topics or guests for the Thunderbird Community Office Hours! You can always send them to officehours@thunderbird.org.

October Office Hours: Q&A with Mark Surman

In many ways, last month’s office hours was a perfect lead-in to this month’s, as our community and Mozilla have been big parts of the Thunderbird story. Even though this year marks 20 years since Thunderbird 1.0, Thunderbird started as ‘Minotaur’ alongside ‘Phoenix,’ the original name for Firefox, in 2003. Heather, Monica, and Mark all discuss Thunderbird’s now decades-long journey, but this chat isn’t just about our past. We talk about what we hope is a long future, and how and where we can lead the way.

If you’ve been a long-time user of Thunderbird, or are curious about how Thunderbird, MZLA, and the Mozilla Foundation all relate to each other, this video is for you.

Watch, Read, and Get Involved

We’re so grateful to Mark for joining us, and turning an invite during a chat at Mozweek into reality! We hope this video gives a richer context to Thunderbird’s past as it highlights one of the main characters in our long story.

VIDEO (Also on Peertube):

Thunderbird and Mozilla Resources:

The post VIDEO: Q&A with Mark Surman appeared first on The Thunderbird Blog.

Andrew HalberstadtJujutsu: A Haven for Mercurial Users at Mozilla

One of the pleasures of working at Mozilla has been learning and using the Mercurial version control system. Over the past decade, I’ve spent countless hours tinkering with my workflow to get it just so: reading docs and articles, meticulously tweaking settings, and even writing an extension.

I used to be very passionate about Mercurial. But as time went on, the culture at Mozilla started changing. More and more repos were created on GitHub, and more and more developers started using git-cinnabar to work on mozilla-central. Then my role changed and I found that 90% of my work was happening outside of mozilla-central and the Mercurial garden I had created for myself.

So it was with a sense of resigned inevitability that I took the news that Mozilla would be migrating mozilla-central to Git. The fire in me was all but extinguished, I was resigned to my fate. And what’s more, I had to agree. The time had come for Mozilla to officially make the switch.

Glandium wrote an excellent post outlining some of the history of the decisions made around version control, putting them into the context of the time. In that post, he offers some compelling wisdom to Mercurial holdouts like myself:

I’ll swim against the current here, and say this: the earlier you can switch to git, the earlier you’ll find out what works and what doesn’t work for you, whether you already know Git or not.

When I read that, I had to agree. But I just couldn’t bring myself to do it. No, if I was going to have to give up my revsets and changeset obsolescence and my carefully curated workflows, then so be it. But damnit! I was going to continue using them for as long as possible.

And I’m glad I didn’t switch because then I stumbled upon Jujutsu.

The Servo BlogThis month in Servo: faster fonts, fetches, and flexbox!

Servo nightly showing new support for non-ASCII characters in <img srcset>, ‘transition-behavior: allow-discrete’, ‘mix-blend-mode: plus-lighter’, and ‘width: stretch’

Servo now supports ‘mix-blend-mode: plus-lighter’ (@mrobinson, #34057) and ‘transition-behavior: allow-discrete’ (@Loirooriol, #33991), including in the ‘transition’ shorthand (@Loirooriol, #34005), along with the fetch metadata request headers ‘Sec-Fetch-Site’, ‘Sec-Fetch-Mode’, ‘Sec-Fetch-User’, and ‘Sec-Fetch-Dest’ (@simonwuelker, #33830).

We now have partial support for the CSS size keywords ‘min-content’, ‘max-content’, ‘fit-content’, and ‘stretch’ (@Loirooriol, #33558, #33659, #33854, #33951), including in floats (@Loirooriol, #33666), atomic inlines (@Loirooriol, #33737), and elements with ‘position: absolute’ or ‘fixed’ (@Loirooriol, #33950).

We’re implementing the SubtleCrypto API, starting with full support for crypto.subtle.digest() (@simonwuelker, #34034), partial support for generateKey() with AES-CBC and AES-CTR (@msub2, #33628, #33963), and partial support for encrypt() and decrypt() with AES-CBC (@msub2, #33795).
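To show the shape of the API surface involved (these are standard SubtleCrypto calls in a secure context, not Servo-specific code), here is a small sketch exercising digest() and AES-CBC:

```ts
// Standard WebCrypto usage of the kind the new Servo support targets.
async function demo(): Promise<void> {
  const data = new TextEncoder().encode("hello servo");

  // digest(): fully supported per the note above.
  const hash = await crypto.subtle.digest("SHA-256", data);
  console.log("SHA-256 bytes:", new Uint8Array(hash).length); // 32

  // generateKey() + encrypt()/decrypt() with AES-CBC: partial support.
  const key = await crypto.subtle.generateKey(
    { name: "AES-CBC", length: 256 },
    true,
    ["encrypt", "decrypt"],
  );
  const iv = crypto.getRandomValues(new Uint8Array(16));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-CBC", iv }, key, data);
  const plaintext = await crypto.subtle.decrypt({ name: "AES-CBC", iv }, key, ciphertext);
  console.log(new TextDecoder().decode(plaintext)); // "hello servo"
}

demo();
```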

More engine changes

Servo’s architecture is improving, with a new cross-process compositor API that reduces memory copy overhead for video (@mrobinson, @crbrz, #33619, #33660, #33817). We’ve also started phasing out our old OpenGL bindings (gleam and sparkle) in favour of glow, which should reduce Servo’s complexity and binary size (@sagudev, @mrobinson, surfman#318, webxr#248, #33538, #33910, #33911).

We’ve updated to Stylo 2024-10-04 (@Loirooriol, #33767) and wgpu 23 (@sagudev, #34073, #33819, #33635). The new version of wgpu includes several patches from @sagudev, adding support for const_assert, as well as accessing const arrays with runtime index values. We’ve also reworked WebGPU canvas presentation to ensure that we never use old buffers by mistake (@sagudev, #33613).

We’ve also landed a bunch of improvements to our DOM geometry APIs, with DOMMatrix now supporting toString() (@simonwuelker, #33792) and updating is2D on mutation (@simonwuelker, #33796), support for DOMRect.fromRect() (@simonwuelker, #33798), and getBounds() on DOMQuad now handling NaN correctly (@simonwuelker, #33794).
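A quick sketch of those geometry APIs (standard DOM interfaces; the values below are just examples):

```ts
// DOMMatrix: toString() support and is2D updating on mutation.
const m = new DOMMatrix();     // identity matrix; m.is2D === true
m.m43 = 5;                     // mutating a 3D-only component should flip is2D
console.log(m.is2D);           // false
console.log(m.toString());     // serializes as a "matrix3d(...)" string

// DOMRect.fromRect() and DOMQuad.getBounds().
const rect = DOMRect.fromRect({ x: 10, y: 20, width: 100, height: 50 });
const quad = DOMQuad.fromRect(rect);
console.log(quad.getBounds()); // a DOMRect covering the quad
```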

We now correctly handle non-ASCII characters in <img srcset> (@evuez, #33873), correctly handle data: URLs in more situations (@webbeef, #33500), and no longer throw an uncaught exception when pages try to use IntersectionObserver (@mrobinson, #33989).
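For reference, the kind of everyday IntersectionObserver usage that pages rely on (and that previously surfaced an uncaught exception in Servo) looks roughly like this; the selector is a placeholder:

```ts
// Typical page code: react when an element becomes at least half visible.
const observer = new IntersectionObserver(
  (entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        console.log("visible:", entry.target.id);
      }
    }
  },
  { threshold: 0.5 },
);

const target = document.querySelector("#lazy-section");
if (target) {
  observer.observe(target);
}
```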

Outreachy contributors are doing great work in Servo again, helping us land many of this month’s improvements to GC static analysis (@taniishkaa, @webbeef, @chickenleaf, @jdm, @jahielkomu, @wulanseruniati, @lauwwulan, #33692, #33706, #33800, #33774, #33816, #33808, #33827, #33822, #33820, #33828, #33852, #33843, #33836, #33865, #33862, #33891, #33888, #33880, #33902, #33892, #33893, #33895, #33931, #33924, #33917, #33921, #33958, #33920, #33973, #33960, #33928, #33985, #33984, #33978, #33975, #34003, #34002) and code health (@chickenleaf, @DileepReddyP, @taniishkaa, @mercybassey, @jahielkomu, @cashall-0, @tony-nyagah, @lwz23, @Noble14477, #33959, #33713, #33804, #33618, #33625, #33631, #33632, #33633, #33643, #33643, #33646, #33648, #33653, #33664, #33685, #33686, #33689, #33686, #33690, #33705, #33707, #33724, #33727, #33728, #33729, #33730, #33740, #33744, #33757, #33771, #33757, #33782, #33790, #33809, #33818, #33821, #33835, #33840, #33853, #33849, #33860, #33878, #33881, #33894, #33935, #33936, #33943).

Performance improvements

Our font system is faster now, with reduced latency when loading system fonts (@mrobinson, #33638), layout no longer blocking on sending font data to WebRender (@mrobinson, #33600), and memory mapped system fonts on macOS and FreeType platforms like Linux (@mrobinson, @mukilan, #33747).

Servo now has a dedicated fetch thread (@mrobinson, #33863). This greatly reduces the number of IPC channels we create for individual requests, and should fix crashes related to file descriptor exhaustion on some platforms. Brotli-compressed responses are also handled more efficiently, such that we run the parser with up to 8 KiB of decompressed data at a time, rather than only 10 bytes of compressed data at a time (@crbrz, #33611).

Flexbox layout now uses caching to avoid doing unnecessary work (@mrobinson, @Loirooriol, #33964, #33967), and now has experimental tracing-based profiling support (@mrobinson, #33647), which in turn no longer spams RUST_LOG=info when not enabled (@delan, #33845). We’ve also landed optimisations in table layout (@Loirooriol, #33575) and in our layout engine as a whole (@Loirooriol, #33806).

Work continues on making our massive script crate build faster, with improved incremental builds (@sagudev, @mrobinson, #33502) and further patches towards splitting script into smaller crates (@sagudev, @jdm, #33627, #33665).

We’ve also fixed several crashes, including when initiating a WebXR session on macOS (@jdm, #33962), when laying out replaced elements (@Loirooriol, #34006), when running JavaScript modules (@jdm, #33938), and in many situations when garbage collection occurs (@chickenleaf, @taniishkaa, @Loirooriol, @jdm, #33857, #33875, #33904, #33929, #33942, #33976, #34019, #34020, #33965, #33937).

servoshell, embedding, and devtools

Devtools support (--devtools 6080) is now compatible with Firefox 131+ (@eerii, #33661), and no longer lists iframes as if they were inspectable tabs (@eerii, #34032).

Servo-the-browser now avoids unnecessary redraws (@webbeef, #34008), massively reducing its CPU usage, and no longer scrolls too slowly on HiDPI systems (@nicoburns, #34063). We now update the location bar when redirects happen (@rwakulszowa, #34004), and these updates are sent to all embedders of Servo, not just servoshell.

We’ve added a new --unminify-css option (@Taym95, #33919), allowing you to dump the CSS used by a page like you can for JavaScript. This will pave the way for allowing you to modify that CSS for debugging site compat issues, which is not yet implemented.

We’ve also added a new --screen-size option that can help with testing mobile websites (@mrobinson, #34038), renaming the old --resolution option to --window-size, and we’ve removed --no-minibrowser mode (@Taym95, #33677).

We now publish nightly builds for OpenHarmony on servo.org (@mukilan, #33801). When running servoshell on OpenHarmony, we now display toasts when pages load or panic (@jschwe, #33621), and you can now pass certain Servo options via hdc shell aa start or a test app (@jschwe, #33588).

Donations

Thanks again for your generous support! We are now receiving 4201 USD/month (+1.3% over September) in recurring donations. We are no longer accepting donations on LFX — if you were donating there, please move your recurring donations to GitHub or Open Collective.

Servo is also on thanks.dev, and already ten GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


With this money, we’ve been able to pay for a second Outreachy intern in this upcoming round, plus our web hosting and self-hosted CI runners for Windows and Linux builds. When the time comes, we’ll also be able to afford macOS runners and perf bots! As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conference talks

Support.Mozilla.OrgCelebrating our top contributors on Firefox’s 20th anniversary

Firefox was built by a group of passionate developers, and has been supported by a dedicated community of caring contributors since day one.

The SUMO platform was originally built in 2007 to provide an open-source community support channel for users, and to help us collaborate more effectively with our volunteer contributors.

Over the years, SUMO has become a powerful platform that helps users get the most out of Firefox, provides opportunities for users to connect and learn more from each other, and allows us to gather important insights – all powered by our community of contributors.

SUMO is not just a support platform but a place where other like-minded users, who care about making the internet a better place for everyone, can find opportunities to grow their skills and contribute.

Our contributor community has been integral to Firefox’s success. Contributors humanize the experience across our support channels, champion meaningful fixes and changes, and help us onboard the next generation of Firefox users (and potential contributors!).

Fun facts about our community:

  • We’re global! We have active contributors in 63 countries.
  • 6 active contributors have been with us since day one (Shout outs to Cor-el, jscher2000, James, mozbrowser, AliceWyman, and marsf) and 16 contributors have been here for 15+ years!
  • In 2024*, our contributor community responded to 18,390 forum inquiries, made 747 en-US revisions and 5,684 l10n revisions to our Knowledge Base, responded to 441 Tweets, and issued 1,296 Play Store review responses (*from Jan-Oct 2024 for Firefox desktop, Android, and iOS. Non OP and non staff)

Screenshot of the top contributors from Jan-Oct 2024

Chart reflects top contributors for Firefox (Desktop, Android, and iOS)

Highlights from throughout the years:

Started in October 2007, SUMO has evolved in many different ways, but its spirit remains the same. It supports our wider user community while also allowing us to build strong relationships with our contributors. Below is a timeline of some key moments in SUMO’s history:

  • 2 October 2007 – SUMO launched on TikiWiki. Knowledge Base was implemented in this initial phase, but article localization wasn’t supported until February 2008.
  • 18 December 2007 – Forum went live
  • 28 December 2007 – Live chat launched
  • 5 February 2009 – SUMO logo was introduced
  • 11 October 2010 – We expanded to Twitter (now X) supported by the Army of Awesome
  • December 2010 – SUMO migrated from TikiWiki to Kitsune. The migration was done in stages and lasted most of 2010.
  • 14 March 2021 – We expanded to take on Play Store support and consolidated our social support platforms in Conversocial/Verint
  • 9 November 2024 – Our SUMO channels are largely powered by active contributors across forums, Knowledge Base and social

We are so grateful for our active community of contributors who bring our mission to life every day. Special thanks to those of you who have been with us since the beginning.

And to celebrate this milestone, we are going to reward top contributors (>99 contributions) for all products in 2024 with a special SUMO badge. Additionally, contributors with more than 999 contributions throughout SUMO’s existence and those with >99 contributions in 2024 will be given swag vouchers to shop at Mozilla’s swag stores.

Cheers to the progress we’ve made, and the incredible foundation we’ve built together. The best is yet to come!

 

P.S. Thanks to Chris Ilias for additional note on SUMO's history.

Mozilla Open Policy & Advocacy BlogJoin Us to Mark 20 Years of Firefox

You’re invited to Firefox’s 20th birthday!

 

We’re marking 20 years of Firefox — the independent open-source browser that has reshaped the way millions of people explore and experience the internet. Since its launch, Firefox has championed privacy, security, and transparency, and put control back in the hands of people online.

Come celebrate two decades of innovation, advocacy, and community — while looking forward to what’s to come.

The post Join Us to Mark 20 Years of Firefox appeared first on Open Policy & Advocacy.

Mozilla Open Policy & Advocacy BlogBehind the Scenes of eIDAS: A Look at Article 45 and Its Implications

On October 21, 2024, Mozilla hosted a panel discussion during the Global Encryption Summit to explore the ongoing debate around Article 45 of the eIDAS regulation. Moderated by Robin Wilton from the Internet Society, the panel featured experts Dennis Jackson from Mozilla, Alexis Hancock from Certbot at EFF, and Thomas Lohninger from epicenter.works. Our panelists provided their insights on the technical, legal, and privacy concerns surrounding Article 45 and the potential impact on internet security and privacy. The panel, facilitated by Mozilla in connection with its membership on the Global Encryption Coalition Steering Committee, was part of the annual celebration of Global Encryption Day on October 21.

What is eIDAS and Why is Article 45 Important?

The original eIDAS regulation, introduced in 2014, aimed to create a unified framework for secure electronic identification (eID) and trust services across the European Union. Such trust services, provided by designated Trust Service Providers (TSPs), included electronic signatures, timestamps, and website authentication certificates. Subsequently, Qualified Web Authentication Certificates (QWACs) were also recognized as a method to verify that the entity behind a website also controls the domain in an effort to increase trust amongst users that they are accessing a legitimate website.

Over the years, the cybersecurity community has expressed its concerns for users’ privacy and security regarding the use of QWACs, as they can lead to a false sense of security. Despite this criticism, in 2021, an updated EU proposal to the original law, in essence, aimed to mandate the recognition of QWACs as long as they were issued by qualified TSPs. This, in practice, would undermine decades of web security measures and put users’ privacy and security at stake.

The Security Risk Ahead campaign raised awareness and addressed these issues by engaging widely with policymakers, including through a public letter signed by more than 500 experts that was also endorsed by organizations including the Internet Society, European Digital Rights (EDRi), EFF, and Epicenter.works, among others.

The European Parliament introduced last-minute changes to mitigate risks of surveillance and fraud, but these safeguards now need to be technically implemented to protect EU citizens from potential exposure.

Technical Concerns and Security Risks

Thomas Lohninger provided context on how Article 45 fits into the larger eIDAS framework. He explained that while eIDAS aims to secure the wider digital ecosystem, QWACs under Article 45 could erode trust in website security, affecting both European and global users.

Dennis Jackson, a member of Mozilla’s cryptography team, cautioned that without robust safeguards, Qualified Website Authentication Certificates (QWACs) could be misused, leading to increased risk of fraud. He noted that the limited involvement of technical experts in drafting Article 45 resulted in significant gaps within the law. The version of Article 45, as originally proposed in 2021, radically expanded the capabilities of EU governments to surveil their citizens by ensuring that cryptographic keys under government control can be used to intercept encrypted web traffic across the EU.

Why Extended Validation Certificates (EVs) Didn’t Work—and Why Article 45 Might Not Either

Alexis Hancock compared Article 45 to extended validation (EV) certificates, which were introduced years ago with similar intentions but ultimately failed to achieve their goals. EV certificates were designed to offer more information about the identity of websites but ended up being expensive and ineffective as most users didn’t even notice them.

Hancock cautioned that QWACs could suffer from the same problems. Instead of focusing on complex authentication mechanisms, she argued, the priority should be on improving encryption and keeping the internet secure for everyone, regardless of whether a website has paid for a specific type of certificate.

Balancing Security and Privacy: A Tough Trade-Off

A key theme was balancing online transparency and protecting user privacy. All the panelists agreed that while identifying websites more clearly may have its advantages, it should not come at the expense of privacy and security. The risk is that requiring more authentication online could lead to reduced anonymity and greater potential for surveillance, undermining the principles of free expression and privacy on the internet.

The panelists also pointed out that Article 45 could lead to a fragmented internet, with different regions adopting conflicting rules for registering and asserting ownership of a website. This fragmentation would make it harder to maintain a secure and unified web, complicating global web security.

The Role of Web Browsers in Protecting Users

Web browsers, like Firefox, play a crucial role in protecting users. The panelists stressed that browsers have a responsibility to push back against policies that could compromise user privacy or weaken internet security.

Looking Ahead: What’s Next for eIDAS and Web Security?

Thomas Lohninger raised the possibility of legal challenges to Article 45. If the regulation is implemented in a way that violates privacy rights or data protection laws, it could be contested under the EU’s legal frameworks, including the General Data Protection Regulation (GDPR) and the ePrivacy Directive. Such battles could be lengthy and complex however, underscoring the need for continued advocacy.

As the panel drew to a close, the speakers emphasized that while the recent changes to Article 45 represent progress, the fight is far from over. The implementation of eIDAS continues to evolve, and it’s crucial that stakeholders, including browsers, cybersecurity experts, and civil society groups, remain vigilant in advocating for a secure and open internet.

The consensus from the panel was clear: as long as threats to encryption and web security exist, the community must stay engaged in these debates. Scrutinizing policies like eIDAS  is essential to ensure they truly serve the interests of internet users, not just large institutions or governments.

The panelists concluded by calling for ongoing collaboration between policymakers, technical experts, and the public to protect the open web and ensure that any changes to digital identity laws enhance, rather than undermine, security and privacy for all.


You can watch the panel discussion here.

The post Behind the Scenes of eIDAS: A Look at Article 45 and Its Implications appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogGoogle Summer of Code 2024 results

As we have previously announced, the Rust Project participated in Google Summer of Code (GSoC) for the first time this year. Nine contributors have been tirelessly working on their exciting projects for several months. The projects had various durations; some of them ended in August, while the last one concluded in the middle of October. Now that the final reports of all the projects have been submitted, we can happily announce that all nine contributors have passed the final review! That means that we have deemed all of their projects to be successful, even though they might not have fulfilled all of their original goals (but that was expected).

We had a lot of great interactions with our GSoC contributors, and based on their feedback, it seems that they were also quite happy with the GSoC program and that they had learned a lot. We are of course also incredibly grateful for all their contributions - some of them have even continued contributing after their project has ended, which is really awesome. In general, we think that Google Summer of Code 2024 was a success for the Rust Project, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our project idea list.

Below you can find a brief summary of each of our GSoC 2024 projects, including feedback from the contributors and mentors themselves. You can find more information about the projects here.

Adding lint-level configuration to cargo-semver-checks

cargo-semver-checks is a tool designed for automatically detecting semantic versioning conflicts, which is planned to one day become a part of Cargo itself. The goal of this project was to enable cargo-semver-checks to ship additional opt-in lints by allowing users to configure which lints run in which cases, and whether their findings are reported as errors or warnings. Max achieved this goal by implementing a comprehensive system for configuring cargo-semver-checks lints directly in the Cargo.toml manifest file. He also extensively discussed the design with the Cargo team to ensure that it is compatible with how other Cargo lints are configured, and won't present a future compatibility problem for merging cargo-semver-checks into Cargo.

Predrag, who is the author of cargo-semver-checks and who mentored Max on this project, was very happy with his contributions that even went beyond his original project scope:

He designed and built one of our most-requested features, and produced design prototypes of several more features our users would love. He also observed that writing quality CLI and functional tests was hard, so he overhauled our test system to make better tests easier to make. Future work on cargo-semver-checks will be much easier thanks to the work Max put in this summer.

Great work, Max!

Implementation of a faster register allocator for Cranelift

The Rust compiler can use various backends for generating executable code. The main one is of course the LLVM backend, but there are other backends, such as GCC, .NET or Cranelift. Cranelift is a code generator for various hardware targets, essentially something similar to LLVM. The Cranelift backend uses Cranelift to compile Rust code into executable code, with the goal of improving compilation performance, especially for debug (unoptimized) builds. Even though this backend can already be faster than the LLVM backend, we have identified that it was slowed down by the register allocator used by Cranelift.

Register allocation is a well-known compiler task where the compiler decides which registers should hold variables and temporary expressions of a program. Usually, the goal of register allocation is to perform the register assignment in a way that maximizes the runtime performance of the compiled program. However, for unoptimized builds, we often care more about the compilation speed instead.

Demilade has thus proposed to implement a new Cranelift register allocator called fastalloc, with the goal of making it as fast as possible, at the cost of the quality of the generated code. He was very well-prepared; in fact, he had a prototype implementation ready even before his GSoC project started! However, register allocation is a complex problem, and it took several months to finish the implementation and optimize it as much as possible. Demilade also made extensive use of fuzzing to make sure that his allocator is robust even in the presence of various edge cases.

Once the allocator was ready, Demilade benchmarked the Cranelift backend both with the original and his new register allocator using our compiler benchmark suite. And the performance results look awesome! With his faster register allocator, the Rust compiler executes up to 18% fewer instructions across several benchmarks, including complex ones like performing a debug build of Cargo itself. Note that this is an end-to-end performance improvement of the time needed to compile a whole crate, which is really impressive. If you would like to examine the results in more detail or even run the benchmark yourself, check out Demilade's final report, which includes detailed instructions on how to reproduce the benchmark.

Apart from having the potential to speed up compilation of Rust code, the new register allocator can also be useful for other use-cases, as it can be used in Cranelift on its own (outside the Cranelift codegen backend). What can we say other than that we are very happy with Demilade's work! Note that the new register allocator is not yet available in the Cranelift codegen backend out-of-the-box, but we expect that it will eventually become the default choice for debug builds and that it will thus make compilation of Rust crates using the Cranelift backend faster in the future.

Improve Rust benchmark suite

This project was relatively loosely defined, with the overarching goal of improving the user interface of the Rust compiler benchmark suite. Eitaro tackled this challenge from various angles at once. He improved the visualization of runtime benchmarks, which were previously a second-class citizen in the benchmark suite, by adding them to our dashboard and by implementing historical charts of runtime benchmark results, which help us figure out how a given benchmark behaves over a longer time span.

Another improvement that he worked on was embedding a profiler trace visualizer directly within the rustc-perf website. This was a challenging task, which required him to evaluate several visualizers and figure out how to include them within the source code of the benchmark suite in a non-disruptive way. In the end, he managed to integrate Perfetto within the suite website, and also performed various optimizations to improve the performance of loading compilation profiles.

Last, but not least, Eitaro also created a completely new user interface for the benchmark suite, which runs entirely in the terminal. Using this interface, Rust compiler contributors can examine the performance of the compiler without having to start the rustc-perf website, which can be challenging to deploy locally.

Apart from the mentioned contributions, Eitaro also made a lot of other smaller improvements to various parts of the benchmark suite. Thank you for all your work!

Move cargo shell completions to Rust

Cargo's completion scripts have been maintained by hand and frequently broke when changed. The goal for this effort was to have the completions automatically generated from the definition of Cargo's command-line interface, with extension points for dynamically generated results.

shanmu took the prototype for dynamic completions in clap (the command-line parser used by Cargo), got it working and tested for common shells, and extended the parser to cover more cases. They then added extension points for CLIs to provide custom completion results that can be generated on the fly.

In the next phase, shanmu added this to nightly Cargo and added different custom completers to match what the handwritten completions do. As an example, with this feature enabled, when you type cargo test --test= and hit the Tab key, your shell will autocomplete all the test targets in your current Rust crate! If you are interested, see the instructions for trying this out. The link also lists where you can provide feedback.
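
For readers curious how "generated on the fly" can work at all, here is a minimal sketch of the general idea (this is not Cargo's or clap's actual implementation; the "complete" mode and candidate names are made up for illustration): the shell hands the word being typed back to the tool, and the tool prints matching candidates, one per line.

use std::env;

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();

    // Hypothetical protocol: `my-tool complete <word being completed>`.
    if args.first().map(String::as_str) == Some("complete") {
        let current = args.get(1).map(String::as_str).unwrap_or("");
        // A real implementation would inspect the project here (e.g. list
        // the test targets of the current crate); we filter a fixed set.
        for candidate in ["integration_http", "integration_db", "smoke"] {
            if candidate.starts_with(current) {
                println!("{candidate}");
            }
        }
    }
}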

You can also check out the remaining tracking issues to find out what is left before this can be stabilized.

Rewriting esoteric, error-prone makefile tests using robust Rust features

The Rust compiler has several test suites that make sure that it is working correctly under various conditions. One of these suites is the run-make test suite, whose tests were previously written using Makefiles. However, this setup posed several problems. It was not possible to run the suite on the Tier 1 Windows MSVC target (x86_64-pc-windows-msvc) and getting it running on Windows at all was quite challenging. Furthermore, the syntax of Makefiles is quite esoteric, which frequently caused mistakes to go unnoticed even when reviewed by multiple people.

Julien helped to convert the Makefile-based run-make tests into plain Rust-based tests, supported by a test support library called run_make_support. However, it was not a trivial "rewrite this in Rust" kind of deal. In this project, Julien:

  • Significantly improved the test documentation;
  • Fixed multiple bugs that were present in the Makefile versions and had gone unnoticed for years -- some tests were never testing anything or silently ignored failures, so even if the subject being tested regressed, these tests would not have caught that;
  • Added to and improved the test support library API and implementation; and
  • Improved code organization within the tests to make them easier to understand and maintain.
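
To give a flavor of the new style, a Rust-based run-make test is a small rmake.rs program driven by the support library, roughly like the following sketch (the helper names follow the run_make_support style, but the exact API may differ):

use run_make_support::rustc;

fn main() {
    // Compile a fixture and assert that the compiler rejects it with the
    // diagnostic we expect, in ordinary Rust instead of Makefile syntax.
    rustc()
        .input("broken.rs")
        .run_fail()
        .assert_stderr_contains("mismatched types");
}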

Just to give you an idea of the scope of his work, he has ported almost 250 Makefile tests over the span of his GSoC project! If you like puns, check out the branch names of Julien's PRs, as they are simply fantestic.

As a result, Julien has significantly improved the robustness of the run-make test suite, and improved the ergonomics of modifying existing run-make tests and authoring new run-make tests. Multiple contributors have expressed that they were more willing to work with the Rust-based run-make tests over the previous Makefile versions.

The vast majority of run-make tests now use the Rust-based test infrastructure, with a few holdouts remaining due to various quirks. After these are resolved, we can finally rip out the legacy Makefile test infrastructure.

Rewriting the Rewrite trait

rustfmt is a Rust code formatter that is widely used across the Rust ecosystem thanks to its direct integration within Cargo. Usually, you just run cargo fmt and you can immediately enjoy a properly formatted Rust project. However, there are edge cases in which rustfmt can fail to format your code. That is not such an issue on its own, but it becomes more problematic when it fails silently, without giving the user any context about what went wrong. This is what was happening in rustfmt, as many functions simply returned an Option instead of a Result, which made it difficult to add proper error reporting.
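
Conceptually, the change moves the formatting entry points from the first shape below to the second (a minimal sketch with illustrative names, not rustfmt's exact internal API):

// Before: a silent failure that carries no context.
trait OldRewrite {
    fn rewrite(&self, max_width: usize) -> Option<String>;
}

// After: failures describe what went wrong, so they can be surfaced to the
// user or used to decide whether formatting should be retried.
#[derive(Debug)]
enum RewriteError {
    ExceedsMaxWidth { configured_width: usize },
    MacroFailure { macro_name: String },
    Unknown,
}

type RewriteResult = Result<String, RewriteError>;

trait NewRewrite {
    fn rewrite_result(&self, max_width: usize) -> RewriteResult;
}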

The goal of SeoYoung's project was to perform a large internal refactoring of rustfmt that would allow tracking context about what went wrong during reformatting. In turn, this would enable turning silent failures into proper error messages that could help users examine and debug what went wrong, and could even allow rustfmt to retry formatting in more situations.

At first, this might sound like an easy task, but performing such large-scale refactoring within a complex project such as rustfmt is not so simple. SeoYoung needed to come up with an approach to incrementally apply these refactors, so that they would be easy to review and wouldn't impact the entire code base at once. She introduced a new trait that enhanced the original Rewrite trait, and modified existing implementations to align with it. She also had to deal with various edge cases that we hadn't anticipated before the project started. SeoYoung was meticulous and systematic with her approach, and made sure that no formatting functions or methods were missed.

Ultimately, the refactor was a success! Internally, rustfmt now keeps track of more information related to formatting failures, including errors that it could not possibly report before, such as issues with macro formatting. It also has the ability to provide information about source code spans, which helps identify parts of code that require spacing adjustments when exceeding the maximum line width. We don't yet propagate that additional failure context as user facing error messages, as that was a stretch goal that we didn't have time to complete, but SeoYoung has expressed interest in continuing to work on that as a future improvement!

Apart from working on error context propagation, SeoYoung also made various other improvements that enhanced the overall quality of the codebase, and she was also helping other contributors understand rustfmt. Thank you for making the foundations of formatting better for everyone!

Rust to .NET compiler - add support for compiling & running cargo tests

As was already mentioned above, the Rust compiler can be used with various codegen backends. One of these is the .NET backend, which compiles Rust code to the Common Intermediate Language (CIL), which can then be executed by the .NET Common Language Runtime (CLR). This backend allows interoperability of Rust and .NET (e.g. C#) code, in an effort to bring these two ecosystems closer together.

At the start of this year, the .NET backend was already able to compile complex Rust programs, but it was still lacking certain crucial features. The goal of this GSoC project, implemented by Michał, who is in fact the sole author of the backend, was to extend the functionality of this backend in various areas. As a target goal, he set out to extend the backend so that it could be used to run tests using the cargo test command. Even though it might sound trivial, properly compiling and running the Rust test harness is non-trivial, as it makes use of complex features such as dynamic trait objects, atomics, panics, unwinding or multithreading. These features were especially tricky to implement in this codegen backend, because the LLVM intermediate representation (IR) and CIL have fundamental differences, and not all LLVM intrinsics have .NET equivalents.
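
To make that concrete, even a toy test like the following (our illustration, not taken from Michał's work) already forces the backend to handle trait objects, atomics, threads, panicking and unwinding:

use std::sync::atomic::{AtomicUsize, Ordering};
use std::thread;

trait Greet {
    fn greet(&self) -> String;
}

struct English;

impl Greet for English {
    fn greet(&self) -> String {
        "hello".to_string()
    }
}

static COUNTER: AtomicUsize = AtomicUsize::new(0);

#[test]
fn dynamic_dispatch_and_threads() {
    // Dynamic trait object + heap allocation.
    let greeter: Box<dyn Greet> = Box::new(English);
    assert_eq!(greeter.greet(), "hello");

    // Atomics and multithreading, both also used by the test harness itself.
    let handle = thread::spawn(|| COUNTER.fetch_add(1, Ordering::SeqCst));
    handle.join().unwrap();
    assert_eq!(COUNTER.load(Ordering::SeqCst), 1);
}

#[test]
#[should_panic]
fn panics_are_caught_by_the_harness() {
    // `should_panic` only works if panicking and unwinding are supported.
    panic!("boom");
}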

However, this did not stop Michał. He has been working on this project tirelessly, implementing new features, fixing various issues and learning more about the compiler's internals every day. He has also been documenting his journey with (almost) daily updates on Zulip, which were fascinating to read. Once he reached his original goal, he moved the goalposts up to another level and attempted to run the compiler's own test suite using the .NET backend. This helped him uncover additional edge cases and also led to a refactoring of the whole backend that resulted in significant performance improvements.

By the end of the GSoC project, the .NET backend was able to properly compile and run almost 90% of the standard library core and std test suite. That is an incredibly impressive number, since the suite contains thousands of tests, some of which are quite arcane. Michał's pace has not slowed down even after the project ended, and he is still continuously improving the backend. Oh, and did we already mention that his backend also has experimental support for emitting C code, effectively acting as a C codegen backend?! Michał has been very busy over the summer.

We thank Michał for all his work on the .NET backend, as it was truly inspirational, and led to fruitful discussions that were relevant also to other codegen backends. Michał's next goal is to get his backend upstreamed and create an official .NET compilation target, which could open up the doors to Rust becoming a first-class citizen in the .NET ecosystem.

Sandboxed and deterministic proc macro using WebAssembly

Rust procedural (proc) macros are currently run as native code that gets compiled to a shared object which is loaded directly into the process of the Rust compiler. Because of this design, these macros can do whatever they want, for example arbitrarily access the filesystem or communicate through a network. This has not only obvious security implications, but it also affects performance, as this design makes it difficult to cache proc macro invocations. Over the years, there have been various discussions about making proc macros more hermetic, for example by compiling them to WebAssembly modules, which can be easily executed in a sandbox. This would also open the possibility of distributing precompiled versions of proc macros via crates.io, to speed up fresh builds of crates that depend on proc macros.
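
To illustrate why sandboxing matters, here is a contrived proc macro that does I/O at build time; nothing in today's model prevents it, whereas a WebAssembly sandbox would (the macro name and file path are made up for this example):

use proc_macro::TokenStream;

// Runs as native code inside the compiler process, with full access to the
// filesystem, network, environment variables, and so on.
#[proc_macro]
pub fn embed_hostname(_input: TokenStream) -> TokenStream {
    let hostname = std::fs::read_to_string("/etc/hostname")
        .unwrap_or_else(|_| "unknown".to_string());
    // Expand to a string literal containing whatever we just read.
    format!("{:?}", hostname).parse().unwrap()
}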

The goal of this project was to examine what it would take to implement WebAssembly module support for proc macros and to create a prototype of this idea. We knew this would be a very ambitious project, especially since Apurva did not have prior experience with contributing to the Rust compiler, and because proc macro internals are very complex. Nevertheless, some progress was made. With the help of his mentor, David, Apurva was able to create a prototype that can load WebAssembly code into the compiler via a shared object. Some work was also done to make use of the existing TokenStream serialization and deserialization code in the compiler's proc_macro crate.

Even though this project did not fulfill its original goals and more work will be needed in the future to get a functional prototype of WebAssembly proc macros, we are thankful for Apurva's contributions. The WebAssembly loading prototype is a good start, and Apurva's exploration of proc macro internals should serve as a useful reference for anyone working on this feature in the future. Going forward, we will try to describe more incremental steps for our GSoC projects, as this project was perhaps too ambitious from the start.

Tokio async support in Miri

miri is an interpreter that can find possible instances of undefined behavior in Rust code. It is being used across the Rust ecosystem, but previously it was not possible to run it on any non-trivial programs (those that ever await on anything) that use tokio, due to a fundamental missing feature: support for the epoll syscall on Linux (and similar APIs on other major platforms).

Tiffany implemented the basic epoll operations needed to cover the majority of the tokio test suite, by crafting pure libc code examples that exercised those epoll operations, and then implementing their emulation in miri itself. At times, this required refactoring core miri components like file descriptor handling, as they were originally not created with syscalls like epoll in mind.
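
For illustration, such test programs look roughly like the following sketch, which registers an eventfd with epoll and waits for it to become readable (Linux-only, using the libc crate; the real test cases differ, and which shims Miri supports has kept evolving):

fn main() {
    unsafe {
        // An eventfd gives us a file descriptor we can make readable on demand.
        let efd = libc::eventfd(0, 0);
        assert!(efd >= 0);

        // Create an epoll instance and register interest in readability.
        let epfd = libc::epoll_create1(0);
        assert!(epfd >= 0);
        let mut ev = libc::epoll_event {
            events: libc::EPOLLIN as u32,
            u64: efd as u64,
        };
        assert_eq!(libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, efd, &mut ev), 0);

        // Make the eventfd readable by writing a counter value to it...
        let one: u64 = 1;
        let written = libc::write(efd, &one as *const u64 as *const libc::c_void, 8);
        assert_eq!(written, 8);

        // ...and observe the readiness notification via epoll_wait.
        let mut out: [libc::epoll_event; 1] = std::mem::zeroed();
        let n = libc::epoll_wait(epfd, out.as_mut_ptr(), 1, -1);
        assert_eq!(n, 1);
        let token = out[0].u64; // copy the field out of the (packed) struct
        assert_eq!(token, efd as u64);

        libc::close(efd);
        libc::close(epfd);
    }
}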

To everyone's surprise (though probably not to tokio-internals experts), once these core epoll operations were finished, operations like async file reading and writing started working in miri out of the box! Due to limitations of non-blocking file operations offered by operating systems, tokio wraps these file operations in dedicated threads, which was already supported by miri.

Once Tiffany finished the project, including stretch goals like implementing async file operations, she proceeded to contact tokio maintainers and worked with them to run miri on most tokio tests in CI. And we have good news: so far no soundness problems have been discovered! Tiffany has become a regular contributor to miri, focusing on continuing to expand the set of supported file descriptor operations. We thank her for all her contributions!

Conclusion

We are grateful that we could have been a part of the Google Summer of Code 2024 program, and we would also like to extend our gratitude to all our contributors! We are looking forward to joining the GSoC program again next year.

The Rust Programming Language Blog
gccrs: An alternative compiler for Rust

This is a guest post from the gccrs project, at the invitation of the Rust Project, to clarify the relationship with the Rust Project and the opportunities for collaboration.

gccrs is a work-in-progress alternative compiler for Rust being developed as part of the GCC project. GCC is a collection of compilers for various programming languages that all share a common compilation framework. You may have heard about gccgo, gfortran, or g++, which are all binaries within that project, the GNU Compiler Collection. The aim of gccrs is to add support for the Rust programming language to that collection, with the goal of having the exact same behavior as rustc.

First and foremost, gccrs was started as a project because it is fun. Compilers are incredibly rewarding pieces of software, and are great fun to put together. The project was started back in 2014, before Rust 1.0 was released, but was quickly put aside due to the shifting nature of the language back then. Around 2019, work on the compiler started again, led by Philip Herron and funded by Open Source Security and Embecosm. Since then, we have kept steadily progressing towards support for the Rust language as a whole, and our team has kept growing with around a dozen contributors working regularly on the project. We have participated in the Google Summer of Code program for the past four years, and multiple students have joined the effort.

The main goal of gccrs is to provide an alternative option for compiling Rust. GCC is an old project, as it was first released in 1987. Over the years, it has accumulated numerous contributions and support for multiple targets, including some not supported by LLVM, the main backend used by rustc. A practical example of that reach is the homebrew Dreamcast scene, where passionate engineers develop games for the Dreamcast console. Its processor architecture, SuperH, is supported by GCC but not by LLVM. This means that Rust cannot be used on those platforms, except through efforts like gccrs or the rustc_codegen_gcc backend, whose main differences will be explained later.

GCC also benefits from the decades of software written in unsafe languages. As such, a large number of safety features have been developed for the project, either as external plugins or within the project itself as static analyzers. These analyzers and plugins are executed on GCC's internal representations, meaning that they are language-agnostic, and can thus be used on all the programming languages supported by GCC. Likewise, many GCC plugins are used for increasing the safety of critical projects such as the Linux kernel, which has recently gained support for the Rust programming language. This makes gccrs a useful tool for analyzing unsafe Rust code, and more generally Rust code which has to interact with existing C code. We also want gccrs to be a useful tool for rustc itself by helping flesh out the Rust specification effort with a unique viewpoint - that of a tool trying to replicate another's functionality, oftentimes through careful experimentation and source reading where the existing documentation did not go into enough detail. We are also in the process of developing various tools around gccrs and rustc, for the sole purpose of ensuring gccrs is as correct as rustc - which could help in discovering surprising behavior, unexpected functionality, or unspoken assumptions.

We would like to point out that our goal in aiding the Rust specification effort is not to turn it into a document for certifying alternative compilers as "Rust compilers" - while we believe that the specification will be useful to gccrs, our main goal is to contribute to it, by reviewing and adding to it as much as possible.

Furthermore, the project is still "young", and still requires a huge amount of work. There are a lot of places to make your mark, and a lot of easy things to work on for contributors interested in compilers. We have strived to create a safe, fun, and interesting space for all of our team and our GSoC students. We encourage anyone interested to come chat with us on our various communication platforms, and offer mentorship for you to learn how to contribute to the project and to compilers in general.

Maybe more importantly, however, there are a number of things that gccrs is NOT for. The project has multiple explicit non-goals, which we value just as highly as our goals.

The most crucial of these non-goals is for gccrs not to become a gateway for an alternative or extended Rust-like programming language. We do not wish to create a GNU-specific version of Rust, with different semantics or slightly different functionality. gccrs is not a way to introduce new Rust features, and will not be used to circumvent the RFC process - which we will be using, should we want to see something introduced to Rust. Rust is not C, and we do not intend to introduce subtle differences in the standard by making some features available only to gccrs users. We know about the pain caused by compiler-specific standards, and have learned from the history of older programming languages.

We do not want gccrs to be a competitor to the rustc_codegen_gcc backend. While both projects will effectively achieve the same goal, which is to compile Rust code using the GCC compiler framework, there are subtle differences in what each of these projects will unlock for the language. For example, rustc_codegen_gcc makes it easy to benefit from all of rustc's amazing diagnostics and helpful error messages, and makes Rust easily usable on GCC-specific platforms. On the other hand, it requires rustc to be available in the first place, whereas gccrs is part of a separate project entirely. This is important for some users and core Linux developers for example, who believe that having the ability to compile the entire kernel (C and Rust parts) using a single compiler is essential. gccrs can also offer more plugin entrypoints by virtue of it being its own separate GCC frontend. It also allows Rust to be used on GCC-specific platforms with an older GCC where libgccjit is not available. Nonetheless, we are very good friends with the folks working on rustc_codegen_gcc, and have helped each other multiple times, especially in dealing with the patch-based contribution process that GCC uses.

All of this ties into a much more global goal, which we could summarize as the following: We do not want to split the Rust ecosystem. We want gccrs to help the language reach even more people, and even more platforms.

To ensure that, we have taken multiple measures to make sure the values of the Rust project are respected and exposed properly. One of the features we feel most strongly about is the addition of a very annoying command line flag to the compiler, -frust-incomplete-and-experimental-compiler-do-not-use. Without it, you are not able to compile any code with gccrs, and the compiler will output the following error message:

crab1: fatal error: gccrs is not yet able to compile Rust code properly. Most of the errors produced will be the fault of gccrs and not the crate you are trying to compile. Because of this, please report errors directly to us instead of opening issues on said crate's repository.

Our github repository: https://github.com/rust-gcc/gccrs

Our bugzilla tracker: https://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=__open__&component=rust&product=gcc

If you understand this, and understand that the binaries produced might not behave accordingly, you may attempt to use gccrs in an experimental manner by passing the following flag:

-frust-incomplete-and-experimental-compiler-do-not-use

or by defining the following environment variable (any value will do)

GCCRS_INCOMPLETE_AND_EXPERIMENTAL_COMPILER_DO_NOT_USE

For cargo-gccrs, this means passing

GCCRS_EXTRA_ARGS="-frust-incomplete-and-experimental-compiler-do-not-use"

as an environment variable.

Until the compiler can compile correct Rust and, most importantly, reject incorrect Rust, we will be keeping this command line option in the compiler. The hope is that it will prevent users from potentially annoying existing Rust crate maintainers with issues about code not compiling, when it is most likely our fault for not having implemented part of the language yet. Our goal of creating an alternative compiler for the Rust language must not have a negative effect on any member of the Rust community. Of course, this command line flag is not to the taste of everyone, and there has been significant pushback to its presence... but we believe it to be a good representation of our main values.

In a similar vein, gccrs separates itself from the rest of the GCC project by not using a mailing list as its main mode of communication. The compiler we are building will be used by the Rust community, and we believe we should make it easy for that community to get in touch with us and report the problems they encounter. Since Rustaceans are used to GitHub, this is also the development platform we have been using for the past five years. Similarly, we use a Zulip instance as our main communication platform, and encourage anyone wanting to chat with us to join it. Note that we still have a mailing list, as well as an IRC channel (gcc-rust@gcc.gnu.org and #gccrust on oftc.net), where all are welcome.

To further ensure that gccrs does not create friction in the ecosystem, we want to be extremely careful about the finer details of the compiler, which to us means reusing rustc components where possible, sharing effort on those components, and communicating extensively with Rust experts in the community. Two Rust components are already in use by gccrs: a slightly older version of polonius, the next-generation Rust borrow-checker, and the rustc_parse_format crate of the compiler. There are multiple reasons for reusing these crates, with the main one being correctness. Borrow checking is a complex topic and a pillar of the Rust programming language. Having subtle differences between rustc and gccrs regarding the borrow rules would be annoying and unproductive to users - but by making an effort to start integrating polonius into our compilation pipeline, we help ensure that the results we produce will be equivalent to rustc. You can read more here about the various components we use, and those we plan to reuse. We would also like to contribute to the polonius project itself and help make it better if possible. This cross-pollination of components will obviously benefit us, but we believe it will also be useful for the Rust project and ecosystem as a whole, and will help strengthen these implementations.

Reusing rustc components could also be extended to other areas of the compiler: Various components of the type system, such as the trait solver, an essential and complex piece of software, could be integrated into gccrs. Simpler things such as parsing, as we have done for the format string parser and inline assembly parser, also make sense to us. They will help ensure that the internal representation we deal with will correspond to the one expected by the Rust standard library.

On a final note, we believe that one of the most important steps we could take to prevent breakage within the Rust ecosystem is to further improve our relationship with the Rust community. The amount of help we have received from Rust folks is great, and we think gccrs can be an interesting project for a wide range of users. We would love to hear about your hopes for the project and your ideas for reducing ecosystem breakage or lowering friction with the crates you have published. We had a great time chatting about gccrs at RustConf 2024, and everyone's interest in the project was heartwarming. Please get in touch with us if you have any ideas on how we could further contribute to Rust.

The Rust Programming Language Blog
Next Steps on the Rust Trademark Policy

As many of you know, the Rust language trademark policy has been the subject of an extended revision process dating back to 2022. In 2023, the Rust Foundation released an updated draft of the policy for input following an initial survey about community trademark priorities from the previous year along with review by other key stakeholders, such as the Project Directors. Many members of our community were concerned about this initial draft and shared their thoughts through the feedback form. Since then, the Rust Foundation has continued to engage with the Project Directors, the Leadership Council, and the wider Rust project (primarily via all@) for guidance on how to best incorporate as much feedback as possible.

After extensive discussion, we are happy to circulate an updated draft with the wider community today for final feedback. An effective trademark policy for an open source community should reflect our collective priorities while remaining legally sound. While the revised trademark policy cannot perfectly address every individual perspective on this important topic, its goal is to establish a framework to help guide appropriate use of the Rust trademark and reflect as many common values and interests as possible. In short, this policy is designed to steer our community toward a shared objective: to maintain and protect the integrity of the Rust programming language.

The Leadership Council is confident that this updated version of the policy has addressed the prevailing concerns about the initial draft and honors the variety of voices that have contributed to its development. Thank you to those who took the time to submit well-considered feedback for the initial draft last year or who otherwise participated in this long-running process to update our policy to continue to satisfy our goals.

Please review the updated Rust trademark policy here, and share any critical concerns you might have via this form by November 20, 2024. The Foundation has also published a blog post which goes into more detail on the changes made so far. The Leadership Council and Project Directors look forward to reviewing concerns raised and approving any final revisions prior to an official update of the policy later this year.

Niko Matsakis
MinPin: yet another pin proposal

This post floats a variation of boats’ UnpinCell proposal that I’m calling MinPin.1 MinPin’s goal is to integrate Pin into the language in a “minimally disruptive” way2 – and in particular a way that is fully backwards compatible. Unlike Overwrite, MinPin does not attempt to make Pin and &mut “play nicely” together. It does however leave the door open to add Overwrite in the future, and I think helps to clarify the positives and negatives that Overwrite would bring.

TL;DR: Key design decisions

Here is a brief summary of MinPin’s rules

  • The pinned keyword can be used to get pinned variations of things:
    • In types, pinned P is equivalent to Pin<P>, so pinned &mut T and pinned Box<T> are equivalent to Pin<&mut T> and Pin<Box<T>> respectively.
    • In function signatures, pinned &mut self can be used instead of self: Pin<&mut Self>.
    • In expressions, pinned &mut $place is used to get a pinned &mut that refers to the value in $place.
  • The Drop trait is modified to have fn drop(pinned &mut self) instead of fn drop(&mut self).
    • However, impls of Drop are still permitted (even encouraged!) to use fn drop(&mut self), but it means that your type will not be able to use (safe) pin-projection. For many types that is not an issue; for futures or other “address sensitive” types, you should use fn drop(pinned &mut self).
  • The rules for field projection from a s: pinned &mut S reference are based on whether or not Unpin is implemented:
    • Projection is always allowed for fields whose type implements Unpin.
    • For fields whose types are not known to implement Unpin:
      • If the struct S is Unpin, &mut projection is allowed but not pinned &mut.
      • If the struct S is !Unpin and does not have a fn drop(&mut self) method, pinned &mut projection is allowed but not &mut.
      • If the type checker does not know whether S is Unpin or not, or if the type S has a Drop impl with fn drop(&mut self), neither form of projection is allowed for fields that are not Unpin.
  • There is a type struct Unpinnable<T> { value: T } that always implements Unpin.

Design axioms

Before I go further I want to layout some of my design axioms (beliefs that motivate and justify my design).

  • Pin is part of the Rust language. Despite Pin being entirely a “library-based” abstraction at present, it is very much a part of the language semantics, and it deserves first-class support. It should be possible to create pinned references and do pin projections in safe Rust.
  • Pin is its own world. Pin is only relevant in specific use cases, like futures or in-place linked lists.
  • Pin should have zero-conceptual-cost. Unless you are writing a Pin-using abstraction, you shouldn’t have to know or think about pin at all.
  • Explicit is possible. Automatic operations are nice but it should always be possible to write operations explicitly when needed.
  • Backwards compatible. Existing code should continue to compile and work.

Frequently asked questions

For the rest of the post I’m just going to go into FAQ mode.

I see the rules, but can you summarize how MinPin would feel to use?

Yes. I think the rule of thumb would be this. For any given type, you should decide whether your type cares about pinning or not.

Most types do not care about pinning. They just go on using &self and &mut self as normal. Everything works as today (this is the “zero-conceptual-cost” goal).

But some types do care about pinning. These are typically future implementations but they could be other special case things. In that case, you should explicitly implement !Unpin to declare yourself as pinnable. When you declare your methods, you have to make a choice

  • Is the method read-only? Then use &self, that always works.
  • Otherwise, use &mut self or pinned &mut self, depending…
    • If the method is meant to be called before pinning, use &mut self.
    • If the method is meant to be called after pinning, use pinned &mut self.

This design works well so long as all mutating methods can be categorized into before-or-after pinning. If you have methods that need to be used in both settings, you have to start using workarounds – in the limit, you make two copies.

How does MinPin compare to UnpinCell?

Those of you who have been following the various posts in this area will recognize many elements from boats’ recent UnpinCell. While the proposals share many elements, there is also one big difference between them that significantly changes how they would feel when used. Which is overall better is not yet clear to me.

Let’s start with what they have in common. Both propose syntax for pinned references/borrows (albeit slightly different syntax) and both include a type for “opting out” from pinning (the eponymous UnpinCell<T> in UnpinCell, Unpinnable<T> in MinPin). Both also have a similar “special case” around Drop in which writing a drop impl with fn drop(&mut self) disables safe pin-projection.

Where they differ is how they manage generic structs like WrapFuture<F>, where it is not known whether or not they are Unpin.

struct WrapFuture<F: Future> {
    future: F,
}

Given a pinned &mut WrapFuture<F> reference (self in the method below), the question is whether we can project the field future:

impl<F: Future> WrapFuture<F> {
    fn method(pinned &mut self) {
        let f = pinned &mut self.future;
        //      -----------------------
        //      Is this allowed?
    }
}

There is a specific danger case that both sets of rules are trying to avoid. Imagine that WrapFuture<F> implements Unpin but F does not – e.g., imagine that you have an impl<F: Future> Unpin for WrapFuture<F>. In that case, the referent of the pinned &mut WrapFuture<F> reference is not actually pinned, because the type is unpinnable. If we permitted the creation of a pinned &mut F, where F: !Unpin, we would be under the (mistaken) impression that F is pinned. Bad.

UnpinCell handles this case by saying that projecting from a pinned &mut is only allowed so long as there is no explicit impl of Unpin for WrapFuture (“if [WrapFuture<F>] implements Unpin, it does so using the auto-trait mechanism, not a manually written impl”). Basically: if the user doesn’t say whether the type is Unpin or not, then you can do pin-projection. The idea is that if the self type is Unpin, that will only be because all fields are unpin (in which case it is fine to make pinned &mut references to them); if the self type is not Unpin, then the field future is pinned, so it is safe.

In contrast, in MinPin, this case is only allowed if there is an explicit !Unpin impl for WrapFuture:

impl<F: Future> !Unpin for WrapFuture<F> {
    // This impl is required in MinPin, but not in UnpinCell
}

Explicit negative impls are not allowed on stable, but they were included in the original auto trait RFC. The idea is that a negative impl is an explicit, semver-binding commitment not to implement a trait. This is different from simply not including an impl at all, which allows for impls to be added later.

Why would you prefer MinPin over UnpinCell or vice versa?

I’m not totally sure which of these is better. I came to the !Unpin impl based on my axiom that pin is its own world – the idea was that it was better to push types to be explicitly unpin all the time than to have “dual-mode” types that masquerade as sometimes pinned and sometimes not.

In general I feel like it’s better to justify language rules by the presence of a declaration than the absence of one. So I don’t like the idea of saying “the absence of an Unpin impl allows for pin-projection” – after all, adding impls is supposed to be semver-compliant. Of course, that’s much less true for auto traits, but it can still be true.

In fact, Pin has had some unsoundness in the past based on unsafe reasoning that was justified by the lack of an impl. We assumed that &T could never implement DerefMut, but it turned out to be possible to add weird impls of DerefMut in very specific cases. We fixed this by adding an explicit impl<T> !DerefMut for &T impl.

On the other hand, I can imagine that many explicitly implemented futures might benefit from being able to be ambiguous about whether they are Unpin.

What does your design axiom “Pin is its own world” mean?

The way I see it is that, in Rust today (and in MinPin, pinned places, UnpinCell, etc), if you have a T: !Unpin type (that is, a type that is pinnable), it lives a double life. Initially, it is unpinned, and you can move it, &-ref it, or &mut-ref it, just like any other Rust value. But once a !Unpin value becomes pinned to a place, it enters a different state, in which you can no longer move it or use &mut, you have to use pinned &mut:

(Diagram) Unpinned: can access 'v' with '&' and '&mut'
    -- pin 'v' in place (only if T is '!Unpin') -->
Pinned: can access 'v' with '&' and 'pinned &mut'

One-way transitions like this limit the amount of interop and composability you get in the language. For example, if my type has &mut methods, I can’t use them once the type is pinned, and I have to use some workaround, such as duplicating the method with pinned &mut.3 In this specific case, however, I don’t think this transition is so painful, and that’s because of the specifics of the domain: futures go through a pretty hard state change where they start in “preparation mode” and then eventually start executing. The set of methods you need at these two phases are quite distinct. So this is what I meant by “pin is its own world”: pin is not very interopable with Rust, but this is not as bad as it sounds, because you don’t often need that kind of interoperability.

How would Overwrite affect pin being in its own world?

With Overwrite, when you pin a value in place, you just gain the ability to use pinned &mut, you don’t give up the ability to use &mut:

(Diagram) Unpinned: can access 'v' with '&' and '&mut'
    -- pin 'v' in place (only if T is '!Unpin') -->
Pinned: can additionally access 'v' with 'pinned &mut'

Making the pinned state into a “superset” of the unpinned capabilities means that pinned &mut can be coerced into an &mut (it could even be a “true subtype”, in Rust terms). This in turn means that a pinned &mut Self method can invoke &mut self methods, which helps to make pin feel like a smoothly integrated part of the language.3

So does the axiom mean you think Overwrite is a bad idea?

Not exactly, but I do think that if Overwrite is justified, it is not on the basis of Pin, it is on the basis of immutable fields. If you just look at Pin, then Overwrite does make Pin work better, but it does that by limiting the capabilities of &mut to those that are compatible with Pin. There is no free lunch! As Eric Holk memorably put it to me in privmsg:

It seems like there’s a fixed amount of inherent complexity to pinning, but it’s up to us how we distribute it. Pin keeps it concentrated in a small area which makes it seem absolutely terrible, because you have to face the whole horror at once.4

I think Pin as designed is a “zero-conceptual-cost” abstraction, meaning that if you are not trying to use it, you don’t really have to care about it. That’s worth maintaining, if we can. If we are going to limit what &mut can do, the reason to do it is primarily to get other benefits, not to benefit pin code specifically.

To be clear, this is largely a function of where we are in Rust’s evolution. If we were still in the early days of Rust, I would say Overwrite is the correct call. It reminds me very much of the IMHTWAMA, the core “mutability xor sharing” rule at the heart of Rust’s borrow checker. When we decided to adopt the current borrow checker rules, the code was about 85-95% in conformance. That is, although there was plenty of aliased mutation, it was clear that “mutability xor sharing” was capturing a rule that we already mostly followed, but not completely. Because combining aliased state with memory safety is more complicated, that meant that a small minority of code was pushing complexity onto the entire language. Confining shared mutation to types like Cell and Mutex made most code simpler at the cost of more complexity around shared state in particular.

There’s a similar dynamic around replace and swap. Replace and swap are only used in a few isolated places and in a few particular ways, but all code has to be more conservative to account for that possibility. If we could go back, I think limiting Replace to some kind of Replaceable<T> type would be a good move, because it would mean that the more common case can enjoy the benefits: fewer borrow check errors and more precise programs due to immutable fields and the ability to pass an &mut SomeType and be sure that your callee is not swapping the value under your feet (useful for the “scope pattern” and also enables Pin<&mut> to be a subtype of &mut).
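
Purely to illustrate the shape of that idea, a Replaceable<T> might look something like the following (a hypothetical sketch; no such type exists today, and in today’s Rust the wrapper enforces nothing extra on its own, the point is where the capability would live):

pub struct Replaceable<T> {
    value: T,
}

impl<T> Replaceable<T> {
    pub fn new(value: T) -> Self {
        Replaceable { value }
    }

    /// Swap in a new value and hand back the old one. Under the hypothesized
    /// rule, this is where the "replace" capability would live, instead of
    /// being available through every `&mut T`.
    pub fn replace(&mut self, new_value: T) -> T {
        std::mem::replace(&mut self.value, new_value)
    }

    pub fn get(&self) -> &T {
        &self.value
    }

    pub fn get_mut(&mut self) -> &mut T {
        &mut self.value
    }
}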

Why did you adopt pinned &mut and not &pin mut as the syntax?

The main reason was that I wanted a syntax that scaled to Pin<Box<T>>. But also the pin! macro exists, making the pin keyword somewhat awkward (though not impossible).

One thing I was wondering about is the phrase “pinned reference” or “pinned pointer”. On the one hand, it is really a reference to a pinned value (which suggests &pin mut). On the other hand, I think this kind of ambiguity is pretty common. The main thing I have found is that my brain has trouble with Pin<P> because it wants to think of Pin as a “smart pointer” versus a modifier on another smart pointer. pinned Box<T> feels much better this way.

Can you show me an example? What about the MaybeDone example?

Yeah, totally. So boats pinned places post introduced two futures, MaybeDone and Join. Here is how MaybeDone would look in MinPin, along with some inline comments:

enum MaybeDone<F: Future> {
    Polling(F),
    Done(Unpinnable<Option<F::Output>>),
    //   ---------- see below
}

impl<F: Future> !Unpin for MaybeDone<F> { }
//              -----------------------
//
// `MaybeDone` is address-sensitive, so we
// opt out from `Unpin` explicitly. I assumed
// opting out from `Unpin` was the *default* in
// my other posts.

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(pinned &mut self, cx: &mut Context<'_>) {
        if let MaybeDone::Polling(fut) = self {
            //                    ---
            // This is in fact pin-projection, although
            // it's happening implicitly as part of pattern
            // matching. `fut` here has type `pinned &mut F`.
            // We are permitted to do this pin-projection
            // to `F` because we know that `Self: !Unpin`
            // (because we declared that to be true).
            
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Unpinnable { value: Some(res) });
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(pinned &mut self) -> Option<F::Output> {
        //         ----------------
        //     This method is called after pinning, so it
        //     needs a `pinned &mut` reference...  

        if let MaybeDone::Done(res) = self {
            res.value.take()
            //  ------------
            //
            //  ...but take is an `&mut self` method
            //  and `F::Output: Unpin` is not known to be true.
            //  
            //  Therefore we have made the type in `Done`
            //  be `Unpinnable`, so that we can do this
            //  swap.
        } else {
            None
        }
    }
}

Can you translate the Join example?

Yep! Here is Join:

struct Join<F1: Future, F2: Future> {
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

impl<F1: Future, F2: Future> !Unpin for Join<F1, F2> { }
//                           -----------------------
//
// Join is a custom future, so implement `!Unpin`
// to gain access to pin-projection.

impl<F1: Future, F2: Future> Future for Join<F1, F2> {
    type Output = (F1::Output, F2::Output);

    fn poll(pinned &mut self, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // The calls to `maybe_poll` and `take_output` below
        // are doing pin-projection from `pinned &mut self`
        // to a `pinned &mut MaybeDone<F1>` (or `F2`) type.
        // This is allowed because we opted out from `Unpin`
        // above.

        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);
        
        if self.fut1.is_done() && self.fut2.is_done() {
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}

What’s the story with Drop and why does it matter?

Drop’s current signature takes &mut self. But recall that once a !Unpin type is pinned, it is only safe to use pinned &mut. This is a combustible combination. It means that, for example, I can write a Drop that uses mem::replace or swap to move values out from my fields, even though they have been pinned.
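
For instance, something along these lines compiles today (the type and its bounds are made up for this example):

use std::future::Future;

struct Sneaky<F: Future + Default> {
    fut: F,
}

impl<F: Future + Default> Drop for Sneaky<F> {
    fn drop(&mut self) {
        // Even if `Sneaky<F>` was pinned, `&mut self` lets us move the field
        // out, invalidating any address-sensitive state inside `F`.
        let moved = std::mem::replace(&mut self.fut, F::default());
        drop(moved);
    }
}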

For types that are always Unpin, this is no problem, because &mut self and pinned &mut self are equivalent. For types that are always !Unpin, I’m not too worried, because Drop as is is a poor fit for them, and pinned &mut self will be better.

The tricky bit is types that are conditionally Unpin. Consider something like this:

struct LogWrapper<T> {
    value: T,
}

impl<T> Drop for LogWrapper<T> {
    fn drop(&mut self) {
        ...
    }
}

At least today, whether or not LogWrapper is Unpin depends on whether T: Unpin, so we can’t know it for sure.

The solution that boats and I both landed on effectively creates three categories of types:5

  • those that implement Unpin, which are unpinnable;
  • those that do not implement Unpin but which have fn drop(&mut self), which are unsafely pinnable;
  • those that do not implement Unpin and do not have fn drop(&mut self), which are safely pinnable.

The idea is that using fn drop(&mut self) puts you in this purgatory category of being “unsafely pinnable” (it might be more accurate to say being “maybe unsafely pinnable”, since often at compilation time with generics we won’t know if there is an Unpin impl or not). You don’t get access to safe pin projection or other goodies, but you can do projection with unsafe code (e.g., the way the pin-project-lite crate does it today).

It feels weird to have Drop let you use &mut self when other traits don’t.

Yes, it does, but in fact any method whose trait uses pinned &mut self can be implemented safely with &mut self so long as Self: Unpin. So we could just allow that in general. This would be cool because many hand-written futures are in fact Unpin, and so they could implement the poll method with &mut self.

Wait, so if Unpin types can use &mut self, why do we need special rules for Drop?

Well, it’s true that an Unpin type can use &mut self in place of pinned &mut self, but in fact we don’t always know when types are Unpin. Moreover, per the zero-conceptual-cost axiom, we don’t want people to have to know anything about Pin to use Drop. The obvious approaches I could think of all either violated that axiom or just… well… seemed weird:

  • Permit fn drop(&mut self) but only if Self: Unpin seems like it would work, since most types are Unpin. But in fact types, by default, are only Unpin if their fields are Unpin, and so generic types are not known to be Unpin. This means that if you write a Drop impl for a generic type and you use fn drop(&mut self), you will get an error that can only be fixed by implementing Unpin unconditionally. Because “pin is its own world”, I believe adding the impl is fine, but it violates “zero-conceptual-cost” because it means that you are forced to understand what Unpin even means in the first place.
  • To address that, I considered treating fn drop(&mut self) as implicitly declaring Self: Unpin. This doesn’t violate our axioms but just seems weird and kind of surprising. It’s also backwards incompatible with pin-project-lite.

These considerations led me to conclude that the current design kind of puts us in a place where we want three categories. I think in retrospect it’d be better if Unpin were implemented by default but not as an auto trait (i.e., all types were unconditionally Unpin unless they declare otherwise), but oh well.

What is the forwards compatibility story for Overwrite?

I mentioned early on that MinPin could be seen as a first step that can later be extended with Overwrite if we choose. How would that work?

Basically, if we did the s/Unpin/Overwrite/ change, then we would

  • rename Unpin to Overwrite (literally rename, they would be the same trait);
  • prevent overwriting the referent of an &mut T unless T: Overwrite (or replacing, swapping, etc).

These changes mean that &mut T is pin-preserving. If T: !Overwrite, then T may be pinned, but then &mut T won’t allow it to be overwritten, replaced, or swapped, and so pinning guarantees are preserved (and then some, since technically overwrites are ok, just not replacing or swapping). As a result, we can simplify the MinPin rules for pin-projection to the following:

Given a reference s: pinned &mut S, the rules for projection of the field f are as follows:

  • &mut projection is allowed via &mut s.f.
  • pinned &mut projection is allowed via pinned &mut s.f if S: !Unpin

What would it feel like if we adopted Overwrite?

We actually got a bit of a preview when we talked about MaybeDone. Remember how we had to introduce Unpinnable around the final value so that we could swap it out? If we adopted Overwrite, I think the TL;DR of how code would be different is that most any code that today uses std::mem::replace or std::mem::swap would probably wind up using an explicit Unpinnable-like wrapper. I’ll cover this later.

This goes a bit to show what I meant about there being a certain amount of inherent complexity that we can choose to distribute: in MinPin, this pattern of wrapping “swappable” data is isolated to pinned &mut self methods in !Unpin types. With Overwrite, it would be more widespread (but you would get more widespread benefits, as well).

Conclusion

My conclusion is that this is a fascinating space to think about!6 So fun.


  1. Hat tip to Tyler Mandry and Eric Holk who discussed these ideas with me in detail. ↩︎

  2. MinPin is the “minimal” proposal that I feel meets my desiderata; I think you could devise a maximally minimal proposal that is even smaller if you truly wanted. ↩︎

  3. It’s worth noting that coercions and subtyping only go so far, though. For example, &mut can be coerced to &, but we often need methods that return “the same kind of reference they took in”, which can’t be managed with coercions. That’s why you see things like last and last_mut. ↩︎ ↩︎

  4. I would say that the current complexity of pinning is, in no small part, due to accidental complexity, as demonstrated by the recent round of exploration, but Eric’s wider point stands. ↩︎

  5. Here I am talking about the category of a particular monomorphized type in a particular version of the crate. At that point, every type either implements Unpin or it doesn’t. Note that at compilation time there is more grey area, as they can be types that may or may not be pinnable, etc. ↩︎

  6. Also that I spent way too much time iterating on this post. JUST GONNA POST IT. ↩︎

Mozilla Thunderbird
Thunderbird Monthly Development Digest: October 2024

Hello again Thunderbird Community! The last few months have involved a lot of learning for me, but I have a much better appreciation (and appetite!) for the variety of challenges and opportunities ahead for our team and the broader developer community. Catch up with last month’s update, and here’s a quick summary of what’s been happening across the different teams:

Exchange Web Services support in Rust

An important member of our team left recently, and while we’ll very much miss their spirit and leadership, we all learned a lot and are in a good position to carry the project forward. We’ve managed to unstick a few pieces of the backlog and have a few sprints left to complete work on move/copy operations, protocol logging and priority two operations (flagging messages, folder rename & delete, etc). New team members have moved past the most painful stages and have patches that have landed. Kudos to the patient mentors involved in this process!

QR Code Cross-Device Account Import

Thunderbird for Android launched this week, and the desktop client (Daily, Beta & ESR 128.4.0) now provides a simple and secure account transfer mechanism, so that account settings don’t have to be re-entered for new users of the mobile app. Download Thunderbird for Android from the Play Store.

Account Hub

Development of a refreshed account hub is moving forward apace and with the critical path broken down into sprints, our entire front end team is working to complete things in the next two weeks. Meta bug & progress tracking.

Clean up on aisle 2

In addition to our project work, we’ve had to be fairly nimble this month, with a number of upstream changes breaking our builds and pipelines. We get a ton of benefit from the platforms we inherit but at times it feels like we’re dealing with many things out of our control. Mental note: stay calm and focus on future improvements!

Global Database, Conversation View & folder corruption issues

On top of the conversation view feature and core refactoring to tackle the inner workings of thread-safe folder and message manipulation, work to implement a long term database replacement is well underway. Preliminary patches are regularly pumped into the development ecosystem for discussion and review, for which we’re very excited!

In-App Notifications

With phase 1 of this project now complete, we’ve scoped out additions that will make it even more flexible and suitable for a variety of purposes. Beta users will likely see the first notifications coming in November, so keep your eyes peeled. Meta Bug & progress tracking.

New Features Landing Soon

Several requested features are expected to debut this month (or very soon) and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

See you next month.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: October 2024 appeared first on The Thunderbird Blog.

Don Marti
links for 3 November 2024

Remote Startups Will Win the War for Top Talent Ironically, in another strike against the spontaneous collaboration argument, a study of two Fortune 500 headquarters found that transitioning from cubicles to an open office layout actually reduced face-to-face interactions by 70 percent.

Why Strava Is a Privacy Risk for the President (and You Too) Not everybody uses their real names or photos on Strava, but many do. And if a Strava account is always in the same place as the President, you can start to connect a few dots.

Why Getting Your Neighborhood Declared a Historic District Is a Bad Idea Historic designations are commonly used to control what people can do with their own private property, and can be a way of creating a kind of “backdoor” homeowners association. Some historic neighborhoods (many of which have dubious claims to the designation) around the country have HOA-like restrictions on renovations, repairs, and even landscaping.

Donald Trump Talked About Fixing McDonald’s Ice Cream Machines. Lina Khan Actually Did. Back in March, the FTC submitted a comment to the US Copyright Office asking to extend the right to repair certain equipment, including commercial soft-serve equipment.

An awful lot of FOSS should thank the Academy Linux and open source in general seem to be huge components of the movie special effects industry – to an extent that we had not previously realized. (unless you have a stack of old Linux Journal back issues from the early 2000s—we did a lot of movie covers at the time that much of this software was being developed.)

Using an 8K TV as a Monitor For programming, word processing, and other productive work, consider getting an 8K TV instead of a multi-monitor setup. An 8K TV will have superior image quality, resolution, and versatility compared to multiple 4K displays, at roughly the same size. (huge TVs are an under-rated, subsidized technology, like POTS lines. Most or all of the huge TVs available today are smart and sold with the expectation that they’ll drive subscription and advertising revenue, which means a discount for those who use them as monitors.)

Suchir Balaji, who spent four years at OpenAI, says OpenAI’s use of copyrighted data broke the law and failed to meet fair use criteria; he left in August 2024 Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.

The Unlikely Inventor of the Automatic Rice Cooker Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

Comments on TSA proposal for decentralized nonstandard ID requirements Compliance with the REAL-ID Act requires a state to electronically share information concerning all driver’s licenses and state-issued IDs with all other states, but not all states do so. Because no state complies with this provision of the REAL-ID Act, or could do so unless and until all states do so, no state-issued driver’s licenses or ID cards comply with the REAL-ID Act.

Don Marti: or we could just not

previously: Sunday Internet optimism

The consensus, dismal future of the Internet is usually wrong. Dystopias make great fiction, but the Internet is surprisingly good at muddling through and reducing each one to nuisance level.

  • We don’t have the Clipper Chip dystopia that would have put backdoors in all cryptography.

  • We don’t have the software patent cartel dystopia that would have locked everyone into limited software choices and functionality, and a stagnant market.

  • We don’t have the Fritz Chip dystopia that would have mandated Digital Rights Management on all devices.

None of these problems has gone away entirely: encryption backdoors, patent trolls, and DRM are all still with us. But none of them has grown into an Internet-wide catastrophe either.

Today’s hottest new dystopia narrative is that we’re going to end up with surveillance advertising features in web browsers. They’ll be mathematically different from old-school cookie tracking, so technically they won’t make it possible to identify anyone individually, but they’ll still impose the same old surveillance risks on users, since real-world privacy risks are collective.

Compromising with the dystopia narrative always looks like the realistic or grown-up path forward, until it doesn’t. And then the non-dystopia timeline generally looks inevitable once you get far enough along it. This time it’s the same way. We don’t need cross-context personalized (surveillance) advertising in our web browsers any more than we need SCO licenses in our operating systems (not counting the SCO license timeline as a dystopia, but it’s another good example of a dismal timeline averted). Let’s look at the numbers. I’m going to make all the assumptions most favorable to the surveillance advertising argument, so the real situation is probably a lot better than this. And it’s probably better in other countries, since the USA is relatively advanced in the commercial surveillance field. (If you have these figures for other countries, please let me know and I’ll link to them.)

Total money spent on advertising in the USA: $389.49 billion

USA population: 335,893,238

That comes out to about $1,160 spent on advertising to reach the average person in the USA every year. That’s $97 per month.

So let’s assume (again, making the assumption most favorable to the surveillance side) that all advertising is surveillance advertising. And ads without the surveillance are, according to Professor Garrett Johnson, worth 52 percent less than the surveillance ads.

So if you get rid of the surveillance, your ad subsidy goes from $97 to $46. Advertisers would be spending $51 less to advertise to you, and the missing $51 is a good-sized amount of extra money to come up with every month. But remember, that’s advertising money, total, not the amount that actually makes it to the people who make the ad-supported resources you want. Since the problem is how to replace the income for the artists, writers, and everyone else who makes ad-supported content, we need to multiply the missing ad subsidy by the fraction of that top-level advertising total that makes it through to the content creator in order to come up with the amount of money that needs to be filled in from other sources like subscriptions and memberships.

How much do you need to spend on subscriptions to replace $51 in ad money? That’s going to depend on your habits. But even if you have everything set up totally right, a dollar spent on ads to reach you will buy you less than a dollar you spend yourself. Thomas Baekdal writes, in How independent publishing has changed from the 1990s until today,

Up until this point, every publisher had focused on ‘traffic at scale’, but with the new direct funding focus, every individual publisher realized that traffic does not equal money, and you could actually make more money by having an audience who paid you directly, rather than having a bunch of random clicks for the sake of advertising. The ratio was something like 1:10,000. Meaning that for every one person you could convince to subscribe, donate, become a member, or support you on Patreon … you would need 10,000 visitors to make the same amount from advertising. Or to put that into perspective, with only 100 subscribers, I could make the same amount of money as I used to earn from having one million visitors.

All surveillance ad media add some kind of adtech tax. The Association of National Advertisers found that about 1/3 of the money spent to buy ad space makes it through to the publisher.

A subscription platform and subscriber services impose some costs too. To be generous to the surveillance side, let’s say that a subscription dollar is only three times as valuable as an advertising dollar. So that $51 in missing ad money means you need to come up with $17 from somewhere. This estimate is really on the high side in practice. A lot of ad money goes to overhead and to stuff like retail ad networks (online sellers bidding for better spots in shopping search results) and to ad media like billboards that don’t pay for content at all.
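
If you want to check the arithmetic, the whole chain of estimates fits in a few lines. Here is a quick sketch in Rust using only the round numbers quoted above (nothing here beyond the post's own figures):

    fn main() {
        let us_ad_spend_per_year = 389.49e9_f64; // total US ad spend, dollars
        let us_population = 335_893_238.0_f64;

        // About $1,160 per person per year, or roughly $97 per month.
        let subsidy_per_month = us_ad_spend_per_year / us_population / 12.0;

        // Johnson: ads without the surveillance sell for about 52 percent less.
        let without_surveillance = subsidy_per_month * (1.0 - 0.52); // ≈ $46
        let missing_subsidy = subsidy_per_month - without_surveillance; // ≈ $50, or $51 from the rounded figures

        // Generous assumption from above: one subscription dollar replaces
        // three advertising dollars, since so little ad spend reaches publishers.
        let needed_from_subscriptions = missing_subsidy / 3.0; // ≈ $17

        println!("monthly ad subsidy:       ${subsidy_per_month:.0}");
        println!("missing without tracking: ${missing_subsidy:.0}");
        println!("to replace with subs:     ${needed_from_subscriptions:.0}");
    }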

So, worst case, where do you get the $17? From buying less crap, that’s where. Mustri et al. (PDF) write,

[behaviorally] targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products…

You also get a piece of the national security and other collective security benefits of eliminating surveillance, some savings in bandwidth and computing resources, and a lower likelihood of becoming a victim of fraud and identity theft. But that’s pure bonus benefit on top of the win from saving money by spending less on overpriced, personally targeted, low-quality products. (If privacy protection didn’t help you buy better stuff, the surveillance companies would have said so by now.) Because surveillance advertising gives an advantage to deceptive advertisers over legit ones, the end of surveillance advertising would also mean an increase in sales for legit brands.

And we’re not done. As a wise man once said, But wait! There’s more! Before you rush to do effective privacy tips or write to your state legislators to support anti-surveillance laws, there’s one more benefit of getting rid of surveillance/personalized advertising. Remember that extra $51 that went away? It didn’t get burned up in a fire just because it didn’t get spent on surveillance advertising. Companies still have it, and they still want to sell you stuff. Without surveillance, they’ll have to look for other ways to spend it. And many of the options are win-win for the customer. In Product is the P all marketers should strive to influence, Mark Ritson points out the marketing wins from incremental product improvements, and that’s the kind of work that often gets ignored in favor of niftier, short-term, surveillance advertising projects. Improving service and pricing are other areas that will also do better without surveillance advertising contending for budgets. There is a lot of potential gain for a lot of people in getting rid of surveillance advertising, so let’s not waste the opportunity. Don’t worry, we’ll get another Internet dystopia narrative to worry about eventually.

More: stop putting privacy-enhancing technologies in web browsers

Related

Product is the P all marketers should strive to influence If there is one thing I have learned from a thousand customers discussing a hundred different products it’s that the things a company thinks are small are, from a consumer perspective, big. And the grand improvements the company is spending bazillions on are probably of little significance. Finding out from the source what needs to be fixed or changed and then getting it done is the quiet product work of proper marketers. (yes, I linked to this twice.)

I Bought Tech Dupes on Temu. The Shoddy Gear Wasn’t Worth the $1,260 in Savings My journey into the shady side of shopping brought me to the world of dupes — from budget alternatives to bad knockoffs of your favorite tech.

Political fundraisers WinRed and ActBlue are taking millions of dollars in donations from elderly dementia patients to fuel their campaigns [S]ome of these elderly, vulnerable consumers have unwittingly given away six-figure sums – most often to Republican candidates – making them among the country’s largest grassroots political donors.

Bonus links

Marketers in a dying internet: Why the only option is a return to simplicity With machine-generated content now cluttering the most visible online touchpoints (like the frontpage of Google, or your Facebook timeline), it feels inevitable that consumer behaviors will shift as a result. And so marketers need to change how they reach target audiences.

I attended Google’s creator conversation event, and it turned into a funeral

Is AI advertising going to be too easy for its own good? As Rory Sutherland said, When human beings process a message, we sort of process how much effort and love has gone into the creation of this message and we pay attention to the message accordingly. It’s costly signaling of a kind.

How Google is Killing Bloggers and Small Publishers – And Why

Exploiting Meta’s Weaknesses, Deceptive Political Ads Thrived on Facebook and Instagram in Run-Up to Election

Ninth Circuit Upholds AADC Ban on “Dark Patterns”

Economist ‘future-proofing’ bid brings back brand advertising and targets students

The Talospace Project: Updated Baseline JIT OpenPOWER patches for Firefox 128ESR

I updated the Baseline JIT patches to apply against Firefox 128ESR. If you use the Mercurial rebase extension (and you should), the patchset rebases mostly automatically; in my case only one file had to be merged by hand. Regardless, everything is up to date against tip again, and this patchset works fine for both Firefox and Thunderbird. I kept the fix for bug 1912623 because I think Mozilla's fix in bug 1909204 is wrong (or at least suboptimal), and this is faster on systems without working Wasm. Speaking of which, I need to get back into porting rr to ppc64le so I can solve those startup crashes.

Mozilla Performance Blog: Performance Testing Newsletter (Q3 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products.

Last quarter was MozWeek, and we had a great time meeting a number of you in our PerfTest Regression Workshop – thank you all for joining us, and making it a huge success! If you didn’t get a chance to make it, you can find the slides here, and most of the information from the workshop (including some additional bits) can be found in this documentation page. We will be running this workshop again next MozWeek, along with a more advanced version.

See below for highlights from the changes made in the last quarter.

Highlights

Blog Posts ✍️

Contributors

  • Myeongjun Go [:myeongjun]
  • Mayank Bansal [:mayankleoboy1]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

The Rust Programming Language Blog: October project goals update

The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as flagship goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

The biggest elements of our goal are solving the "send bound" problem via return-type notation (RTN) and adding support for async closures. This month we made progress towards both. For RTN, @compiler-errors landed support for using RTN in self-types, like where Self::method(): Send. He also authored a blog post with a call for testing explaining what RTN is and how it works. For async closures, the lang team reached a preliminary consensus on the async Fn syntax, with the understanding that it will also include some "async type" syntax. This rationale was documented in RFC #3710, which is now open for feedback. The team held a design meeting on Oct 23, and @nikomatsakis will be updating the RFC with the conclusions.
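
For readers unfamiliar with RTN, here is a rough sketch of the kind of bound it enables, assuming nightly Rust with the return_type_notation feature and a made-up Database trait (the call for testing spells the bound with (..)):

    #![feature(return_type_notation)]

    trait Database {
        async fn fetch(&self) -> Vec<u8>;
    }

    // Stand-in for tokio::spawn or similar: it requires a Send future.
    fn spawn<F: std::future::Future<Output = ()> + Send + 'static>(_fut: F) {}

    fn spawn_fetch<D>(db: D)
    where
        D: Database + Send + 'static,
        // Return-type notation: require that the future returned by
        // `fetch` is Send, without having to name its concrete type.
        D::fetch(..): Send,
    {
        spawn(async move {
            let _data = db.fetch().await;
        });
    }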

We have also been working towards a release of the dynosaur crate, which enables dynamic dispatch for traits with async functions. This is intended as a transitional step before we implement true dynamic dispatch. The next steps are to polish the implementation and issue a public call for testing.

With respect to async drop experiments, @nikomatsakis began reviews. It is expected that reviews will continue for some time as this is a large PR.

Finally, no progress has been made towards async WG reorganization. A meeting was scheduled but deferred. @tmandry is currently drafting an initial proposal.

We have made significant progress on resolving blockers to Linux building on stable. Support for struct fields in the offset_of! macro has been stabilized. The final naming for the "derive-smart-pointer" feature has been decided as #[derive(CoercePointee)]; @dingxiangfei2009 prepared PR #131284 for the rename and is working on modifying the rust-for-linux repository to use the new name. Once that is complete, we will be able to stabilize. We also decided to stabilize support for references to statics in constants (the pointers-refs-to-static feature) and are now awaiting a stabilization PR from @dingxiangfei2009.
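
For readers who have not run into it, a minimal illustration of what offset_of! computes (a made-up Header struct for illustration, not actual kernel code):

    use std::mem::offset_of;

    #[repr(C)]
    struct Header {
        magic: u32,
        flags: u16,
        len: u16,
    }

    fn main() {
        // offset_of! yields the byte offset of a field at compile time,
        // the building block for container_of-style pointer arithmetic.
        assert_eq!(offset_of!(Header, magic), 0);
        println!("flags starts at byte {}", offset_of!(Header, flags));
    }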

Rust for Linux (RfL) is one of the major users of the asm-goto feature (and inline assembly in general) and we have been examining various extensions. @nbdd0121 authored a hackmd document detailing RfL's experiences and identifying areas for improvement. This led to two immediate action items: making target blocks safe-by-default (rust-lang/rust#119364) and extending const to support embedded pointers (rust-lang/rust#128464).

Finally, we have been finding an increasing number of stabilization requests at the compiler level, and so @wesleywiser and @davidtwco from the compiler team have started attending meetings to create a faster response. One of the results of that collaboration is RFC #3716, authored by Alice Ryhl, which proposes a method to manage compiler flags that modify the target ABI. Our previous approach has been to create distinct targets for each combination of flags, but the number of flags needed by the kernel makes that impractical. Authoring the RFC revealed more such flags than previously recognized, including those that modify LLVM behavior.

The Rust 2024 edition is progressing well and is on track to be released on schedule. The major milestones include preparing to stabilize the edition by November 22, 2024, with the actual stabilization occurring on November 28, 2024. The edition will then be cut to beta on January 3, 2025, followed by an announcement on January 9, 2025, indicating that Rust 2024 is pending release. The final release is scheduled for February 20, 2025.

The priorities for this edition have been to ensure its success without requiring excessive effort from any individual. The team is pleased with the progress, noting that this edition will be the largest since Rust 2015, introducing many new and exciting features. The process has been carefully managed to maintain high standards without the need for high-stress heroics that were common in past editions. Notably, the team has managed to avoid cutting many items from the edition late in the development process, which helps prevent wasted work and burnout.

All priority language items for Rust 2024 have been completed and are ready for release. These include several key issues and enhancements. Additionally, there are three changes to the standard library, several updates to Cargo, and an exciting improvement to rustdoc that will significantly speed up doctests.

This edition also introduces a new style edition for rustfmt, which includes several formatting changes.

The team is preparing to start final quality assurance crater runs. Once these are triaged, the nightly beta for Rust 2024 will be announced, and wider testing will be solicited.

Rust 2024 will be stabilized in nightly in late November 2024, cut to beta on January 3, 2025, and officially released on February 20, 2025. More details about the edition items can be found in the Edition Guide.

Goals with updates

  • camelid has started working on using the new lowering schema for more than just const parameters, which once done will allow the introduction of a min_generic_const_args feature gate.
  • compiler-errors has been working on removing the eval_x methods on Const that do not perform proper normalization and are incompatible with this feature.
  • Posted the September update.
  • Created more automated infrastructure to prepare the October update, utilizing an LLM to summarize updates into one or two sentences for a concise table.
  • No progress has been made on this goal.
  • The goal will be closed as consensus indicates stabilization will not be achieved in this period; it will be revisited in the next goal period.
  • No major updates to report.
  • Preparing a talk for next week's EuroRust has taken away most of the free time.
  • Key developments: With the PR for supporting implied super trait bounds landed (#129499), the current implementation is mostly complete in that it allows most code that should compile, and should reject all code that shouldn't.
  • Further testing is required, with the next steps being improving diagnostics (#131152), and fixing more holes before const traits are added back to core.
  • A work-in-progress pull request is available at https://github.com/weihanglo/cargo/pull/66.
  • The use of wasm32-wasip1 as a default sandbox environment is unlikely due to its lack of support for POSIX process spawning, which is essential for various build script use cases.
  • The Autodiff frontend was merged, including over 2k LoC and 30 files, making the remaining diff much smaller.
  • The Autodiff middle-end is likely getting a redesign, moving from a library-based to a pass-based approach for LLVM.
  • Significant progress was made with contributions by @x-hgg-x, improving the resolver test suite in Cargo to check feature unification against a SAT solver.
  • This was followed by porting the test cases that tripped up PubGrub to Cargo's test suite, laying the groundwork to prevent regression on important behaviors when Cargo switches to PubGrub and preparing for fuzzing of features in dependency resolution.
  • The team is working on a consensus for handling generic parameters, with both PRs currently blocked on this issue.
  • Attempted stabilization of -Znext-solver=coherence was reverted due to a hang in nalgebra, with subsequent fixes improving but not fully resolving performance issues.
  • No significant changes to the new solver have been made in the last month.
  • GnomedDev pushed rust-lang/rust#130553, which replaced an old Clippy infrastructure with a faster one (moving from string matching to symbol matching).
  • Inspections into Clippy's type sizes and cache alignment are being started, but nothing fruitful yet.
  • The linting behavior was reverted until an unspecified date.
  • The next steps are to decide on the future of linting and to write the never patterns RFC.
  • The PR https://github.com/rust-lang/crates.io/pull/9423 has been merged.
  • Work on the frontend feature is in progress.
  • Key developments in the 'Scalable Polonius support on nightly' project include fixing test failures due to off-by-one errors from old mid-points, and ongoing debugging of test failures with a focus on automating the tracing work.
  • Efforts have been made to accept variations of issue #47680, with potential adjustments to active loans computation and locations of effects. Amanda has been cleaning up placeholders in the work-in-progress PR #130227.
  • rust-lang/cargo#14404 and rust-lang/cargo#14591 have been addressed.
  • Waiting on time to focus on this in a couple of weeks.
  • Key developments: Added the cases in the issue list to the UI test to reproduce the bug or verify the non-reproducibility.
  • Blockers: null.
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue.
  • Students from the CMU Practicum Project have started writing function contracts that include safety conditions for some unsafe functions in the core library, and verifying that safe abstractions respect those pre-conditions and are indeed safe.
  • Help is needed to write more contracts, integrate new tools, review pull requests, or participate in the repository discussions.
  • Progress has been made in matching rustc suggestion output within annotate-snippets, with most cases now aligned.
  • The focus has been on understanding and adapting different rendering styles for suggestions to fit within annotate-snippets.

Goals without updates

The following goals have not received updates in the last month: