The Mozilla Blog: Your child’s name makes a horrible password

What’s in a name? A lot. It’s the first piece of information that identifies a person — from the first name given at birth to the last name that connects them to their family lineage. Even a fictional name like Clark Kent says a lot. Not surprisingly, a lot of people use the names of their favorite superheroes as passwords, which made us wonder: Do people still use their own names, or the names of their nearest and dearest, as passwords? The unfortunate answer: Yes, they do. This year, in recognition of Safer Internet Day, we explore how common this is and why it is not a good idea.

Before we share the most used names as passwords, let’s talk about why your password matters. When setting up a password, it’s easy to default to something memorable like your birth date or name. You might then reuse that same password for other online accounts, like banking, credit cards or other services that hold sensitive and personal information about you. The risk of reusing personal information as a password is that it can fall into the wrong hands through a data breach.

Last year, more than 1,774 data breaches were reported by the Identity Theft Resource Center. Data breaches happen for a variety of reasons: hackers may want access to people’s information, may be after financial gain, or may act simply for fun. How do you safeguard your information when data breaches happen? Meet Firefox Monitor, a product that lets you know if your email address and passwords have been in a data breach. It leverages the Have I Been Pwned? database, which shares lists of known passwords used in data breaches and also offers a tool for looking up passwords that have been compromised.

Already, we’re seeing parenting community sites predict and share the top baby names of 2023. So we wanted to see how those names fare when they’re used as passwords. To give you a hint, we’ll say it again: Names, including your children’s, make horrible passwords.

Popular names used as passwords appear in data breaches… a lot

A community site for parents released its list of top baby names for 2023. For females, three of the most popular names commonly found in passwords include “isabella” (141,731 times used as a password), “abigail” (94,834 times) and “selena” (61,112 times). The male names included “joshua” (388,793 times used as a password), “oliver” (259,274 times) and “august” (135,258 times).

So, have we convinced you that using your child’s name as a password is a terrible idea? It’s not just your child’s name: it could be your own name, or your mother or father’s name, too. We took a look at the top first names from the last 100 years (listed on the Social Security Administration site) to see how many times they’ve been used as passwords in data breaches. The most popular female names include “jessica” (492,196 times used as a password), “jennifer” (380,112 times) and “patricia” (178,581 times). The top male names found in data breaches include “michael” (651,846 times used as a password), “robert” (362,548 times) and “william” (330,750 times).

Even hipster and “unique” names shouldn’t be used as passwords

A parenting site recently released a list of popular hipster baby names, spanning from vintage to nature-inspired. The hipster vintage names include old favorites that could belong to your child or your great-grandmother (what is old is new again!). We looked at the female names to see how many times each has been used as a password in a data breach and found that “florence” (87,298 times used as a password), “minerva” (25,631 times) and “betty” (21,811 times) were the top female names in breached passwords. For males, it was “casper” (183,879 times used as a password), “chester” (182,768 times) and “stanley” (80,476 times).

Hipster nature names didn’t fare well either. Under this category, the female names most overused as passwords include “pepper” (295,778 times used as a password), “soleil” (145,721 times), “cricket” (110,732 times) and “andromeda” (52,784 times). The male names most overused as passwords include “mercury” (83,966 times used as a password), “canyon” (42,125 times) and “wolf” (21,993 times). Even if you think you have a unique name, that doesn’t make it OK to use as a password.

We’re not telling you to avoid these names altogether. Just don’t use them as passwords! Your children’s names are their personal information, so take care in how you share them online. Here’s what you can do to keep your family’s personal information safe: 

  • Use a password generator to create a password –  When you visit a site and are asked to create an account and password, Mozilla’s Firefox browser has a password generator that will recommend a secure and strong password that includes a random combination of numbers, letters and symbols.
  • Keep your passwords safe with a password manager – Stop writing your passwords on a piece of paper or a notebook, and use a password manager. There are services that will store your passwords for you online so that the next time you visit a site your password will instantly pop up in the field for you. 
  • Protect your email address  – Firefox Relay is a product that hides your true email address to help protect your identity. It has blocked more than 1.3 million unwanted emails from people’s inboxes while keeping true email addresses from trackers across the web. Sign up here.
  • Know when your password is breached with Firefox Monitor – Learn about hacks and breaches by signing up with Firefox Monitor. You’ll get alerts delivered to your email whenever there’s been a data breach or if your accounts have been hacked.
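Under the hood, a password generator like the one in Firefox boils down to sampling characters from a large alphabet using a cryptographically secure random source. Here’s a minimal sketch in Python; the alphabet and length are illustrative choices, not Firefox’s actual settings:

```python
import secrets
import string

def generate_password(length=16):
    """Build a random password from letters, digits and symbols,
    using the `secrets` module (never `random`, which is predictable)."""
    alphabet = string.ascii_letters + string.digits + "!#$%&*+-=?@^_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
print(pw)  # a different 16-character password on every run
```

The key point is that nothing about you — no name, no birth date — goes into the result, so a breach of one account reveals nothing about your other passwords.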

As we think about how to make the internet a better place, it starts with taking those first baby steps toward changes that will help you safely navigate the web. If you’d like to do more to create better online habits and routines, try our Mozilla tech challenge. We’ll give you simple tasks to complete each week for four weeks so you can better enjoy what the internet has to offer.

How did we get these numbers? We looked the names up in the Have I Been Pwned? password search. We couldn’t access any data files, browse lists of passwords or link passwords to logins — that info is inaccessible and kept secure — but we could look up individual passwords manually. Current numbers on the site may be higher than at the time of publication as new datasets are added to HIBP. Alas, data breaches keep happening. There’s no time like the present to make sure you have strong passwords.
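For the curious, the Pwned Passwords lookup can be done without ever revealing your password, thanks to its k-anonymity “range” API: only the first five hex characters of the password’s SHA-1 hash leave your machine. A sketch in Python (the sample response in the test is illustrative; a real lookup fetches the list from the service):

```python
import hashlib

def range_query_parts(password):
    """Split the SHA-1 hash into the 5-char prefix sent to the API
    and the suffix that is matched locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def match_count(api_response, suffix):
    """The API answers with 'SUFFIX:COUNT' lines for every known hash
    sharing the prefix; we search for our own suffix locally."""
    for line in api_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = range_query_parts("password")
print(prefix)  # 5BAA6 -- this is all the service ever sees
# A real lookup would fetch{prefix}
# and pass the response body to match_count(body, suffix).
```

This design is why the site can say the full password list never has to be downloaded or exposed to do a check.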

The post Your child’s name makes a horrible password appeared first on The Mozilla Blog.

Wladimir Palant: Weakening TLS protection, South Korean style

Note: This article is also available in Korean.

Normally, when you navigate to your bank’s website you have little reason to worry about impersonations. The browser takes care of verifying that you are really connected to the right server, and that your connection is safely encrypted. It will indicate this by showing a lock icon in the address bar.

So even if you are connected to a network you don’t trust (such as open WiFi), nothing can go wrong. If somebody tries to impersonate your bank, your browser will notice. And it will refuse to connect.

Screenshot of an error message: Did Not Connect: Potential Security Issue. Firefox detected a potential security threat and did not continue to because this website requires a secure connection.

This is achieved by means of a protocol called Transport Layer Security (TLS). It relies on a number of trusted Certification Authorities (CAs) to issue certificates to websites. These certificates allow websites to prove their identity.

When investigating South Korea’s so-called security applications I noticed that all of them add their own certification authorities that browsers have to trust. This weakens the protection provided by TLS considerably, as misusing these CAs allows impersonating any website to a large chunk of the South Korean population. Among other things, this puts at risk the very banking transactions these applications are supposed to protect.

Which certification authorities are added?

After doing online banking on your computer in South Korea, it’s worth taking a look at the trusted certification authorities of your computer. Most likely you will see names that have no business being there. Names like iniLINE, Interezen or Wizvera.

Screenshot of the Windows “Trusted Root Certification Authorities” list. Among names like GTE CyberTrust or Microsoft, also iniLINE and Interezen are listed.

None of these are normally trusted. Rather, they have been added to the operating system’s storage by the respective applications. These applications also add their certification authorities to Firefox which, unlike Google Chrome or Microsoft Edge, doesn’t use the operating system’s settings.

So far I found the following certification authorities being installed by South Korean applications:

Name Installing application(s) Validity Serial number
ASTxRoot2 AhnLab Safe Transaction 2015-06-18 to 2038-06-12 009c786262fd7479bd
iniLINE CrossEX RootCA2 TouchEn nxKey 2018-10-10 to 2099-12-31 01
INTEREZEN CA Interezen IPInside Agent 2021-06-09 to 2041-06-04 00d5412a38cb0e4a01
LumenSoft CA KeySharp CertRelay 2012-08-08 to 2052-07-29 00e9fdfd6ee2ef74fc
WIZVERA-CA-SHA1 Wizvera Veraport 2019-10-23 to 2040-05-05 74b7009ee43bc78fce6973ade1da8b18c5e8725a
WIZVERA-CA-SHA2 Wizvera Veraport, Wizvera Delfino 2019-10-23 to 2040-05-05 20bbeb748527aeaa25fb381926de8dc207102b71

And these certification authorities will stay there until removed manually. The applications’ uninstallers won’t remove them.

They are also enabled for all purposes. So one of these authorities being compromised will not merely affect web server identities but also application or email signatures for example.

Will a few more certification authorities really hurt?

If you look at the list of trusted certification authorities, there are more than 50 entries on it anyway. What’s the problem if a few more are added?

Running a Certificate Authority is a huge responsibility. Anyone with access to the private key of a trusted certification authority will be able to impersonate any website. Criminals and governments around the world would absolutely love to have this power. The former need it to impersonate your bank for example, the latter to spy on you undetected.

That’s why there are strict rules for certification authorities, making sure the access to the CA’s private key is restricted and properly secured. Running a certification authority also requires regular external audits to ensure that all the security parameters are still met.

Now with these South Korean applications installing their own Certificate Authorities on so many computers in South Korea, they become a huge target for hackers and governments alike. If a private key for one of these Certificate Authorities is compromised, TLS will provide very little protection in South Korea.

How do AhnLab, RaonSecure, Interezen, Wizvera deal with this responsibility? Do they store the private keys in a Hardware Security Module (HSM)? Are these in a secure location? Who has access? What certificates have been issued already? We have no answer to these questions. There are no external audits, no security practices that they have to comply with.

So people are supposed to simply trust these companies to keep the private key secure. As we’ve already seen from my previous articles, however, they have little expertise in keeping things secure.

How could this issue be solved?

The reason for all these certificate authorities seems to be that the applications need to enable TLS on their local web server. Yet no real certificate authority will issue a certificate for a local server address, so they have to add their own.

If a certificate for the local server is all they need, there is a simple solution. Instead of adding the same CA on all computers, it should be a different CA for each computer.

So the applications should do the following during the installation:

  1. Generate a new (random) certificate authority and the corresponding private key.
  2. Import this CA into the list of trusted certification authorities on the computer.
  3. Generate a certificate for the local server and sign it with this CA. The application can now use it for its local web server.
  4. Destroy the private key of the CA.
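The four steps above can be sketched with OpenSSL. The file names and validity periods are illustrative, and the `localhost` subject is a stand-in for whatever local hostname the applications actually use:

```shell
# 1. Generate a fresh CA key and self-signed CA certificate,
#    unique to this machine.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key \
    -subj "/CN=Per-machine throwaway CA" -days 3650 -out ca.crt

# 2. Here the installer would import ca.crt into the OS and
#    Firefox trust stores.

# 3. Issue a certificate for the local web server, signed by that CA.
openssl req -newkey rsa:2048 -nodes -keyout server.key \
    -subj "/CN=localhost" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -days 825 -out server.crt

# 4. Destroy the CA private key: with it gone, this CA can never
#    sign another certificate, for this machine or any other.
rm -f ca.key
```

After step 4 a compromise of the vendor is no longer a compromise of every user, because there is no longer any key that could sign certificates for other machines.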

In fact, Initech CrossWeb Ex V3 seems to do exactly that. You can easily recognize it because the displayed validity starts at the date of the installation. While it also installs its certificate authority, this one is valid for one computer only and thus unproblematic.

Oh, and one more thing to be taken care of: any CAs added should be removed when the application is uninstalled. Currently none of the applications seem to do it.

Alex Vincent: Introducing Motherhen: Gecko-based applications from scratch

Mozilla’s more than just Firefox. There’s Thunderbird for e-mail, BlueGriffon for creating web pages, and ye olde SeaMonkey Application Suite. Once upon a time, there was the ability to create custom front-ends on top of Firefox using the -app option. (It’s still there but not really supported.) Mozilla’s source code provides a rich ecosystem to build atop.

With all that said, creating new Gecko-based applications has always been a challenge at best. There’s a surprising amount of high-quality software built on Electron, a Chromium-based framework – and yes, I use some of it. (Visual Studio Code, in particular.) There should be a similar framework for Mozilla code.

Now there is: Motherhen, which I am releasing under the MPL 2 license. This is a GitHub template repository, meaning you can create a complete copy of the repository and start your own projects with it.

Motherhen is at release version 1.0, beta 2: it supports creating, building, running and packaging Mozilla applications only on Linux. On MacOS, the mach package command doesn’t work. No one’s tried this on Windows yet. I need help with both of those, if someone’s willing.

Speaking of help, a big shout-out to TrickyPR from Pulse Browser for his contributions, especially with patches to Mozilla’s code to get this working and for developer tools support in progress!

Motherhen screenshot

David Teller: About Safety, Security and yes, C++ and Rust

Recent publications by Consumer Reports and the NSA have launched countless conversations in development circles about safety and its benefits.

In these conversations, I’ve seen many misunderstandings about what safety means in programming and how programming languages can implement, help or hinder safety. Let’s clarify a few things.

About:Community: Meet us at FOSDEM 2023

Hello everyone,

It is that time of the year, and we are off to Brussels for FOSDEM 2023!

FOSDEM is a central appointment for the Open Source community.

This is the first year the conference will be back in person and Mozilla will be there, with a stand on the conference floor and many interesting talks in our DevRoom.

We are all looking forward to meeting up in person with developers and Open Source enthusiasts from all over Europe (and beyond).

The event will take place on the 4th and 5th of February, including more than 700 talks and 60 stands.

If you are there, come say hi at our stand or watch the streaming of our talks on the FOSDEM website!

Many Mozillians who are going to FOSDEM will also be in this Matrix room, so feel free to join and ask any questions.


The Mozilla Stand


Our stand will be in building K, level 2, and will be staffed by many enthusiastic Mozillians. Come pick up a sticker and chat about all that is Mozilla, including Firefox, MDN, Hubs, digital policy, and many other projects.


Mozilla DevRoom – UA2.220 (Guillissen)


The Mozilla DevRoom will take place on Saturday between 15:00 and 19:00. If you cannot make it, all the talks will be streamed during the event (click on the event link to find the streaming link).


15:00 – 15:30

Understanding the energy use of Firefox. With less power comes more sustainability – Florian Quèze


15:30 – 16:00

What’s new with the Firefox Profiler. Power tracks, UI improvements, importers – Nazım Can Altınova


16:00 – 16:30

Over a decade of anti-tracking work at Mozilla – Vincent Tunru


16:30 – 17:00

The Digital Services Act 101. What is it and why should you care – Claire Pershan


17:00 – 17:30

Cache The World. Adventures in A11Y Performance – Benjamin De Kosnik, Morgan Reschenberg


17:30 – 18:00

Firefox Profiler beyond the web. Using Firefox Profiler to view Java profiling data – Johannes Bechberger


18:00 – 18:30

Localize your open source project with Pontoon – Matjaž Horvat


18:30 – 19:00

The Road to Intl.MessageFormat – Eemeli Aro


Other Mozilla Talks


But that’s not all. There will also be other Mozilla-related talks around FOSDEM.


We look forward to seeing you all.

Community Programs Team

Karl Dubost: Blade Runner 2023

Graffiti of a robot on a wall with buildings in the background.

Webcompat engineers will never be over their craft. I've seen things you people wouldn't believe. Large websites broken off the shoulder of developer tools. I watched Compat-beams glitter in the dark near the Interoperability Gate. All those moments will be lost in time, like tears in rain. Time to die.

In other news: Pushing Interop Forward in 2023

Now we are pleased to announce this year’s Interop 2023 project! Once again, we are joining with Bocoup, Google, Igalia, Microsoft, and Mozilla to move the interoperability of the web forward.


The Servo Blog: Servo 2023 Roadmap

As we move forward with our renewed project activity, we would like to share more details about our plans for 2023. We’ve recently published the Servo 2023 roadmap on the project wiki, and our community and governance and technical plans are outlined below.

Servo 2023 Roadmap. Project reactivation Q1-Q4. Project outreach Q1-Q4. Main dependencies upgrade Q1-Q3. Layout engine selection Q1-Q2. Progress towards basic CSS2 support Q3-Q4. Explore Android support Q3-Q4. Embeddable web engine experiments Q4.

Community and governance

We’re restarting all the usual activities, including PR triage and review, public communications about the project, and arranging TSC meetings. We will also make some outreach efforts in order to attract more collaborators, partners, and potential sponsors interested in working, participating, and funding the project.


Technical plans

We want to upgrade the main dependencies of Servo, like WebRender and Stylo, to get them up to date. We will also analyse the status of the two layout engines in Servo and select one of them for continued development. Our plan is to then work towards basic CSS2 conformance.

Regarding platform support, we would like to explore the possibility of supporting Android. We would also like to experiment with making Servo a practical embeddable web rendering engine.

As with any software project, this roadmap will evolve over time, but we’ll keep you posted. We hope you’ll join us in making it happen.

The Mozilla Blog: How to talk to kids about the news

Credit: Nick Velazquez / Mozilla

Carlos Moreno smiles for a photograph.
Carlos Moreno is an activist, a graphic designer at CAP Tulsa and leads Tulsa’s Code for America volunteer brigade. He has a master’s degree in public policy and is the author of “The Victory of Greenwood” and “A Kids Book About the Tulsa Race Massacre.” You can follow him on Twitter. Photo: Jamie Glisson

As the father of a teenager, I find myself worrying – and not just about their grades and how quickly they’re growing up. Dating? Driver’s permit? I’m not ready for this! I also worry about how my child, through the internet, is experiencing the world at a much quicker pace than I did.

When I was younger, I remember when the video of police brutally beating Rodney King surfaced. I watched the uprising that unfolded on the streets of Los Angeles on the news in real time. I was 15. That’s six years after I, along with my classmates and teachers, watched the space shuttle Challenger unexpectedly explode on television. Both times, I didn’t fully grasp what I’d just watched – my understanding came with help from my family and teachers and with time. 

Now, with the web at their fingertips, children today are being exposed to a Challenger-level disaster in some part of the world in real time, every week it seems. I fear that without context and guidance to keep up with the internet news cycle, teens are becoming cynical, distrustful and isolated. So how can we, as parents, support and empower them to navigate it all?

Text: How to talk to kids about the news (and getting involved) Ask them what they care about. Value their voice. Explore local resources together.

‘What good is marching in the streets or signing a petition going to do?’

Recently, I visited my teenage child’s youth group at school and asked them how they’re learning about the issues that affect them. I learned that most of the dozen high schoolers I spoke with were getting their information from social media accounts, like TikTok’s “news fish,” @mr_fish_news. That, or they were too overwhelmed to care. 

Through the internet, young people have access to information unlike any of the generations before them. But politically, my child feels completely helpless. “I don’t have any power or money,” they said. “The rich and powerful make all the decisions, so what good is marching in the streets or signing a petition going to do?” 

I didn’t have a good answer. Keep fighting? Try to avoid doomscrolling? Focus on taking care of those around you? It didn’t seem like enough. 

Learn from news literacy and advocacy organizations 

Through my own work as an activist, I’m reminded that there are always organizations who can help. To learn how I can better talk to my kid about the news, I found the list of resources provided by Media Literacy Now to be a great starting point. 

I also recommend contacting your local library to find out what digital media literacy workshops they might have. There’s also Generation Citizen, which offers resources across the U.S. for what it calls “action civics,” teaching a combination of media literacy and the inner-workings of local government.

In my community, the nonprofit Oklahoma Center for Community and Justice worked to create a safe space where young people in Tulsa can speak to each other after the 2020 protests prompted by George Floyd’s murder. 

Not only can families bring kids to these spaces to process difficult issues among their peers, but parents can also use all of these resources to educate themselves for having tough conversations about how what’s happening in the world is affecting them.

How to talk to kids about the news (and getting involved)

In today’s political and media climate, as amplified by the internet for better or worse, it’s easy to feel distraught. It’s a place I find myself more often than I care to admit both as a community advocate and a parent. 

Still, there are great resources that are made more accessible by the internet – and parents don’t need to figure things out alone. There are organizations that we can learn from and that are constantly working to help meet our children’s needs. This is what gives me a bit of hope.

Here’s what I’ve learned that can help you talk to kids about news events:

1. Ask them what they care about. Your kids will surprise you with what they’re already discussing with their teachers and friends. Make time to talk about complicated issues they care about or that are relevant in their lives. Read a book, search for trustworthy articles online or watch an informative video, then try to understand it together. 

2. Value their voice. The word “advocacy” means “giving voice.” Listen to what your kids want to express. Ask what voices or perspectives are missing from the conversation. Together, explore the voices of those who are most affected by the issue you’re discussing. The internet makes it easier to find connections. This is one way to use it for good.

3. Explore local resources together. Activism is about building power. Go online to find local avenues — a school board, town hall or city council meeting, where teenagers can learn about issues going on in their own community and where they can express their views. Help write emails to the editor of a local paper. Ask for meetings with local elected officials. Trust me, you’ll be surprised that your family has more power than you think!

What we see and hear amid the constant online news cycle can be discouraging. But the internet also makes plenty of resources and guidance accessible, so that parents are equipped to have tough but empowering conversations with their kids.  

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

An illustration reads: The Tech Talk


The post How to talk to kids about the news appeared first on The Mozilla Blog.

Hacks.Mozilla.Org: Announcing Interop 2023

A key difference between the web and other platforms is that the web puts users in control: people are free to choose whichever browser best meets their needs, and use it with any website. This is interoperability: the ability to pick and choose components of a system as long as they adhere to common standards.

For Mozilla, interoperability based on standards is an essential element of what makes the web special and sets it apart from other, proprietary, platforms. Therefore it’s no surprise that maintaining this is a key part of our vision for the web.

However, interoperability doesn’t just happen. Even with precise and well-written standards it’s possible for implementations to have bugs or other deviations from the agreed-upon behavior. There is also a tension between the desire to add new features to the platform, and the effort required to go back and fix deficiencies in already shipping features.

Interoperability gaps can result in sites behaving differently across browsers, which generally creates problems for everyone. When site authors notice the difference, they have to spend time and energy working around it. When they don’t, users suffer the consequences. Therefore it’s no surprise that authors consider cross-browser differences to be one of the most significant frustrations when developing sites.

Clearly this is a problem that needs to be addressed at the source. One of the ways we’ve tried to tackle this problem is via web-platform-tests. This is a shared testsuite for the web platform that everyone can contribute to. This is run in the Firefox CI system, as well as those of other vendors. Whenever Gecko engineers implement a new feature, the new tests they write are contributed back upstream so that they’re available to everyone.

Having shared tests allows us to find out where platform implementations are different, and gives implementers a clear target to aim for. However, users’ needs are large, and as a result, the web platform is large. That means that simply trying to fix every known test failure doesn’t work: we need a way to prioritize and ensure that we strike a balance between fixing the most important bugs and shipping the most useful new features.

The Interop project is designed to help with this process, and enable vendors to focus their energies in the way that’s most helpful to the long term health of the web. Starting in 2022, the Interop project is a collaboration between Apple, Bocoup, Google, Igalia, Microsoft and Mozilla (and open to any organization implementing the web platform) to set a public metric to measure improvements to interoperability on the web.

Interop 2022 showed significant improvements in the interoperability of multiple platform features, along with several cross-browser investigations that looked into complex, under-specified, areas of the platform where interoperability has been difficult to achieve. Building on this, we’re pleased to announce Interop 2023, the next iteration of the Interop project.

Interop 2023

Like Interop 2022, Interop 2023 considers two kinds of platform improvement:

Focus areas cover parts of the platform where we already have a high quality specification and good test coverage in web-platform-tests. Therefore progress is measured by looking at the pass rate of those tests across implementations. “Active focus areas” are ones that contribute to this year’s scores, whereas “inactive” focus areas are ones from previous years where we don’t anticipate further improvement.

As well as calculating the test pass rate for each browser engine, we’re also computing the “Interop” score: how many tests are passed by all of Gecko, WebKit and Blink. This reflects our goal not just to improve one browser, but to make sure features work reliably across all browsers.
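As an illustration of the two metrics, here is a toy computation in Python. The test names and pass/fail values are hypothetical; the real scores are computed from full web-platform-tests runs:

```python
# Hypothetical per-engine results for three web-platform-tests.
results = {
    "gecko":  {"grid-001": True,  "grid-002": True,  "has-001": False},
    "webkit": {"grid-001": True,  "grid-002": False, "has-001": True},
    "blink":  {"grid-001": True,  "grid-002": True,  "has-001": True},
}

def pass_rate(engine):
    """Fraction of tests this one engine passes."""
    tests = results[engine]
    return sum(tests.values()) / len(tests)

def interop_score():
    """Fraction of tests passed by *all* engines at once."""
    tests = results["gecko"].keys()
    passed_by_all = sum(all(results[e][t] for e in results) for t in tests)
    return passed_by_all / len(tests)

print(round(pass_rate("blink"), 2))  # 1.0  -- one engine can look perfect...
print(round(interop_score(), 2))     # 0.33 -- ...while the shared score lags
```

The gap between the two numbers is exactly why the Interop score exists: a feature only counts as dependable when every engine passes the same test.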

Investigations are for areas where we know interoperability is lacking, but can’t make progress just by passing existing tests. These could include legacy parts of the platform which shipped without a good specification or tests, or areas which are hard to test due to missing test infrastructure. Progress on these investigations is measured according to a set of mutually agreed goals.

Focus Areas

The complete list of focus areas can be seen in the Interop 2023 readme. This was the result of a consensus based process, with input from web authors, for example using the results of the State of CSS 2022 survey, and MDN “short surveys”. That process means you can have confidence that all the participants are committed to meaningful improvements this year.

Rather than looking at all the focus areas in detail, I’ll just call out some of the highlights.


CSS

Over the past several years CSS has added powerful new layout primitives — flexbox and grid, followed by subgrid — to allow sophisticated, easy to maintain designs. These are features we’ve been driving and championing for many years, and which we were very pleased to see included in Interop 2022. They have been carried forward into Interop 2023 with additional tests, reflecting the importance of ensuring that they’re totally dependable across implementations.

As well as older features, Interop 2023 also contains some new additions to CSS. Based on feedback from web developers we know that two of these in particular are widely anticipated: Container Queries and parent selectors via :has(). Both of these features are currently being implemented in Gecko; Container Queries are already available to try in prerelease versions of Firefox and are expected to be released in Firefox 110 later this month, whilst :has() is under active development. We believe that including these new features in Interop 2023 will help ensure that they’re usable cross-browser as soon as they’re shipped.

Web Apps

Several of the features included in Interop 2023 are those that extend and enhance the capability of the platform; either allowing authors to achieve things that were previously impossible, or improving the ergonomics of building web applications.

The Web Components focus area is about ergonomics; components allow people to create and share interactive elements that encapsulate their behavior and integrate into native platform APIs. This is especially important for larger web applications, and success depends on the implementations being rock solid across all browsers.

Offscreen Canvas and Web Codecs are focus areas which are really about extending the capabilities of the platform; allowing rich video and graphics experiences which have previously been difficult to implement efficiently using web technology.


Unlike the other focus areas, Web Compatibility isn’t about a specific feature or specification. Instead, the tests in this focus area have been written and selected on the basis of observed site breakage, for example from browser bug reports. The fact that these bugs are causing sites to break makes them a very high priority for improving interoperability on the web.


Unfortunately not all interoperability challenges can be simply defined in terms of a set of tests that need to be fixed. In some cases we need to do preliminary work to understand the problem, or to develop new infrastructure that will allow testing.

For 2023 we’re going to concentrate on two areas in which we know that our current test infrastructure is insufficient: mobile platforms and accessibility APIs.

Mobile browsing interaction modes often create web development and interoperability challenges that don’t occur on desktop. For example, the browser viewport is significantly more dynamic and complex on mobile, reflecting the limited screen size. Whilst browser vendors have ways to test their own mobile browsers, we lack shared infrastructure required to run mobile-specific tests in web-platform-tests and include the results in Interop metrics. The Mobile Testing investigation will look at plugging that gap.

Users who make use of assistive technology (e.g., screen readers) depend on parts of the platform that are currently difficult to test in a cross-browser fashion. The Accessibility Testing investigation aims to ensure that accessibility technologies are just as testable as other parts of the web technology stack and can be included in future rounds of Interop as focus areas.

Together these investigations reflect the importance of ensuring that the web works for everyone, irrespective of how they access it.


Interop 2023 Dashboard as of January 2023, showing an Interop score of 61, an Investigation Score of 0, and browser engine scores of 86 for Blink and WebKit and 74 for Gecko.

To follow progress on Interop 2023, see the dashboard. This gives detailed scores for each focus area, as well as overall progress on Interop and the investigations.

Mozilla & Firefox

The Interop project is an important part of Mozilla’s vision for a safe & open web where users are in control, and can use any browser on any device. Working with other vendors to focus efforts towards improving cross-browser interoperability is a big part of making that vision a reality. We also know how important it is to lead through our products, and look forward to bringing these improvements to Firefox and into the hands of users.

Partner Announcements

The post Announcing Interop 2023 appeared first on Mozilla Hacks - the Web developer blog.

Niko MatsakisAsync trait send bounds, part 1: intro

Nightly Rust now has support for async functions in traits, so long as you limit yourself to static dispatch. That’s super exciting! And yet, for many users, this support won’t yet meet their needs. One of the problems we need to resolve is how users can conveniently specify when they need an async function to return a Send future. This post covers some of the background on send futures, why we don’t want to adopt the solution from the async_trait crate for the language, and the general direction we would like to go. Follow-up posts will dive into specific solutions.

Why do we care about Send bounds?

Let’s look at an example. Suppose I have an async trait that performs some kind of periodic health check on a given server:

trait HealthCheck {
    async fn check(&mut self, server: &Server) -> bool;
}

Now suppose we want to write a function that, given a HealthCheck, starts a parallel task that runs that check every second, logging failures. This might look like so:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck + Send + 'static,
{
    tokio::spawn(async move {
        while health_check.check(&server).await {
            tokio::time::sleep(Duration::from_secs(1)).await;
        }
        emit_failure_log(&server).await;
    });
}

So far so good! So what happens if we try to compile this? You can try it yourself: with the async_fn_in_trait feature gate enabled, you should see a compilation error like so:

error: future cannot be sent between threads safely
   --> src/
15  |       tokio::spawn(async move {
    |  __________________^
16  | |         while health_check.check(&server).await {
17  | |             tokio::time::sleep(Duration::from_secs(1)).await;
18  | |         }
19  | |         emit_failure_log(&server).await;
20  | |     });
    | |_____^ future created by async block is not `Send`
    = help: within `[async block@src/ 20:6]`, the trait `Send` is not implemented for `impl Future<Output = bool>`

The error is saying that the future for our task cannot be sent between threads. But why not? After all, the health_check value is both Send and 'static, so we know it is safe to send over to the new thread. But the problem lies elsewhere. The error has an attached note that points it out to us:

note: future is not `Send` as it awaits another future which is not `Send`
   --> src/
16  |         while health_check.check(&server).await {
    |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^ await occurs here

The problem is that the call to check is going to return a future, and that future is not known to be Send. To see this more clearly, let’s desugar the HealthCheck trait slightly:

trait HealthCheck {
    // async fn check(&mut self, server: &Server) -> bool;
    fn check(&mut self, server: &Server) -> impl Future<Output = bool>;
                                         // ^ Problem is here! This returns a future, but not necessarily a `Send` future.
}

The problem is that check returns an impl Future, but the trait doesn’t say whether this future is Send or not. The compiler therefore sees that our task is going to be awaiting a future, but that future might not be sendable between threads.

What does the async-trait crate do?

Interestingly, if you rewrite the above example to use the async_trait crate, it compiles. What’s going on here? The answer is that the async_trait proc macro uses a different desugaring. Instead of creating a trait that yields -> impl Future, it creates a trait that returns a Pin<Box<dyn Future + Send>>. This means that the future can be sent between threads; it also means that the trait is dyn-safe.

This is a good answer for the async-trait crate, but it’s not a good answer for a core language construct, as it loses key flexibility. We want to support async in single-threaded executors, where the Send bound is irrelevant, and we also want to support async in no-std applications, where Box isn’t available. Moreover, we want to have key interop traits (e.g., Read) that can be used for all three of those applications at the same time. An approach like the one used in async-trait cannot support a trait that works for all three of those applications at once.

How would we like to solve this?

Instead of having the trait specify whether the returned future is Send (or boxed, for that matter), our preferred solution is to have the start_health_check function declare that it requires check to return a sendable future. Remember that start_health_check already included a where clause specifying that the type H was sendable across threads:

fn start_health_check<H>(health_check: H, server: Server)
where
    H: HealthCheck + Send + 'static,
    // —————  ^^^^^^^^^^^^^^ “sendable to another disconnected thread”
    //     |
    // Implements the `HealthCheck` trait

Right now, this where clause says two independent things:

  • H implements HealthCheck;
  • values of type H can be sent to an independent task, which is really a combination of two things:
    • type H can be sent between threads (H: Send)
    • type H contains no references to the current stack (H: 'static)

What we want is to add syntax to specify an additional condition:

  • H implements HealthCheck and its check method returns a Send future

In other words, we don’t want just any type that implements HealthCheck. We specifically want a type that implements HealthCheck and returns a Send future.

Note the contrast to the desugaring approach used in the async_trait crate: in that approach, we changed what it means to implement HealthCheck to always require a sendable future. In this approach, we allow the trait to be used in both ways, but allow the function to say when it needs sendability or not.

The approach of “let the function specify what it needs” is very in-line with Rust. In fact, the existing where-clause demonstrates the same pattern. We don’t say that implementing HealthCheck implies that H is Send, rather we say that the trait can be implemented by any type, but allow the function to specify that H must be both HealthCheck and Send.

Next post: Let’s talk syntax

I’m going to leave you on a cliffhanger. This blog post set up the problem we are trying to solve: for traits with async functions, we need some kind of syntax for declaring that you want an implementation that returns Send futures, and not just any implementation. In the next set of posts, I’ll walk through our proposed solution, and some of the other approaches we’ve considered and rejected.

Appendix: Why does the returned future have to be send anyway?

Some of you may wonder why it matters that the future returned is not Send. After all, the only thing we are actually sending between threads is health_check — the future is being created on the new thread itself, when we call check. It is a bit surprising, but this is actually highlighting an area where async tasks are different from threads (and where we might consider future language extensions).

Async is intended to support a number of different task models:

  • Single-threaded: all tasks run in the same OS thread. This is a great choice for embedded systems, or systems where you have lightweight processes (e.g., Fuchsia1).
  • Work-dealing, sometimes called thread-per-core: tasks run in multiple threads, but once a task starts in a thread, it never moves again.
  • Work-stealing: tasks start in one thread, but can migrate between OS threads while they execute.

Tokio’s spawn function supports the final mode (work-stealing). The key point here is that the future can move between threads at any await point. This means that it’s possible for the future to be moved between threads while awaiting the future returned by check. Therefore, any data in this future must be Send.

This might be surprising. After all, the most common example of non-send data is something like a (non-atomic) Rc. It would be fine to create an Rc within one async task and then move that task to another thread, so long as the task is paused at the point of move. But there are other non-Send types that wouldn’t work so well. For example, you might make a type that relies on thread-local storage; such a type would not be Send because it’s only safe to use it on the thread in which it was created. If that type were moved between threads, the system could break.

In the future, it might be useful to separate out types like Rc from other non-Send types. The distinguishing characteristic is that Rc can be moved between threads so long as all possible aliases are moved at the same time. Other types are really tied to a specific thread. There’s no example in the stdlib that comes to mind, but it seems like a valid pattern for Rust today that I would like to continue supporting. I’m not sure yet of the right way to think about that!

  1. I have finally learned how to spell this word without having to look it up! 💪 

This Week In RustThis Week in Rust 480

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on Mastodon, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is symphonia, a collection of pure-Rust audio decoders for many common formats.

Thanks to Kornel for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

377 pull requests were merged in the last week

Rust Compiler Performance Triage

Overall a positive week, with relatively few regressions overall and a number of improvements.

Triage done by @simulacrum. Revision range: c8e6a9e..a64ef7d


(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.6%    [0.6%, 0.6%]     1
Regressions ❌ (secondary)   0.3%    [0.3%, 0.3%]     1
Improvements ✅ (primary)    -0.8%   [-2.0%, -0.2%]   27
Improvements ✅ (secondary)  -0.9%   [-1.9%, -0.5%]   11
All ❌✅ (primary)            -0.8%   [-2.0%, 0.6%]    28

2 Regressions, 4 Improvements, 6 Mixed; 2 of them in rollups. 44 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-02-01 - 2023-03-01 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Compilers are an error reporting tool with a code generation side-gig.

Esteban Küber on Hacker News

Thanks to Stefan Majewsky for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language BlogAnnouncing Rustup 1.25.2

The rustup working group is announcing the release of rustup version 1.25.2. Rustup is the recommended tool to install Rust, a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of rustup installed, getting rustup 1.25.2 is as easy as stopping any programs which may be using Rustup (e.g. closing your IDE) and running:

rustup self update

Rustup will also automatically update itself at the end of a normal toolchain update:

rustup update

If you don't have it already, you can get rustup from the appropriate page on our website.

What's new in rustup 1.25.2

This version of rustup fixes a warning incorrectly saying that signature verification failed for Rust releases. The warning was due to a dependency of Rustup including a time-based check preventing the use of SHA-1 from February 1st, 2023 onwards.

Unfortunately Rust's release signing key uses SHA-1 to sign its subkeys, which resulted in all signatures being marked as invalid. Rustup 1.25.2 temporarily fixes the problem by allowing the use of SHA-1 again.

Why is signature verification failure only a warning?

Signature verification is currently an experimental and incomplete feature included in rustup, as it's still missing crucial features like key rotation. Until the feature is complete and ready for use, its outcomes are only displayed as warnings without a way to turn them into errors.

This is done to avoid potentially breaking installations of rustup. Signature verification will error out on failure only once the design and implementation of the feature are finished.


Thanks again to all the contributors who made rustup 1.25.2 possible!

  • Daniel Silverstone (kinnison)
  • Pietro Albini (pietroalbini)

The Mozilla BlogHow to have the tech talk with kids, according to TikTok’s ‘Mom Friend’

Cathy Pedrayes is best known as TikTok’s “Mom Friend.”

Cathy Pedrayes earned a following as TikTok’s “Mom Friend” for her practical safety tips – from how to break a car window in an emergency to what not to post on social media. She’s a TV host and has been featured on Today Parents, The Miami Herald, BuzzFeed News, The Bump and Good Morning America. Her book, “The Mom Friend Guide to Everyday Safety and Security,” was published last year.

Growing up, safety talk in my family centered around physical safety: Look both ways when crossing the street. Don’t talk to strangers. Don’t get me wrong, they’re still useful tips today. But, as we spend more time on screens – a virtual world with no big red stop signs and full of mostly harmless strangers – the safety conversation could use some updates. 

I’m a mom who happens to spend a lot of time online, including on social media. I’ve learned about the risks, like the lack of privacy, the spread of misinformation and being vulnerable to public judgment. But I also see how the internet can educate, entertain and bring people together. Here’s my advice on how to approach the tech talk with kids:

‘Hey, I found this. What do you think?’

It’s going to be different with every kid. But if you come across something online that looks a little off, maybe a spam message on Instagram, or a TikTok that could have misleading information, ask them: “Hey, I found this. What do you think?” Together, you can think it through. Is the post real or fake? Should you respond to that message or not? Admit it when you’re not sure. Investigate together.

Sharing is caring, but not when it comes to personal information 

When we think about online safety, we tend to overlook data privacy. We’re regularly sharing our personal information on the internet, whether we’re entering our addresses on online forms or our names and ages when downloading apps. We can be thoughtful about what we disclose about our children online as parents or guardians. But as they venture into the online world on their own, it’s important to instill in our kids the importance of keeping their information safe. 

For example, when you’re posting a pic online, make it a game to spot things that you may not want to share with the world – like your address number in the background or an ID badge that you’re wearing. Got a random message from a person you don’t know? Unless it’s inappropriate, show it to your kid and talk about whether or not you should respond. Give them the basics of privacy so that they can feel empowered to make decisions about their data on their own. 

Keep it brief

I personally hated long lectures as a kid. The tech talk doesn’t have to be one long dialogue, but something you could have whenever it naturally comes up. Have that initial conversation. Then maybe you talk about it in a different way next week. Technology is constant in our lives. The trick is being intentional in sharing our experiences as a family. 

Learn from your kid

I spent a lot of time in chat rooms when I was a teen. And while my parents knew what chat rooms were, they didn’t have as much knowledge as I did because I was the one using them. Expect that kids likely know more about the online platforms they use than you do. Talk to them about their interests. When you come from a place of curiosity, not fear, there’s a better chance that your kid comes to you for guidance when they need it. 

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.


The post How to have the tech talk with kids, according to TikTok’s ‘Mom Friend’ appeared first on The Mozilla Blog.

Hacks.Mozilla.OrgInterop 2022: Outcomes

Last March we announced the Interop 2022 project, a collaboration between Apple, Bocoup, Google, Igalia, Microsoft, and Mozilla to improve the quality and consistency of their implementations of the web platform.

Now that it’s 2023 and we’re deep into preparations for the next iteration of Interop, it’s a good time to reflect on how the first year of Interop has gone.

Interop Wins

Happily, Interop 2022 appears to have been a big success. Every browser has made significant improvements to their test pass rates in the Interop focus areas, and now all browsers are scoring over 90%. A particular success can be seen in the Viewport Units focus area, which went from 0% pass rate in all browsers to 100% in all browsers in less than a year. This almost never happens with web platform features!

Looking at the release version of browsers — reflecting what actually ships to users — Firefox started the year with a score of around 60% in Firefox 95 and reached 90% in Firefox 108, which was released in December. This reflects a great deal of effort put into Gecko, both in adding new features and improving the quality of implementation of existing features like CSS containment, which jumped from 85% pass rate to 98% with the improvements that were part of Firefox 103.

One of the big new web-platform features in 2022 was Cascade Layers, which first shipped as part of Firefox 97 in February. This was swiftly followed by implementations shipping in Chrome 99 and Safari 15.4, again showing the power of Interop to rapidly drive a web platform feature from initial implementation to something production-quality and available across browsers.

Another big win that’s worth highlighting was the progress of all browsers to >95% on the “Web Compatibility” focus area. This focus area consisted of a small set of tests from already implemented features where browser differences were known to cause problems for users (e.g., through bug reports). In an environment where it’s easy to fixate on the new, it’s very pleasing to see everyone come together to clean up these longstanding problems that broke sites in the wild.

Other new features that have shipped, or become interoperable, as part of Interop 2022 have been written about in retrospectives by Apple and Google. There’s a lot of work there to be proud of, and I’d suggest you check out their posts.


Along with the “focus areas” based on counts of passing tests, Interop 2022 had three “investigations”, covering areas where there’s less clarity on what’s required to make the web interoperable, and progress can’t be characterized by a test pass rate.

The Viewport investigation resulted in multiple spec bugs being filed, as well as agreement with the CSSWG to start work on a Viewport Specification. We know that viewport-related differences are a common source of pain, particularly on mobile browsers; so this is very promising for future improvements in this area.

The Mouse and Pointer Events investigation collated a large number of browser differences in the handling of input events. A subset of these issues got tests and formed the basis for a proposed Interop 2023 focus area. There is clearly still more to be done to fix other input-related differences between implementations.

The Editing investigation tackled one of the most historically tricky areas of the platform, where it has long been assumed that complex tasks require the use of libraries that smooth over differences with bespoke handling of each browser engine. One thing that became apparent from this investigation is that IME input (used to input characters that can’t be directly typed on the keyboard) has behavioral differences for which we lack the infrastructure to write automated cross-browser tests. This Interop investigation looks set to catalyze future work in this area.

Next Steps

All the signs are that Interop 2022 was helpful in aligning implementations of the web and ensuring that users are able to retain a free choice of browser without running into compatibility problems. We plan to build on that success with the forthcoming launch of Interop 2023, which we hope will further push the state of the art for web developers and help web browser developers focus on the most important issues to ensure the future of a healthy open web.

The post Interop 2022: Outcomes appeared first on Mozilla Hacks - the Web developer blog.

The Mozilla BlogPocket kicks off 2023 with new and expanded publisher partnerships

New curated collections focused on “People of Deutschland” book

The Pocket editorial and product teams have been busy over the past couple of months to continue delivering the great experience Pocket users have come to expect. Here’s a breakdown of what’s new at Pocket, starting with our newest and returning publisher partnerships, followed by the latest updates to Pocket Android.  

Pocket Partners
Since 2020, Pocket has expanded its commitment to high-quality content discovery with the development of Pocket Collections: human-curated reading lists that help users connect with the best of the web. In late 2022, Pocket announced a slew of new Collections partners that include PRX, Thomson Reuters Foundation, WIRED, The Atlantic and the News Literacy Project, as well as an expansion of existing partnerships with Slate and Sh*t You Should Care About.

“By expanding Pocket Collections’ roster of curators to include these fantastic publishers, we hope to connect our users with truly great stories and to help our partners find eager new audiences for their work,” said Carolyn O’Hara, Pocket’s senior director of content discovery.

Collections enable Pocket users to go deep with exceptional stories on topics they are interested in and to expand their interests into new areas of discovery. Pocket Collections are curated both by in-house editors, whose work is informed by the millions of saves Pocket users make each month across the web, as well as by notable subject matter experts on a large and diverse range of topics.

The new partnerships include: 

PRXThree collections for their podcast “The Science of Happiness,” from UC Berkeley’s Greater Good Science Center.

WIRED – Three collections around featured articles by Adrienne So, Andy Greenberg and Reece Rogers.

Context – Powered by the Thomson Reuters Foundation, this media platform provides news and analysis that contextualizes how critical issues and events affect ordinary people, society and the environment. Pocket will showcase three collections, beginning with “the future of food.”

News Literacy Project – Three collections on topics around news literacy education

“As a regular Pocket user, I know there are times when I want to dig into an article but can’t at that moment. So it’s a great resource for catching up on the important information you see online – and also learning about today’s most noteworthy topics through carefully curated collections made up of multiple sources,” said Jake Lloyd, News Literacy Project’s social media manager. “This collection on conspiratorial thinking will be a valuable tool for Pocket users to consult, and to understand the dangers of such falsehoods and how to push back against them.”

The Atlantic – A collection by staff writer and newsletter author Derek Thompson.

Sh*t You Should Care About – Four collections beginning with a collection on how pop culture helps explain the world.

As a go-to place to discover, save and spend time with the best stories on the web, Pocket has been a natural partner for publishers looking to amplify high-quality journalism to new audiences. At the science magazine Nautilus, for example, Pocket often accounts for more than 20% of the monthly web traffic. Pocket’s syndication program can also drive new exposure to overlooked stories that remain highly relevant and of interest to users. “With Pocket, syndicated pieces that normally get 100,000 page views per month increased to over 700,000 per month,” said Nautilus publisher John Steele.

Pocket Curated Collections
Pocket Collections with publishers offer readers a backstage pass to explore the stories behind articles and podcast episodes straight from reporters’ notes, allowing journalists to further dive into their expertise. Pocket Collections are also curated by  influential subject matter experts — including journalists and authors like Pocket’s Top Saved author for 2022 The Atlantic’s Arthur C. Brooks, Claire Saffitz, Safiya Umoja Noble, Adam Grant and Simran Jeet Singh, just to name a few – on topics of the curators’ choosing. These “mixtapes” of fantastic articles, videos, recipes, playlists and more come complete with personal annotations from each curator about why each item is worthy of a Pocket user’s time and attention.

Writer, editor, strategist and public speaker Rachel Hislop recently curated her own Collection on Pocket and reflected on the process: “As an editor, Pocket served as a tool to keep some of my favorite editorial pieces in one place. Many of the picks in my final curation were already saved to my Pocket. So much of my work is about elevating the voices of others or implementing strategies to help tell the most impactful stories, so having a place to curate examples of that is important. I often send links to my friends and group chats and when I led an editorial team, I implemented an ‘outside reading’ Slack channel, where we would share aspirational or interesting reads from around the web; curating a Pocket collection was simply an elevated version of that practice.”

This February, Pocket will launch a partnership in Germany with co-collaborators Martina Rink and Simon Usifo in time for the launch of their book, People of Deutschland, out Feb. 4, 2023. The book features 45 stories from multicultural artists, actors, creatives, politicians, top managers and celebrities sharing their personal experiences on the reality of life and on achieving success as German People of Color.

Rink and Usifo, in addition to TV show host Milka Loff Fernandes, podcaster Frank Joung, politician Mirrianne Mahn, business leader Lisanne Dorn, journalist and author Düzen Tekkal and business innovation expert Deepa Gautam-Nigge will each create German Pocket Collections expanding on the stories of the book and of their lives. The Collections are about struggles and strides, about what empowers the authors to get up and fight, and what brings them joy at the end of a long day. With putting a spotlight on their stories and creating broad visibility around this collaboration, Pocket supports the vision of the book: to change structures in Germany for the better and inspire future generations.

New and Expanded Collections
Pocket has also expanded its partnership with Slate to include four new collections for their podcast shows “Slow Burn,” “Amicus,” “ICYMI” and “How To!”

“We’re excited to expand our partnership with Pocket to feature more of our podcasts and pieces. These collections not only give our audience more of the content they come to Slate for daily, but a deeper and more direct connection with the hosts and writers that produce it,” said Bill Carey, senior director of strategy for Slate.

Slate’s podcast, Slow Burn, which was the first recipient of the Apple Podcasts “Show of the Year” award, returned for its seventh season with a deep dive into Roe v. Wade.
Host and Slate executive editor Susan Matthews explores the path to Roe — a time when more Republicans than Democrats supported abortion rights. Listeners will hear the forgotten story of the first woman to be convicted of manslaughter for having an abortion, the unlikely Catholic power couple who helped ignite the pro-life movement, and a rookie Supreme Court justice who got assigned the opinion of a lifetime.

Slate’s ICYMI podcast explains how a 500,000-word Harry Potter fan fiction took over the internet (and was listed on Pocket’s Best of 2022 list!)
One particular work of fan fiction has exploded over the last several years. It’s called All the Young Dudes, and it’s a 526,969-word fic that currently has a whopping 7.5 million hits on the fanfiction site Archive of Our Own. All the Young Dudes is set in the era when Harry’s parents attended Hogwarts and features both familiar faces and a budding romance between two of the series’ most beloved figures, Sirius Black and Remus Lupin.

Additional highlights for Pocket publisher Curated Collections: 

PRX explores well-being practices with its “Happiness Breaks” series from The Science of Happiness podcast 
Join Dr. Dacher Keltner, host of the podcast The Science of Happiness (co-produced by UC Berkeley’s Greater Good Science Center), for a deep dive into the new series Happiness Breaks. Explore the research behind science-backed practices for well-being like the art of connecting with the natural world and using your imagination to visualize your best possible self.

News Literacy Project “goes down the rabbit hole” to explain why people fall for conspiracy theories.
This collection of articles and podcasts, which was featured on Pocket’s Best of 2022 list, helps you understand the appeal of conspiratorial thinking and how recent events have been influenced by conspiratorial beliefs. Plus, you’ll find resources to help you talk to anyone in your life who’s fallen down the rabbit hole and needs a hand climbing out—whether they realize it or not.

WIRED’s Adrienne So’s search for the perfect emergency prep plan for her family led her to a path toward something much bigger.
Readers get to follow the author as she trains for and rehearses a bold and strenuous challenge in preparation for disaster: specifically competing in the Disaster Relief Trials, a 30-mile bike race meant to simulate the chaotic post-“Big One” conditions in Portland, Oregon.

The Atlantic’s Derek Thompson created a collection based on his newsletter, Work in Progress. 
An age of extraordinary communications technology has coincided with an era of declining physical-world progress. Since the beginning of 2022, Derek has been exploring what an abundance agenda might look like for the U.S. And this collection makes the case for what he refers to as a new philosophy of the future.

Pocket Android Updates
As previously announced, the Pocket Android app has new updates to make it easier to discover your saved and new stories.

Google recently named Pocket as one of the best apps of 2022, and it’s only getting better. We spent a lot of time with our users last year to see how we can improve the experience on the Pocket Android app. This month, we’re rolling out updates based on user feedback so you can easily find the stories and topics you care about.

Read on to learn more about what’s new in the Pocket Android app.

Save and discover the best articles, stories and videos on the web

Get Pocket

The post Pocket kicks off 2023 with new and expanded publisher partnerships   appeared first on The Mozilla Blog.

Wladimir PalantPassword strength explained

The conclusion of my blog posts on the LastPass breach and on Bitwarden’s design flaws is invariably: a strong master password is important. This is especially the case if you are a target somebody would throw considerable resources at. But everyone else might still get targeted due to flaws like password managers failing to keep everyone on current security settings.

There is lots of confusion about what constitutes a strong password however. How strong is my current password? Also, how strong is strong enough? These questions don’t have easy answers. I’ll try my best to explain however.

If you are only here for recommendations on finding a good password, feel free to skip ahead to the Choosing a truly strong password section.

Where strong passwords are crucial

First of all, password strength isn’t always important. If your password is stolen as clear text via a phishing attack or a compromised web server, a strong password won’t help you at all.

In order to reduce the damage from such attacks, it’s way more important that you do not reuse passwords – each web service should have its own unique password. If your login credentials for one web service get into the wrong hands, these shouldn’t be usable to compromise all your other accounts e.g. by means of credential stuffing. And since you cannot possibly keep hundreds of unique passwords in your head, using a password manager (which can be the one built into your browser) is essential.

But this password manager becomes a single point of failure. Especially if you upload the password manager data to the web, be it to sync it between multiple devices or simply as a backup, there is always a chance that this data is stolen.

Of course, each password manager vendor will tell you that all the data is safely encrypted. And that you are the only one who can possibly decrypt it. Sometimes this is true. Often enough this is a lie however. And the truth is rather: nobody can decrypt your data as long as they are unable to guess your master password.

So that one password needs to be very hard to guess. A strong password.

Oh, and don’t forget to enable Multi-factor authentication (MFA) where possible regardless.

How password guessing works

When someone has your encrypted data, guessing the password it is encrypted with is a fairly straightforward process.

A flow chart starting with box 1 “Produce a password guess.” An arrow leads to a decision element 2 “Does this password work?” An arrow titled “No” leads to the original box 1. An arrow titled “Yes” leads to box 3 “Decrypt passwords.”

Ideally, your password manager made step 2 in the diagram above very slow. The recommendation for encryption is allowing at most 1,000 guesses per second on common hardware. This renders guessing passwords slow and expensive. Few password managers actually match this requirement however.

But password guesses will not be generated randomly. Passwords known to be commonly chosen like “Password1” or “Qwerty123” will be tested among the first ones. No amount of slowing down the guessing will prevent decryption of data if such an easy to guess password is used.

So the goal of choosing a strong password isn’t choosing a password including as many character classes as possible. It isn’t making the password look complex either. No, making it very long also won’t necessarily help. What matters is that this particular password comes up as far down as possible in the list of guesses.

The mathematics of guessing passwords

A starting point for password guessing are always passwords known from previous data leaks. For example, security professionals often refer to rockyou.txt: a list with 14 million passwords leaked in 2009 in the RockYou breach.

If your password is somewhere on this list, even at 1,000 guesses per second it will take at most 14,000 seconds (less than 4 hours) to find your password. This isn’t exactly a long time, and that’s already assuming that your password manager vendor has done their homework. As past experience shows, this isn’t an assumption to be relied on.

Since we are talking about computers here, the “proper” way to express large numbers is via powers of two. So we say: a password on the RockYou list has less than 24 bits of entropy, meaning that it will definitely be found after 2²⁴ (16,777,216) guesses. Each bit of entropy added to the password results in twice the guessing time.
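The arithmetic above can be checked directly; here is a quick sketch of the relationship between list size, entropy in bits, and worst-case guessing time:

```python
import math

# A password known to be one of N candidates has at most log2(N) bits of
# entropy: it falls after at most N guesses.
rockyou_size = 14_000_000  # passwords in the leaked RockYou list

bits = math.log2(rockyou_size)   # ~23.7, i.e. "less than 24 bits"
seconds = rockyou_size / 1_000   # at the recommended 1,000 guesses per second
hours = seconds / 3600

print(f"{bits:.1f} bits, at most {hours:.1f} hours")  # 23.7 bits, at most 3.9 hours
```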

But obviously the RockYou passwords are too primitive. Many of them wouldn’t even be accepted by a modern password manager. What about using a phrase from a song? Shouldn’t it be hard to guess because of its length already?

Somebody calculated (and likely overestimated) the number of available song phrases as 15 billion, so we are talking about at most 34 bits of entropy. This appears to raise the password guessing time to half a year.

Except: the song phrase you are going to choose won’t actually be at the bottom of any list. That’s already because you don’t know all the 30 million songs out there. You only know the reasonably popular ones. In the end it’s only a few thousand songs you might reasonably choose, and your date of birth might help narrow down the selection. Each song has merely a few dozen phrases that you might pick. You are lucky if you get to 20 bits of entropy this way.

Estimating the complexity of a given password

Now it’s hard to tell how quickly real password crackers will narrow down on a particular password. One can look at all the patterns however that went into a particular password and estimate how many bits these contribute to the result. Consider this XKCD comic:

An XKCD comic comparing the complexity of the passwords “Tr0ub4dor&3” and “correct horse battery staple”<figcaption> Source: XKCD 936 </figcaption>

An uncommon base word chosen from a dictionary with approximately 50,000 words contributes 16 bits. The capitalization at the beginning of the word on the other hand contributes only one bit because there are only two options: capitalizing or not capitalizing. There are common substitutions and some junk added at the end contributing a few more bits. But the end result is a rather unimpressive 28 bits, maybe a few more because the password creation scheme has to be guessed as well. So this password looks complex, yet it isn’t actually strong.

The (unmaintained) zxcvbn library tries to automate this process. You can try it out on a webpage; it runs entirely in the browser and doesn’t upload your password anywhere. The guesses_log10 value in the result can be converted to bits: divide by 3 and multiply by 10.

For Tr0ub4dor&3 it shows guesses_log10 as 11. Calculating 11 ÷ 3 × 10 gives us approximately 36 bits.
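The divide-by-3, multiply-by-10 rule is an approximation of the exact conversion factor log₂(10) ≈ 3.32; a small sketch:

```python
import math

def guesses_log10_to_bits(guesses_log10: float) -> float:
    # 10^g guesses correspond to log2(10^g) = g * log2(10) bits
    return guesses_log10 * math.log2(10)

exact = guesses_log10_to_bits(11)   # zxcvbn reports guesses_log10 = 11 here
approx = 11 / 3 * 10                # the rule of thumb from above

print(f"exact {exact:.1f}, approx {approx:.1f}")  # exact 36.5, approx 36.7
```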

Note that zxcvbn is likely to overestimate password complexity, like it happened here. While this library knows some common passwords, it knows too few. And while it recognizes some English words, it won’t recognize some of the common word modifications. You cannot count on real password crackers being similarly unsophisticated.

How strong are real passwords?

So far we’ve only seen password creation approaches that max out at approximately 35 bits of entropy. My guess is that this is in fact the limit for almost any human-chosen password. Unfortunately, at this point it is only my guess. There isn’t a whole lot of information to either support or disprove it.

For example, Microsoft published a large-scale password study in 2007 that arrives at an average (not maximum) password strength of 40 bits. However, this study is methodically flawed and wildly overestimates password strength. In 2007 neither XKCD comic 936 nor zxcvbn existed, so the researchers calculated password strength by looking at the character classes used. Going by their method, “Password1!” is a perfect password, a whopping 63 bits strong. The zxcvbn estimate for the same password is merely 14 bits.

Another data point is the password strength indicator used for example on LastPass and Bitwarden registration pages. How strong are the passwords at the maximum strength?

Screenshot of a page titled “Create account.” The entered master password is “abcd efgh 1!” and the strength indicator below it is full.

Turns out, both these password managers use zxcvbn on their registration pages. And both will display a full strength bar for the maximum zxcvbn score: 4 out of 4. Which is assigned to any password that zxcvbn considers stronger than 33 bits.

Finally, there is another factor to consider: we aren’t very good at remembering complex passwords. A study from 2014 concluded that humans are capable of remembering passwords with 56 bits of entropy via a method the researchers called “spaced repetition.” Even using their method, half of the participants needed more than 35 login attempts in order to learn this password.

Given this, it’s reasonable to assume that in reality most people choose considerably weaker passwords: passwords that are still shown as “strong” by their password manager’s registration page, and that they can remember without a week of exercises.

Choosing a truly strong password

As I mentioned already, we are terrible at choosing strong passwords. The only realistic way to get a strong password is having it generated randomly.

But we are also very bad at remembering some gibberish mix of letters and digits. Which brings us to passphrases: sequences of multiple random words, much easier to remember at the same strength.

A typical way to generate such a passphrase would be diceware. You could use the EFF word list for five dice for example. Either use real dice or a website that will roll some fake dice for you.

Let’s say the result is ⚄⚀⚂⚅⚀. You look up 51361 in the dictionary and get “renovate.” This is the first word of your passphrase. Repeat the process to get the necessary number of words.
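The same process can be automated; this is a minimal sketch, where the one-entry wordlist is a hypothetical stand-in for the real EFF five-dice list:

```python
import secrets

# Tiny stand-in for the EFF five-dice list, which maps all 7,776 possible
# rolls (11111-66666) to words. Only one entry shown here for illustration.
wordlist = {"51361": "renovate"}

def roll_five_dice() -> str:
    # secrets (not random) is the right module for security-sensitive draws
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(5))

def passphrase(n_words: int) -> str:
    # With the full 7,776-entry list, each word adds log2(7776) ≈ 12.9 bits
    return " ".join(secrets.choice(list(wordlist.values())) for _ in range(n_words))

print(passphrase(4))  # with the one-entry stand-in: renovate renovate renovate renovate
```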

Update (2023-01-31): If you prefer more convenience, the Bitwarden password generator will do all the work for you while using the same EFF word list (type has to be set to “passphrase”).

How many words do you need? As a “regular nobody,” you can probably feel confident if guessing your password takes a century on common hardware. While not impossible, decrypting your passwords will simply cost too much even on future hardware and won’t be worth it. Even if your password manager doesn’t protect you well and allows 1,000,000 guesses per second, a passphrase consisting of four words (51 bits of entropy) should be sufficient.

Maybe you are a valuable target however. If you hold the keys to lots of money or some valuable secrets, someone might decide to use more hardware for you specifically. You probably want to use at least five words then (64 bits of entropy). Even at a much higher rate of 1,000,000,000 guesses per second, guessing your password will take 900 years.

Finally, you may be someone of interest to a state-level actor. If you are an important politician, an opposition figure or a dissident of some kind, some unfriendly country might decide to invest lots of money in order to gain access to your data. A six-word passphrase (77 bits of entropy) should be out of reach even for those actors for the foreseeable future.
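The entropy and guessing-time figures in the last three paragraphs follow directly from the size of the word list; a sketch of the calculation:

```python
import math

WORDS_IN_LIST = 7776               # EFF five-dice word list
SECONDS_PER_YEAR = 365 * 24 * 3600

def entropy_bits(n_words: int) -> float:
    # Each word is an independent uniform draw from the list
    return n_words * math.log2(WORDS_IN_LIST)

def crack_years(n_words: int, guesses_per_second: float) -> float:
    # Worst case: every combination has to be tried
    return WORDS_IN_LIST ** n_words / guesses_per_second / SECONDS_PER_YEAR

print(f"4 words: {entropy_bits(4):.1f} bits, "
      f"{crack_years(4, 1e6):.0f} years at 1M guesses/s")   # ~116 years, the "century"
print(f"5 words: {entropy_bits(5):.1f} bits, "
      f"{crack_years(5, 1e9):.0f} years at 1B guesses/s")   # ~900 years
```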

Firefox NightlyA Variety of Improvements At January’s End – These Weeks in Firefox: Issue 131


      • Before:

Screenshot of the about:logins page before new UI changes, where login updates were simply displayed as text at the bottom of the page.

      • After:

Screenshot of the about:logins page after new UI changes, where a new timeline visual is now visible at the bottom of the page to indicate when a login was last updated.

  • Picture-in-Picture updates:
    • kpatenio updated the Dailymotion wrapper, so captions should appear again on the PiP window
    • kpatenio resolved issues where PiP touch events changed playback while toggling PiP
    • Niklas fixed the Netflix wrapper when seeking forward or backward and scrubbing
    • Niklas increased the seek bar slider clickable area, making it easier to select the scrubber with the mouse
  • The DevTools team have updated our main highlighters to use less aggressive styling when prefers-reduced-motion is enabled (bug)

A screenshot of the DevTools' new highlighter appearing on the Wikipedia landing page when a user enables the setting prefers-reduced-motion.

  • There is a new context menu option for opening source view in Firefox Profiler. Thanks to our contributor Krishna Ravishankar!

Screenshot of a new Firefox Profiler context menu option, particularly for viewing a source file called Interpreter.cpp.

Friends of the Firefox team


  • [mconley] Introducing Jonathan Epstein (jepstein) who is coming to us from the Rally team as a new Engineering Manager! Welcome!

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug
  • CanadaHonk [:CanadaHonk]
  • Gregory Pappas [:gregp]
  • Jonas Jenwald [:Snuffleupagus]
  • kernp25
  • Oriol Brufau [:Oriol]
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • Oriol Brufau contributed a fix to the “tabs.move” API method when used to move multiple tabs into a different browser window – Bug 1809364
  • Gregory Pappas contributed a new “matchDiacritics” option to the (Firefox-specific) find API – Bug 1680606
  • All manifest_version 3 extensions that want to use the “webRequest.filterResponseData” API method will have to request the new “webRequestFilterResponse” permission (in addition to the “webRequest” and “webRequestBlocking” permissions that were already needed to get access to this API in manifest_version 2 extensions) – Bug 1809235
  • declarativeNetRequest API:
    • Constants representing the values, used internally to enforce limits to the DNR rules that each extension is allowed to define and enable, are now exposed as declarativeNetRequest API namespace properties – Bug 1809721
    • Update JSONSchema and tests to explicitly cover the expected default value set for the DNR rule condition property “isUrlFilterCaseSensitive”, which should be false per consensus reached in the WECG (WebExtensions Community Group) – Bug 1811498
  • As part of tweaks that aim to reduce the number of changes needed to port a Chrome manifest_version 3 extension to Firefox, in Firefox >= 110 the optional “extension_ids” property of the manifest_version 3 “web_accessible_resources” manifest property can be set to an empty array – Bug 1809431
WebExtensions Framework
  • Extensions button and panel:
    • Cleanups for the remaining bits of the legacy implementation (which also covered the removal of the pref) – Bug 1799009, Bug 1801540
    • Introduction of a new “Origin Controls” string to be shown to the users in the extensions panel when an extension has access to the currently active tab but limited to the current visit (which will only be valid while the tab is not navigated) – Bug 1805523

Developer Tools

  • Thanks to Tom for grouping CSP warnings in the console (bug)

A screenshot showcasing more descriptive Content Security Policy, alias CSP, warnings on the browser console.

  • Thanks to rpl for fixing dynamic updates of the extension storage in the Storage panel (bug)
  • Alex fixed a recent regression for the Browser Toolbox in parent process mode, where several panels would start breaking when doing a navigation in the Browser (bug)
  • Alex also fixed several issues in the Debugger for sources created using `new Function` (eg bug)
  • Nicolas fixed several bugs for the autocomplete in the Browser Toolbox / Console, which could happen when changing context in the context selector (bug and bug)
WebDriver BiDi
  • Thanks to :CanadaHonk for fixing bugs or adding missing features in our CDP implementation (bug, bug, bug, bug, bug)
  • Henrik updated events of the browsingContext module (eg `domContentLoaded`, `load`, …) to provide a timestamp, which can be useful to collect page performance data (bug)
  • Sasha updated our vendored Puppeteer to version 18.0.0, which now includes a shared test expectation file, which means less maintenance for us and a better test coverage for Firefox on puppeteer side (bug and bug).
  • We implemented the network.responseCompleted event (bug) and updated our example web client for WebDriver BiDi to provide a simplified version of a network monitor.

ESMification status

  • ESMified status:
    • browser: 46.1%
      • Dropped a little bit because we removed a large number of sys.mjs files we didn’t need any more.
    • toolkit: 38.3%
      • Bilal has been working on migrating various actors.
    • Total: 46.54% (up from 46.0%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)


Performance Tools (aka Firefox Profiler)

  • Support zip files on windows. Thanks to our contributor Krishna Ravishankar!
  • Scroll item horizontally in the virtuallist, taking into account the fixed size.
  • Remove the “optimizations” field from the frame table. This should reduce our profile data size.
  • Allow pinning source code view files to specific git tags.
  • Enable screenshots on talos profiling jobs on treeherder.
  • Remove some Timestamp::Now calls when the profiler is not running.
  • Fix Firefox version inside the profile data.

Search and Navigation

Storybook / Reusable components

A screenshot of the about:addons page displaying details about an add-on called "Tree Style Tab" and showcasing moz-toggle components in use.

  • Bug 1809457 –  Our common stylesheet no longer conflicts with Storybook styles
  • Bug 1801927 – The “Learn more” links in the about:preferences#general tab have been updated to use `moz-support-link`
  • Bug 1803155 (Heading to autoland) – ./mach storybook install is going away in favor of automatically installing dependencies when ./mach storybook is run

The Mozilla Blog#AskFirefox host Chenae Moore on internet pranks and losing sleep over recipe videos

Chenae Moore smiles for a photo.<figcaption class="wp-element-caption">Chenae Moore is the host of our YouTube series, #AskFirefox.</figcaption>

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we are also quick to point out that the internet is pretty darn magical. The internet opens up doors and opportunities, allows for people to connect with others, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, what we should save in Pocket to read later, and what sites and forums shaped them.

This month we chat with Chenae Moore. She’s the host of our YouTube series, #AskFirefox, where we answer your pressing questions to help you understand the web and live your best online life.

What is your favorite corner of the internet?

My favorite corner of the internet is the TikTok “prank” section. Not the ones where people violate your personal space or give people near heart attacks, but the ones where people do silly things at home to get their spouse’s natural reactions. Or when people walk up to strangers and act like they know them and the stranger feels bad for having forgotten how they met so they go along with it. I can’t get enough. It literally brings me to tears.

What is an internet deep dive that you can’t wait to jump back into?

The most ridiculously satisfying deep dive is watching ingrown toenail removal videos. Dr. Toenail and The Meticulous Manicurist are two of my favorites. I can sit and watch for hours because the end results are just that rewarding. Makes me want to grab some tools and go to work on my own toes. Unfortunately, I don’t have ingrown toenails. Dangit.

What is the one tab you always regret closing?

I always regret closing recipe videos. I’ll find an incredible recipe and put my phone down, lose the recipe and lose sleep because I won’t be able to stop thinking about it. Now I just DM myself tons of recipes. My DMs are basically a cookbook, and I’m practically a professional chef now.

What can you not stop talking about on the internet right now? 

Tech! I enjoy tech so much I went and got a fiancé in tech and a tech show! I host a show called #AskFirefox where I answer questions and provide resources on how to navigate the wonderful world of tech. It’s pretty groovy if I do say so myself!

When I want to catch up on the reality tea, it’s all about “The Real Housewives” for me. This previous season of “The Real Housewives of Beverly Hills” was DRAMA PATCHED! I follow tons of Bravo accounts, so I’m even hip to some of the drama for shows I’ve never even seen.

What was the first online community you engaged with?

Probably MySpace way back in the day. Oh what a time to be alive.

What articles and videos are in your Pocket waiting to be read/watched right now?

Definitely tons of recipes, especially brunch recipes. My family and I do a brunch every Christmas so I start finding recipes months in advance. Games and gift ideas for my nieces so I can spoil them. Workouts for abs and the booty.

If you could create your own corner of the internet what would it look like?

Mary-Kate and Ashley rare sightings. As their No. 1 fan, it’s pretty difficult to keep up with what they have going on since they’re such private people. Nineties sitcom TV clips so we can remember just how incredible family TV once was and also get a life lesson right along with it (a two for one!), ‘90s R&B throwbacks and 2000s boy band hits so we can learn the dance moves that took the world by storm before TikTok and be blessed by the incredible fashions at the same time. Another winner winner, chicken dinner!

Chenae Moore is a professional screen actor and a Michigan native. With a focus on comedy and branded content, her recent commercial work includes brands such as Tillamook, Fujifilm, and Mattel, in addition to film and voiceover roles.

To keep up with Chenae, you can catch new episodes of #AskFirefox on YouTube every Thursday. Also look out for her as one of REACT Media’s newest cast members: @REACT on YouTube and TikTok, and @REACTmedia on Instagram. Or simply give her a follow on Instagram: @Hey_Nae.

The post #AskFirefox host Chenae Moore on internet pranks and losing sleep over recipe videos appeared first on The Mozilla Blog.

The Talospace ProjectFirefox 109 on POWER

Firefox 109 is out with new support for Manifest V3 extensions, but without the passive-aggressive deceitful crap Google was pushing (yet another reason not to use Chrome). There are also modest HTML, CSS and JS improvements.

As before, linking still requires patching for bug 1775202 using this updated small change, or the browser won't link on 64-bit Power ISA (alternatively put --disable-webrtc in your .mozconfig if you don't need WebRTC). Otherwise the browser builds and runs fine with the LTO-PGO patch for Firefox 108 and the .mozconfigs from Firefox 105.

Mozilla Localization (L10N)L10n Report: January 2023 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Punjabi from Pakistan (pa-pk) was recently added to Pontoon.

New content and projects

What’s new or coming up in Firefox desktop

Firefox 111, shipping to release users on March 14, is going to include two new locales: Friulian (fur) and Sardinian (sc). Congratulations to the team for this achievement, it’s been a long time since we added new locales to release (Firefox 91).

A new locale is also available in Nightly, Saraiki (skr). Unfortunately, it’s currently blocked by missing information in the Unicode (CLDR) database that prevents the layout from being correctly displayed with right-to-left direction. If you want to help them, feel free to reach out to the locale manager.

In terms of content, one major feature coming is Cookie Banner Reduction, which will allow users to automatically reject all cookies in cookie banner requests. Several strings already landed over the last weeks, but expect some changes and instructions on how to test the feature (and different variations of messages used for testing).

What’s new or coming up in mobile

Just as for Firefox desktop, the v111 release ships on March 14 for all mobile projects, and also contains strings for the new Cookie Banner Reduction feature (see section above). Stay tuned for more information around that.

What’s new or coming up in web projects

The site is going to go through some transformation this year. It involves restructuring, such as removing pages with duplicate information, consolidating other pages, redesigning the site, and rewriting some copy. That said, the effort involves several cross-functional teams to accomplish. The impact of these changes on localization is estimated to land in the second half of the year.

If your locales have some catching up to do, please continue working. Your work won’t go to waste, as it will be stored in the translation memory in Pontoon. Speaking of which, congratulations to the Saraiki (skr) team for completing the project. The site was recently launched on production.


Strings related to tools for reviewers and admins have been removed from Pontoon. The features used to be available to vetted contributors plus Mozilla staff and contractors in the production environment, but that is no longer the case. Since the localized strings can’t be reviewed in context by localizers, the team has decided to keep these strings from landing in Pontoon. Currently the feature is partially localized if your locale has done some or all of the work in the past.

Firefox Accounts

Behind the scenes, the Firefox Accounts team are in the process of refactoring a number of pages to use Fluent. This means we will see a number of strings reusing translations from older file formats with updated Fluent syntax. These strings are in the process of landing, but won’t be exposed until the rework is done, so it may be some time before strings can be reviewed in production.

Congratulations to Baurzhan of the Kazakh (kk) team for recently raising the completion rate of his locale from 30% to 100%. The Kazakh locale is already activated on staging and will soon be released to production.

What’s new or coming up in SUMO

  • What did SUMO accomplish in 2022? Check out our 2022 summary in this blog post.
  • Please join our discussion on how we would like to present ourselves in Mozilla.Social!
  • SUMO just redesigned our Contribute Page recently. Check out the news and the new page if you haven’t already!
  • The Android mobile team (Firefox for Android and Firefox Focus for Android) have decided to move to Bugzilla. If you’re a mobile contributor, make sure to direct users to the right place for bug reports by referring them to this article.
  • Check out the SUMO Sprint for Firefox 109 to learn more about how you can help with this release.
  • Are you a KB or article localization contributor and experience issue with special characters when copying tags? Please chime in on the discussion thread or directly in the bug report (Thanks to Tim for filing that bug).
  • If you’re a Social Support or Mobile Store Support contributor, make sure to watch the contributor forum to get updates about queue stats every week. Kiki will post the update by the end of the week to make sure that you’re updated. Here’s the latest one from last week.

You can now learn more about Kitsune releases by following this Discourse topic.

What’s new or coming up in Pontoon

Changes to the Editor

Pontoon’s editor is undergoing improvements, thanks to some deeper data model changes. The “rich” editor is now able to work with messages with multiple selectors, with further improvements incoming as this work progresses.

As with all other aspects of Pontoon, please let us know if you have any comments on these changes as they are deployed.


We started evaluating the Pretranslation feature. Testing is currently limited to 2 locales, but we’ll start adding more once we reach a satisfactory level of quality and stability.

New contributions

Thanks to our army of awesome contributors for recent improvements to our codebase:

  • Willian made his first contributions to Pontoon, including upgrading our legacy jQuery library.
  • Tomás fixed a bug in the local setup, which was also his first contribution.
  • Vishal fixed several bugs in the Pretranslation feature, which he developed a while ago.


Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Image by Elio Qoshi

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

The Rust Programming Language Blog: Announcing Rust 1.67.0

The Rust team is happy to announce a new version of Rust, 1.67.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.67.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.67.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.67.0 stable

#[must_use] effective on async fn

async functions annotated with #[must_use] now apply that attribute to the output of the returned impl Future. The Future trait itself is already annotated with #[must_use], so all types implementing Future are automatically #[must_use], which meant that previously there was no way to indicate that the output of the Future is itself significant and should be used in some way.

With 1.67, the compiler will now warn if the output isn't used in some way.

#[must_use]
async fn bar() -> u32 { 0 }

async fn caller() {
    bar().await;
}

warning: unused output of future returned by `bar` that must be used
 --> src/
  |
5 |     bar().await;
  |     ^^^^^^^^^^^
  |
  = note: `#[warn(unused_must_use)]` on by default

std::sync::mpsc implementation updated

Rust's standard library has had a multi-producer, single-consumer channel since before 1.0, but in this release the implementation is switched out to be based on crossbeam-channel. This release contains no API changes, but the new implementation fixes a number of bugs and improves the performance and maintainability of the implementation.

Users should not notice any significant changes in behavior as of this release.
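Since the public API is untouched, existing channel code compiles and behaves the same. A minimal sketch of the API in question:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // Multiple producers, a single consumer: clone the Sender per thread.
    let (tx, rx) = mpsc::channel();
    let handles: Vec<_> = (0..3)
        .map(|i| {
            let tx = tx.clone();
            thread::spawn(move || tx.send(i).unwrap())
        })
        .collect();
    drop(tx); // drop the original Sender so the receiver iterator ends

    let mut received: Vec<i32> = rx.iter().collect();
    for handle in handles {
        handle.join().unwrap();
    }
    received.sort();
    assert_eq!(received, vec![0, 1, 2]);
}
```

Code like this runs on both the old and the new implementation; only the internals (and their performance characteristics) changed.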

Stabilized APIs

These APIs are now stable in const contexts:

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.67.0

Many people came together to create Rust 1.67.0. We couldn't have done it without all of you. Thanks!

Wladimir Palant: IPinside: Korea’s mandatory spyware

Note: This article is also available in Korean.

On our tour of South Korea’s so-called security applications we’ve already taken a look at TouchEn nxKey, an application meant to combat keyloggers by … checks notes … making keylogging easier. Today I want to shed some light on another application that many people in South Korea had to install on their computers: IPinside LWS Agent by Interezen.

The stated goal of the application is retrieving your “real” IP address to prevent online fraud. I found however that it collects way more data. And while it exposes this trove of data to any website asking politely, it doesn’t look like it is all too helpful for combating actual fraud.

How does it work?

Similarly to TouchEn nxKey, the IPinside LWS Agent application also communicates with websites via a local web server. When a banking website in South Korea wants to learn more about you, it will make a JSONP request to localhost:21300. If this request fails, the banking website will deny entry and ask that you install IPinside LWS Agent first. So in South Korea running this application isn’t optional.

On the other hand, if the application is present the website will receive various pieces of data in the wdata, ndata and udata fields. Quite a bit of data actually:

Screenshot of a browser window with the address open. The response is a jQuery callback with some data including wdata, ndata and udata fields and base64-encoded values.

This data is supposed to contain your IP address. But even from the size of it, it’s obvious that it cannot be only that. In fact, there is a whole lot more data being transmitted.

What data is it?


Let’s start with wdata which is the most interesting data structure here. When decrypted, you get a considerable amount of binary data:

A hex dump with some binary data but also obvious strings like QEMU Harddisk or Gigabit Network Connection

As you can see from the output, I am running IPinside in a virtual machine. It even says VirtualBox at the end of the output, even though this particular machine is no longer running on VirtualBox.

Another obvious thing are the two hard drives of my virtual machine, one with the serial number QM00001 and another with the serial number abcdef. That F0129A45 is the serial number of the primary hard drive volume. You can also see my two network cards, both listed as Intel(R) 82574L Gigabit Network Connection. There is my keyboard model (Standard PS/2 Keyboard) and keyboard layout (de-de).

And if you look closely, you’ll even notice the byte sequences c0 a8 7a 01 (192.168.122.1, my gateway’s IP address), c0 a8 7a 8c (192.168.122.140, the local IP address of the first network card) and c0 a8 7a 0a (192.168.122.10, the local IP address of the second network card).
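You can decode such byte sequences yourself. A small Rust sketch, feeding the raw octets from the dump into the standard Ipv4Addr type:

```rust
use std::net::Ipv4Addr;

fn main() {
    // Each byte of the sequence is one octet of the dotted-quad address.
    let gateway = Ipv4Addr::from([0xc0u8, 0xa8, 0x7a, 0x01]);
    assert_eq!(gateway.to_string(), "192.168.122.1");
    let first_card = Ipv4Addr::from([0xc0u8, 0xa8, 0x7a, 0x8c]);
    assert_eq!(first_card.to_string(), "192.168.122.140");
}
```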

But there is way more. For example, that 65 (letter e) right before the hard drive information is the result of calling the GetProductInfo() function and indicates that I’m running Windows 10 Home. And 74 (letter t) before it encodes my exact Windows version.

Information about running processes

One piece of the data is particularly interesting. Don’t you wonder where the firefox.exe comes from here? It indicates that the Mozilla Firefox process is running in the background. This information is transmitted despite the active application being Google Chrome.

See, websites give IPinside agent a number of parameters that determine the output produced. One such parameter is called winRemote. It’s mildly obfuscated, but after removing the obfuscation you get:


So banking websites are interested in whether you are running remote access tools. If a process is detected that matches one of these strings, the match is added to the wdata response.

And of course this functionality isn’t limited to searching for remote access tools. I replaced the winRemote parameter by AGULAAAAAAtmaXJlZm94LmV4ZQA= and got the information back whether Firefox is currently running. So this can be abused to look for any applications of interest.

And even that isn’t the end of it. IPinside agent will match substrings as well! So it can tell you whether a process with fire in its name is currently running.

That is enough for a website to start searching your process list without knowing what these processes could be. I created a page that would start with the .exe suffix and do a depth-first search. The issue here was mostly the IPinside response being so slow, each request taking half a second. I slightly optimized the performance by testing multiple guesses with one request and got a proof of concept page that would turn up a process name every 40-50 seconds:

Screenshot of a page saying: “Please wait, fetching your process list… Testing suffix oerver-svg.exe cortana.exe.” It also lists already found processes: i3gproc.exe asdsvc.exe wpmsvc.exe i3gmainsvc.exe

With sufficient time, this page could potentially enumerate every process running on the system.
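The search strategy can be sketched as follows. This is a toy reconstruction: the fixed process list and reduced alphabet are made up for the sketch, the real attack queries the agent’s substring matcher instead, and detecting a complete name is simplified here to an exact-match check.

```rust
// Stand-in for the agent's answer: does any process name end in `suffix`?
fn oracle_matches(processes: &[&str], suffix: &str) -> bool {
    processes.iter().any(|p| p.ends_with(suffix))
}

fn enumerate(processes: &[&str], suffix: String, found: &mut Vec<String>) {
    if !oracle_matches(processes, &suffix) {
        return; // dead branch, prune it
    }
    if processes.contains(&suffix.as_str()) {
        found.push(suffix.clone()); // recovered a complete process name
    }
    // Extend the suffix one character to the left and recurse (depth-first).
    for c in ('a'..='z').chain(std::iter::once('.')) {
        enumerate(processes, format!("{c}{suffix}"), found);
    }
}

fn main() {
    let processes = ["firefox.exe", "cmd.exe"];
    let mut found = Vec::new();
    enumerate(&processes, ".exe".to_string(), &mut found);
    found.sort();
    assert_eq!(found, vec!["cmd.exe", "firefox.exe"]);
}
```

Each recursion step costs one oracle query, which is why the half-second response time dominates the attack.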


The ndata part of the response is much simpler. It looks like this:


No, I didn’t mess up decoding the data. Yes, that mojibake is really in the response. The idea here was actually to use the reverse tilde symbol as a separator. But since my operating system isn’t Korean, the character encoding for non-Unicode applications (like IPinside LWS Agent) isn’t set to EUC-KR. The application doesn’t expect this and botches the conversion to UTF-8.

▚▚▚.▚▚▚.▚▚▚.▚▚▚ on the other hand was me censoring my public IP address. The application gets it by two different means. VD1NATIP appears to come from my home router.

HDATAIP on the other hand comes from a web server. Which web server? That’s determined by the host_info parameter that the website provides to the application. It is also obfuscated, the actual value is:

Only the first two parts appear to be used: the application makes a request to that server. One of the response headers is RESPONSE_IP. You guessed it: that’s your IP address as this web server sees it.

The application uses low-level WS2_32.DLL APIs here, probably as an attempt to prevent this traffic from being routed through some proxy server or VPN. After all, the goal is deanonymizing you.


Finally, there is udata where “u” stands for “unique.” There are several different output types here, this is type 13:

[52-54-00-A7-44-B5:1:0:Intel(R) 82574L Gigabit Network Connection];[52-54-00-4A-FD-6E:0:0:Intel(R) 82574L Gigabit Network Connection #2];$[QM00001:QEMU HARDDISK:];[abcdef:QEMU HARDDISK:];[::];[::];[::];

Once again a list of network cards and hard drives, but this time MAC addresses of the network cards are listed as well. Other output types are mostly the same data in different formats, except for type 30. This one contains a hexadecimal CPU identifier, representing 16 bytes generated by mashing together the results of 15 different CPUID calls.

How is this data protected?

So there is a whole lot of data which allows deanonymizing users, learning about the hardware and software they use, potentially facilitating further attacks by exposing which vulnerabilities are present on their systems. Surely this kind of data is well-protected, right? I mean: sure, every Korean online banking website has access to it. And Korean government websites. And probably more Interezen customers. But nobody else, right?

Well, the server under localhost:21300 doesn’t care who it responds to. Any website can request the data. But it still needs to know how to decode it.

When talking about wdata, there are three layers of protection being applied: obfuscation, compression and encryption. Yes, obfuscating data by XOR’ing it with a single random byte probably isn’t adding much protection. And compression doesn’t really count as protection either if people can easily find the well-known GPL-licensed source code that Interezen used without complying with the license terms. But there is encryption, and it is even using public-key cryptography!
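For illustration, here is what that obfuscation layer amounts to. This is a sketch: the key byte 0x5a and the payload are made up, the real key is a random byte chosen per response.

```rust
fn main() {
    // Single-byte XOR is its own inverse: applying the same key byte twice
    // restores the original data, so one known plaintext byte (or 256
    // guesses) is enough to strip this layer.
    let key = 0x5au8;
    let plain = b"wdata payload".to_vec();
    let obfuscated: Vec<u8> = plain.iter().map(|b| b ^ key).collect();
    assert_ne!(obfuscated, plain);
    let recovered: Vec<u8> = obfuscated.iter().map(|b| b ^ key).collect();
    assert_eq!(recovered, plain);
}
```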

So the application only contains the public RSA key, that’s not sufficient to decrypt the data. The private key is only known to Interezen. And any of their numerous customers. Let’s hope that all these customers sufficiently protect this private key and don’t leak it to some hackers.

Otherwise RSA encryption can be considered secure even with moderately sized keys. Except… we aren’t talking about a moderately sized key here. We aren’t even talking about a weak key. We are talking about a 320-bit key. That’s shorter than the very first key factored in the RSA Factoring Challenge. And that was in April 1991, more than three decades ago. Sane RSA libraries don’t even work with keys this short.

I downloaded msieve and let it run on my laptop CPU, occupying a single core of it:

$ ./msieve 108709796755756429540066787499269637…

sieving in progress (press Ctrl-C to pause)
86308 relations (21012 full + 65296 combined from 1300817 partial), need 85977
sieving complete, commencing postprocessing
linear algebra completed 80307 of 82231 dimensions (97.7%, ETA 0h 0m)
elapsed time 02:36:55

Yes, it took me 2 hours and 36 minutes to calculate the private key on very basic hardware. That’s how much protection this RSA encryption provides.
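To see why factoring the modulus defeats the encryption, here is the arithmetic in miniature, with tiny textbook primes standing in for the 320-bit modulus (all numbers here are made up for the sketch; only the principle matches):

```rust
// Square-and-multiply modular exponentiation.
fn modpow(mut b: u64, mut e: u64, n: u64) -> u64 {
    let mut r = 1;
    b %= n;
    while e > 0 {
        if e & 1 == 1 {
            r = r * b % n;
        }
        b = b * b % n;
        e >>= 1;
    }
    r
}

fn main() {
    let n: u64 = 3233; // public modulus, 53 * 61
    let e: u64 = 17; // public exponent
    // Step 1: factor n. Trial division suffices here; msieve does the
    // same job for a 320-bit modulus in a couple of CPU hours.
    let p = (2u64..).find(|&p| n % p == 0).unwrap();
    let q = n / p;
    // Step 2: compute the private exponent d = e^-1 mod phi(n).
    let phi = (p - 1) * (q - 1);
    let d = (1..phi).find(|&d| e * d % phi == 1).unwrap();
    // Step 3: decrypt. (m^e)^d == m (mod n), so knowing d undoes RSA.
    let m: u64 = 42;
    let c = modpow(m, e, n);
    assert_eq!(modpow(c, d, n), m);
}
```

The only thing protecting d is the difficulty of step 1, and at 320 bits there is no difficulty left.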

When talking about ndata and udata, things look even more dire. The only protection layer here is encryption. No, not public-key cryptography but symmetric encryption via AES-256. And of course the encryption key is hardcoded in the application, there is no other way.

To add insult to injury, the application produces identical ciphertext on each run. At first I thought this to be the result of the deprecated ECB block chaining mode being used. But: no, the application uses CBC block chaining mode. But it fails to pass in an initialization vector, so the cryptography library in question always fills the initialization vector with zeroes.

Which is a long-winded way of saying: the encryption would be broken regardless of whether one can retrieve the encryption key from the application.
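The zero-IV effect is easy to reproduce with a toy CBC mode. This sketch substitutes a one-byte XOR for the real block cipher, purely to show the chaining behavior; AES-256-CBC degrades the same way when the IV is fixed:

```rust
// Toy CBC over a one-byte "block": each ciphertext block depends on the
// previous one, but the very first block depends only on the IV.
fn cbc_encrypt(key: u8, iv: u8, plaintext: &[u8]) -> Vec<u8> {
    let mut prev = iv;
    plaintext
        .iter()
        .map(|&b| {
            let c = (b ^ prev) ^ key; // XOR stands in for the block cipher
            prev = c;
            c
        })
        .collect()
}

fn main() {
    let (key, msg) = (0x42, b"same message");
    // With a fixed all-zero IV the ciphertext is fully deterministic,
    // so identical requests are trivially recognizable and replayable.
    assert_eq!(cbc_encrypt(key, 0, msg), cbc_encrypt(key, 0, msg));
    // A fresh IV per message would make identical plaintexts encrypt
    // differently, which is the whole point of CBC.
    assert_ne!(cbc_encrypt(key, 0, msg), cbc_encrypt(key, 7, msg));
}
```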

To sum up: no, this data isn’t really protected. If the user has the IPinside LWS Agent installed, any website can access the data it collects. The encryption applied is worthless.

And the overall security of the application?

That web server the application runs on port 21300, what is it? Turns out, it’s their own custom code doing it, built on low-level network sockets functionality. That’s perfectly fine of course, who hasn’t built their own rudimentary web server using substring matches to parse requests and deployed it to millions of users?

Their web server still needs SSL support, so it relies on the OpenSSL library for that. Which library version? Why, OpenSSL 1.0.1j of course. Yes, it was released more than eight years ago. Yes, end of support for OpenSSL 1.0.1 was six years ago. Yes, there were 11 more releases on the 1.0.1 branch after 1.0.1j, with numerous vulnerabilities fixed, and not even these fixes made it into IPinside LWS Agent.

Sure, that web server is also single-threaded, why wouldn’t it be? It’s not like people will open two banking websites in parallel. Yes, this makes it trivial for a malicious website to lock up that server with long-running requests (denial-of-service attack). But that merely prevents people from logging into online banking and government websites, not a big deal.

Looking at how this server is implemented, there is code that essentially looks like this:

BYTE inputBuffer[8192];
char request[8192];
char debugString[8192];

memset(inputBuffer, 0, sizeof(inputBuffer));
memset(request, 0, sizeof(request));

int count = ssl_read(ssl, inputBuffer, sizeof(inputBuffer));
if (count <= 0)
  return;

memcpy(request, inputBuffer, count);

memset(debugString, 0, sizeof(debugString));
sprintf(debugString, "Received data from SSL socket: %s", request);

handle_request(request);


Can you spot the issues with this code?

Come on, I’m waiting.

Yes, I’m cheating. Unlike you I actually debugged that code and saw live just how badly things went here.

First of all, it can happen that ssl_read will produce exactly 8192 bytes and fill the entire buffer. In that case, inputBuffer won’t be null-terminated. And its copy in request won’t be null-terminated either. So attempting to use request as a null-terminated string in sprintf() or handle_request() will read beyond the end of the buffer. In fact, with the memory layout here it will continue into the identical inputBuffer memory area and then into whatever comes after it.

So the sprintf() call actually receives more than 16384 bytes of data, and its target buffer won’t be nearly large enough for that. But even if this data weren’t missing the terminating zero: taking an 8192-byte string, adding a bunch more text to it and trying to squeeze the result into an 8192-byte buffer isn’t going to work.

This isn’t an isolated piece of bad code. While researching the functionality of this application, I couldn’t help noticing several more stack buffer overflows and another buffer over-read. To my (very limited) knowledge of binary exploitation, these vulnerabilities cannot be turned into Remote Code Execution thanks to StackGuard and SafeSEH protection mechanisms being active and effective. If somebody more experienced finds a way around that however, things will get very ugly. The application has neither ASLR nor DEP protection enabled.

Some of these vulnerabilities can definitely crash the application however. I created two proof of concept pages which did so repeatedly. And that’s another denial-of-service attack, also effectively preventing people from using online banking in South Korea.

When will it be fixed?

I submitted three vulnerability reports to KrCERT on October 21st, 2022. By November 14th KrCERT confirmed forwarding all these reports to Interezen. I did not receive any communication after that.

Prior to this disclosure, a Korean reporter asked Interezen to comment. They confirmed receiving my reports but claimed that they only received one of them on January 6th, 2023. Supposedly because of that they plan to release their fix in February, at which point it would be up to their customers (meaning: banks and such) to distribute the new version to the users.

Like other similar applications, this software won’t autoupdate. So users will need to either download and install an update manually or perform an update via a management application like Wizvera Veraport. Neither is particularly likely unless banks start rejecting old IPinside versions and requiring users to update.

Does IPinside actually make banking safer?

Interezen isn’t merely providing the IPinside agent application. According to their self-description, they are a company specializing in BigData. They provide the service of collecting and analyzing data to numerous banks, insurance companies and government agencies.

Screenshot of a website section titled: “Client Companies. With the number one products in this industry, INTEREZEN is providing the best services for more than 200 client companies.” Below it the logos of Woori Bank, Industrial Bank of Korea, KEB Hana Card, National Tax Service, MG Non-Life Insurance, Hyundai Card as well as a “View more” button.

Online I could find a manual from 2009 showing screenshots from Interezen’s backend solution. One can see all website visitors being tracked along with their data. Back in 2009 the application collected barely more than the IP addresses, but it can be assumed that the current version of this backend makes all the data provided by the agent application accessible.

Screenshot of a web interface listing requests for a specific date range. Some of the table columns are: date, webip, proxyip, natip, attackip. (Screenshot from IPinside 3.0 product manual)

In addition to showing detailed information on each user, in 2009 this application was already capable of producing statistical overviews based e.g. on IP address, location, browser or operating system.

Screenshot of a web interface displaying user shares for Windows 98, Windows 2000, Windows 2003 and Windows XP. (Screenshot from IPinside 3.0 product manual)

The goal here isn’t protecting users, it’s protecting banks and other Interezen customers. The idea is that a bank will have it easier to detect and block fraud or attacks if it has more information available to it. Fraudsters won’t simply be able to obfuscate their identities by using proxies or VPNs, banks will be able to block them regardless.

In fact, Interezen filed several patents in Korea for their ideas. The first one, patent 10-1005093 is called “Method and Device for Client Identification.” In the patent filing, the reason for the “invention” is the following (automatic translation):

The importance and value of a method for identifying a client in an Internet environment targeting an unspecified majority is increasing. However, due to the development of various camouflage and concealment methods and the limitations of existing identification technologies, proper identification and analysis are very difficult in reality.

It goes on to explain how cookies are insufficient and the user’s real IP address needs to be retrieved.

The patent 10-1088084 titled “Method and system for monitoring and cutting off illegal electronic-commerce transaction” expands further on the reasoning (automatic translation):

The present invention is a technology that enables real-time processing, which was impossible with existing security systems, in the detection/blocking of illegal transactions related to all e-commerce services through the Internet, and e-commerce illegal transactions that cannot but be judged as normal transactions with existing security technologies.

This patent also introduces the idea of forcing the users to install the agent in order to use the website.

But does the approach even work? Is there anything to stop fraudsters from setting up their own web server on localhost:21300 and feeding banking websites bogus data?

Ok, someone would have to reverse engineer the functionality of the IPinside LWS Agent application and reproduce it. I mean, it’s not that simple. It took me … checks notes … one work week, proof of concept creation included. Fraudsters certainly don’t have that kind of time to invest into deciphering all the various obfuscation levels here.

But wait, why even go there? A replay attack is far simpler: giving websites pre-recorded legitimate responses will do just fine. There is no challenge-handshake scheme here, no timestamp, nothing to prevent this attack. If anything, websites could recognize responses they’ve previously seen. But even that doesn’t really work: ndata and udata obfuscation has no randomness in it, the data is expected to be always identical. And wdata has only one random byte in its obfuscation scheme, which isn’t sufficient to reliably distinguish legitimately identical responses from replayed ones.

So it would appear that IPinside is massively invading people’s privacy, exposing way too much of their data to anybody asking, yet falling short of really stopping illegal transactions as they claim. Prove me wrong.

This Week In Rust: This Week in Rust 479

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on Mastodon, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is Darkbird, a Mnesia-inspired, high-concurrency, real-time, in-memory storage library.

Thanks to DanyalMh for the self-suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

378 pull requests were merged in the last week

Rust Compiler Performance Triage

Largely a win for compiler performance with 100 test cases in real-world crates showing some sort of change in performance with an average 1% improvement. These wins were a combination of many different changes including how doc(hidden) gets more efficiently encoded in metadata, some optimizations in the borrow checker, and simplification of the output from derive(Debug) for fieldless enums.

Triage done by @rylev. Revision range: 1f72129f..c8e6a9e8


(instructions:u)            mean    range             count
Regressions ❌ (primary)     0.4%    [0.2%, 0.7%]      19
Regressions ❌ (secondary)   0.9%    [0.2%, 1.5%]      34
Improvements ✅ (primary)    -1.3%   [-17.2%, -0.2%]   81
Improvements ✅ (secondary)  -2.1%   [-7.1%, -0.2%]    64
All ❌✅ (primary)            -1.0%   [-17.2%, 0.7%]    100

2 Regressions, 5 Improvements, 3 Mixed; 1 of them in rollups. 34 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-01-25 - 2023-02-22 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Rust has demonstrated that using a type system as a vehicle for separation logic works, even in imperative languages, and that it's nothing as arcane as those immutable functional predecessors would suggest. It did this by making sure the language defines a type system that helps you, by making sure core properties of soundness can be expressed in it.

  • soundness requirement for memory access: lifetimes
  • soundness requirements for references with value semantics: &/&mut _
  • soundness requirements for resources: Copy and Drop
  • making sure your logic is monotonic: traits instead of inheritance, lack of specialization (yes, that's a feature).
  • (notably missing: no dependent types; apparently not 'necessary' but I'm sure it could be useful; however, research is heavily ongoing; caution is good)

This allows the standard library to encode all of its relevant requirements as types. And doing this everywhere is its soundness property: safe functions have no requirements beyond the sum of its parameter type, unsafe functions can. Nothing new or special there, nothing that makes Rust's notion of soundness special.

Basing your mathematical reasoning on separation logic makes soundness reviews local instead of requiring whole program analysis. This is what makes it practical. It did this pretty successfully and principled, but did no single truly revolutionary thing. It's a sum of good bits from the last decade of type system research. That's probably why people refer to it as 'the soundness definition', it's just a very poignant way to say: "we learned that a practical type systems works as a proof checker".

HeroicKatora on /r/cpp

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Nightly: New year, new updates to Firefox – These Weeks in Firefox: Issue 130


  • Thanks to Alex Poirot’s work on Bug 1410932, starting from Firefox 110, errors raised from WebExtensions content scripts should be visible in the related tab’s DevTools webconsole
  • Migrators for Opera, Opera GX and Vivaldi have been enabled by default and should hit release for Firefox 110 in February! Special thanks to Nolan Ishii and Evan Liang from CalState LA for their work there.
  • Various improvements to the Picture-in-Picture player window have landed – see the Picture-in-Picture section below for details.
    • Many of these improvements are currently gated behind a pref. Set `media.videocontrols.picture-in-picture.improved-video-controls.enabled` to true to check them out! You can file bugs here if you find any.
  • Firefox Profiler updates
    • Implement resizing columns in the TreeView (Merge PR #4204). This works in the Call Tree and the Marker Table that both use this component. Thanks Johannes Bechberger!
    • Add carbon metrics information to Firefox profiler (Merge PR #4372). Thanks Chris Adams!
  • Mark Banner fixed an issue with the default search engine being reset when the user upgrades to 108 if the profile was previously copied from somewhere else.

Friends of the Firefox team


  • [mconley] Welcome back mtigley!
  • [kpatenio] Welcome bnasar!

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtension APIs
  • Thanks to Gregory Pappas’ contributions starting from Firefox 110:
    • tabs.getZoomSettings will properly support the “defaultZoomFactor” property (instead of always returning “1” as before) – Bug 1772166
    • a “close” action icon is now being shown next to the omnibox API’s deletable suggestions – Bug 1478095 (deletable suggestions were also introduced recently, in Firefox 109, by Bug 1478095)
  • As part of the ongoing work on the declarativeNetRequest API: initial support for the Dynamic Ruleset has been introduced in Nightly 110 – Bug 1745764

Developer Tools

  • :jacksonwhale (new contributor) fixed a small CSS issue in RDM’s device dialog (bug)
  • :Oriol improved the way we display quotes to require less “escaping” (bug)
  • :Gijs fixed all the imports of sys.mjs modules in DevTools to use the proper names and APIs (bug)
  • :barret cleaned up a remaining usage of osfile.jsm in DevTools (bug)
  • Mark (:standard8) replaced all Cu.reportError calls with console.error (bug)
  • :arai fixed eager evaluation for expressions which can safely be considered as non-effectful (JSOp::InitAliasedLexical with hops == 0) (bug)
  • :ochameau removed the preference to switch back to the legacy Browser Toolbox (bug) and also removed the Browser Content Toolbox (bug).
    • The regular Browser Toolbox (and Browser Console) should now cover all your needs to debug the parent process and content processes (ask us if you have any trouble migrating from your Browser Content Toolbox workflows!).
  • :ochameau updated the version of the source-map library we use in-tree, which came with some performance improvements (bug)
WebDriver BiDi
  • :jdescottes implemented two events of the WebDriver BiDi network module: network.beforeRequestSent and network.responseStarted (bug and bug)
  • :whimboo added general support for serialization of platform objects (bug)
  • :whimboo migrated marionette’s element cache from the parent process to the content process which is the first step to be able to share element references between WebDriver BiDi and Classic (bug)
  • :sasha fixed the event subscription logic to allow consumers to subscribe for events on any context (bug)

ESMification status

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

PDFs & Printing


Performance Tools (aka Firefox Profiler)

  • Various small UI changes
    • The initial selection and tree expansion in call trees is now better:
      • Procure a selection also when the components update (for example when changing threads) (PR #4382). Previously no selection was ever provided after the first load.
      • Skip idle nodes when procuring an initial selection in the call tree (PR #4383). Previously we would very often select an idle node, because that’s where the most samples were captured. Indeed threads are usually very idle, but we’re interested in the moments when they’re not.
    • Do not automatically hide tracks when comparing profiles (PR #4384). Previously it was common that the computed diffing track was hidden by the auto-hide algorithm.
    • Handle copy gesture for flame graph and stack chart (PR #4392). Thanks Krishna Ravishankar!
  • Improved Chrome and Linux perf importers
    • Chrome importer: Add 1 to line and column numbers of cpu profile (PR #4403). Thanks Khairul Azhar Kasmiran!
    • linux perf: fix parsing frames with whitespaces in the path (PR #4410). Thanks Joel Höner!
  • Text only
    • Add some more content to the home page, about Android profiling as well as opening files from 3rd party tools (PR #4360)
    • Prevent ctrl+wheel events in timeline (PR #4350)
    • Make more explicit the fact that MarkerPayload is nullable (PR #4368)
    • Sanitize URL and file-path properties in markers (PR #4369). We didn’t use these properties before so this wasn’t a problem for current payloads, but future patches in Firefox want to use them, so it’s important to remove this possibly private data.
    • Unselect and scroll to top when clicking outside of the activity graph (PR #4375)
    • Do not show a tooltip when the stack index of the hovered sample is null, instead of crashing (PR #4376)
    • Do not trigger transforms when searching in the stack chart (PR #4387)
    • Add development note on Flow (PR #4391). Thanks Khairul Azhar Kasmiran!
    • Scroll the network chart at mount time if there’s a selected item (PR #4385)
    • Add VSCode settings.json to bypass flow-bin `SHASUM256.txt.sign` check (PR #4393). Thanks Khairul Azhar Kasmiran!
    • Do not scroll the various views as a result of a pointer click (PR #4386)
    • Do not throw an error when browsertime provides null timestamps incorrectly (PR #4399)
    • Make cause.time optional (PR #4408)
    • Use mouseTimePosition in Selection.js and add tests for it (PR #3000). This is the second step of an effort to show a vertical line indicating the mouse cursor’s time position in all chronological panels at the same time. Thanks Hasna Hena Mow!

Search and Navigation

Storybook / Reusable components

  • Our Storybook has been updated
    • mconley fixed the styling for the (in-progress) Migration Wizard component Bug 1806128
    • tgiles added the MozSupportLink, for easier SUMO page linking Bug 1770447
      • <a is="moz-support-link" support-page="my-feature"></a>
    • tgiles added an Accessibility panel in Storybook which runs some accessibility tests against components Bug 1804927
  • mstriemer extracted the panel-list element (menu) from about:addons
    • This isn’t a fully-fledged “Reusable Component” but it would be better than writing yet another menu 🙂 Bug 1765635
  • hjones updated the moz-toggle element to now be backed by a button, rather than a checkbox. Toggles/switches should not be “form-associated” and should instead perform an immediate action, similar to a button Bug 1804771

The Mozilla BlogLatest Pocket Android app makes it easier to discover your saved and new stories

Google recently named Pocket as one of the best apps of 2022, and it’s only getting better. We spent a lot of time with our users last year to see how we can improve the experience on the Pocket Android app. This month, we’re rolling out updates based on user feedback so you can easily find the stories and topics you care about. Read on to learn more about what’s new in the Pocket Android app. 

Home is where you can find the joy of high-quality recommendations

We can all agree that a home is where we can take a deep breath, relax and unwind. So, we’ve created a new tab called Home where you can take the time to sit back, discover, and enjoy fascinating stories you want to read. There, you’ll see your recent saves at the top of the screen and discover new content from editorial recommendations or specific categories like technology, travel and entertainment. In the coming year, we’ll be adding new features to Home so that you can continue to discover and save high quality content. 

<figcaption class="wp-element-caption">Home has your recent saves and new content from editorial recommendations</figcaption>

Finding your recent saves just got easier 

As requested by Pocket users, we’ve renamed My List to Saves, where you can see the content you’ve saved to read at a more convenient time. We’ve also redesigned the section to let you filter by tags, favorites and highlights, as well as allow you to easily bulk edit. We moved filters into a carousel at the top of the page. Plus, we added a toggle for users to archive content. Lastly, we added the text “Listen” to the Listen icon. Together, these small changes can make a big impact in helping users get to their saved content quickly. 

<figcaption class="wp-element-caption">Redesigned Saves section lets you easily see the content you’ve saved</figcaption>

What’s ahead for 2023

We’ve done a lot of work on the Pocket Android app to evolve and improve your experience. This work lays the foundation on which we will continue to build more features. Many of today’s features will be available in the upcoming iOS app refresh launching this year. 

Discover the best of the web by downloading the latest Pocket Android app on Google Play.

The post Latest Pocket Android app makes it easier to discover your saved and new stories appeared first on The Mozilla Blog.

Will Kahn-GreeneSocorro Engineering: 2022 retrospective


2022 took forever. At the same time, it kind of flew by. 2023 is already moving along, so this post is a month late. Here's the retrospective of Socorro engineering in 2022.

Read more… (18 min remaining to read)

The Mozilla BlogStart this year fresh with Mozilla’s tech challenge

If you’ve already ditched your new year’s goals, we’re here to help. How about refreshing your online life with new habits and routines?

Are there newsletters you don’t read anymore? Mobile apps you no longer use? Or social media platforms you’ve left (ahem, Twitter)? We want to help.

We’ve put together a month-long challenge to refresh your online life. Each week, we’ll update this blog post with three easy tasks, all of which will take less than 10 minutes to complete. We want to help you build healthy online habits, so you can spend 2023 with fewer worries and more time to enjoy the best of what the internet has to offer.

Clean up your devices and your digital footprint

Declutter your digital workspace by deleting unnecessary files on your desktop. To help keep your devices secure, turn on automatic software updates. Got social media accounts that you’ve sworn off for the new year? Here’s a quick guide to deleting online accounts.  

Manage your passwords like a pro

We get it: As more of our lives move online, the longer the list of passwords we have to remember. But you don’t have to resort to using your pet’s name, birthday or the word “password.” A password manager like the built-in Firefox password manager, for example, can create strong passwords for you and keep them safe and accessible for when you need to log into your accounts. 

Keep your inbox under control

Just like a clean desktop, a manageable inbox can help you stay focused. Hit the unsubscribe button on newsletters you no longer wish to receive. To curate your own reading list of online articles, use Pocket, which lets you save and organize stories so that you can savor them later. 

Want to sign up for a shopping discount code? Use Firefox Relay and enter an email mask instead of your true email address to prevent more marketing emails from clogging up your inbox.

The post Start this year fresh with Mozilla’s tech challenge  appeared first on The Mozilla Blog.

Will Kahn-GreeneBleach 6.0.0 release and deprecation

What is it?

Bleach is a Python library for sanitizing and linkifying text from untrusted sources for safe usage in HTML.

Bleach v6.0.0 released!

Bleach 6.0.0 cleans up some issues in linkify and with the way it uses html5lib so it's easier to reason about. It also adds support for Python 3.11 and cleans up the project infrastructure.

There are several backwards-incompatible changes, hence the 6.0.0 version.

I did some rough testing with a corpus of Standup messages data and it looks like bleach.clean is slightly faster with 6.0.0 than 5.0.0.

Using Python 3.10.9:

  • 5.0.0: bleach.clean on 58,630 items 10x: minimum 2.793s

  • 6.0.0: bleach.clean on 58,630 items 10x: minimum 2.304s
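The methodology behind these numbers (run the whole pass repeatedly, keep the minimum to filter out scheduler noise) can be sketched with the standard library's `timeit`. The sanitizer below is a stand-in; the real benchmark ran `bleach.clean` over 58,630 Standup messages:

```python
import timeit

# Stand-in workload; the actual benchmark ran bleach.clean over the corpus.
def sanitize_corpus():
    corpus = ["<b>hello</b> & goodbye" for _ in range(1000)]
    for text in corpus:
        text.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;")

# 10 repetitions, keep the minimum: the least-disturbed measurement.
timings = timeit.repeat(sanitize_corpus, number=1, repeat=10)
print(f"minimum {min(timings):.3f}s")
```

Taking the minimum rather than the mean is the usual choice for micro-benchmarks, since outside interference only ever makes a run slower.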

The other big change 6.0.0 brings with it is that it's now deprecated.

Bleach is deprecated

Bleach sits on top of html5lib which is not actively maintained. It is increasingly difficult to maintain Bleach in that context and I think it's nuts to build a security library on top of a library that's not in active development.

Over the years, we've talked about other options:

  1. find another library to switch to

  2. take over html5lib development

  3. fork html5lib and vendor and maintain our fork

  4. write a new HTML parser

  5. etc

With the exception of option 1, they greatly increase the scope of the work for Bleach. They all feel exhausting to me.

Given that, I think Bleach has run its course and this journey is over.

What happens now?


  1. Pass it to someone else?

    No, I won't be passing Bleach to someone else to maintain. Bleach is a security-related library, so making a mistake when passing it to someone else would be a mess. I'm not going to do that.

  2. Switch to an alternative?

    I'm not aware of any alternatives to Bleach. I don't plan to work on coordinating the migration for everyone from Bleach to something else.

  3. Oh my goodness--you're leaving us with nothing?

    Sort of.

I'm going to continue doing minimal maintenance:

  1. security updates

  2. support for new Python versions

  3. fixes for egregious bugs (begrudgingly)

I'll do that for at least a year. At some point, I'll stop doing that, too.

I think that gives the world enough time for either something to take Bleach's place, or for the sanitizing web api to kick in, or for everyone to come to the consensus that they never really needed Bleach in the first place.

<figcaption>Bleach. Tired. At the end of its journey.</figcaption>



Many thanks to Greg who I worked with on Bleach for a long while and maintained Bleach for several years. Working with Greg was always easy and his reviews were thoughtful and spot-on.

Many thanks to Jonathan who, over the years, provided a lot of insight into how best to solve some of Bleach's more squirrely problems.

Many thanks to Sam who was an indispensable resource on HTML parsing and sanitizing text in the context of HTML.

Where to go for more

For more specifics on this release, see here:

Documentation and quickstart here:

Source code and issue tracker here:

Wladimir PalantBitwarden design flaw: Server side iterations

In the aftermath of the LastPass breach it became increasingly clear that LastPass didn’t protect their users as well as they should have. When people started looking for alternatives, two favorites emerged: 1Password and Bitwarden. But do these do a better job at protecting sensitive data?

For 1Password, this question could be answered fairly easily. The secret key functionality decreases usability, requiring the secret key to be moved to each new device used with the account. But the fact that this random value is required to decrypt the data means that the encrypted data on 1Password servers is almost useless to potential attackers. It cannot be decrypted even for weak master passwords.

As to Bitwarden, the media mostly repeated their claim that the data is protected with 200,001 PBKDF2 iterations: 100,001 iterations on the client side and another 100,000 on the server. This being twice the default protection offered by LastPass, it doesn’t sound too bad. Except: as it turns out, the server-side iterations are designed in such a way that they don’t offer any security benefit. What remains are 100,000 iterations performed on the client side, essentially the same protection level as for LastPass.

Mind you, LastPass isn’t only being criticized for using a default iterations count that is three times lower than the current OWASP recommendation. LastPass also failed to encrypt all data, a flaw that Bitwarden doesn’t seem to share. LastPass also kept the iterations count for older accounts dangerously low, something that Bitwarden hopefully didn’t do either (Edit: yes, they did; some accounts have a considerably lower iteration count). LastPass also chose to downplay the breach instead of suggesting meaningful mitigation steps, something that Bitwarden hopefully wouldn’t do in this situation. Still, the protection offered by Bitwarden isn’t exactly optimal either.

Edit (2023-01-23): Bitwarden increased the default client-side iterations to 350,000 a few days ago. So far this change only applies to new accounts, and it is unclear whether they plan to upgrade existing accounts automatically. And today OWASP changed their recommendation to 600,000 iterations, adjusting it to current hardware.

Edit (2023-01-24): I realized that some of my concerns were already voiced in Bitwarden’s 2018 Security Assessment. Linked to it in the respective sections.

How Bitwarden protects users’ data

Like most password managers, Bitwarden uses a single master password to protect users’ data. The Bitwarden server isn’t supposed to know this password. So two different values are being derived from it: a master password hash, used to verify that the user is allowed to log in, and a key used to encrypt/decrypt the data.

A schema showing the master password being hashed with PBKDF2-SHA256 and 100,000 iterations into a master key. The master key is further hashed on the server side before being stored in the database. The same master key is turned into a stretched master key used to encrypt the encryption key, here no additional PBKDF2 is applied on the server side.<figcaption> Bitwarden password hashing, key derivation, and encryption. Source: Bitwarden security whitepaper </figcaption>

If we look at how Bitwarden describes the process in their security whitepaper, there is an obvious flaw: the 100,000 PBKDF2 iterations on the server side are only applied to the master password hash, not to the encryption key. This is pretty much the same flaw that I discovered in LastPass in 2018.
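As a rough sketch of the split described above: the client derives a master key from the password with PBKDF2, then a further hash of that key is sent to the server, which applies its own 100,000 iterations before storing it. The encryption key path never receives that server-side pass. Salts and the exact hash construction here are simplified assumptions for illustration, not Bitwarden's precise implementation:

```python
import hashlib

def client_derive(password: bytes, email: bytes, iterations: int = 100_000):
    # Client side: master key derived from the master password.
    master_key = hashlib.pbkdf2_hmac("sha256", password, email, iterations)
    # Master password hash sent to the server for authentication.
    auth_hash = hashlib.pbkdf2_hmac("sha256", master_key, password, 1)
    return master_key, auth_hash

def server_store(auth_hash: bytes, salt: bytes) -> bytes:
    # Server side: 100,000 further iterations, applied ONLY to the auth hash.
    # The master key, which protects the encryption key, never gets this pass.
    return hashlib.pbkdf2_hmac("sha256", auth_hash, salt, 100_000)

master_key, auth_hash = client_derive(b"correct horse", b"user@example.com")
stored = server_store(auth_hash, b"per-user-salt")
```

Note how `server_store` only ever touches `auth_hash`; nothing on the encryption-key branch benefits from those extra iterations.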

What this means for decrypting the data

So what happens if some malicious actor happens to get a copy of the data, like it happened with LastPass? They will need to decrypt it. And for that, they will have to guess the master password. PBKDF2 is meant to slow down verifying whether a guess is correct.

Testing the guesses against the master password hash would be fairly slow: 200,001 PBKDF2 iterations here. But the attackers wouldn’t waste time doing that of course. Instead, for each guess they would derive an encryption key (100,000 PBKDF2 iterations) and check whether this one can decrypt the data.

This simple tweak removes all the protection granted by the server-side iterations and speeds up master password guessing considerably. Only the client-side iterations really matter as protection.
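The attacker's loop can be sketched as follows. For each candidate password, only the 100,000 client-side iterations are needed to derive a trial encryption key; the vault-decryption check below is a simplified stand-in (a real attacker would attempt to decrypt the stolen ciphertext with each trial key):

```python
import hashlib

def crack(guesses, email, key_check):
    for guess in guesses:
        # Only the 100,000 client-side iterations are paid per guess;
        # the server-side iterations never enter the picture.
        trial_key = hashlib.pbkdf2_hmac("sha256", guess, email, 100_000)
        if trial_key == key_check:
            return guess
    return None

email = b"victim@example.com"
vault_key = hashlib.pbkdf2_hmac("sha256", b"hunter2", email, 100_000)
found = crack([b"123456", b"password", b"hunter2"], email, vault_key)
```

The cost per guess is exactly the client-side work factor, which is why only those iterations count as protection for the stolen data.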

What this means for you

The default protection level of LastPass and Bitwarden is identical. This means that you need a strong master password. And the only real way to get there is generating your password randomly. For example, you could generate a random passphrase using the diceware approach.

Using a dictionary for 5 dice (7776 dictionary words) and picking out four random words, you get a password with slightly over 50 bits of entropy. I’ve done the calculations for guessing such passwords: approximately 200 years on a single graphics card or $1,500,000.
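The entropy figure can be checked directly: four words drawn independently from a 7776-word list give 4 × log2(7776) ≈ 51.7 bits:

```python
import math

words_in_list = 7776       # diceware list for 5 dice: 6**5
passphrase_length = 4      # number of randomly picked words

bits = passphrase_length * math.log2(words_in_list)
print(f"{bits:.1f} bits of entropy")  # 51.7
```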

This should be a security level sufficient for most regular users. If you are guarding valuable secrets or are someone of interest for state-level actors, you might want to consider a stronger password. Adding one more word to your passphrase increases the cost of guessing your password by a factor of 7776. So a passphrase with five words is already almost unrealistic to guess even for state-level actors.

All of this assumes that your KDF iterations setting is set to the default 100,000. Bitwarden will allow you to set this value as low as 5,000 without even warning you. This was mentioned as BWN-01-009 in Bitwarden’s 2018 Security Assessment, yet there we are five years later. Should your setting be too low, I recommend fixing it immediately. Reminder: current OWASP recommendation is 310,000.

Is Bitwarden as bad as LastPass?

So as it turns out, with the default settings Bitwarden provides exactly the same protection level as LastPass. This is only part of the story however.

One question is how many accounts have a protection level below the default configured. It seems that before 2018 Bitwarden’s default used to be 5,000 iterations. Then the developers increased it to 100,000 in multiple successive steps. When LastPass did that, they failed upgrading existing accounts. I wonder whether Bitwarden also has older accounts stuck on suboptimal security settings.

The other aspect here is that Dmitry Chestnykh wrote about Bitwarden’s server-side iterations being useless in 2020 already, and Bitwarden should have been aware of it even if they didn’t realize how my research applies to them as well. On the other hand, using PBKDF2 with only 100,000 iterations isn’t a great default today. Still, Bitwarden failed to increase it in the past years, apparently copying LastPass as “gold standard” – and they didn’t adjust their PR claims either:

Screenshot of text from the Bitwarden website: The default iteration count used with PBKDF2 is 100,001 iterations on the client (client-side iteration count is configurable from your account settings), and then an additional 100,000 iterations when stored on our servers (for a total of 200,001 iterations by default). The organization key is shared via RSA-2048. The utilized hash functions are one-way hashes, meaning they cannot be reverse engineered by anyone at Bitwarden to reveal your master password. Even if Bitwarden were to be hacked, there would be no method by which your master password could be obtained.

Users have been complaining and asking for better key derivation functions since at least 2018. It was even mentioned as BWN-01-007 in Bitwarden’s 2018 Security Assessment. This change wasn’t considered a priority however. Only after the LastPass breach did things start moving, and it wasn’t Bitwarden’s core developers driving the change. Someone contributed the changes required for scrypt support and Argon2 support. The former was rejected in favor of the latter, and Argon2 will hopefully become the default (only?) choice at some point in the future.

Adding a secret key like 1Password would have been another option to address this issue. This suggestion has also been around since at least 2018 and accumulated a considerable amount of votes, but so far it hasn’t been implemented either.

On the bright side, Bitwarden clearly states that they encrypt all your vault data, including website addresses. So unlike with LastPass, any data lifted from Bitwarden servers will in fact be useless until the attackers manage to decrypt it.

How server-side iterations could have been designed

In case you are wondering whether it is even possible to implement a server-side iterations mechanism correctly: yes, it is. One example is the onepw protocol Mozilla introduced for Firefox Sync in 2014. While the description is fairly complicated, the important part is: the password hash received by the server is not used for anything before it passes through additional scrypt hashing.

Firefox Sync has a different flaw: its client-side password hashing uses merely 1,000 PBKDF2 iterations, a ridiculously low setting. So if someone compromises the production servers rather than merely the stored data, they will be able to intercept password hashes that are barely protected. The corresponding bug report has been open for the past six years and is still unresolved.

The same attack scenario is an issue for Bitwarden as well. Even if you configure your account with 1,000,000 iterations, a compromised Bitwarden server can always tell the client to apply merely 5,000 PBKDF2 iterations to the master password before sending it to the server. The client has to rely on the server to tell it the correct value, and as long as low settings like 5,000 iterations are supported this issue will remain.
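A minimal sketch of the sound design described above: whatever hash the client submits is itself run through a slow server-side KDF (scrypt, as in the onepw protocol) before it is stored or compared, so the stored verifier cannot be used as a shortcut. The parameters here are illustrative assumptions, not the actual onepw values:

```python
import hashlib, hmac

def server_verifier(client_hash: bytes, salt: bytes) -> bytes:
    # The server never stores or compares the client-supplied hash directly;
    # it always passes it through slow scrypt hashing first.
    return hashlib.scrypt(client_hash, salt=salt, n=2**14, r=8, p=1)

def verify_login(submitted: bytes, salt: bytes, stored: bytes) -> bool:
    # Constant-time comparison of the freshly derived verifier.
    return hmac.compare_digest(server_verifier(submitted, salt), stored)

salt = b"per-user-salt"
stored = server_verifier(b"client-side-pbkdf2-output", salt)
ok = verify_login(b"client-side-pbkdf2-output", salt, stored)
```

With this arrangement a database leak only exposes the scrypt output, and an attacker cannot skip the server-side work the way they can with Bitwarden's scheme.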

Niko MatsakisRust in 2023: Growing up

When I started working on Rust in 2011, my daughter was about three months old. She’s now in sixth grade, and she’s started growing rapidly. Sometimes we wake up to find that her clothes don’t quite fit anymore: the sleeves might be a little too short, or the legs come up to her ankles. Rust is experiencing something similar. We’ve been growing tremendously fast over the last few years, and any time you experience growth like that, there are bound to be a few rough patches. Things that don’t work as well as they used to. This holds both in a technical sense — there are parts of the language that don’t seem to scale up to Rust’s current size — and in a social one — some aspects of how the project runs need to change if we’re going to keep growing the way I think we should. As we head into 2023, with two years to go until the Rust 2024 edition, this is the theme I see for Rust: maturation and scaling.


In summary, these are (some of) the things I think are most important for Rust in 2023:

  • Implementing “the year of everywhere” so that you can make any function async, write impl Trait just about anywhere, and fully utilize generic associated types; planning for the Rust 2024 edition.
  • Beginning work on a Rust specification and integrating it into our processes.
  • Defining rules for unsafe code and smooth tooling to check whether you’re following them.
  • Supporting efforts to teach Rust in universities and elsewhere.
  • Improving our product planning and user feedback processes.
  • Refining our governance structure with specialized teams for dedicated areas, a more scalable structure for broad oversight, and more intentional onboarding.

“The year of everywhere” and the 2024 edition

What do async-await, impl Trait, and generic parameters have in common? They’re all essential parts of modern Rust, that’s one thing. They’re also all, in my opinion, in a “minimum viable product” state. Each of them has some key limitations that make them less useful and more confusing than they have to be. As I wrote in “Rust 2024: The Year of Everywhere”, there are currently a lot of folks working hard to lift those limitations through a number of extensions:

None of these features are “new”. They just take something that exists in Rust and let you use it more broadly. Nonetheless, I think they’re going to have a big impact, on experienced and new users alike. Experienced users can express more patterns more easily and avoid awkward workarounds. New users never have to experience the confusion that comes from typing something that feels like it should work, but doesn’t.

One other important point: Rust 2024 is just around the corner! Our goal is to get any edition changes landed on master this year, so that we can spend the next year doing finishing touches. This means we need to put some effort into thinking ahead and planning what we can achieve.

Towards a Rust specification

As Rust grows, there is increasing need for a specification. Mara had a recent blog post outlining some of the considerations — and especially the distinction between a specification and standardization. I don’t see the need for Rust to get involved in any standards bodies — our existing RFC and open-source process works well. But I do think that for us to continue growing out the set of people working on Rust, we need a central definition of what Rust should do, and that we need to integrate that definition into our processes more thoroughly.

In addition to long-standing docs like the Rust Reference, the last year has seen a number of notable efforts towards a Rust specification. The Ferrocene language specification is the most comprehensive, covering the grammar, name resolution, and overall functioning of the compiler. Separately, I’ve been working on a project called a-mir-formality, which aims to be a “formal model” of Rust’s type system, including the borrow checker. And Ralf Jung has MiniRust, which is targeting the rules for unsafe code.

So what would an official Rust specification look like? Mara opened RFC 3355, which lays out some basic parameters. I think there are still a lot of questions to work out. Most obviously, how can we combine the existing efforts and documents? Each of them has a different focus and — as a result — a somewhat different structure. I’m hopeful that we can create a complementary whole.

Another important question is how to integrate the specification into our project processes. We’ve already got a rule that new language features can’t be stabilized until the reference is updated, but we’ve not always followed it, and the lang docs team is always in need of support. There are hopeful signs here: both the Foundation and Ferrocene are interested in supporting this effort.

Unsafe code

In my experience, most production users of Rust don’t touch unsafe code, which is as it should be. But almost every user of Rust relies on dependencies that do, and those dependencies are often the most critical systems.

At first, the idea of unsafe code seems simple. By writing unsafe, you gain access to new capabilities, but you take responsibility for using them correctly. But the more you look at unsafe code, the more questions come up. What does it mean to use those capabilities correctly? These questions are not just academic, they have a real impact on optimizations performed by the Rust compiler, LLVM, and even the hardware.

Eventually, we want to get to a place where those who author unsafe code have clear rules to follow, as well as simple tooling to test if their code violates those rules (think cargo test --unsafe). Authors who want more assurance than dynamic testing can provide should have access to static verifiers that can prove their crate is safe — and we should start by proving the standard library is safe.

We’ve been trying for some years to build that world but it’s been ridiculously hard. Lately, though, there have been some breakthroughs. Gankra’s experiments with strict_provenance APIs have given some hope that we can define a relatively simple provenance model that will support both arbitrary unsafe code trickery and aggressive optimization, and Ralf Jung’s aforementioned MiniRust shows how a Rust operational semantics could look. More and more crates test with miri to check their unsafe code, and for those who wish to go further, the kani verifier can check unsafe code for UB (more formal methods tooling here).

I think we need a renewed focus on unsafe code in 2023. The first step is already underway: we are creating the opsem team. Led by Ralf Jung and Jakob Degen, the opsem team has the job of defining “the rules governing unsafe code in Rust”. It’s been clear for some time that this area requires dedicated focus, and I am hopeful that the opsem team will help to provide that.

I would like to see progress on dynamic verification. In particular, I think we need a tool that can handle arbitrary binaries. miri is great, but it can’t be used to test programs that call into C code. I’d like to see something more like valgrind or ubsan, where you can test your Rust project for UB even if it’s calling into other languages through FFI.

Dynamic verification is great, but it is limited by the scope of your tests. To get true reliability, we need a way for unsafe code authors to do static verification. Building static verification tools today is possible but extremely painful. The compiler’s APIs are unstable and a moving target. The stable MIR project proposes to change that by providing a stable set of APIs that tool authors can build on.

Finally, the best unsafe code is the unsafe code you don’t have to write. Unsafe code provides infinite power, but people often have simpler needs that could be made safe with enough effort. Projects like cxx demonstrate the power of this approach. For Rust the language, safe transmute is the most promising such effort, and I’d like to see more of that.

Teaching Rust in universities

More and more universities are offering classes that make use of Rust, and recently many of these educators have come together in the Rust Edu initiative to form shared teaching materials. I think this is great, and a trend we should encourage. It’s helpful for the Rust community, of course, since it means more Rust programmers. I think it’s also helpful for the students: much like learning a functional programming language, learning Rust requires incorporating different patterns and structure than other languages. I find my programs tend to be broken into smaller pieces, and the borrow checker forces me to be more thoughtful about which bits of context each function will need. Even if you wind up building your code in other languages, those new patterns will influence the way you work.

Stronger connections to teachers can also be a great source of data for improving Rust. If we understand better how people learn Rust and what they find difficult, we can use that to guide our priorities and look for ways to make it better. This might mean changing the language, but it might also mean changing the tooling or error messages. I’d like to see us set up some mechanism to feed insights from Rust educators, both in universities and trainers at companies like Ferrous Systems or Integer32, into the Rust teams.

One particularly exciting effort here is the research being done at Brown University1 by Will Crichton and Shriram Krishnamurthi. Will and Shriram have published an interactive version of the Rust book that includes quizzes. As a reader, these quizzes help you check that you understood the section. But they also provide feedback to the book authors on which sections are effective. And they allow for “A/B testing”, where you change the content of the book and see whether the quiz scores improve. Will and Shriram are also looking at other ways to deepen our understanding of how people learn Rust.

More insight and data into the user experience

As Rust has grown, we no longer have the obvious gaps in our user experience that there used to be (e.g., “no IDE support”). At the same time, it’s clear that the experience of Rust developers could be a lot smoother. There are a lot of great ideas of changes to make, but it’s hard to know which ones would be most effective. I would like to see a more coordinated effort to gather data on the user experience and transform it into actionable insights. Currently, the largest source of data that we have is the annual Rust survey. This is a great resource, but it only gives a very broad picture of what’s going on.

A few years back, the async working group collected “status quo” stories as part of its vision doc effort. These stories were immensely helpful in understanding the “async Rust user experience”, and they are still helping to shape the priorities of the async working group today. At the same time, that was a one-time effort, and it was focused on async specifically. I think that kind of effort could be useful in a number of areas.

I’ve already mentioned that teachers can provide one source of data. Another is simply going out and having conversations with Rust users. But I think we also need fine-grained data about the user experience. In the compiler team’s mid-year report, they noted (emphasis mine):

One more thing I want to point out: five of the ambitions checked the box in the survey that said “some of our work has reached Rust programmers, but we do not know if it has improved Rust for them.”

Right now, it’s really hard to know even basic things, like how many users are encountering compiler bugs in the wild. We have to judge that by how many comments people leave on a GitHub issue. Meanwhile, Esteban personally scours Twitter to find out which error messages are confusing to people.[2] We should look into better ways to gather data here. I’m a fan of (opt-in, privacy-preserving) telemetry, but I think there’s a discussion to be had here about the best approach. All I know is that there has to be a better way.

Maturing our governance

In 2015, shortly after 1.0, RFC 1068 introduced the original Rust teams: libs, lang, compiler, infra, and moderation. Each team is an independent, decision-making entity, owning one particular aspect of Rust, and operating by consensus. The “Rust core team” was given the role of knitting them together and providing a unifying vision. This structure has been a great success, but as we’ve grown, it has started to hit some limits.

The first limiting point has been bringing the teams together. The original vision was that team leads—along with others—would be part of a core team that would provide a unifying technical vision and tend to the health of the project. It’s become clear over time, though, that these are really two different jobs. Over this year, the various Rust teams, project directors, and existing core team have come together to define a new model for project-wide governance. This effort is being driven by a dedicated working group and I am looking forward to seeing that effort come to fruition this year.

The second limiting point has been the need for more specialized teams. One example near and dear to my heart is the new types team, which is focused on the type and trait system. This team has the job of diving into the nitty gritty on proposals like Generic Associated Types or impl Trait, and then surfacing the key details for broader-based teams like lang or compiler where necessary. The aforementioned opsem team is another example of this sort of team. I suspect we’ll be seeing more teams like this.

There continues to be a need for us to grow teams that do more than coding. The compiler team prioritization effort, under the leadership of apiraino, is a great example of a vital role that allows Rust to function but doesn’t involve landing PRs. I think there are a number of other “multiplier”-type efforts that we could use. One example would be “reporters”, i.e., people to help publish blog posts about the many things going on and spread information around the project. I am hopeful that as we get a new structure for top-level governance we can see some renewed focus and experimentation here.


Seven years since Rust 1.0 and we are still going strong. As Rust usage spreads, our focus is changing. Where once we had gaping holes to close, it’s now more a question of iterating to build on our success. But the more things change, the more they stay the same. Rust is still working to empower people to build reliable, performant programs. We still believe that building a supportive, productive tool for systems programming — one that brings more people into the “systems programming” tent — is also the best way to help the existing C and C++ programmers “hack without fear” and build the kind of systems they always wanted to build. So, what are you waiting for? Let’s get building!

  [1] In disclosure, AWS is a sponsor of this work. 

  [2] To be honest, Esteban will probably always do that, whatever we do. 

The Rust Programming Language Blog
Officially announcing the types team

Oh hey, it's another new team announcement. But I will admit: if you follow the RFCs repository, the Rust zulip, or were particularly observant on the GATs stabilization announcement post, then this might not be a surprise for you. In fact, this "new" team was officially established at the end of May last year.

There are a few reasons why we're sharing this post now (as opposed to months before or...never). First, the team finished a three-day in-person/hybrid meetup at the beginning of December and we'd like to share the purpose and outcomes of that meeting. Second, this announcement comes just around 7 months after the team's formation, and we'd love to share what we've accomplished in that time. Lastly, as we enter the new year of 2023, it's a great time to share a bit of where we expect to head this year and beyond.

Background - How did we get here?

Rust has grown significantly in the last several years, in many metrics: users, contributors, features, tooling, documentation, and more. As it has grown, the list of things people want to do with it has grown just as quickly. On top of powerful and ergonomic features, the demand for powerful tools such as IDEs or learning tools for the language has become more and more apparent. New compilers (frontend and backend) are being written. And, to top it off, we want Rust to continue to maintain one of its core design principles: safety.

All of these points highlight some key needs: to know how the Rust language should work, to be able to extend the language and compiler with new features in a relatively painless way, to be able to hook into the compiler and query important information about programs, and finally to be able to maintain the language and compiler in an amenable and robust way. Over the years, considerable effort has been put into these needs, but we haven't quite achieved these key requirements.

To extend a little, and put some numbers to paper, there are currently around 220 open tracking issues for language, compiler, or types features that have been accepted but are not completely implemented, of which about half are at least 3 years old and many are several years older than that. Many of these tracking issues have been open for so long not solely because of bandwidth, but because working on these features is hard, in large part because putting the relevant semantics in context of the larger language properly is hard; it's not easy for anyone to take a look at them and know what needs to be done to finish them. It's clear that we still need better foundations for making changes to the language and compiler.

Another number that might shock you: there are currently 62 open unsoundness issues. This sounds much scarier than it really is: nearly all of these are edges of the compiler and language that have been found by people who specifically poke and prod to find them; in practice these will not pop up in the programs you write. Nevertheless, these are edges we want to iron out.

The Types Team

Moving forward, let's talk about a smaller subset of Rust rather than the entire language and compiler. Specifically, the parts relevant here include the type checker - loosely, defining the semantics and implementation of how variables are assigned their type, trait solving - deciding what traits are defined for which types, and borrow checking - proving that Rust's ownership model always holds. All of these can be thought of cohesively as the "type system".
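These three pieces can be seen in even a tiny program. The sketch below uses made-up function names for illustration; the comments are informal descriptions, not compiler output:

```rust
// Trait solving: calling this requires proving `T: Clone` for the
// concrete `T` at each call site.
fn duplicate<T: Clone>(x: &T) -> T {
    x.clone()
}

// Borrow checking: `s` is exclusively (mutably) borrowed for this call.
fn shout(s: &mut String) {
    s.push('!');
}

fn main() {
    // Type checking: `x` is inferred to have type i32.
    let x = 1;

    // Trait solving proves `Vec<i32>: Clone` because `i32: Clone`.
    let v = vec![x];
    let w = duplicate(&v);

    // Borrow checking: the mutable borrow taken by `shout` must end
    // before `s` can be read again below.
    let mut s = String::from("hi");
    shout(&mut s);

    assert_eq!(w, vec![1]);
    assert_eq!(s, "hi!");
}
```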

As of RFC 3254, the above subset of the Rust language and compiler are under the purview of the types team. So, what exactly does this entail?

First, since around 2018, there existed the "traits working group", which had the primary goal of creating a performant and extensible definition and implementation of Rust's trait system (including the Chalk trait-solving library). As time progressed, and particularly in the latter half of 2021 into 2022, the working group's influence and responsibility naturally expanded to the type checker and borrow checker too - they are actually strongly linked and it's often hard to disentangle the trait solver from the other two. So, in some ways, the types team essentially subsumes the former traits working group.

Another relevant working group is the polonius working group, which primarily works on the design and implementation of the Polonius borrow-checking library. While the working group itself will remain, it is now also under the purview of the types team.

Now, although the traits working group was essentially folded into the types team, the creation of a team has some benefits. First, like the style team (and many other teams), the types team is not a top level team. It actually, currently uniquely, has two parent teams: the lang and compiler teams. Both teams have decided to delegate decision-making authority covering the type system.

The language team has delegated part of the design of the type system. However, importantly, this delegation covers less of the "feel" of the type system's features and more of how it "works", with the expectation that the types team will advise and bring concerns about new language extensions where required. (This division is not strongly defined, but the expectation is generally to err on the side of more caution.) The compiler team, on the other hand, has delegated the responsibility of defining and maintaining the implementation of the trait system.

One particular responsibility that has traditionally been shared between the language and compiler teams is the assessment and fixing of soundness bugs in the language related to the type system. These often arise from implementation-defined language semantics and have in the past required synchronization and input from both lang and compiler teams. In the majority of cases, the types team now has the authority to assess and implement fixes without the direct input from either parent team. This applies, importantly, for fixes that are technically backwards-incompatible. While fixing safety holes is not covered under Rust's backwards compatibility guarantees, these decisions are not taken lightly and generally require team signoff and are assessed for potential ecosystem breakage with crater. However, this can now be done under one team rather than requiring the coordination of two separate teams, which makes closing these soundness holes easier (I will discuss this more later.)

Formalizing the Rust type system

As mentioned above, a nearly essential element of the growing Rust language is to know how it should work (and to have this well documented). There are relatively recent efforts pushing for a Rust specification (like Ferrocene or this open RFC), but it would be hugely beneficial to have a formalized definition of the type system, regardless of its potential integration into a more general specification. In fact the existence of a formalization would allow a better assessment of potential new features or soundness holes, without the subtle intricacies of the rest of the compiler.

As far back as 2015, not long after the release of Rust 1.0, an experimental Rust trait solver called Chalk began to be written. The core idea of Chalk is to translate the surface syntax and ideas of the Rust trait system (e.g. traits, impls, where clauses) into a set of logic rules that can be solved using a Prolog-like solver. Then, once this set of logic and solving reaches parity with the trait solver within the compiler itself, the plan was to simply replace the existing solver. In the meantime (and continuing forward), this new solver could be used by other tools, such as rust-analyzer, where it is used today.
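To make that translation concrete, here is a sketch with made-up trait and type names; the commented clauses only suggest the flavor of Chalk's logic rules, not its actual input format:

```rust
trait Duplicate {
    fn dup(&self) -> Self;
}

struct Wrapper<T>(T);

// A Chalk-style lowering of this impl would be roughly the clause:
//   Implemented(Wrapper<T>: Duplicate) :- Implemented(T: Duplicate).
impl<T: Duplicate> Duplicate for Wrapper<T> {
    fn dup(&self) -> Self {
        Wrapper(self.0.dup())
    }
}

// ...and this impl becomes the fact: Implemented(i32: Duplicate).
impl Duplicate for i32 {
    fn dup(&self) -> Self {
        *self
    }
}

fn main() {
    // Proving `Wrapper<i32>: Duplicate` chains the clause with the fact,
    // much like resolution on a Prolog query.
    let w = Wrapper(41i32).dup();
    assert_eq!(w.0, 41);
}
```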

Now, given Chalk's age and the promises it had been hoped to be able to deliver on, you might be tempted to ask the question "Chalk, when?" - and plenty have. However, we've learned over the years that Chalk is likely not the correct long-term solution for Rust, for a few reasons. First, as mentioned a few times in this post, the trait solver is only a part of a larger type system; modeling how the entire type system fits together gives a more complete picture of its details than trying to model the parts separately. Second, the needs of the compiler are quite different from the needs of a formalization: the compiler needs performant code with the ability to track information required for powerful diagnostics; a good formalization is one that is not only complete, but also easy to maintain, read, and understand. Over the years, Chalk has tried to have both and it has so far ended up with neither.

So, what are the plans going forward? Well, first the types team has begun working on a formalization of the Rust typesystem, currently coined a-mir-formality. An initial experimental phase was written using PLT redex, but a Rust port is in-progress. There's lots to do still (including modeling more of the trait system, writing an RFC, and moving it into the rust-lang org), but it's already showing great promise.

Second, we've begun an initiative for writing a new trait solver in-tree. This new trait solver is more limited in scope than a-mir-formality (i.e. not intending to encompass the entire type system). In many ways, it's expected to be quite similar to Chalk, but leverage bits and pieces of the existing compiler and trait solver in order to make the transition as painless as possible. We do expect it to be pulled out-of-tree at some point, so it's being written to be as modular as possible. During our types team meetup earlier this month, we were able to hash out what we expect the structure of the solver to look like, and we've already gotten that merged into the source tree.

Finally, Chalk is no longer going to be a focus of the team. In the short term, it still may remain a useful tool for experimentation. As said before, rust-analyzer uses Chalk as its trait solver. It can also be used in rustc under an unstable feature flag. Thus, new ideas currently could be implemented in Chalk and battle-tested in practice. However, this benefit will likely not last long as a-mir-formality and the new in-tree trait solver get more usable and their interfaces become more accessible. All this is not to say that Chalk has been a failure. In fact, Chalk has taught us a lot about how to think about the Rust trait solver in a logical way and the current Rust trait solver has evolved over time to more closely model Chalk, even if incompletely. We expect to still support Chalk in some capacity for the time being, for rust-analyzer and potentially for those interested in experimenting with it.

Closing soundness holes

As brought up previously, a big benefit of creating a new types team with delegated authority from both the lang and compiler teams is the authority to assess and fix unsoundness issues mostly independently. However, a secondary benefit has actually just been better procedures and knowledge-sharing that allows the members of the team to get on the same page for what soundness issues there are, why they exist, and what it takes to fix them. For example, during our meetup earlier this month, we were able to go through the full list of soundness issues (focusing on those relevant to the type system), identify their causes, and discuss expected fixes (though most require prerequisite work discussed in the previous section).

Additionally, the team has already made a number of soundness fixes and has a few more in-progress. I won't go into details, but instead am just opting to put them in list form:

As you can see, we're making progress on closing soundness holes. These sometimes break code, as assessed by crater. However, we do what we can to mitigate this, even when the code being broken is technically unsound.

New features

While it's not technically under the types team purview to propose and design new features (these fall more under lang team proper), there are a few instances where the team is heavily involved (if not driving) feature design.

These can be small additions, which are close to bug fixes. For example, this PR allows more permutations of lifetime outlives bounds than what compiled previously. Or, these PRs can be larger, more impactful changes, that don't fit under a "feature", but instead are tied heavily to the type system. For example, this PR makes the Sized trait coinductive, which effectively makes more cyclic bounds compile (see this test for an example).

There are also a few larger features and feature sets that have been driven by the types team, largely due to the heavy intersection with the type system. Here are a few examples:

  • Generic associated types (GATs) - The feature long predates the types team and is the only one in this list that has actually been stabilized so far. But due to heavy type system interaction, the team was able to navigate the issues that came on its final path to stabilization. See this blog post for many more details.
  • Type alias impl trait (TAITs) - Implementing this feature properly requires a thorough understanding of the type checker. This is close to stabilization. For more information, see the tracking issue.
  • Trait upcasting - This one is relatively small, but has some type system interaction. Again, see the tracking issue for an explanation of the feature.
  • Negative impls - This too predates the types team, but has recently been worked on by the team. There are still open bugs and soundness issues, so this is a bit away from stabilization, but you can follow here.
  • Return position impl traits in traits (RPITITs) and async functions in traits (AFITs) - These have only recently been possible with advances made with GATs and TAITs. They are currently tracked under a single tracking issue.
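As a taste of the first item on that list, GATs allow an associated type to take its own generic parameters. The classic motivating example is a "lending iterator"; the sketch below uses hypothetical names, not an API from any of the features above:

```rust
// A GAT: `Item` is generic over the lifetime of each borrow of `self`,
// so each yielded item can borrow from the iterator itself.
trait LendingIterator {
    type Item<'a>
    where
        Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Hands out mutable references into its own buffer, one call at a time.
struct Cursor {
    data: Vec<i32>,
    pos: usize,
}

impl LendingIterator for Cursor {
    type Item<'a> = &'a mut i32 where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let item = self.data.get_mut(self.pos)?;
        self.pos += 1;
        Some(item)
    }
}

fn main() {
    let mut c = Cursor { data: vec![1, 2], pos: 0 };
    if let Some(x) = c.next() {
        *x += 10;
    }
    assert_eq!(c.data, vec![11, 2]);
}
```

Without GATs, `Item` could not mention the lifetime of the `&mut self` borrow, which is why this trait could not be expressed before stabilization.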


To conclude, let's put all of this onto a roadmap. As always, goals are best when they are specific, measurable, and time-bound. For this, we've decided to split our goals into roughly 4 stages: summer of 2023, end-of-year 2023, end-of-year 2024, and end-of-year 2027 (6 months, 1 year, 2 years, and 5 years). Overall, our goals are to build a platform to maintain a sound, testable, and documented type system that can scale to new features needed by the Rust language. Furthermore, we want to cultivate a sustainable and open-source team (the types team) to maintain that platform and type system.

A quick note: some of the things here have not quite been explained in this post, but they've been included in the spirit of completeness. So, without further ado:

6 months

  • The work-in-progress new trait solver should be testable
  • a-mir-formality should be testable against the Rust test suite
  • Both TAITs and RPITITs/AFITs should be stabilized or on the path to stabilization.

EOY 2023

  • New trait solver replaces part of existing trait solver, but not used everywhere
  • We have an onboarding plan (for the team) and documentation for the new trait solver
  • a-mir-formality is integrated into the language design process

EOY 2024

EOY 2027

  • (Types) unsound issues resolved
  • Most language extensions are easy to do; large extensions are feasible
  • a-mir-formality passes 99.9% of the Rust test suite


It's an exciting time for Rust. As its userbase and popularity grows, the language does as well. And as the language grows, the need for a sustainable type system to support the language becomes ever more apparent. The project has formed this new types team to address this need and hopefully, in this post, you can see that the team has so far accomplished a lot. And we expect that trend to only continue over the next many years.

As always, if you'd like to get involved or have questions, please drop by the Rust zulip.

The Mozilla Blog
Real talk: Did your 5-year-old just tease you about having too many open tabs?

An illustration shows various internet icons surrounding an internet browser window that reads, "Firefox and YouGov parenting survey." Credit: Nick Velazquez / Mozilla

No one ever wanted to say “tech-savvy toddler” but here we are. It’s not like you just walked into the kitchen one morning and your kid was sucking on a binky and editing Wikipedia, right? Wait, really? It was pretty close to that? Well, for years there’s been an ongoing conversation on internet usage in families’ lives, and in 2020, the pandemic made us come face-to-face with that elephant in the room, the internet. There was no way around it. We went online for everything from virtual classrooms for kids, video games with friends, and video meetings with co-workers to, of course, streaming movies and TV shows. The internet’s role became a more permanent fixture in our families’ lives. It’s about time we gave it a rethink.

We conducted a survey with YouGov to get an understanding of how families use the internet in the United States, Canada, France, Germany and the United Kingdom. In November, we shared a preview with top insights from the report which included:

  • Many parents believe their kids have no idea how to protect themselves online. About one in three parents in France and Germany don’t think their child “has any idea on how to protect themselves or their information online.” In the U.S., Canada and the U.K., about a quarter of parents feel the same way.
  • U.S. parents spend the most time online compared to parents in other countries, and so do their children. Survey takers in the U.S. reported an average of seven hours of daily internet use via web browsers, mobile apps and other means. Asked how many hours their children spend online on a typical day, U.S. parents said an average of four hours. That’s compared to two hours of internet use among children in France, where parents reported spending about five hours online every day. No matter where a child grows up, they spend more time online per day as they get older. 
  • Yes, toddlers use the web. Parents in North America and Western Europe reported introducing their kids to the internet some time between two and eight years old. North America and the U.K. skew younger, with kids first going online between two and five years old in about a third of households. Kids in France and Germany are introduced to the internet when they are older, between eight and 14 years old.

Today, we’re sharing more of the report, as well as our insights of what the numbers are telling us. Below is a link to the report:

An illustration reads: The Tech Talk

Toddlers, tablets, and the ‘Tech Talk’

Download our report

The internet is a great place for families. It gives us new opportunities to discover the world, connect with others and just generally make our lives easier and more colorful. But it also comes with new challenges and complications for the people raising the next generations. Mozilla wants to help families make the best online decisions, whatever that looks like, with our latest series, The Tech Talk.

The post Real talk: Did your 5-year-old just tease you about having too many open tabs? appeared first on The Mozilla Blog.

Will Kahn-Greene
Socorro: Schema based overhaul of crash ingestion: retrospective (2022)



This project took 2+ years. Results:

  • radically reduced risk of data leaks due to misconfigured permissions

  • centralized and simplified configuration and management of fields

  • normalization and validation performed during processing

  • documentation of data reviews, data caveats, etc

  • reduced risk of bugs when adding new fields--testing is done in CI

  • new crash reporting data dictionary with Markdown-formatted descriptions, real examples, relevant links


I've been working on Socorro (crash ingestion pipeline at Mozilla) since the beginning of 2016. During that time, I've focused on streamlining maintenance of the project, paying down technical debt, reducing risk, and improving crash analysis tooling.

One of the things I identified early on is how the crash ingestion pipeline was chaotic, difficult to reason about, and difficult to document. What did the incoming data look like? What did the processed data look like? Was it valid? Which fields were protected? Which fields were public? How do we add support for a new crash annotation? This was problematic for our ops staff, engineering staff, and all the people who used Socorro. It was something in the back of my mind for a while, but I didn't have any good thoughts.

In 2020, Socorro moved into the Data Org which has multiple data pipelines. After spending some time looking at how their pipelines work, I wanted to rework crash ingestion.

The end result of this project is that:

  1. the project is easier to maintain:

    • adding support for new crash annotations is done in a couple of schema files and possibly a processor rule

  2. risk of security issues and data breaches is lower:

    • typos, bugs, and mistakes when adding support for a new crash annotation are caught in CI

    • permissions are specified in a central location, changing permission for fields is trivial and takes effect in the next deploy, setting permissions supports complex data structures in easy-to-reason-about ways, and mistakes are caught in CI

  3. the data is easier to use and reason about:

    • normalization and validation of crash annotation data happens during processing and downstream uses of the data can expect it to be valid; further we get a signal when the data isn't valid which can indicate product bugs

    • schemas describing incoming and processed data

    • crash reporting data dictionary documenting incoming data fields, processed data fields, descriptions, sources, data gotchas, examples, and permissions

What is Socorro?

Socorro is the crash ingestion pipeline for Mozilla products like Firefox, Fenix, Thunderbird, and MozillaVPN.

When Firefox crashes, the crash reporter asks the user if the user would like to send a crash report. If the user answers "yes!", then the crash reporter collects data related to the crash, generates a crash report, and submits that crash report as an HTTP POST to Socorro. Socorro saves the submitted crash report, processes it, and has tools for viewing and analyzing crash data.

State of crash ingestion at the beginning

The crash ingestion system was working and it was usable, but it was in a bad state.

  • Poor data management

    Normalization and validation of data was all over the codebase and not consistent:

    • processor rule code

    • AWS S3 crash storage code

    • Elasticsearch indexing code

    • Telemetry crash storage code

    • Super Search querying and result rendering code

    • report view and template code

    • signature report code and template code

    • crontabber job code

    • any scripts that used the data

    • tests -- many of which had bad test data so who knows what they were really testing

    Naive handling of minidump stackwalker output meant that any changes in the stackwalker output went predominantly unnoticed, and there was no indication as to whether changed output created issues in the system.

    Further, since it was all over the place, there were no guarantees for data validity when downloading it using the RawCrash, ProcessedCrash, and SuperSearch APIs. Anyone writing downstream systems would also have to normalize and validate the data.

  • Poor permissions management

    Permissions were defined in multiple places:

    • Elasticsearch json redactor

    • Super Search fields

    • RawCrash API allow list

    • ProcessedCrash API allow list

    • report view and template code

    • Telemetry crash storage code

    • and other places

    We couldn't effectively manage permissions of fields in the stackwalker output because we had no idea what was there.

  • Poor documentation

    No documentation of crash annotation fields other than CrashAnnotations.yaml which didn't enforce anything in crash ingestion (process, valid type, data correctness, etc) and was missing important information like data gotchas, data review urls, and examples.

    No documentation of processed crash fields at all.

  • Making changes was high risk

    Changing fields from public to protected was high risk because you had to find all the places it might show up which was intractable. Adding support for new fields often took multiple passes over several weeks because we'd miss things. Server errors happened with some regularity due to weirdness with crash annotation values affecting the Crash Stats site.

  • Tangled concerns across the codebase

    Lots of tangled concerns where things defined in one place affected other places that shouldn't be related. For example, the Super Search fields definition was acting as a "schema" for other parts of the system that had nothing to do with Elasticsearch or Super Search.

  • Difficult to maintain

    It was difficult to support new products.

    It was difficult to debug issues in crash ingestion and crash reporting.

    The Crash Stats webapp contained lots of if/then/else bits to handle weirdness in the crash annotation values. Nulls, incorrect types, different structures, etc.

    Socorro contained lots of vestigial code from half-done field removal, deprecated fields, fields that were removed from crash reports, etc. These vestigial bits were all over the code base. Discovering and removing these bits was time consuming and error prone.

    The code for exporting data to Telemetry built the export data using a list of fields to exclude rather than a list of fields to include. This is backwards and impossible to maintain--we never should have been doing this. Further, it pulled data from the raw crash, for which we had no validation guarantees, which would cause issues downstream in the Telemetry import code.

    There was no way to validate the data used in the unit tests, which meant that a lot of it was invalid. As a result, CI would pass, but we'd see errors in our stage and production environments.

  • Different from other similar systems

    In 2020, Socorro was moved to the Data Org in Mozilla which had a set of standards and conventions for collecting, storing, analyzing, and providing access to data. Socorro didn't follow any of it which made it difficult to work on, to connect with, and to staff. Things Data Org has that Socorro didn't:

    • a schema covering specifying fields, types, and documentation

    • data flow documentation

    • data review policy, process, and artifacts for data being collected and how to add new data

    • data dictionary for fields for users including documentation, data review urls, data gotchas

In summary, we had a system that took a lot of effort to maintain, wasn't serving our users' needs, and was high risk of security/data breach.

Project plan

Many of these issues can be alleviated and reduced by moving to a schema-driven system where we:

  1. define a schema for annotations and a schema for the processed crash

  2. change crash ingestion and the Crash Stats site to use those schemas

When designing this schema-driven system, we should be thinking about:

  1. how easy is it to maintain the system?

  2. how easy is it to explain?

  3. how flexible is it for solving other kinds of problems in the future?

  4. what kinds of errors will likely happen when maintaining the system and how can we avert them in CI?

  5. what kinds of errors can happen and how much risk do they pose for data leaks? what of those can we avert in CI?

  6. how flexible is the system which needs to support multiple products potentially with different needs?

I worked out a minimal version of that vision that we could migrate to and then work with going forward.

The crash annotations schema should define:

  1. what annotations are in the crash report?

  2. which permissions are required to view a field

  3. field documentation (provenance, description, data review, related bugs, gotchas, analysis tips, etc)

The processed crash schema should define:

  1. what's in the processed crash?

  2. which permissions are required to view a field

  3. field documentation (provenance, description, related bugs, gotchas, analysis tips, etc)
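As a sketch of what that could look like (hypothetical field names, permission strings, and documentation keys; this is not the actual Socorro schema), a processed crash schema along those lines might be:

```yaml
# Hypothetical processed crash schema fragment; all names and values
# here are illustrative, not the real Socorro schema
type: object
properties:
  crashing_thread:
    type: integer
    description: Index of the thread that crashed, set during processing.
    permissions:
      - public
  user_comments:
    type: string
    description: Free-form text entered by the user; may contain PII.
    permissions:
      - protected-data-access
```

Each field carries its structure, documentation, and permissions together, so there is exactly one place to look when adding or auditing a field.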

Then we make the following changes to the system:

  1. write a processor rule to copy, normalize, and validate data from the raw crash based on the processed crash schema

  2. switch the Telemetry export code to using the processed crash for data to export

  3. switch the Telemetry export code to using the processed crash schema for permissions

  4. switch Super Search to using the processed crash for data to index

  5. switch Super Search to using the processed crash schema for documentation and permissions

  6. switch Crash Stats site to using the processed crash for data to render

  7. switch Crash Stats site to using the processed crash schema for documentation and permissions

  8. switch the RawCrash, ProcessedCrash, and SuperSearch APIs to using the crash annotations and processed crash schemas for documentation and permissions

After doing that, we have:

  1. field documentation is managed in the schemas

  2. permissions are managed in the schemas

  3. data is normalized and validated once in the processor and everything uses the processed crash data for indexing, searching, and rendering

  4. adding support for new fields and changing existing fields is easier and problems are caught in CI

Implementation decisions

Use JSON Schema.

Data Org at Mozilla uses JSON Schema for schema specification. The schema is written using YAML.

The metrics schema is used to define metrics.yaml files which specify the metrics being emitted and collected.

For example:
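(The original example didn't survive the export; the following is a hypothetical entry in the general shape of a metrics.yaml file, with category, metric name, and values all illustrative:)

```yaml
# Hypothetical metrics.yaml entry; names and values are illustrative only
crash.ingestion:
  reports_processed:
    type: counter
    description: >
      Number of crash reports successfully processed.
    expires: never
```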

One long long long term goal for Socorro is to unify standards and practices with the Data Ingestion system. Toward that goal, it's prudent to build out the crash annotation and processed crash schemas using whatever we can take from the equivalent metrics schemas.

We'll additionally need to build out tooling for verifying, validating, and testing schema modifications to make ongoing maintenance easier.

Use schemas to define and drive everything.

We've got permissions, structures, normalization, validation, definition, documentation, and several other things related to the data and how it's used throughout crash ingestion spread out across the codebase.

Instead of that, let's pull it all together into a single schema and change the system to be driven from this schema.

The schema will include:

  1. structure specification

  2. documentation including data gotchas, examples, and implementation details

  3. permissions

  4. processing instructions

We'll have a schema for supported annotations and a schema for the processed crash.

We'll rewrite existing parts of crash ingestion to use the schema:

  1. processing

     1. use processing instructions to validate and normalize annotation data

  2. super search

     1. field documentation

     2. permissions

     3. remove all the normalization and validation code from indexing

  3. crash stats

     1. field documentation

     2. permissions

     3. remove all the normalization and validation code from page rendering

Only use processed crash data for indexing and analysis.

The indexing system has its own normalization and validation code since it pulls data to be indexed from the raw crash.

The crash stats code has its own normalization and validation code since it renders data from the raw crash in various parts of the site.

We're going to change this so that all normalization and validation happens during processing, the results are stored in the processed crash, and indexing, searching, and crash analysis only work on processed crash data.
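In that model, the processor is the single place where raw annotations get coerced into their schema-declared types. A hedged sketch of such a rule-like step (this is not the actual Socorro processor code; the schema shape and converter table are assumptions for illustration):

```python
# Hedged sketch, not the actual Socorro processor: copy annotations from
# the raw crash into the processed crash, coercing each value to the type
# the schema declares and collecting any normalization failures.

CONVERTERS = {"integer": int, "number": float, "string": str}

def copy_and_normalize(raw_crash, schema):
    """Return (processed, errors): only schema-listed fields, normalized."""
    processed = {}
    errors = []
    for name, spec in schema["properties"].items():
        if name not in raw_crash:
            continue  # annotation absent from this crash report
        convert = CONVERTERS[spec["type"]]
        try:
            processed[name] = convert(raw_crash[name])
        except (TypeError, ValueError):
            errors.append(f"{name}: could not normalize {raw_crash[name]!r}")
    return processed, errors
```

Because the loop iterates over the schema rather than the raw crash, anything not specified in the schema is simply never copied, which matches the allow-list direction the project moved to.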

By default, all data is protected.

By default, all data is protected unless it is explicitly marked as public. This has some consequences for the code:

  1. any data not specified in a schema is treated as protected

  2. all schema fields need to specify permissions for that field

  3. any data in a schema either is marked public or lists the permissions required to view that data

  4. for nested structures, any child field that is public has public ancestors

We can catch some of these issues in CI and need to write tests to verify them.

This is slightly awkward when maintaining the schema because it would be more reasonable to have "no permissions required" mean that the field is public. However, it's possible to accidentally not specify the permissions and we don't want to be in that situation. Thus, we decided to go with explicitly marking public fields as public.
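A CI check for those rules can be sketched like this (a hypothetical illustration, not the actual Socorro code; the schema shape and permission values are assumptions):

```python
# Hedged sketch of a CI-style check: every field in a nested schema must
# declare permissions, and a public field must never sit under a
# protected ancestor.

def check_permissions(fields, ancestors_public=True, path=""):
    """Walk a {name: {"permissions": [...], "fields": {...}}} tree and
    return a list of human-readable problems."""
    problems = []
    for name, field in fields.items():
        field_path = f"{path}.{name}" if path else name
        perms = field.get("permissions")
        if perms is None:
            problems.append(f"{field_path}: permissions not specified")
            perms = []
        is_public = perms == ["public"]
        if is_public and not ancestors_public:
            problems.append(f"{field_path}: public field under protected ancestor")
        problems.extend(
            check_permissions(
                field.get("fields", {}),
                ancestors_public and is_public,
                field_path,
            )
        )
    return problems
```

Running a check like this in CI is what turns "it's possible to accidentally not specify the permissions" from a production data-leak risk into a failed build.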

Work done

Phase 1: cleaning up

We had a lot of work to do before we could start defining schemas and changing the system to use those schemas.

  1. remove vestigial code (some of this work was done in other phases as it was discovered)

  2. fix signature generation

  3. fix Super Search

    • [bug 1624345]: stop saving random data to Elasticsearch crashstorage (2020-06)

    • [bug 1706076]: remove dead Super Search fields (2021-04)

    • [bug 1712055]: remove system_error from Super Search fields (2021-07)

    • [bug 1712085]: remove obsolete Super Search fields (2021-08)

    • [bug 1697051]: add crash_report_keys field (2021-11)

    • [bug 1736928]: remove largest_free_vm_block and tiny_block_size (2021-11)

    • [bug 1754874]: remove unused annotations from Super Search (2022-02)

    • [bug 1753521]: stop indexing items from raw crash (2022-02)

    • [bug 1762005]: migrate to lower-cased versions of Plugin* fields in processed crash (2022-03)

    • [bug 1755528]: fix flag/boolean handling (2022-03)

    • [bug 1762207]: remove hang_type (2022-04)

    • [bug 1763264]: clean up super search fields from migration (2022-07)

  4. fix data flow and usage

    • [bug 1740397]: rewrite CrashingThreadInfoRule to normalize crashing thread (2021-11)

    • [bug 1755095]: fix TelemetryBotoS3CrashStorage so it doesn't use Super Search fields (2022-03)

    • [bug 1740397]: change webapp to pull crashing_thread from processed crash (2022-07)

    • [bug 1710725]: stop using DotDict for raw and processed data (2022-09)

  5. clean up the raw crash structure

Phase 2: define schemas and all the tooling we needed to work with them

After cleaning up the code base, removing vestigial code, fixing Super Search, and fixing Telemetry export code, we could move on to defining schemas and writing all the code we needed to maintain the schemas and work with them.

  • [bug 1762271]: rewrite json schema reducer (2022-03)

  • [bug 1764395]: schema for processed crash, reducers, traversers (2022-08)

  • [bug 1788533]: fix validate_processed_crash to handle pattern_properties (2022-08)

  • [bug 1626698]: schema for crash annotations in crash reports (2022-11)

Phase 3: fix everything to use the schemas

That allowed us to fix a bunch of things:

  • [bug 1784927]: remove elasticsearch redactor code (2022-08)

  • [bug 1746630]: support new threads.N.frames.N.unloaded_modules minidump-stackwalk fields (2022-08)

  • [bug 1697001]: get rid of UnredactedCrash API and model (2022-08)

  • [bug 1100352]: remove hard-coded allow lists from RawCrash (2022-08)

  • [bug 1787929]: rewrite Breadcrumbs validation (2022-09)

  • [bug 1787931]: fix Super Search fields to pull permissions from processed crash schema (2022-09)

  • [bug 1787937]: fix Super Search fields to pull documentation from processed crash schema (2022-09)

  • [bug 1787931]: use processed crash schema permissions for super search (2022-09)

  • [bug 1100352]: remove hard-coded allow lists from ProcessedCrash models (2022-11)

  • [bug 1792255]: add telemetry_environment to processed crash (2022-11)

  • [bug 1784558]: add collector metadata to processed crash (2022-11)

  • [bug 1787932]: add data review urls for crash annotations that have data reviews (2022-11)

Phase 4: improve

With fields specified in schemas, we can write a crash reporting data dictionary:

  • [bug 1803558]: crash reporting data dictionary (2023-01)

  • [bug 1795700]: document raw and processed schemas and how to maintain them (2023-01)

Then we can finish:

Random thoughts

This was a very, very long-term project with many small steps and some really big ones. Trying to land a large project in one go is futile; the only way to do it successfully is to break it into a million small steps, each of which stands on its own and doesn't create urgency for getting the next step done.

Any time I changed field names or types, I'd have to do a data migration. Data migrations take 6 months to do because I have to wait for existing data to expire from storage. On the one hand, it's a blessing I could do migrations at all--you can't do this with larger data sets or with data sets where the data doesn't expire without each migration becoming a huge project. On the other hand, it's hard to juggle being in the middle of multiple migrations and sometimes the contortions one has to perform are grueling.

If you're working on a big project that's going to require changing data structures, figure out how to do migrations early with as little work as possible and use that process as often as you can.

Conclusion and where we could go from here

This was such a huge project that spanned years. It's so hard to finish projects like this because the landscape for the project is constantly changing. Meanwhile, being mid-project has its own set of complexities and hardships.

I'm glad I tackled it and I'm glad it's mostly done. There are some minor things to do, still, but this new schema-driven system has a lot going for it. Adding support for new crash annotations is much easier, less risky, and takes less time.

It took me about a month to pull this post together.

That's it!

That's the story of the schema-based overhaul of crash ingestion. There's probably some bits missing and/or wrong, but the gist of it is here.

If you have any questions or bump into bugs, I hang out in #crashreporting. You can also write up a bug for Socorro.

Hopefully this helps. If not, let us know!

This Week In RustThis Week in Rust 478

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is syntactic-for, a syntactic "for" loop Rust macro.

Thanks to Tor Hovland for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

458 pull requests were merged in the last week

Rust Compiler Performance Triage

Nearly all flagged regressions are likely noise, except one rollup with minor impact on diesel that we will follow up on. We had a broad (albeit small) win from #106294.

Triage done by @pnkfelix. Revision range: 0442fbab..1f72129f


(instructions:u)             mean     range            count
Regressions ❌ (primary)      0.4%     [0.2%, 1.7%]     39
Regressions ❌ (secondary)    0.5%     [0.2%, 1.8%]     23
Improvements ✅ (primary)    -0.4%     [-0.6%, -0.2%]   7
Improvements ✅ (secondary)  -0.4%     [-0.6%, -0.2%]   6
All ❌✅ (primary)            0.3%     [-0.6%, 1.7%]    46

4 Regressions, 3 Improvements, 3 Mixed; 4 of them in rollups. 50 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-01-18 - 2023-02-15 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Common arguments against Rust's safety guarantees:

  • The library you're binding to can have a segfault in it.
  • RAM can physically fail, causing dangling pointers.
  • The computer the Rust program is running on can be hit by a meteorite.
  • Alan Turing can come back from the dead and tell everyone that he actually made up computer science and none of it is real, thus invalidating every program ever made, including all Rust programs.

Ironmask on the phoronix forums

Thanks to Stephan Sokolow for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdImportant: Thunderbird 102.7.0 And Microsoft 365 Enterprise Users

Welcome to Thunderbird 102

Update on January 31st:

We’re preparing to ship a 2nd build of Thunderbird 102.7.1 with an improved patch for the Microsoft 365 oAuth issue reported here. Our anticipated release window is before midnight Pacific Time, January 31.

Update on January 28th:

Some users still experienced issues with the solution to the authentication issue that was included in Thunderbird 102.7.1. A revised solution has been proposed and is expected to ship soon. We apologize for the inconvenience this has caused, and the disruption to your workflow. You can track this issue via Bug #1810760.

Update on January 20th:

Thunderbird 102.7.0 was scheduled to be released on Wednesday, January 18, but we decided to hold the release because of an issue detected which affects authentication of Microsoft 365 Business accounts.

A solution to the authentication issue will ship with version 102.7.1, releasing during the week of January 23. Version 102.7.0 is now available for manual download only, to allow unaffected users to choose to update and benefit from the fixes it delivers.

Please note that automatic updates are currently disabled, and users of Microsoft 365 Business are cautioned to not update. 

*Users who update and encounter difficulty can simply reinstall 102.6.1. Thunderbird should automatically detect your existing profile. However, you can launch the Profile Manager if needed by following these instructions.

On Wednesday, January 18, Thunderbird 102.7.0 will be released with a crucial change to how we handle OAuth2 authorization with Microsoft accounts. This may involve some extra work for users currently using Microsoft-hosted accounts through their employer or educational institution.

In order to meet Microsoft’s requirements for publisher verification, it was necessary for us to switch to a new Azure application and application ID. However, some of these accounts are configured to require administrators to approve any applications accessing email.

If you encounter a screen saying “Need admin approval” during the login process, please contact your IT administrators to approve the client ID 9e5f94bc-e8a4-4e73-b8be-63364c29d753 for Mozilla Thunderbird (it previously appeared to non-admins as “Mzla Technologies Corporation”).

We request the following permissions:

  • IMAP.AccessAsUser.All (Read and write access to mailboxes via IMAP.)
  • POP.AccessAsUser.All (Read and write access to mailboxes via POP.)
  • SMTP.Send (Send emails from mailboxes using SMTP AUTH.)
  • offline_access

(Please note that this change was previously implemented in Thunderbird Beta, but the Thunderbird 102.7.0 release introduces this change to our stable ESR release.)

The post Important: Thunderbird 102.7.0 And Microsoft 365 Enterprise Users appeared first on The Thunderbird Blog.

The Mozilla BlogHere’s what’s going on in the world of extensions

<figcaption class="wp-element-caption">Credit: Nick Velazquez</figcaption>

About one-third of Firefox users have installed an add-on before – whether it’s an extension to add powerful and customizable features or a visual theme to personalize the web browsing experience. But if you’re unfamiliar, add-ons are sort of like apps for your browser. They can add all kinds of features to Firefox to make browsing faster, safer or just more fun.

The past year introduced some exciting new changes to the extensions world. The majority of these changes are foundational and take place in the deeply technical back-end of the system, typically out of sight of most Firefox users. However, if you pride yourself on hanging out in popular cybersecurity hubs, reading the latest tech news or developing your own extensions then you might have caught wind of some of these changes yourself.

If you’re not in the loop about the new changes in extensions, let us break it down for you!

Several years ago, Google proposed Manifest V3 (aka a number of intrinsic changes to the Chrome extension framework). Many of these changes would introduce incompatibilities between Firefox and Chromium-based browsers. This means developers would need to support two very different versions of their extensions if they wanted them available for both Firefox and Chromium-based browser users – a heavy burden for most developers that could result in some extensions only being available for one browser.

We believe that Firefox users benefit most when they have access to the broadest selection of useful extensions and features available, so we've always placed long-term bets on cross-browser compatibility and a standards-driven future for extensions.

With that, we agreed to introduce Manifest V3 support for add-ons, maintaining a high level of compatibility to support cross-browser development. However, there are some critical areas — like security and privacy — where our principles call for a different course of action. In a few targeted areas we decided to depart from Chrome’s implementation and incorporate our own distinctively Mozilla elements. Thus Firefox’s version of Manifest V3 will provide cross-browser extension interoperability, along with uniquely improved privacy and security safeguards, and enhanced compatibility for mobile extensions.

If ads give you the ick, then one distinction we’ve made around ad blockers has been especially crucial to privacy-lovers everywhere.

Content blockers are super important to privacy-minded Firefox users and tend to be the most popular type of browser extension. They not only prevent ick-inducing ads from following you around the internet, but they also make browsing faster and more seamless.

So we weren’t surprised to hear that Chrome users were concerned after learning that several of the internet’s most popular ad blockers, like uBlock Origin, would lose some of their privacy-preserving functionality on Google’s web browser. This results from the changes Manifest V3 brings to Chrome’s extensions platform: changes that strengthen other facets of security, while unfortunately limiting the capabilities of certain types of privacy extensions.

But rest assured that in spite of these changes to Chrome’s new extensions architecture, Firefox’s implementation of Manifest V3 ensures users can access the most effective privacy tools available like uBlock Origin and other content-blocking and privacy-preserving extensions.

The new extensions button on Firefox gives users control

Adopting Manifest V3 also paved the way for a handy new addition to your Firefox browser toolbar: the extensions button. It gives users the ability to inspect and control which extensions have permission to access the specific websites they visit.

The majority of extensions need access to user data on websites in order to work, which allows extensions to offer powerful features and cater to a variety of user needs. Regrettably, this level of site access can be misused and jeopardize user privacy. The extensions button essentially provides users with an opt-in capability and choice that didn’t exist before.

The panel shows the user’s installed and enabled extensions and their current permissions. Users are free to grant ongoing access to a website or to make that decision per visit and can remove, report, and manage extensions and their permissions directly from the toolbar. 

And if you’re not seeing those controls for a beloved extension of yours, it’s most likely because it’s not yet available in its Manifest V3 version. Don’t fret! Changes take time.

We love choice, especially when tied to enhancing user privacy and security – a double-win!

At Mozilla, we’re all about protecting your privacy and security – all while offering add-ons and features that enhance performance and functionality so you can experience the very best of the web. If interested, you can find more information about the extensions button at

And if you’re a longtime Chrome user, don’t sweat it! Exploring a safer and more private alternative doesn’t have to be challenging. We can help you make the switch from Chrome to Firefox as your desktop browser in five simple steps. And don’t worry, you can bring along your bookmarks, saved passwords and even browsing history with you!

Interested in exploring thousands of free add-ons created by independent developers from all over the world? Please visit to explore Firefox-recommended add-ons.

The post Here’s what’s going on in the world of extensions appeared first on The Mozilla Blog.

Karl DubostQuirks, Site Interventions And Fixing Websites

Jelmer recently asked: "What is Site Specific Hacks?" in the context of the Web Inspector.

Red cinema building with windows and a sign cellphone repair.

Safari Technical Preview 161 shows a new button to be able to activate or deactivate Site Specific Hacks. But what are these?

Panel in the Web inspector helping to activate or deactivate some options.

Site Specific Hacks?

Site Specific Hacks are pieces of WebKit code (called Quirks internally) to change the behavior of the browser in order to repair for the user a broken behavior from a website.

When a site has a broken behavior and is then not usable by someone in a browser, there are a couple of choices:

  • If the broken behavior is widespread across the Web, and some browsers work with it, the standard specification and the implementations need to be changed.
  • If the broken behavior is local to one or a small number of websites, there are two non-exclusive options: reach out to the website to get it fixed, and/or add a site specific hack to the browser.

Outreach improves the Web, but it is costly in time and effort. Often it's very hard to reach the right person, and it doesn't necessarily lead to the expected result. Websites also have their own business priorities.

A site specific hack, or quirk in WebKit lingo, is a fix that helps the browser cope with a website's coding pattern that fails in a specific context. Quirks are definitely band-aids, not a development strategy; they exist to give the person using the browser a good and fluid experience. Ideally, outreach happens in parallel and we can remove the site specific hack after a while.

A Recent Example : FlightAware Webapp

I recently removed a quirk in WebKit, which was put in place in the past to solve an issue.

The bug manifested in WebViews of iOS applications on devices where window.devicePixelRatio is 3.

with this function

if (
(i && === e
    ? ((this.container = t),
    (this.context = i),
    (this.containerReused = !0))
    : this.containerReused &&
    ((this.container = null),
    (this.context = null),
    (this.containerReused = !1)),
) {
(n = document.createElement("div")).className = o;
var a =;
(a.position = "absolute"), (a.width = "100%"), (a.height = "100%");
var s = (i = li()).canvas;
    ((a = = "absolute"),
    (a.left = "0"),
    (a.transformOrigin = "top left"),
    (this.container = n),
    (this.context = i);

which compares two strings for equality: one with the value

matrix(0.333333, 0, 0, 0.333333, 0, 0)

and e with the value

matrix(0.3333333333333333, 0, 0, 0.3333333333333333, 0, 0)

The matrix string was computed by:

function Je(t) {
    return "matrix(" + t.join(", ") + ")";
}

These two strings are clearly different, so the code above never executed.

In the CSSOM specification for serialization, it is mentioned:


A base-ten number using digits 0-9 (U+0030 to U+0039) in the shortest form possible, using "." to separate decimals (if any), rounding the value if necessary to not produce more than 6 decimals, preceded by "-" (U+002D) if it is negative.
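To see the rounding rule in miniature, here is a Python sketch (illustrative only; the site's code was JavaScript) of the same ratio serialized at full precision versus rounded to 6 decimals:

```python
# Illustrative only: the same ratio serialized two ways, mirroring why the
# two matrix strings on a devicePixelRatio-3 device could never be equal.
ratio = 1 / 3

full = f"matrix({ratio}, 0, 0, {ratio}, 0, 0)"          # full-precision serialization
rounded = f"matrix({ratio:.6f}, 0, 0, {ratio:.6f}, 0, 0)"  # CSSOM: at most 6 decimals

print(full)     # matrix(0.3333333333333333, 0, 0, 0.3333333333333333, 0, 0)
print(rounded)  # matrix(0.333333, 0, 0, 0.333333, 0, 0)
print(full == rounded)  # False
```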

It was not always like this. The specification changed at some point, the implementations changed, and the issue surfaced once WebKit became compliant with the specification.

The old code was like this:

e.prototype.renderFrame = function (t, e) {
  var r = t.pixelRatio,
    n = t.layerStatesArray[t.layerIndex];
  !(function (t, e, r) {
    We(t, e, 0, 0, r, 0, 0);
  })(this.pixelTransform, 1 / r, 1 / r),
    qe(this.inversePixelTransform, this.pixelTransform);
  var i = Je(this.pixelTransform);
  this.useContainer(e, i, n.opacity);

// cut for brevity


Specifically, this line could be fixed like this:

  })(this.pixelTransform, (1/r).toFixed(6), (1/r).toFixed(6) ),

That would probably help a lot. Note that Firefox, and probably Chrome, may have had the same issue on devices where window.devicePixelRatio is 3.

Outreach worked and they changed the code, but in the meantime the quirk was here to help people have a good user experience.

Why Deactivating A Quirk In The Web Inspector?

Why does the Web Inspector make it possible to deactivate site specific hacks, aka quirks?

  1. Web developers for the impacted websites need to know if their fix solves the current issue, so they need a way to see how the browser behaves without the quirk.
  2. Browser implementers and QA need to know if a quirk is still needed for a specific website. Deactivating a quirk gives a quick way to test whether it can be removed.

Could You Help WebKit Having Less Quirks?

The main list of quirks is visible in the source code of WebKit. If you are part of a site for which WebKit had to create a quirk, do not hesitate to contact me on GitHub, by mail, or on Mastodon, and we can find a solution together to remove the quirk in question.


The Servo BlogServo to Advance in 2023

We would like to share some exciting news about the Servo project. This year, thanks to new external funding, a team of developers will be actively working on Servo. The first task is to reactivate the project and the community around it, so we can attract new collaborators and sponsors for the project.

The focus for 2023 is to improve the situation of the layout system in Servo, with the initial goal of getting basic CSS2 layout working. Given the renewed activity in the project, we will keep you posted with more updates throughout the year. Stay tuned!

About Servo

Created by Mozilla Research in 2012, the Servo project is a Research & Development effort meant to create an independent, modular, embeddable web engine that allows developers to deliver content and applications using web standards. Servo is an experimental browser engine written in Rust, taking advantage of the memory safety properties and concurrency features of the language. Stewardship of Servo moved from Mozilla Research to the Linux Foundation in 2020, where its mission remains unchanged.

Frederik BraunOrigins, Sites and other Terminologies

In order to fully discuss security issues, their common root causes and useful prevention or mitigation techniques, you will need some common ground on the security model of the web. This, in turn, relies on various terms and techniques that will be presented in the next sections.

Feel free to …

Support.Mozilla.OrgIntroducing Erik Avila

Hey folks,

I’m delighted to introduce you to Erik Avila who is joining our team as an additional Community Support Advocate. Here’s a short intro from Erik:

Hi! I’m Erik. I’ll be helping the mobile support team to moderate and send responses to app reviews, also, I’ll help identify trends to track them. I’m very excited to help and work with you all.

Erik will be helping out with the Mobile Store Support initiative, alongside Dayana. We also introduced him in the community call last week.

Please join me to congratulate and welcome Erik!

Patrick ClokeResearching for a Matrix Spec Change

The Matrix protocol is modified via Matrix Spec Changes (frequently abbreviated to “MSCs”). These are short documents describing any technical changes and why they are worth making (see an example). I’ve written a bunch and wanted to document my research process. [1]


I treat my research as a living document, not an artifact. Thus, I don’t worry much about the format. The important part is to start writing everything down to have a single source of truth that can be shared with others.

First, I write out a problem statement: what am I trying to solve? (This step might seem obvious, but it is useful to frame the technical changes in why they matter. Many proposals seem to skip this step.) Most of my work tends to be from the point of view of an end-user, but some changes are purely technical. Regardless, there is benefit from a shared written context of the issue to be solved.

From the above and prior knowledge, I list any open questions (which I update through this process). I’ll augment the questions with answers I find in my research, write new ones about things I don’t understand, or remove them as they become irrelevant.

Next, I begin collecting any previous work done in this area, including:

  • What is the current specification related to this? I usually pull out blurbs (with links back to the source) from the latest specification.

  • Are there any related known issues? It is also worth checking the issue trackers of projects: I start with the Synapse, Element Meta, and Element Web repositories.

  • Are there related outstanding MSCs or previous research? I search the matrix-spec-proposals repository for keywords, open anything that looks vaguely related and then crawl those for mentions of other MSCs. I’ll document the related ones with links and a brief description of the proposed change.

    I include both proposed and closed MSCs to check for previously rejected ideas.

  • Are others interested in this? Have others had conversations about it? I roughly follow the #matrix-spec room or search for rooms that might be interested in the topic. I would recommend joining the #matrix-spec room for brainstorming and searching.

    This step can help uncover any missed known issues and MSCs. I will also ask others with a longer history in the project if I am missing anything.

  • I perform a brief competitive analysis. Information can be gleaned from technical blog posts and API documentation. I consider not just competing products, but also investigate whether others have solved similar technical problems. Other protocols are worth checking (e.g. IRC, XMPP, IMAP).

You can see an example of my research on Matrix read receipts & notifications.

Once I have compiled the above information, I jump into the current implementation to ensure it roughly matches the specification. [2] I start considering what protocol changes would solve the problem and are reasonable to implement. I find it useful to write down all of my ideas, not just the one I think is best. [3]

At this point I have:

  • A problem statement
  • A bunch of background about the current protocol, other proposed solutions, etc.
  • A list of open questions
  • Rough ideas for proposed solutions

The next step is to iterate with my colleagues: answer any open questions, check that our product goals will be met, and seek agreement that we are designing a buildable solution. [4]

Finally, I take the above and formalize it into one or more Matrix Spec Changes. At this point I’ll think about error conditions / responses, backwards compatibility, security concerns, and any other parts of the full MSC. Once it is documented, I make a pull request with the proposal and self-review it for loose ends and clarity. I leave comments for any parts I am unsure about or want to open discussion on.

Then I ask my colleagues to read through it and wait for feedback from both them and any interested community members. It can be useful to be in the #matrix-spec room as folks might want to discuss the proposal.

[1] There’s a useful proposal template that I eventually use, but I do much of this process before constraining myself by that.
[2] This consists of looking through code as well as just trying it out by manually making API calls or understanding how APIs power product features.
[3] Part of the MSC proposal is documenting alternatives (and why you didn’t choose one of those). It is useful to brainstorm early before you’re set on a decision!
[4] I usually work with Matrix homeservers and am not as experienced with clients. It is useful to bounce ideas off a client developer to understand their needs.

This Week In RustThis Week in Rust 477

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on Twitter or @ThisWeekinRust on Mastodon, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is schnellru, which contains a fast and flexible LRU map.

Thanks to Squirrel for the suggestion!

Please submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from the Rust Project

443 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week, with few changes in either direction, and none of significant magnitude.

Triage done by @simulacrum. Revision range: b435960..0442fba

1 Regressions, 1 Improvements, 3 Mixed; 1 of them in rollups. 48 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs
New and Updated RFCs
Call for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

  • No RFCs issued a call for testing this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Upcoming Events

Rusty Events between 2023-01-11 - 2023-02-08 🦀

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.


Please see the latest Who's Hiring thread on r/rust.

Quote of the Week

Now macros are fine, I mean we use them for implementing internals and you know if you have something that [...] needs to be implemented for lots and lots of different concrete types, then macros are a fine choice for that, but exposing that to users is something to be very careful about.

Raph Levien

llogiq is a tad sad there were no suggestions, but still likes the quote he ended up with!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, andrewpollack, U007D, kolharsam, joelmarcey, mariannegoldin, bennyvasquez.

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Firefox Add-on ReviewsRoblox browser extensions for better gaming

Every day, more than 50 million people play among millions of user-created games on Roblox. With a massive global audience and an ocean of games, there are vastly different ways users like to interact with Roblox. This is where the customization power of browser extensions can shine. If you’re a Roblox player or creator, you might be intrigued to explore some of these innovative extensions built just for Roblox users on Firefox. 


Packed with features, BTRoblox can do everything from altering Roblox’s looks to providing potent new functionality. 

Our favorite BTRoblox features include…

  • Change the look.  BTRoblox not only offers a bunch of custom themes to choose from (dark mode is always nice for game screen contrast), but you can even rearrange the site’s main navigation buttons and — perhaps best of all — hide ads. 
  • Currency converter. Get Robux converted into your currency of choice. 
  • Fast user search. Type in the search field and see auto-populated username returns. 

“Amazing extension. I can’t use Roblox anymore without this extension installed. That’s how big a difference it makes.” — Firefox user whosblue

BTRoblox settings panel.


Another deep customization extension, RoGold offers a few familiar tricks like a currency converter and custom themes, but it really distinguishes itself with a handful of unique features. 

Notably unique RoGold features include…

  • Pinned games. Easily access your favorite content from a pin board. 
  • Live server stats. See FPS and ping rates instantly. 
  • Streamer mode. Play privately to avoid recognition. 
  • Bulk asset upload. Great for game creators, you can upload a huge number of decals at once (more asset varieties expected to be added over time).
  • Original finder. Helps you identify original assets and avoid knock-offs prone to suddenly disappearing. 
  • View banned users. What a curious feature — it displays hidden profiles of banned users. 

Roblox Server Finder

Sometimes you just need to find a Roblox game with enough room for you and a few friends. Roblox Server Finder is ideal for that. This single-feature extension simply informs you of the number of players on any public server, so you’ll know precisely which server can accommodate your party. 

It’s a very easy-to-use extension: just select your preferred number of players with the slider and hit Smart Search. You’re good to go!

“A stress reliever for searching servers!” — Firefox user mico

Friend Removal Button

This feature isn’t as sad as it sounds! Roblox puts a cap on the number of “friends” you’re allowed to connect with. But over time Roblox players come and go, accounts get abandoned, things happen. Then you’re left with a bunch of meaningless “friends” that clog your ability to form new Roblox connections. Friend Removal Button can help. 

The extension adds a red-mark button to each friend card so you can easily prune your friend list anytime.

“Finally I don’t have max friends, thanks.” — Niksky6

Roblox URL Launcher

Use URL links to conveniently join games, servers, or studio areas. Roblox URL Launcher can help with a slew of situations. 

Roblox URL Launcher can help in these cases…

  • Easily follow friends into live games with just a link.
  • Go to a specific area of the studio. 
  • Join a server directly (also works with private servers if you have access). 

Hopefully you found an extension that will enhance your Roblox experience on Firefox! Do you also play on Steam? If so, check out these excellent Steam extensions.

The Rust Programming Language BlogAnnouncing Rust 1.66.1

The Rust team has published a new point release of Rust, 1.66.1. Rust is a programming language that is empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.66.1 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.66.1 on GitHub.

What's in 1.66.1 stable

Rust 1.66.1 fixes Cargo not verifying SSH host keys when cloning dependencies or registry indexes with SSH. This security vulnerability is tracked as CVE-2022-46176, and you can find more details in the advisory.

Contributors to 1.66.1

Many people came together to create Rust 1.66.1. We couldn't have done it without all of you. Thanks!

The Rust Programming Language BlogSecurity advisory for Cargo (CVE-2022-46176)

This is a cross-post of the official security advisory. The official advisory contains a signed version with our PGP key, as well.

The Rust Security Response WG was notified that Cargo did not perform SSH host key verification when cloning indexes and dependencies via SSH. An attacker could exploit this to perform man-in-the-middle (MITM) attacks.

This vulnerability has been assigned CVE-2022-46176.


When an SSH client establishes communication with a server, to prevent MITM attacks the client should check whether it already communicated with that server in the past and what the server's public key was back then. If the key changed since the last connection, the connection must be aborted as a MITM attack is likely taking place.

It was discovered that Cargo never implemented such checks, and performed no validation on the server's public key, leaving Cargo users vulnerable to MITM attacks.

Affected Versions

All Rust versions containing Cargo before 1.66.1 are vulnerable.

Note that even if you don't explicitly use SSH for alternate registry indexes or crate dependencies, you might be affected by this vulnerability if you have configured git to replace HTTPS connections to GitHub with SSH (through git's url.<base>.insteadOf setting), as that'd cause you to clone the index through SSH.
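For reference, such a rewrite rule looks roughly like this in a git configuration file (the exact URLs depend on your setup; these are illustrative):

```ini
# ~/.gitconfig -- rewrites HTTPS GitHub URLs to SSH; with this in place,
# Cargo's clones of the index and dependencies go over SSH as well
[url ""]
    insteadOf = ""
```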


We will be releasing Rust 1.66.1 today, 2023-01-10, changing Cargo to check the SSH host key and abort the connection if the server's public key is not already trusted. We recommend that everyone upgrade as soon as possible.

Patch files for Rust 1.66.0 are also available here for custom-built toolchains.

For the time being Cargo will not ask the user whether to trust a server's public key during the first connection. Instead, Cargo will show an error message detailing how to add that public key to the list of trusted keys. Note that this might break your automated builds if the hosts you clone dependencies or indexes from are not already trusted.
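If automated builds break, the host key can be added to Cargo's trusted list via its configuration. To the best of my knowledge the relevant setting in Cargo 1.66.1 is net.ssh.known-hosts; a sketch (the key value below is a placeholder, not a real key):

```toml
# ~/.cargo/config.toml
[net.ssh]
# entries use the same "host key-type base64-key" format as ~/.ssh/known_hosts;
# the value below is a placeholder
known-hosts = [" ssh-ed25519 AAAAC3...placeholder"]
```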

If you can't upgrade to Rust 1.66.1 yet, we recommend configuring Cargo to use the git CLI instead of its built-in git support. That way, all git network operations will be performed by the git CLI, which is not affected by this vulnerability. You can do so by adding this snippet to your Cargo configuration file:

[net]
git-fetch-with-cli = true


Thanks to the Julia Security Team for disclosing this to us according to our security policy!

We also want to thank the members of the Rust project who contributed to fixing this issue. Thanks to Eric Huss and Weihang Lo for writing and reviewing the patch, Pietro Albini for coordinating the disclosure and writing this advisory, and Josh Stone, Josh Triplett and Jacob Finkelman for advising during the disclosure.

Updated on 2023-01-10 at 21:30 UTC to include additional mitigations.

Wladimir PalantTouchEn nxKey: The keylogging anti-keylogger solution

Update (2023-01-16): This article is now available in Korean.

I wrote about South Korea’s mandatory so-called security applications a week ago. My journey here started with TouchEn nxKey by RaonSecure which got my attention because the corresponding browser extension has more than 10 million users – the highest number Chrome Web Store will display. The real number of users is likely considerably higher, the software being installed on pretty much any computer in South Korea.

That’s not because people like it so much: they outright hate it, resulting in an average rating of 1.3 out of 5 stars and lots of calls to abolish it. Yet using it is required if you want to do things like online banking in South Korea.

The banks pushing for the software to be installed claim that it improves security. People call it “malware” and a “keylogger.” I spent some time analyzing the inner workings of the product and determined the latter to be far closer to the truth. The application indeed contains key logging functionality by design, and it fails to sufficiently restrict access to it. In addition, various bugs range from simple denial of service to facilitating remote code execution. Altogether I reported seven security vulnerabilities in the product.

The backdrop

After I gave an overview of South Korea’s situation, people started discussing my article on various Korean websites. One comment in particular provided crucial information that I was missing: two news stories from 2005 on the Korea Exchange Bank hacking incident [1] [2]. These are light on technical details but let me try to explain how I understand this.

This was apparently a big deal in Korea in 2005. A cybercrime gang managed to steal 50 million Won (around $50,000 at the time) from people’s banking accounts by means of a Remote Access Trojan. This way they not only got the user’s login credentials but also information from their security card. From what I can tell, this security card was similar to indexed TANs, a second factor authentication method banished in the European Union in 2012 for the exact reason of being easily compromised by banking trojans.

How did the users’ computers get infected with this malicious application? From the description this sounds like a drive-by download: when visiting a malicious website, a browser vulnerability was likely exploited. It’s also possible however that the user was tricked into installing the application. The browser in question isn’t named, but it is certain to be Internet Explorer, as South Korea didn’t use anything else at this point.

Now the news stress the point that the user didn’t lose or give away their online banking credentials, they’ve done nothing wrong. The integrity of online banking in general is being questioned, and the bank is criticized for not implementing sufficient security precautions.

In 2005 there were plenty of stories like this one in other countries as well. While I cannot claim that the issue has been completely eliminated, today it is far less common. On the one hand, web browsers got way more secure. On the other hand, banks have improved their second factor. At least in Europe you usually need a second device to confirm a transaction. And you see the transaction details when confirming, so you won’t accidentally confirm a transfer to a malicious actor.

South Korea chose a different route, the public outrage demanded quick results. The second news story identifies the culprit: a security application could have stopped the attack, but its use wasn’t mandatory. And the bank complies. It promises to deliver an “anti-hacking” application and to make its use mandatory for all users.

So it’s likely not a coincidence that I can find the first mentions of TouchEn Key around 2006/2007. The application claims to protect your sensitive data when you enter data into a web page. Eventually, TouchEn nxKey was developed to support non-Microsoft browsers, and that’s the one I looked into.

What does TouchEn nxKey actually do?

All the public sources on TouchEn nxKey tell that it is somehow meant to combat keyloggers by encrypting keyboard input. That’s all the technical information I could find. So I had to figure it out on my own.

Websites relying on TouchEn nxKey run the nxKey SDK which consists of two parts: a bunch of JavaScript code running on the website and some server-side code. Here is how it works:

  1. You enter a password field on a website that uses the nxKey SDK.
  2. JavaScript code of the nxKey SDK detects it and notifies your local nxKey application.
  3. nxKey application activates its device driver in the Windows kernel.
  4. Device driver now intercepts all keyboard input. Instead of having it processed by the system, keyboard input is sent to the nxKey application.
  5. The nxKey application encrypts the keyboard input and sends it to the JavaScript code of the nxKey SDK.
  6. The JavaScript code puts the encrypted data into a hidden form field. The actual password field receives only dummy text.
  7. You finish entering your login credentials and click “Login.”
  8. The encrypted keyboard input is sent to the server along with other data.
  9. The server-side part of the nxKey SDK decrypts it and retrieves the plain text password from it. Regular login procedure takes over.
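The flow above can be sketched as a toy model (every name here is invented for illustration, and base64 stands in for the real encryption scheme):

```javascript
// Toy model of the nxKey flow -- not the real SDK.
function nxKeyAppEncrypt(keystrokes) {
  // steps 3-5: the driver intercepts keyboard input and the application
  // "encrypts" it before handing it back to the page's JavaScript
  return Buffer.from(keystrokes, "utf8").toString("base64");
}

function sdkOnPasswordInput(field, keystrokes) {
  // step 6: encrypted data lands in a hidden form field while the visible
  // password field only ever receives dummy characters
  field.hidden = nxKeyAppEncrypt(keystrokes);
  field.visible = "*".repeat(keystrokes.length);
}

function serverSideDecrypt(hidden) {
  // step 9: the server-side SDK recovers the plain-text password
  return Buffer.from(hidden, "base64").toString("utf8");
}

const field = {};
sdkOnPasswordInput(field, "hunter2");
console.log(field.visible);                  // dummy text, nothing sensitive
console.log(serverSideDecrypt(field.hidden)); // the original password
```

The point of the design is that only step 9, which holds the private key, can recover the input; everything observable in the browser is dummy text or ciphertext.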

So the theory is: a keylogger attempting to record data entered into this website will only see encrypted data. It can see the public key used by the website, but it won’t have the corresponding private key. So no way to decrypt, the password is safe.

Yes, it’s a really nice theory.

How do websites communicate with TouchEn nxKey?

How does a website even know that a particular application is installed on the computer? And how does it communicate with it?

It appears that there is an ongoing paradigm shift here. Originally, TouchEn nxKey required its browser extension to be installed. That browser extension forwarded requests from the website to the application using native messaging. And it delivered responses back to the webpage.

Yet using browser extensions as intermediaries is no longer state of the art. The current approach is for websites to use the WebSockets API to communicate with the application directly. Browser extensions are no longer required.

Diagram: the website communicates with the TouchEn browser extension via touchenex_nativecall(), and the extension in turn communicates with the CrossEXChrome application via Native Messaging. Alternatively, the website communicates directly with the CrossEXService application via WebSocket.

I’m not sure when exactly this paradigm shift started, but it is far from complete yet. While some websites like Citibank Korea use the new WebSocket approach exclusively, other websites like that of the Busan Bank still run older code which relies exclusively on the browser extensions.

This does not merely mean that users still need to have the browser extension installed. It also explains the frequent complaints about the software not being recognized despite being installed. These users got the older version of the software installed, one that does not support WebSocket communication. There is no autoupdate, and some banks still offer these older versions for download; it’s a mistake I made myself originally.

Abusing TouchEn extension to attack banking websites

The TouchEn browser extension is really tiny, its functionality being minimal. It should be hard to do much wrong here. Yet looking through its code, we see comments like this one:

result = JSON.parse(result);
var cbfunction = result.callback;

var reply = JSON.stringify(result.reply);
var script_str = cbfunction + "(" + reply + ");";
//eval(script_str);
if(typeof window[cbfunction] == 'function')
{
  window[cbfunction](result.reply);
}
So somebody designed a horribly bad (meaning: actually dangerous) way of doing something. Then they either realized that it could be done without eval(), or somebody pointed it out to them. Yet rather than removing the bad code, they kept it around just in case. Quite frankly, to me this demonstrates a very bad grasp of JavaScript, security and version control. And maybe it’s just me, but I wouldn’t let this person write code for a security product unsupervised.
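To see why building a code string for eval() is dangerous while a name-based lookup is not, consider this toy illustration (all names invented):

```javascript
// Toy comparison of eval-based dispatch vs. name-based dispatch.
let pwned = false;

function unsafeDispatch(cbName, reply) {
  // concatenate-and-eval: cbName can be ANY JavaScript expression
  return eval(cbName + "(" + JSON.stringify(reply) + ")");
}

function saferDispatch(cbName, reply) {
  // name lookup: only an existing global function with that exact name runs
  if (typeof globalThis[cbName] === "function") {
    return globalThis[cbName](reply);
  }
}

const evil = "(() => { pwned = true })"; // an expression, not a function name
saferDispatch(evil, {});  // no global of that name exists, nothing happens
console.log(pwned);       // false
unsafeDispatch(evil, {}); // eval executes the attacker-supplied expression
console.log(pwned);       // true
```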

Either way, the dangerous eval() calls have already been purged from the browser extension. Not so from the JavaScript part of the nxKey SDK used by banking websites, but those are of no concern so far. Still, with the code quality this bad, there are bound to be more issues.

And I found such an issue in the callback mechanism. A website can send a setcallback request to the application in order to register for some events. When such an event occurs, the application will instruct the extension to call the registered callback function on the page. Essentially, any global function on the page can be called by name.

Could a malicious webpage register a callback for some other web page then? There are two hurdles:

  1. The target webpage needs to have an element with id="setcallback".
  2. Callbacks are delivered to a specific tab.

The first hurdle means that primarily only websites using nxKey SDK can be attacked. When communicating via the browser extensions these will create the necessary element. Communication via WebSockets doesn’t create this element, meaning that websites using newer nxKey SDK aren’t affected.

The second hurdle seems to mean that only pages loaded in the current tab can be attacked, e.g. those loaded in a frame. Unless the nxKey application can be tricked into setting a wrong tabid value in its response.

And this turned out surprisingly easy. While the application uses a proper JSON parser to process incoming data, the responses are generated by means of calling sprintf_s(). No escaping is performed. So manipulating some response properties and adding quotation marks to it allows injecting arbitrary JSON properties.

  id: 'something","x":"y'

The id property will be copied into the application’s response, meaning that the response suddenly gets a new JSON property called x. This vulnerability allows injecting any value for tabid into the response.

How does a malicious page know the ID of a banking tab? It could use its own tab ID (which TouchEn extension helpfully exposes) and try guessing other tab IDs. Or it could simply leave this value empty. The extension is being helpful in this case:

tabid = response.response.tabid;
if (tabid == "")
{
  chrome.tabs.query({active: true, currentWindow: true}, function(tabs) {
    chrome.tabs.sendMessage(tabs[0].id, response, function(res) {});
  });
}

So if the tabid value is empty it will deliver the message to the currently active tab.

Meaning that one possible attack looks like this:

  1. Open a banking website in a new tab, it becoming the active tab.
  2. Wait for the page to load, so that the element with id="setcallback" is present.
  3. Send a setcallback message via the TouchEn extension to set a callback to some function, while also overwriting JSON response properties with "tabid":"" and "reply":"malicious payload".

The first call to the callback occurs immediately. So the callback function will be called in the banking website, with the malicious payload from the reply property as parameter.

We are almost there. A possible callback function could be eval but there is a final hurdle: TouchEn passes the reply property through JSON.stringify() before giving it to the callback. So we actually get eval("\"malicious payload\"") and this doesn’t do anything.

On the other hand, maybe the target page has jQuery? And calling $('"<img src=x onerror=alert(\'Hi,_this_is_JavaScript_code_running_on_\'+document.domain)>"') will produce the expected result: an alert dialog showing the target site’s domain.

Is expecting jQuery for an attack to succeed cheating? Not quite. The websites using TouchEn nxKey will most likely use TouchEn Transkey (an on-screen keyboard) as well, and this one relies on jQuery. Altogether, South Korean banking sites seem heavily dependent on jQuery, which is a bad idea.

But update_callback, the designated callback of the nxKey SDK, can also be abused to run arbitrary JavaScript code when passed JSON-stringified data. Calling update_callback('{"FaqMove":"javascript:alert(\'Hi, this is JavaScript code running on \'+document.domain)"}') will attempt to redirect to a javascript: link, running arbitrary code as a side-effect.

So this attack allows a malicious website to compromise any website relying on the TouchEn extension. And none of the “security” applications South Korean banks force users to install detect or prevent this attack.

Side-note: Browser extensions similar to TouchEn

Back when I started my testing there were two TouchEn extensions in the Chrome Web Store. The less popular but largely identical extension has since been removed.

This isn’t the end of the story however. I found three more almost identical extensions: CrossWeb EX and Smart Manager EX by INISAFE as well as CrossWarpEX by iniLINE. CrossWeb EX is the most popular of those and currently listed with more than 4 million users. These extensions similarly expose websites to attacks.

My first thought was that RaonSecure and INISAFE belong to the same company group. That doesn’t appear to be the case.

But then I saw this page by the iniLINE software development company:

A web page featuring Initech and RaonSecure logos among others.

This lists Initech and RaonSecure as partners, so it would appear that iniLINE are the developers of these problematic browser extensions. Another interesting detail: the first entry in the “Major customers” line at the top is the Ministry of National Defense. I just hope that their defense work results in better code than what their other partners get…

Using keylogging functionality from a website

Now let’s say that there is a malicious website. And let’s say that this website tells TouchEn nxKey: “Hi there, the user is on a password field right now, and I want the data they enter.” Will that website get all the keyboard input then?

Yes, it will! It will get whatever the user types, regardless of which browser tab is active right now or whether the browser itself is active at all. The nxKey application simply complies with the request, it won’t check whether it makes any sense at this point. In fact, it will even give websites the administrator password entered into a User Account Control prompt.

But there certainly are hurdles? Yes, there are. First of all, such a website needs a valid license. It needs to communicate that license in the get_versions call prior to using any application functionality:

{
  "tabid": "whatever",
  "init": "get_versions",
  "m": "nxkey",
  "origin": "",
  …
}

This particular license is only valid for one specific origin. So it can only be used by that website, or by any other website claiming to be that origin.

See that origin property in the code above? Yes, TouchEn nxKey actually believes that value rather than looking at the Origin HTTP header. So it is trivial to lift a license from some website using nxKey legitimately and claim to be that website. It’s not even necessary to create a fake license.

Another hurdle: won’t the data received by the malicious website be encrypted? How does one decrypt it? One could supply a different public key, one where the private key is known. Then one would only need to know the algorithm, and decrypting the data would work.

Except: none of that is necessary. If TouchEn nxKey doesn’t receive any public key at all, it will simply drop the encryption! The website will receive the keyboard input in clear text then.

Behold, my proof of concept page (less than 3 kB with all the HTML boilerplate):

Webpage screenshot: Hey, this page knows what you type into other applications! Type in any application and watch the text appear here: I AM TYPING THIS INTO A UAC PROMPT

There is still a third hurdle, one that considerably reduces the severity of this vulnerability: keyboard input intercepted by a malicious web page no longer reaches its destination. A user is bound to get suspicious when they start typing in a password, yet nothing appears in the text field. My analysis of the nxKey application suggests that it only works this way: the keyboard input reaches either the web page or its actual target, but never both.

Attacking the application itself

We’ve already established that whoever wrote the JavaScript code of this product wasn’t very proficient at it. But maybe that’s because all their experts have a C++ background? We’ve seen this before: developers trying to leave JavaScript behind and delegate all tasks to C++ code as soon as possible.

Sadly, this isn’t a suspicion I can confirm. I’m way more used to analyzing JavaScript than binary code, but it seems that the application itself is similarly riddled with issues. In fact, it mostly uses approaches typical to C rather than C++. There is lots of manual memory management here.

I already mentioned their use of sprintf_s(). An interesting fact about functions like sprintf_s() or strcpy_s(): while these are the “memory safe” versions of sprintf() or strcpy() which won’t overflow the buffer, they are still tricky to use. If you fail to give them a sufficiently large buffer, they will invoke the invalid parameter handler. And by default this makes the application crash.

Guess what: the nxKey application almost never makes sure the buffer is sufficiently large. And it doesn’t change the default behavior either. So sending it an overly large value will in many cases crash the application. A crash is better than a buffer overflow, but a crashed application can no longer do its job. Typical result: your online banking login form appears to work correctly, but it receives your password as clear text now. You only notice something being wrong when submitting the form results in an error message. This vulnerability allows Denial-of-Service attacks.

Another example: out of all JSON parsers, the developers of the nxKey application picked out the one written in C. Not only that, they also took a random repository state from January 2014 and never bothered updating it. That null pointer dereference fixed in June 2014? Yeah, still present. So sending ] (a single closing square bracket) to the application instead of JSON data is sufficient to crash it. Another vulnerability allowing Denial-of-Service attacks.
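For contrast, a memory-safe parser turns the same malformed input into a catchable error instead of a crash; a minimal defensive wrapper in JavaScript:

```javascript
// Malformed input like a lone "]" yields a catchable parse error,
// leaving the process running.
function safeParse(text) {
  try {
    return { ok: true, value: JSON.parse(text) };
  } catch (e) {
    return { ok: false, error: String(e) };
  }
}

console.log(safeParse("]").ok);            // false -- rejected, no crash
console.log(safeParse('{"a":1}').value.a); // 1
```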

And that WebSockets server websites connect to? It uses OpenSSL. Which OpenSSL? Actually, OpenSSL 1.0.2c. Yes, I can almost hear the collective sigh of all the security professionals here. OpenSSL 1.0.2c is seven years old. In fact, end of support for the 1.0.2 branch was three years ago: on January 1st, 2020. The last release here was OpenSSL 1.0.2u, meaning 18 more releases fixing bugs and security issues. None of the fixes made it into the nxKey application.

Let’s look at something more interesting than crashes. The application license mentioned above is base64-encoded data. The application needs to decode it. The decoder function looks like this:

size_t base64_decode(char *input, size_t input_len, char **result)
{
  size_t result_len = input_len / 4 * 3;
  if (input[input_len - 1] == '=')
    result_len--;
  if (input[input_len - 2] == '=')
    result_len--;
  *result = malloc(result_len + 1);

  // Decoding input in series of 4 characters here
}

I’m not sure where this function comes from. It has clear similarities with the base64 decoder of the CycloneCRYPTO library. But CycloneCRYPTO writes the result into a pre-allocated buffer. So it might be that the buffer allocation logic was added by nxKey developers themselves.

And that logic is flawed. It clearly assumes that input_len is a multiple of four. But for input like abcd== its calculation will result in a 2-byte buffer being allocated, despite the actual output being 3 bytes large.

Is a one-byte heap overflow exploitable? Yes, it clearly is, as this Project Zero blog post or this article by Javier Jimenez explains. Writing such an exploit is beyond my skill level however.

Instead my proof of concept page merely sent the nxKey application randomly generated license strings. This was sufficient to crash the application in a matter of seconds. Connecting the debugger showed clear evidence of memory corruption: the application crashed because it attempted to read or write data using bogus memory locations. In some cases these memory locations came from the data supplied by my website. So clearly someone with sufficient skill and dedication could have abused that vulnerability for remote code execution.

Modern operating systems have mechanisms to make turning buffer overflows like this one into code execution vulnerabilities harder. But these mechanisms only help if they are actually being used. Yet nxKey developers turned Address Space Layout Randomization off on two DLLs loaded by the application, and Data Execution Prevention off on four DLLs.

Abusing the helper application

So far this was all about web-based attacks. But what about the scenario where a malware application already managed to get into the system and is looking for ways to expand its privileges? For an application meant to help combat such malware, TouchEn nxKey does surprisingly badly at keeping its functionality to itself.

There is for example the CKAgentNXE.exe helper application starting up whenever nxKey is intercepting keyboard input. Its purpose: when nxKey doesn’t want to handle a key, make sure it is delivered to the right target application. The logic in the TKAppm.dll library used by the main application looks roughly like this:

if (IsAdmin())
  keybd_event(virtualKey, scanCode, flags, extraInfo);
else
{
  AgentConnector connector;

  // An attempt to open the helper’s IPC objects
  connector.connect();

  if (!connector.connected)
  {
    // Application isn’t running, start it now
    StartAgentProcess();

    while (!connector.connected)
      connector.connect();
  }

  // Some IPC dance involving a mutex, shared memory and events
  connector.sendData(2, virtualKey, scanCode, flags, extraInfo);
}

Since the nxKey application is running with user’s privileges, it will fall back to running CKAgentNXE.exe in every sensible setup. And that helper application, upon receiving command code 2, will call SendInput().

It took me a while to get an idea of what the reason for this approach might be. After all, both nxKey application and CKAgentNXE.exe are running on the same privilege level. Why not just call SendInput()? Why is this indirection necessary?

I noticed however that CKAgentNXE.exe sets a security descriptor for its IPC objects to allow access from processes with integrity level Low. And I also noticed that the setup program creates registry entries under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\Low Rights\ElevationPolicy to allow automatic elevation of CKAgentNXE.exe. And that’s where it clicked: this is all because of the Internet Explorer sandbox.

So when TouchEn Key runs as an ActiveX control in Internet Explorer, its integrity level is Low. Being sandboxed in this way effectively makes it impossible to use SendInput(). This restriction is circumvented by allowing CKAgentNXE.exe to be run and automatically elevated from the Internet Explorer sandbox. Once the helper application is running, the sandboxed ActiveX control can connect to it and ask it to do something. Like calling SendInput().

Outside of Internet Explorer this approach makes no sense, yet TouchEn nxKey also delegates work to CKAgentNXE.exe. And this has consequences for security.

Let’s say we have malware running at integrity level Low. It likely got there by exploiting a browser vulnerability, but now it is stuck in that sandbox. What can it do now? Why, just wait for CKAgentNXE.exe to start up (bound to happen sooner or later) and use it to break out!

My proof of concept application asked CKAgentNXE.exe to generate fake keyboard input for it: Win key, then C, M, D and the Enter key. This resulted in a command line prompt being opened, this one running with the Middle integrity level (the default one). A truly malicious application could then type in an arbitrary command to run code outside the sandbox.

Not that a truly malicious application would do things in such a visible way. CKAgentNXE.exe also accepts command code 5 for example which will load an arbitrary DLL into any process. That’s a much nicer way to infect a system, don’t you think?

At least this time one of the mandatory security applications decided to make itself useful and flag the threat:

AhnLab Safe Transaction application warning about C:\Temp\test.exe being infected with Malware/Win.RealProtect-LS.C5210489

A malware author could probably figure out what triggers this warning and get around it. Or they could initiate a WebSocket connection to make sure CKAgentNXE.exe starts up without also activating the AhnLab application, like a real banking website would. But why bother? It’s only a prompt; the attack isn’t being stopped proactively. By the time the user clicks to remove the malicious application, it will be too late – the attack already succeeded.

Accessing the driver’s keylogging functionality directly

As mentioned above, TouchEn nxKey application (the one encrypting keyboard input it receives from the driver) is running with user’s privileges. It isn’t an elevated application, it has no special privileges. How is access to the driver’s functionality being restricted then?

The correct answer of course is: it isn’t. Any application on the system has access to this functionality. It only needs to know how nxKey communicates with its driver. And in case you are wondering: that communication protocol isn’t terribly complicated.

I am not sure what the idea here was. TKAppm.dll, the library doing the driver communication, is obfuscated using Themida. The vendor behind Themida promises:

Themida® uses the SecureEngine® protection technology that, when running in the highest priority level, implements never seen before protection techniques to protect applications against advanced software cracking.

Maybe nxKey developers thought that this would offer sufficient protection against reverse engineering. Yet connecting a debugger at runtime allows saving the already decrypted TKAppm.dll memory and loading the result into Ghidra for analysis.

Message box titled TouchEn nxKey. The text says: Debugging Program is detected. Please Close Debugging Program and try again. TouchEn nxKey will not work with subsequent key. (If system is virtual PC, try real PC.)

Sorry, too late. I’ve already got what I needed. And it was no use that your application refuses to work when booting in Safe Mode.

Either way, I could write a tiny (70 lines of code) application that would connect to the driver and use it to intercept all keyboard input on the system. It didn’t require elevation, running with user’s privileges was sufficient. And unlike with a web page this application could also make sure this keyboard input is delivered to its destination, so the user doesn’t notice anything. Creating a keylogger was never so easy!

The best part: this keylogger integrated with the nxKey application nicely. So nxKey would receive keyboard input, encrypt it and send encrypted data to the website. And my tiny keylogger would also receive the same keyboard input, as clear text.

Side-note: Driver crashes

There is something you should know when developing kernel drivers: crashing the driver will crash the entire system. This is why you should make extra certain that your driver code never fails.

Can the driver used by nxKey fail? While I didn’t look at it too closely, I accidentally discovered that it can. See, the application will use DeviceIoControl() to ask the driver for a pointer to the input buffer. And the driver creates this pointer by calling MmMapLockedPagesSpecifyCache().

Yes, this means that this input buffer is visible to every single application on the system. But that’s not the main issue. It’s rather: what happens if the application requests the pointer again? Well, the driver will simply do another MmMapLockedPagesSpecifyCache() call.

After around 20 seconds of doing this in a loop the entire virtual address space is exhausted and MmMapLockedPagesSpecifyCache() returns NULL. The driver doesn’t check the return value and crashes. Boom, the operating system reboots automatically.

This issue isn’t exploitable from what I can tell (note: I am no expert when it comes to binary exploitation), but it is still rather nasty.

Will it be fixed?

Usually, when I disclose vulnerabilities they are already fixed. This time that’s unfortunately not the case. As far as I can tell, none of the issues have been addressed so far. I do not know when the vendors plan to fix these issues. I also do not know how they plan to push out the update to the users, particularly given that banks are already distributing builds that are at least three versions behind the current release. You remember: there is no autoupdate functionality.

Even reporting these issues was complicated. Despite specializing in security, RaonSecure doesn’t list any kind of security contact. In fact, RaonSecure doesn’t list any contact whatsoever, except for a phone number in Seoul. No, I’m not going to phone Korea asking whether anyone there speaks English.

Luckily, KrCERT provides a vulnerability report form specifically for foreign citizens to use. This form will frequently error out and require you to re-enter everything, and some reports get caught up in a web firewall for no apparent reason, but at least the burden of locating the security contact is on someone else.

I reported all the vulnerabilities to KrCERT on October 4th, 2022. I still tried to contact some RaonSecure executives directly but received no response. At least KrCERT confirmed forwarding my reports to RaonSecure roughly two weeks later. They also noted that RaonSecure asked for my email address and wanted to contact me. They never did.

And that’s it. The 90 days disclosure deadline was a week ago. TouchEn nxKey was apparently released on October 4th, 2022, the same day I reported these vulnerabilities. At the time of writing it remains the latest release, and all the vulnerabilities described here are still present in it. The latest version of the TouchEn browser extension used by millions of people is still five years old, released in January 2018.

Update (2023-01-10): In comments to Korean media, RaonSecure claims to have fixed the vulnerabilities and to distribute the update to customers. I cannot currently confirm this claim. The company’s own download server is still distributing TouchEn nxKey

Side-note: The information leak

How do I even know that they are working on a fix? Well, thanks to something that never happened to me before: they leaked my proofs of concept (meaning: almost complete exploits for the vulnerabilities) prior to the deadline.

See, I used to attach files to my reports directly. However, these attachments would frequently end up being removed or otherwise destroyed by overzealous security software. So instead I now upload whatever files are needed to demonstrate the issue to my server. A link to my server always works. Additional benefit: even with companies that don’t communicate I can see in the logs whether the vendor accessed the proof of concept at all, meaning whether my report reached anyone.

A few days ago I checked the logs for accesses to the TouchEn nxKey files. And immediately saw Googlebot. Sure enough: these files ended up being listed in the Google index.

Now I use a random folder name, it cannot be guessed. And I only shared the links with the vendor. So the vendor must have posted a publicly visible link to the exploits somewhere.

And that’s in fact what they did. I found a development server, publicly visible and indexed by Google. It seems that this server was originally linking to my proof of concept pages. By the time I found it, it was instead hosting the vendor’s modified copies of them.

The first request by Googlebot was on October 17th, 2022. So I have to assume that these vulnerabilities could be found via a Google search more than two months prior to the disclosure deadline. They have been accessed many times, hard to tell whether it’s only been the product’s developers.

After reporting this issue the development server immediately disappeared from the public internet. Still, such careless handling of security-sensitive information isn’t something I’ve ever seen before.

Can the nxKey concept even work?

We’ve seen a number of vulnerabilities in the TouchEn nxKey application. By attempting to combat keyloggers, nxKey developers built a perfect keylogging toolset and failed to restrict access to it. But the idea is nice, isn’t it? Maybe it would actually be a useful security tool if built properly?

Question is: the keylogger that is being protected against, what level does it run on? The way I see it, there are four options:

  1. In the browser. So some malicious JavaScript code is running in the online banking page, attempting to capture passwords. That code can trivially stop the page from activating nxKey.
  2. In the system, with user’s privileges. This privilege level is e.g. sufficient to kill the CrossEXService.exe process which is also running with user’s privileges. This achieves the same results as my denial-of-service attacks, protection is effectively disabled.
  3. In the system, with administrator privileges. That’s actually sufficient privileges to unload the nxKey driver and replace it by a trojanized copy.
  4. In the hardware. Game over, good luck trying any software-based solutions against that.

So whatever protection nxKey might provide, it relies on attackers who are unaware of nxKey and its functionality. Generic attacks may be thwarted, but it is unlikely to be effective against any attacks targeting specifically South Korean banks or government organizations.

Out of these four levels, number 2 might be possible to fix. The application CrossEXService.exe could be made to run with administrator’s privileges. This would prevent malware from messing with this process. Effectiveness of this protection would still rely on the malware being unable to get into the user’s browser however.

I cannot see how this concept could be made to work reliably against malware operating on other levels.

The Rust Programming Language BlogUpdating the Android NDK in Rust 1.68

We are pleased to announce that Android platform support in Rust will be modernized in Rust 1.68 as we update the target NDK from r17 to r25. As a consequence the minimum supported API level will increase from 15 (Ice Cream Sandwich) to 19 (KitKat).

In NDK r23 Android switched to using LLVM's libunwind for all architectures. This meant that

  1. If a project were to target NDK r23 or newer with previous versions of Rust a workaround would be required to redirect attempts to link against libgcc to instead link against libunwind. Following this update this workaround will no longer be necessary.
  2. If a project uses NDK r22 or older it will need to be updated to use r23 or newer. Information about the layout of the NDK's toolchain can be found here.

Going forward the Android platform will target the most recent LTS NDK, allowing Rust developers to access platform features sooner. These updates should occur yearly and will be announced in release notes.

Patrick ClokeMatrix Read Receipts & Notifications

I recently wrapped up a project on improving notifications in threads for Matrix. This is adapted from my research notes to understand the status quo before adapting the Matrix protocol for threads (in MSC3771 and MSC3773). Hopefully others find the information useful!


These notes are true as of the v1.3 of the Matrix spec and also cover some Matrix spec changes which may or may not have been merged since. It is known to be out of date with the changes from MSC2285, MSC3771, and MSC3773.


Matrix uses “receipts” to note which part of a room has been read by a user. It considers the history for a room to be split into three sections1:

  1. Messages the user has read (or indicated they aren’t interested in them).
  2. Messages the user might have read some but not others.
  3. Messages the user hasn’t seen yet.

The fully read marker is between 1 & 2 while the read receipt is between 2 & 3. Note that fully read markers are not shared with other users while read receipts are.

Another way to consider this is2:

  1. Fully read marker: a private bookmark to indicate the point which has been processed in the discussion. This allows a user to go back to it later.
  2. Read receipts: public indicators of what a user has seen to inform other participants that the user has seen it.
  3. Hidden read receipts: a private mechanism to synchronize “unread messages” indicators between a user’s devices (while still retaining the ability from 1 as a separate concept). (See MSC2285.)

Fully read markers

They are stored in the room account data for the user (but modified via a separate API).

The API is:

POST /_matrix/client/v3/rooms/{roomId}/read_markers

The read receipt can optionally be updated at the same time.

In Element Web your fully read marker is displayed as the green line across the conversation.

Read receipts

Only the m.read receipt type is defined at the moment, but it is meant to be generic infrastructure.

Updating a read receipt updates a “marker” which causes any notifications prior to and including the event to be marked as read.3 A user has a single read receipt “marker” per room.

Passed to clients as an m.receipt event under the ephemeral array for each room in the /sync response. The event includes a map of event ID -> receipt type -> user ID -> data (currently just a timestamp). Note that the event is a delta from previous events. An example:

{
  "content": {
    "$event_id": {
      "m.read": {
        "@user:example.org": {
          "ts": 1436451550453
        }
      }
    }
  },
  "room_id": "!room:example.org",
  "type": "m.receipt"
}

The API is:

POST /_matrix/client/v3/rooms/{roomId}/receipt/{receiptType}/{eventId}

In Element Web read receipts are the small avatars on the right hand side of the conversation. Note that your own read receipt is not shown.

Hidden read receipts (MSC2285)

A new receipt type (m.read.private) to set a read receipt without sharing it with other users. It also modifies the /read_markers API to accept the new receipt type and modifies the /receipts API to accept the fully read marker.

This is useful to synchronize notifications across devices while keeping read status private.

Push rules

A user’s push rules determine one or more user-specific actions on each event. They are customizable, but the homeserver provides default rules. They can result in an action to:

  1. Do nothing
  2. Notify the user (notify action), which can have additional actions (“tweaks”):
    1. Highlight the message (highlight action)
    2. Play a sound (sound action)

By default, all new m.room.message and m.room.encrypted events generate a notification, events with a user’s display name or username in them or @room generate highlights, etc.

Push rules for relations (MSC3664)

Augments push rules to allow applying them to the target of an event relationship and defines a default push rule for replies (not using the reply fallback).

Event notification attributes and actions (MSC2785)

A proposed replacement for push rules; the results are essentially the same actions (and presumably would not change the data returned in /sync, see below).

Notification counts in /sync

The number of notification events and highlight events since the user’s last read receipt are both returned on a per-room basis as part of a /sync response (for joined rooms).

Notification and highlight events are any messages where the push rules resulted in an action of notify or highlight, respectively. (Note that a highlight action must be a notify action, thus highlight_count <= notification_count.)

An example:

{
  "account_data": [...],
  "ephemeral": [...],
  "state": [...],
  "summary": {...},
  "timeline": {...},
  "unread_notifications": {
      "highlight_count": 0,
      "notification_count": 0
  }
}

Unread messages count (MSC2654)

A new field is added under the unread_notifications field (unread_count) which is the total number of events matching particular criteria since the user’s last read receipt.

This replaces MSC2625, which adds a new push rule action (mark_unread) to perform the same task. In this rendition, notify implies mark_unread and thus highlight_count <= notification_count <= unread_count.

Push notifications

Push notifications receive either the number of unread messages (across all rooms) or the number of rooms with unread messages (depending on the value of push.group_unread_count_by_room in the Synapse configuration). Unread messages are any messages where the push rules resulted in an action of notify.

This information is sent from the homeserver to the push gateway as part of every notification:

{
  "notifications": {
    "counts": {
      "unread": 1,
      ...
    },
    ...
  }
}

Wladimir PalantSouth Korea’s online security dead end

Edit (2023-01-04): A Korean translation of this article is now available here, thanks to Woojin Kim. Edit (2023-01-07): Scheduled one more disclosure for February.

Last September I started investigating a South Korean application with unusually high user numbers. It took me a while to even figure out what it really did, there being close to zero documentation. I eventually realized that the application is riddled with security issues and, despite being advertised as a security application, makes the issue it is supposed to address far, far worse.

That’s how my journey to the South Korea’s very special security application landscape started. Since then I investigated several other applications and realized that the first one wasn’t an outlier. All of them caused severe security and privacy issues. Yet they were also installed on almost every computer in South Korea, being a prerequisite for using online banking or government websites in the country.

Message stating: [IP Logger] program needs to be installed to ensure safe use of the service. Do you want to move to the installation page?

Before I start publishing articles on the individual applications’ shortcomings I wanted to post a summary of how (in my limited understanding) this situation came about and what exactly went wrong. From what I can tell, South Korea is in a really bad spot security-wise right now, and it needs to find a way out ASAP.

Historical overview

I’ve heard about South Korea being very “special” every now and then. I cannot claim to fully understand the topic, but there is a whole Wikipedia article on it. Apparently, the root issue was the US export restrictions on strong cryptography in the 1990s. This prompted South Korea to develop their own cryptographic solutions.

It seems that this started a fundamental distrust in security technologies coming out of the United States. So even when the export restrictions were lifted, South Korea continued adding their own security layers on top of SSL. All users had to install special applications just to use online banking.

Originally, these applications used Microsoft’s proprietary ActiveX technology. This only worked in Internet Explorer and severely hindered adoption of other browsers in South Korea.

Wikipedia lists several public movements aimed at improving this situation. Despite the pressure from these, it took until well after 2010 that things actually started changing.

Technologically, the solutions appear to have gone through several iterations. The first one were apparently NPAPI plugins, the closest thing to ActiveX in non-Microsoft browsers. I’ve also seen solutions based on browser extensions which are considerably simpler than NPAPI plugins.

Currently, the vendors appear to have realized that privileged access to the browser isn’t required. Instead, they merely need a communication channel from the websites to their application. So now all these applications run a local web server that websites communicate with.

Current situation

Nowadays, a typical Korean banking website will require five security applications to be installed before you are allowed to log in. One more application is suggested to manage this application zoo. And since different websites require different sets of applications, a typical computer in South Korea probably runs a dozen different applications from half a dozen different vendors. Just to be able to use the web.

Screenshot of a page titled “Install Security Program.” Below it the text “To access and use services on Busan Bank website, please install the security programs. If your installation is completed, please click Home Page to move to the main page. Click [Download Integrated Installation Program] to start automatica installation. In case of an error message, please click 'Save' and run the downloaded the application.” Below that text the page suggests downloading “Integrated installation (Veraport)” and five individual applications.

Each of these applications comes with a website SDK that the website needs to install, consisting of half a dozen JavaScript files. So your typical Korean banking website takes quite a while to load and initialize.

Interestingly, most of these applications don’t even provide centralized download servers. The distribution and updates have been completely offloaded to websites using these security applications.

And that is working exactly as well as you’d expect. Even looking at mere usability, I’ve noticed an application that a few years ago went through a technology change: from using a browser extension to using a local web server for communication. Some banks still distribute and expect the outdated application version, others work with the new one. For users it is impossible to tell why they have the application installed, yet their bank claims that they don’t. And they complain en masse.

Obviously, websites distributing applications also makes them a target. And properly securing so many download servers is unrealistic. So a few years ago the North Korean Lazarus group made the news by compromising some of these servers in order to distribute malware.

Software quality

I took a thorough look at the implementation of several security applications widely used in South Korea. While I’ll go into the specific issues in future blog posts, some tendencies appear universal across the entire landscape.

One would think that being responsible for the security of an entire nation would make vendors of such software extra vigilant. That’s not what I saw however. In fact, security-wise these applications are often decades behind the state of the art.

This starts with a simple fact: some of these applications are written in the C programming language, not even C++. It being a low-level programming language, these days it is typically used in code that has to work close to hardware such as device drivers. Here however it is used in large applications interacting with websites in complicated ways.

The manual approach to memory management in C is a typical source of exploitable memory safety issues like buffer overflows. Avoiding them in C requires utmost care. While such bugs weren’t the focus of my investigation, I couldn’t fail noticing that the developers of these applications didn’t demonstrate much experience avoiding memory safety issues.

Modern compilers provide a number of security mechanisms to help alleviate such issues. But these applications don’t use modern compilers, relying on Visual Studio versions released around 15 years ago instead.

And even the basic security mechanisms supported by these ancient compilers, such as Address Space Layout Randomization (ASLR) and Data Execution Prevention (DEP), tend to be disabled. There is really no good reason for that; these are a pure security benefit “for free.”

To make matters even worse, the open source libraries bundled in these applications tend to see no updates whatsoever. So far the record holder was a library which turned out to be more than a decade old. There have been more than 50 releases of this library since then, with many improvements and security fixes. None of them made it into the application.

Security through obscurity

Given how South Korea’s security applications are all about cryptography, they are surprisingly bad at it. In most cases, cryptography is merely being used as obfuscation, only protecting against attackers who cannot reverse engineer the algorithm. Other issues I’ve seen include dropping encryption altogether when requested, or algorithm parameters that were deprecated decades ago.

In fact, vendors of these applications appear to view reverse engineering as the main issue. There is very little transparency and much security through obscurity here. It’s hard to tell whether this approach actually works to deter hackers or we merely don’t learn about the successful attacks.

Either way, I’ve seen multiple applications use software “protection” that decrypts the code at runtime to prevent reverse engineering. While I don’t have much experience with such mechanisms, I found that attaching to the process with x64dbg at runtime and using the Scylla plugin does just fine to get a decrypted exe/dll file that can be fed into a disassembler.

There are services that will immediately shut down the application if a debugger is attached. And one application even attempts to prevent the browser’s Developer Tools from being used. Neither mechanism mitigates security risks; the goal here is rather maintaining obscurity.

Explanation attempts

I think the main issue here is that the users are not the customers. While this is supposedly all about their safety, the actual customers are the banks. The users don’t get to choose whether to install an application, it is required. And banks can delegate liability away.

If somebody loses money due to a hack, the bank cannot possibly be at fault. The bank did everything right after all. It made the user install all the important security applications. That seems to be the logic here.

This creates a market for bogus security applications. Most of them fail at properly addressing an issue. Way too often they even make matters considerably worse. And in the few cases where meaningful functionality is present, a modern web browser is perfectly capable of it without any third-party software.

But none of this matters as long as banks continue to buy these applications. And whether they do is only related to whether they see a value for themselves, not whether the application does anything meaningful.

The vendors know that of course. That’s why they haven’t been investing into the security of their applications for decades, it simply doesn’t matter. What matters are the features that banks will see. Ways for them to customize the application. Ways for them to collect even more of users’ data. Ways for them to spend money and to get (perceived) security back without any noteworthy effort.

Getting out of the dead end

Unfortunately, I know too little about the Korean society to tell how they would get out of this less than perfect situation. One thing I’m pretty certain about however: improving the existing security applications won’t do it.

Yes, I reported the security and privacy issues I found. I gave the vendors some time to protect the users before my disclosure. And I hope they will.

It isn’t really going to change the situation however. Because many of these issues are by design. And if they fix all of them, they will no longer have a product to sell.

In fact, the ideal outcome is dismantling South Korea’s special security landscape altogether. Not relying on any of these security applications will be a huge win for security. This likely won’t be possible without some definitive legislative action however. Ideally one that will give users a choice again and outlaw forcing installation of third-party applications just to use basic online services.

Schedule of future disclosures

When I report security issues vendors generally get 90 days to fix them. Once that deadline is over I disclose my research. If you are interested in reading the disclosures, you can subscribe to the RSS feed of this blog. Alternatively, you could also check my blog on the scheduled disclosure dates:

Frederik Braun: Finding and Fixing DOM-based XSS with Static Analysis

This article first appeared on the Firefox Attack & Defense blog.

Despite all the efforts of fixing Cross-Site Scripting (XSS) on the web, it continuously ranks as one of the most dangerous security issues in software.

In particular, DOM-based XSS is gaining increasing relevance: DOM-based XSS is a form of XSS …

Mike Taylor: Remember when the IE 11 User-Agent forced Mozilla to freeze part of its User-Agent string (last week)

If you happen to be using Firefox Beta 109 on an overpriced MacBook Pro that has a sticky letter s today (the 29th of December, 2022), this is what the User-Agent string looks like:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/109.0

And as of last week, the UA string in Firefox for versions 110 and higher looks like so:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0

If you managed to visually discover the difference (I guess in the world’s lamest “Spot the Difference” game), congrats. If you didn’t, take note that rv:109.0 did not change in the second one—but Firefox/110.0 did.

So why did Mozilla just freeze rv:109.0 in the User-Agent string? Perhaps forever, or just perhaps until Firefox 120 is released?

Presumably in an attempt to unburden itself from a legacy of UA-sniffing-driven workarounds for a browser that hadn’t historically supported a lot of useful things (like WebGL, or some ES5 or ES6 stuff - I don’t really remember and can’t be bothered to look it up), the IE team decided to change up their User-Agent string back in 2013.

Here’s the IE10 UA, which followed the same-ish predictable pattern they had since IE 2.0:

Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.2; Trident/6.0)

And here’s an IE11 UA:

Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; AS; rv:11.0) like Gecko

Basically they were trying to solve the problem of “now that we’ve invested seriously in web standards, how do we get access to content that makes use of those features, and still have a detectable version number for analytics (or whatever)?” And it doesn’t not make sense to borrow Firefox’s rv: convention to accomplish that (slapping an extra “like Gecko” in there for good luck can’t hurt, I suppose (but more realistically, there was probably some bank or government site that sniffed for Safari’s like Gecko token)).

And then cut to today, about 9 years later where a handful of sites (including popular ones like bestbuy and cvs) tell Firefox users to upgrade to a modern browser, because there’s probably something really clever like var isIE = /rv:11/i.test(navigator.userAgent);.
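
The freeze makes sense if you run that kind of regex against the two UA strings quoted earlier. A quick sketch (the regex is the one quoted above; the “unfrozen” string is hypothetical, showing what Firefox 110 would have sent without the freeze):

```javascript
// The kind of naive sniffing quoted above:
const isIE = ua => /rv:11/i.test(ua);

// Frozen UA: rv: stays at 109 even though Firefox is at 110.
const frozen = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:109.0) Gecko/20100101 Firefox/110.0";
// Hypothetical unfrozen UA: rv: would have moved to 110.
const unfrozen = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:110.0) Gecko/20100101 Firefox/110.0";

console.log(isIE(frozen));   // false: Firefox 110 passes the check
console.log(isIE(unfrozen)); // true: misdetected as IE 11, "please upgrade your browser"
```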

That’s obviously lame, and hence, Mozilla has frozen another part of its UA string for compatibility. Anyways, happy new years, especially to the folks working to make sure the web is still usable in Firefox.

Support.Mozilla.OrgA glimpse of 2022

Hey SUMO nation,

Time surely flies, and here we are, already at the end of the year. 2022 has been an amazing year for the Customer Experience team. We welcomed 5 new people to our team this year, including 2 engineers and 1 technical writer.

As an extension of our team, the SUMO community definitely plays an important role in our achievements. Let’s take a moment to reflect on what the community has accomplished this year. 

  • Forum

From January 1 to November 11, we posted 49K answers (from non-OP* users) to 28K questions posted on the forum. On average, our answer rate within 72 hours is 75%, while our solved rate is around 14%. In a month, around 300 users contribute to the forum (including the OP).

*See Support glossary
  • KB

From January 1 to November 11, the KB contributors have made 1746 revisions (all contributor revisions) with a 73% review rate and 95% approval rate. On average, we have a total of 30 contributors to our Knowledge Base on a monthly basis.

  • Localization

The localization community has been doing great things this year, submitting 13K revisions from January 1 to November 11. The review rate for localization is looking pretty good at 90%, while the approval rate is 99%. On average, around 73 contributors from around 30 locales are involved on a monthly basis. We also saw the PT-PT community recently re-activate after the pandemic, which is amazing.

  • Social Support

From January 1 to December 28, the Social Support contributors contributed 908 responses in total (39.6% of our total responses). We were also able to improve our resolved rate from 58% in 2021 to 70% this year.

  • Mobile Store

Last but not least, from January 1 to December 28, the Mobile Store Support contributors contributed 1.6K replies and onboarded 4 new contributors this year. The response conversion rate (total responses compared against total moderation) is also looking good, at 47% on average throughout the year. In other words, 47% of the reviews that contributors moderated received a reply.

Apart from that, we have also managed to work on a few projects throughout the year:

  • Mobile hybrid support

In Q2, we hired a Community Support Advocate whose primary role is to support the mobile store ecosystem by moderating questions in Google Play Store and Apple App Store. This Community Support Advocate is working alongside contributors on Google Play Store and takes primary care of the App Store reviews as well as moderating forum questions (mainly by adding tags) for the mobile products to this day.

In the spirit of continuing the community program, we also renamed the Mobile Support program to Mobile Store Support in Q4, alongside the introduction of the new contributor onboarding page.

  • Locale audit

We also did a locale audit in Q2 to check on the stage of our localization community. I presented the result of the audit on the community call in June.

  • Internal community dashboard

After the platform team fixed the data pipeline issue that had been going on since the beginning of the year, Q3 followed with a project to create an internal community dashboard. I gave a brief overview of the project back then on the community call in July.

  • MR2022

Major Release 2022 went smoothly in Q3 because of the support of you all. Similar to what we did for the Major Release last year, we prepared a list of changes for contributors and monitored inbounds closely across the channels that we oversee. This time, the product team also worked with the CMO team to collect rapid feedback about some of the major features that we released in Firefox 106.

  • Contributor onboarding launch

In early November, we finally got to see the new face of our contributor onboarding page, which was formerly called the Get Involved page. You can learn more about this update in this blog post or by directly checking out the page.

It was not all rainbows and butterflies, though. In September 2022, we learned of the passing of one of our top forum contributors, FredMcD. It was a great loss for the community.

Despite all the bumps, we survived 2022 with grace and triumph. The numbers I presented at the beginning are not merely metrics. They are reflections of a collective effort from all of you, Mozillians around the world, who worked tirelessly to keep the internet healthy and supported each other in the spirit of keeping the internet a global public resource, open and accessible to all. I’m proud to work alongside you all and to reflect on what we have accomplished this year.

And yes, let’s keep on rocking the helpful web through 2023 and beyond!


If you’ve been lurking and are interested in contributing to Mozilla Support, please head over to our Contribute page to learn more about our programs!




Wladimir Palant: LastPass breach: The significance of these password iterations

LastPass has been breached, data has been stolen. I already pointed out that their official statement is misleading. I also explained that decrypting passwords in the stolen data is possible which doesn’t mean however that everybody is at risk now. For assessing whether you are at risk, a fairly hidden setting turned out critical: password iterations.

LastPass provides an instruction to check this setting. One would expect it to be 100,100 (the LastPass default) for almost everyone. But plenty of people report having 5,000 configured there, some 500 and occasionally it’s even 1 (in words: one) iteration.

Screenshot of LastPass preferences. The value in the Password Iterations field: 1

Let’s say this up front: this isn’t the account holders’ fault. It rather is a massive failure by LastPass. They have been warned, yet they failed to act. And even now they are failing to warn the users who they know are at risk.

What is this setting about?

This setting is actually central to protecting your passwords if LastPass loses control of your data (like they did now). Your passwords are encrypted. In order to decrypt them, the perpetrators need to guess your master password. The more iterations you have configured, the slower this guessing will be. The current OWASP recommendation is 310,000 iterations. So the LastPass default is already a factor of three below the recommendation.

What’s the impact if you have an even lower iterations number configured? Let’s say you have a fairly strong master password, 50 bits of entropy. For example, it could be an eight character random password, with uppercase and lowercase letters, digits and even some special characters. Yes, such a password is already rather hard to remember, but you want your passwords to be secure.

Or maybe you went for a diceware password. You took a word list for four dice (1296 words) and randomly selected five words for your master password.

Choosing a password with 50 bits entropy without it being randomized? No idea how one would do it. Humans are inherently bad at choosing strong passwords. You’d need a rather long password to get 50 bits, and you’d need to avoid obvious patterns like dictionary words.

Either way, if this is your password and someone got your LastPass vault, guessing your master password on a single graphics card would take on average 200 years. Not unrealistic (someone could get more graphics cards) but usually not worth the effort. But that’s the calculation for 100,100 iterations.

Let’s look at how time estimates and cost change depending on the number of iterations. I’ll be using the cost estimate by Jeffrey Goldberg who works at 1Password.

Iterations    Guessing time on a single GPU    Cost
100,100       200 years                        $1,500,000
5,000         10 years                         $75,000
500           1 year                           $7,500
1             17 hours                         $15

And that’s a rather strong password. According to this older study, the average password has merely 40 bits of entropy. So divide all numbers by 1,000 for that.
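
The numbers in the table above follow from simple linear scaling: the guessing time is proportional to the iteration count. A quick sanity check, anchored at the 100,100-iteration estimate:

```javascript
// Anchor: 200 years on a single GPU at 100,100 iterations
// (for a master password with 50 bits of entropy).
const anchorIterations = 100100;
const anchorYears = 200;

// Guessing time scales linearly with the iteration count.
const guessingYears = iterations => anchorYears * iterations / anchorIterations;

console.log(guessingYears(5000).toFixed(1));           // ≈ 10.0 years
console.log(guessingYears(500).toFixed(1));            // ≈ 1.0 year
console.log((guessingYears(1) * 365 * 24).toFixed(1)); // ≈ 17.5 hours
```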

How did the low iteration numbers come about?

The default for LastPass accounts wasn’t always 100,100 iterations. Originally it was merely 1 iteration. At some point this was changed to 500 iterations, later to 5,000. And the final change adjusted this value to 100,100 iterations.

I don’t know exactly when and how these changes happened. Except for the last one: it happened in February 2018 as a result of my research.

Edit (2022-12-30): I now know more. The switch to 500 iterations happened in June 2012, the one to 5,000 iterations in February 2013. To quote Sc00bz: “I shamed the CEO into increasing this. «I think it is irresponsible to tell your users the recommended iteration count is 500. When 12 years ago, PBKDF2 had a recommended minimum iteration count of 1000.»”

LastPass was notified through their bug bounty program on Bugcrowd. When they reported fixing the issue I asked them about existing accounts. That was on February 24th, 2018.

Screenshot from Bugcrowd. bobc sent a message (5 years ago): Ok thank you. Our default is now 100k rounds and artificial limits on number of rounds have been removed. palant sent a message (5 years ago) Yes, the default changed it seems. But what about existing accounts?

They didn’t reply. So I prompted them again in an email on March 15th and got the reply that the migration should take until end of May.

I asked again about the state of the migration on May 23rd. This time the reply was that the migration is starting right now and is expected to complete by mid-June.

On June 25th I was once again contacted by LastPass, asking me to delay disclosure until they finish migrating existing accounts. I replied asking whether the migration actually started now and got the response: yes, it did last week.

My disclosure of the LastPass issues was finally published on July 9th, 2018. After all the delays requested by LastPass, their simultaneously published statement said:

we are in the process of automatically migrating all existing LastPass users to the new default.

We can now safely assume that the migration wasn’t actually underway even at this point. One user reported receiving an email about their account being upgraded to a higher password iterations count, and that was mid-2019.

Worse yet, for reasons that are beyond me, LastPass didn’t complete this migration. My test account is still at 5,000 iterations, as are the accounts of many other users who checked their LastPass settings. LastPass would know how many users are affected, but they aren’t telling that.

In fact, it’s painfully obvious that LastPass never bothered updating users’ security settings. Not when they changed the default from 1 to 500 iterations. Not when they changed it from 500 to 5,000. Only my persistence made them consider it for their latest change. And they still failed to implement it consistently.

So we now have people report finding their accounts to be configured with 500 iterations. And for some it’s even merely one iteration. For example here. And here. And here.

This is a massive failure on LastPass’ side: they failed to keep these users secure. They cannot claim ignorance. They had years to fix this. Yet they failed.

What could LastPass do about it now?

There is one thing that LastPass could do easily: query their database for users who have less than 100,100 iterations configured and notify all of them. Obviously, these users are at heightened risk due to the LastPass breach. Some found out about it, most of them likely didn’t. So far, LastPass chose not to notify them.

Of course, LastPass could also deliver on their promise and fix the iterations count for the affected accounts. It won’t help with the current breach but at least it will better protect these accounts in future. So far this didn’t happen either.

Finally, LastPass could change the “Password Iterations” setting and make sure that nobody accidentally configures a value that is too low. It’s Security 101 that users shouldn’t be able to set settings to values that aren’t safe. But right now I changed the iterations count for my test account to 1 and I didn’t even get a warning about it.

Wladimir Palant: What’s in a PR statement: LastPass breach explained

Right before the holiday season, LastPass published an update on their breach. As people have speculated, this timing was likely not coincidental but rather intentional to keep the news coverage low. Security professionals weren’t amused; this holiday season became a very busy time for them. LastPass likely could have prevented this if they were more concerned about keeping their users secure than about saving face.

Their statement is also full of omissions, half-truths and outright lies. As I know that not everyone can see through all of it, I thought that I would pick out a bunch of sentences from this statement and give some context that LastPass didn’t want to mention.

Screenshot of the LastPass blog post: Update as of Thursday, December 22, 2022. To Our LastPass Community, We recently notified you that an unauthorized party gained access to a third-party cloud-based storage service, which LastPass uses to store archived backups of our production data. In keeping with our commitment to transparency, we want to provide you with an update regarding our ongoing investigation.

Let’s start with the very first paragraph:

In keeping with our commitment to transparency, we want to provide you with an update regarding our ongoing investigation.

In fact, this has little to do with any commitment. LastPass is actually required by US law to immediately disclose a data breach. We’ll soon see how transparent they really are in their statement.

While no customer data was accessed during the August 2022 incident, some source code and technical information were stolen from our development environment and used to target another employee, obtaining credentials and keys which were used to access and decrypt some storage volumes within the cloud-based storage service.

LastPass is trying to present the August 2022 incident and the data leak now as two separate events. But using information gained in the initial access in order to access more assets is actually a typical technique used by threat actors. It is called lateral movement.

So the more correct interpretation of events is: we do not have a new breach now, LastPass rather failed to contain the August 2022 breach. And because of that failure, people’s data is now gone. Yes, this interpretation is far less favorable to LastPass, which is likely why they try to avoid it.

Note also how LastPass avoids mentioning when this targeting of “another employee” happened. It likely happened before they declared victory in September 2022, which also sheds a bad light on them.

The cloud storage service accessed by the threat actor is physically separate from our production environment.

Is that supposed to be reassuring, considering that the cloud storage in question apparently had a copy of all the LastPass data? Or is this maybe an attempt to shift the blame: “It wasn’t our servers that the data has been lifted from”?

To date, we have determined that once the cloud storage access key and dual storage container decryption keys were obtained, the threat actor copied information from backup that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.

We learn here that LastPass was storing your IP addresses. And since they don’t state how many they were storing, we have to assume: all of them. And if you are an active LastPass user, that data should be good enough to create a complete movement profile. Which is now in the hands of an unknown threat actor.

Of course, LastPass doesn’t mention this implication, hoping that the less tech-savvy users won’t realize.

There is another interesting aspect here: how long did it take to copy the data for millions of users? Why didn’t LastPass detect this before the attackers were done with it? We won’t learn that in their statement.

The threat actor was also able to copy a backup of customer vault data from the encrypted storage container which is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.

Note how LastPass admits not encrypting website URLs but doesn’t group it under “sensitive fields.” But website URLs are very much sensitive data. Threat actors would love to know what you have access to. Then they could produce well-targeted phishing emails just for the people who are worth their effort.

Never mind the fact that some of these URLs have parameters attached to them. For example, LastPass will sometimes save password reset URLs. And occasionally they will still be valid. Oops…

None of this is new of course. LastPass has been warned again and again that not encrypting URLs and metadata is a very bad idea. In November 2015 (page 67). In January 2017. In July 2018. And that’s only the instances I am aware of. They chose to ignore the issue, and they continue to downplay it.

These encrypted fields remain secured with 256-bit AES encryption and can only be decrypted with a unique encryption key derived from each user’s master password using our Zero Knowledge architecture.

Lots of buzzwords here. 256-bit AES encryption, unique encryption key, Zero Knowledge architecture, all that sounds very reassuring. It masks over a simple fact: the only thing preventing the threat actors from decrypting your data is your master password. If they are able to guess it, the game is over.

As a reminder, the master password is never known to LastPass and is not stored or maintained by LastPass.

Unless they (or someone compromising their servers) decide to store it. Because they absolutely could, and you wouldn’t even notice. E.g. when you enter your master password into the login form on their web page.

But it’s not just that. Even if you use their browser extension consistently, it will fall back to their website for a number of actions. And when it does so, it will give the website your encryption key. For you, it’s impossible to tell whether this encryption key is subsequently stored somewhere.

None of this is news to LastPass. It’s a risk they repeatedly chose to ignore. And that they keep negating in their official communication.

Because of the hashing and encryption methods we use to protect our customers, it would be extremely difficult to attempt to brute force guess master passwords for those customers who follow our password best practices.

This prepares the ground for blaming the customers. LastPass should be aware that passwords will be decrypted for at least some of their customers. And they have a convenient explanation already: these customers clearly didn’t follow their best practices.

We’ll see below what these best practices are and how LastPass is actually enforcing them.

We routinely test the latest password cracking technologies against our algorithms to keep pace with and improve upon our cryptographic controls.

Sounds reassuring. Yet I’m aware of only one occasion where they adjusted their defaults: in 2018, when I pointed out that their defaults were utterly insufficient. Nothing changed after that, and they again are falling behind.

Now to their password best practices:

Since 2018, we have required a twelve-character minimum for master passwords. This greatly minimizes the ability for successful brute force password guessing.

If you are a LastPass customer, chances are that you are completely unaware of this requirement. That’s because LastPass didn’t ask existing customers to change their master password. I had my test account since 2018, and even today I can log in with my eight-character password without any warnings or prompts to change it.

So LastPass required twelve characters for the past four years, but a large portion of their customer base likely still uses passwords not complying with this requirement. And LastPass will blame them should their data be decrypted as a result.

To further increase the security of your master password, LastPass utilizes a stronger-than-typical implementation of 100,100 iterations of the Password-Based Key Derivation Function (PBKDF2), a password-strengthening algorithm that makes it difficult to guess your master password.

Note “stronger-than-typical” here. I seriously wonder what LastPass considers typical, given that 100,000 PBKDF2 iterations are the lowest number I’ve seen in any current password manager. And it’s also the lowest protection level that is still somewhat (barely) acceptable today.

In fact, OWASP currently recommends 310,000 iterations. LastPass hasn’t increased their default since 2018, despite modern graphics cards becoming much better at guessing PBKDF2-protected passwords in that time, by at least a factor of 7.

And that isn’t even the full story. In 2018 LastPass increased the default from 5,000 iterations to 100,100. But what happened to the existing accounts? Some have been apparently upgraded, while other people report still having 5,000 iterations configured. It’s unclear why these haven’t been upgraded.

In fact, my test account is also configured with 5,000 iterations. There is no warning when I log in. LastPass won’t prevent me from changing this setting to a similarly low value. LastPass users affected don’t learn that they are at risk. But they get blamed now for not keeping up with LastPass recommendations.

Update (2022-12-27): I’ve now seen comments from people who have their accounts configured to 500 iterations. I’m not even sure when this was the LastPass default, but they failed to upgrade people’s accounts back then as well. And now people’s data has leaked with protection a factor of 620 (!!!) below what OWASP currently recommends. I am at a loss for words at this utter negligence.

In fact, there is so far one confirmed case of an account configured with 1 (in words: one) iteration, which was apparently the LastPass default before they changed to 500. I’ll just leave this standing here.

If you use the default settings above, it would take millions of years to guess your master password using generally-available password-cracking technology.

I’ll translate: “If you’ve done everything right, nothing can happen to you.” This again prepares the ground for blaming the customers. One would assume that people who “test the latest password cracking technologies” would know better than that. As I’ve calculated, even guessing a truly random password meeting their complexity criteria would take less than a million years on average using a single graphics card.

But human-chosen passwords are far from being random. Most people have trouble even remembering a truly random twelve-character password. An older survey found the average password to have 40 bits of entropy. Such passwords could be guessed in slightly more than two months on the same graphics card. Even an unusually strong password with 50 bits of entropy would take 200 years on average – not unrealistic for a high value target that somebody would throw more hardware on.

Another data point to estimate typical password strength: a well-known XKCD comic puts a typical “strong” password at 28 bits of entropy and a truly strong diceware password at 44 bits. Guessing time on a single graphics card: on average 25 minutes and 3 years respectively.
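
All of these estimates follow from the same anchor point used earlier (50 bits of entropy, 200 years at 100,100 iterations): each additional bit of entropy doubles the average guessing time. A quick check of the numbers above:

```javascript
// Anchor: 50 bits of entropy ≈ 200 years on one GPU at 100,100 iterations.
// Each bit of entropy halved cuts the average guessing time in half.
const years = bits => 200 / 2 ** (50 - bits);

console.log(years(44).toFixed(2));                   // ≈ 3.13 years (strong diceware)
console.log((years(40) * 12).toFixed(1));            // ≈ 2.3 months (average password)
console.log((years(28) * 365 * 24 * 60).toFixed(0)); // ≈ 25 minutes (typical "strong" password)
```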

The competitor 1Password solves this issue by adding a truly random factor to the encryption, a secret key. Some other password managers switched to key generation methods that are way harder to bruteforce than PBKDF2. LastPass did neither, failed to adjust parameters to modern hardware, and is now preparing to blame customers for this failure.

There are no recommended actions that you need to take at this time. 

This is just gross negligence. There certainly are recommended actions to take, and not merely for people with overly simple master passwords or too low number of iterations. Sufficiently determined attackers will be able to decrypt the data for almost anyone. The question is merely whether it’s worth it for them.

So anybody who could be a high value target (activists, dissidents, company admins etc.) should strongly consider changing all their passwords right now. You could of course also consider switching to a competitor who in the case of a breach will be more concerned about keeping you safe than about saving their face.

We have already notified a small subset (less than 3%) of our Business customers to recommend that they take certain actions based on their specific account configurations.

Presumably, that’s the accounts configured with 5,000 iterations, these are at risk and LastPass can easily determine that. But why notify only business customers? My test account for example is also configured with 5,000 iterations and I didn’t receive any notification.

Again, it seems that LastPass attempts to minimize the risk of litigation (hence alerting businesses) while also trying to prevent a public outcry (so not notifying the general public). Priorities…

Wladimir Palant: What data does LastPass encrypt?

A few days ago LastPass admitted that unknown attackers copied their “vault data.” It certainly doesn’t help that LastPass failed to clarify which parts of the vaults are encrypted and which are not. LastPass support adds to the confusion by stating that password notes aren’t encrypted which I’m quite certain is wrong.

In fact, it’s pretty easy to view your own LastPass data. And it shows that barely anything changed since I wrote about their “encrypted vault” myth four years ago. Passwords, account and user names, as well as password notes are encrypted. Everything else: not so much. Page addresses are merely hex-encoded and various metadata fields are just plain text.

Downloading your LastPass data

When you are logged into LastPass, a copy of your “vault data” can still be downloaded. Only one detail changed: a POST request is required now, so simply opening the address in the browser won’t do.

Instead, you can open Developer Tools on the LastPass website and enter the following command:

fetch("", {method: "POST"})
  .then(response => response.text())
  .then(text => console.log(text.replace(/>/g, ">\n")));

This will produce your account’s data, merely with additional newlines inserted for readability.

Side note: you can also download the data in the original binary format by adding mobile=1 to the request body. It’s really the same data however, merely less readable.

What’s in the data

Obviously, the most interesting part here are the accounts. These look like this:

<account name="!abcd|efgh" urid="0" id="123456" url="687474703A2F2F6578616D706C652E636F6D"
    m="0" http="0" fav="0" favico="0" autologin="0" basic_auth="0" group="" fiid="654321"
    genpw="0" extra="!ijkl|mnop" isbookmark="0" never_autofill="0" last_touch="1542801288"
    last_modified="1516645222" sn="0" realm="" sharedfromaid="" pwprotect="0"
    launch_count="0" username="!qrst|uvwx">
  <login urid="0" url="687474703A2F2F6578616D706C652E636F6D" submit_id="" captcha_id=""
      custom_js="" u="!qrst|uvwx" p="!stuv|wxyz" o="" method="">

First of all, encrypted data should have the format !<base64>|<base64>. This is AES-CBC encryption. The first base64 string is the initialization vector, the second one the actual encrypted data. If you see encrypted data that is merely a base64 string: that’s AES-ECB encryption which absolutely shouldn’t be used today. But LastPass only replaced it around five years ago, and I’m not sure whether they managed to migrate all existing passwords.
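As a quick sketch, the two formats can be told apart mechanically (the field values here are placeholders in the `!<base64>|<base64>` shape, like in the example account above):

```javascript
// Split an encrypted LastPass field into its components.
// "!<base64>|<base64>" is AES-CBC (initialization vector + ciphertext);
// a bare base64 string is the legacy AES-ECB format.
function splitEncryptedField(field) {
  if (field.startsWith("!") && field.includes("|")) {
    const [iv, ciphertext] = field.slice(1).split("|");
    return { mode: "CBC", iv, ciphertext };
  }
  return { mode: "ECB", ciphertext: field };
}

console.log(splitEncryptedField("!abcd|efgh")); // CBC: iv "abcd", ciphertext "efgh"
console.log(splitEncryptedField("abcdefgh"));   // legacy AES-ECB
```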

As you can see here, the encrypted fields are name, username (duplicated as u), p (password) and extra (password notes).

Everything else is not encrypted. The url attributes in particular are merely hex encoded, any hex to text web page can decode that easily. Metadata like modification times and account settings is plain text.
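For example, the hex-encoded url attribute from the account above decodes trivially (a Node.js sketch):

```javascript
// Decode the hex-encoded url attribute from the example account above.
const hexUrl = "687474703A2F2F6578616D706C652E636F6D";
const decoded = Buffer.from(hexUrl, "hex").toString("utf8");
console.log(decoded); // → http://example.com
```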

There are more unencrypted settings here, for example:

<neverautologin url="64726976652e676f6f676c652e636f6d"/>

Apparently, drive.google.com (decoded from the hex above) is an exception from the password autofill for some reason. More interesting:

  <equivdomain edid="1" domain="616d65726974726164652e636f6d"/>
  <equivdomain edid="1" domain="7464616d65726974726164652e636f6d"/>

This lists ameritrade.com and tdameritrade.com (again hex-encoded) as equivalent domains: passwords from one are autofilled on the other. As these fields aren’t encrypted, someone with access to this data on LastPass servers could add a rule to automatically fill in your passwords on a domain of their choosing, for example. I reported this issue in 2018, supposedly it was resolved in August 2018.

What’s the deal with the encrypted username field?

The account data starts with:

<accounts accts_version="105" updated_enc="1" encrypted_username="acbdefgh"
    cbc="1" pd="123456">

What’s the deal with the encrypted_username field? Does it mean that LastPass doesn’t know the decrypted account name (email address)?

That’s of course not the case, LastPass knows the email address of each user. They wouldn’t be able to send out breach notifications to everyone otherwise.

LastPass merely uses this field to verify that it got the correct encryption key. If decrypting this value yields the user’s email address then the encryption key is working correctly.

And: yes, this is AES-ECB, a long deprecated encryption scheme. But here it really doesn’t matter.

Why is unencrypted metadata an issue?

As I’ve already established in the previous article, decrypting LastPass data is possible but expensive. Nobody will do that for all the millions of LastPass accounts.

But the unencrypted metadata allows prioritizing. Someone with access to this data can see, for example, which websites a user has accounts with. And this account has also been updated recently? Clearly someone who is worth the effort.

And it’s not only that. Merely knowing who has an account where exposes users to phishing attacks, for example. The attackers now know exactly who has an account with a particular bank, so they can send them phishing emails for that exact bank.

Wladimir PalantLastPass has been breached: What now?

If you have a LastPass account you should have received an email updating you on the state of affairs concerning a recent LastPass breach. While this email and the corresponding blog post try to appear transparent, they don’t give you a full picture. In particular, they are rather misleading concerning a very important question: should you change all your passwords now?

Screenshot of an email with the LastPass logo. The text: Dear LastPass Customer, We recently notified you that an unauthorized party was able to gain access to a third-party cloud-based storage service which is used by LastPass to store backups. Earlier today, we posted an update to our blog with important information about our ongoing investigation. This update includes details regarding our findings to date, recommended actions for our customers, as well as the actions we are currently taking.

The following statement from the blog post is a straight-out lie:

If you use the default settings above, it would take millions of years to guess your master password using generally-available password-cracking technology.

This makes it sound like decrypting the passwords you stored with LastPass is impossible. It also prepares the ground for blaming you, should the passwords be decrypted after all: you clearly didn’t follow the recommendations. Fact is however: decrypting passwords is expensive but it is well within reach. And you need to be concerned.

I’ll delve into the technical details below. But the executive summary is: it very much depends on who you are. If you are someone who might be targeted by state-level actors: danger is imminent and you should change all your passwords ASAP. You should also consider whether you still want them uploaded to LastPass servers.

If you are a regular “nobody”: access to your accounts is probably not worth the effort. Should you hold the keys to your company’s assets however (network infrastructure, HR systems, sensitive legal information), it would be a good idea to replace these keys now.

Unless LastPass underestimated the scope of the breach that is. If their web application has been compromised nobody will be safe. Happy holidays, everyone!

Edit (2022-12-27): As it turned out, even for a “nobody” there are certain risk factors. You should especially check your password iterations setting. LastPass failed to upgrade some accounts from 5,000 to 100,100 iterations. If it’s the former for you, your account has a considerably higher risk of being targeted.

Also, when LastPass introduced their new password complexity requirements in 2018 they failed to enforce them for existing accounts. So if your master password is shorter than twelve characters you should be more concerned about your passwords being decrypted.

What happened really?

According to the LastPass announcement, “an unauthorized party gained access to a third-party cloud-based storage service, which LastPass uses to store archived backups of our production data.” What data? All the data actually: “company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.” And of course:

The threat actor was also able to copy a backup of customer vault data

That’s where your passwords are kept. Luckily, these are encrypted. Whether the encryption will hold is a different question, one that I’ll discuss below.

But first: one important detail is still missing. When did this breach happen? Given that LastPass now seems to know which employee was targeted to gain access, they should also know when it happened. So why not say it?

I can only see one explanation: it happened immediately after their August 2022 breach. After investigating that incident, in September 2022 LastPass concluded:

Although the threat actor was able to access the Development environment, our system design and controls prevented the threat actor from accessing any customer data or encrypted password vaults.

I suspect that this conclusion was premature and what has been exposed now is merely the next step of the first breach, which was already ongoing in September. Publishing the breach date would make that obvious, so LastPass withholds it to save face.

How long does decrypting passwords take?

Whenever LastPass has a security incident they are stressing their Zero Knowledge security model. What it supposedly means:

LastPass does not have any access to the master passwords of our customers’ vaults – without the master password, it is not possible for anyone other than the owner of a vault to decrypt vault data

Ok, let’s assume that indeed no master passwords have been captured (more on that below). This means that attackers will have to guess master passwords in order to decrypt stored passwords. How hard would that be?

The minimum requirements for a LastPass master password are:

  • At least 12 characters long
  • At least 1 number
  • At least 1 lowercase letter
  • At least 1 uppercase letter
  • Not your email

So “Abcdefghijk1” is considered perfectly fine as your master password, “lY0{hX3.bV” on the other hand is not.
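The baseline rules amount to a handful of checks (a hypothetical helper for illustration, not LastPass code):

```javascript
// Hypothetical check of the LastPass minimum master password rules above.
function meetsMinimumRequirements(password, email) {
  return (
    password.length >= 12 &&          // at least 12 characters
    /[0-9]/.test(password) &&        // at least 1 number
    /[a-z]/.test(password) &&        // at least 1 lowercase letter
    /[A-Z]/.test(password) &&        // at least 1 uppercase letter
    password.toLowerCase() !== email.toLowerCase() // not your email
  );
}

console.log(meetsMinimumRequirements("Abcdefghijk1", "me@example.com")); // true
console.log(meetsMinimumRequirements("lY0{hX3.bV", "me@example.com"));   // false: only 10 characters
```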

Let’s look at the passwords meeting these exact baseline requirements. They consist of ten lowercase letters, one uppercase letter and one digit. If one were to generate such a password randomly, there would be 4.8 · 10¹⁸ possible passwords. This seems like a lot.

But it all stands and falls with the way the encryption key is derived from the master password. Ideally, it should be a very slow process which is hard to speed up. Unfortunately, the PBKDF2 algorithm used by LastPass is rather dated and can run very efficiently on a modern graphics card for example. I’ve already explored this issue four years ago. Back then my conclusion was:

Judging by these numbers, a single GeForce GTX 1080 Ti graphics card (cost factor: less than $1000) can be used to test 346,000 guesses per second.

In response to my concerns LastPass increased the number of PBKDF2 iterations from 5,000 to 100,100. That’s much better of course, but this graphics card can still test more than 17,000 guesses per second.

Update (2022-12-23): While LastPass changed the default in 2018, it seems that they never bothered changing the settings for existing accounts like I suggested. So there are still LastPass accounts around configured with 5,000 PBKDF2 iterations. The screenshot below shows my message on Bugcrowd from February 2018, so LastPass was definitely aware of the issue.

Screenshot from Bugcrowd. bobc sent a message (5 years ago): Ok thank you. Our default is now 100k rounds and artificial limits on number of rounds have been removed. palant sent a message (5 years ago) Yes, the default changed it seems. But what about existing accounts?

If someone tries to blindly test all the 4.8 · 10¹⁸ possible passwords, a match will be found on average after 4.5 million years. Except that this graphics card is no longer state of the art. Judging by these benchmark results, a current NVIDIA GeForce RTX 4090 graphics card could test more than 88,000 guesses per second!

And we are already down to on average “merely” 860 thousand years to guess a baseline LastPass password. No, not “millions of years” like LastPass claims. And that’s merely a single graphics card anyone could buy for $2000, it will be faster if someone is willing to throw more/better hardware at the problem.
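The arithmetic behind this figure is easy to check, using only numbers from the text above:

```javascript
// Average time to find a baseline password on a single RTX 4090.
const candidates = 4.8e18;       // possible baseline 12-character passwords
const guessesPerSecond = 88000;  // RTX 4090, PBKDF2 at 100,100 iterations
const secondsPerYear = 365.25 * 24 * 3600;

// On average, half of the search space must be exhausted.
const averageYears = candidates / guessesPerSecond / 2 / secondsPerYear;
console.log(Math.round(averageYears)); // ≈ 864,000 years — not “millions”
```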

But of course testing all the passwords indiscriminately is only an approach that makes sense for truly random passwords. Human-chosen passwords on the other hand are nowhere close to being random. That’s especially the case if your master password is on some list of previously leaked passwords like this one. More than a billion passwords? No problem, merely slightly more than four hours to test them all on a graphics card.

But even if you managed to choose a truly unique password, humans are notoriously bad at choosing (and remembering) good passwords. Password cracking tools like hashcat know that and will first test passwords that humans are more likely to choose. Even with the more complicated passwords humans can come up with, cracking them should take determined attackers months at most.

By the way: no, diceware and similar approaches don’t automatically mean that you are on the safe side. The issue here is that word lists are necessarily short in order to keep the words easy to remember. Even with a 7776-word dictionary, which is on the large side already, a three-word combination can on average be guessed in a month on a single graphics card. Only a four-word passphrase gives you somewhat reasonable safety; for something as sensitive as a password manager master password, five words are better however.
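The same kind of estimate for a three-word diceware passphrase, at the 88,000 guesses per second from above:

```javascript
// Average time to guess a random three-word diceware passphrase.
const combinations = Math.pow(7776, 3); // ≈ 4.7e11 passphrases
const guessesPerSecond = 88000;         // single RTX 4090, as above
const averageDays = combinations / guessesPerSecond / 2 / 86400;
console.log(averageDays.toFixed(1)); // → 30.9 days: about a month
```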

Update (2022-12-23): Originally, the calculations above were done with 190,000 guesses per second rather than 88,000 guesses. I wrongly remembered that LastPass used PBKDF2-HMAC-SHA1, but it’s the somewhat slower PBKDF2-HMAC-SHA256.

Possible improvements

The conclusion is really: PBKDF2 is dead for securing important passwords. Its performance barely changed on PC and mobile processors in the past years, so increasing the number of iterations further would introduce unreasonable delays for the users. Yet the graphics card performance skyrocketed, making cracking passwords much easier. This issue affects not merely LastPass but also the competitor 1Password for example.

Implementors of password managers should instead switch to modern algorithms like scrypt or Argon2. These have the important property that they cannot easily be sped up with specialized hardware. That’s a change that KeePass for example implemented in 2017, I did the same for my password manager a year later.

Does it mean that all passwords are compromised?

The good news: no, the above doesn’t mean that all passwords stored by LastPass should be considered compromised. Their database contains data for millions of users, and the key derivation process uses per-user salts (the user’s email address actually). Attempting to crack the encryption for all users would be prohibitively expensive.

The big question is: who is responsible for this breach? Chances are, it’s some state-level actor. This would mean that they have a list of accounts they want to target specifically. And they will throw significant resources at cracking the password data for these accounts. Which means: if you are an activist, dissident or someone else who might get targeted by a state-level adversary, the best time to change all your passwords was a month ago. The second best time is right now.

But there is also a chance that some cybercrime gang is behind this breach. These have little reason to invest significant hardware resources into getting your Gmail password, there are easier ways to get it such as phishing. They will rather abuse all the metadata that LastPass was storing unencrypted.

Unless of course you have access to something of value. For example, if your LastPass data contains the credentials needed to access your company’s Active Directory server, decrypting your passwords to compromise your company network might make it worthwhile. In that case you should also strongly consider changing your credentials.

By the way, how would the attackers know which “encrypted vaults” contain valuable information? I mentioned it before: LastPass doesn’t actually encrypt everything in their “vault.” Passwords are encrypted but the corresponding page addresses are not. So seeing who holds the keys to what is trivial.

But everyone else is safe, right?

The above assumes that the LastPass statement is correct and only the backup storage has been accessed. I’m far from certain that this is correct however. They’ve already underestimated the scope of the breach back in September.

The worst-case scenario is: some of their web application infrastructure could be compromised. When you use the LastPass web application, it will necessarily get the encryption key required to decrypt your passwords. Even if LastPass doesn’t normally store it, a manipulated version of the web application could send the encryption key to the attackers. And this would make the expensive guessing of the master password unnecessary, the passwords could be easily decrypted for everybody.

Note: you are using the web application, even if you always seem to use the LastPass browser extension. That’s because the browser extension will fall back to the web application for many actions. And it will provide the web application with the encryption key when it does that. I’ve never looked into their Android app but presumably it behaves in a similar way.

Will Kahn-GreeneVolunteer Responsibility Amnesty Day: December 2022

Today is Volunteer Responsibility Amnesty Day where I spend some time taking stock of things and maybe move some projects to the done pile.

In June, I ran a Volunteer Responsibility Amnesty Day [1] for Mozilla Data Org because the idea really struck a chord with me and we were about to embark on 2022h2 where one of the goals was to "land planes" and finish projects. I managed to pass off Dennis and end Puente. I also spent some time mulling over better models for maintaining a lot of libraries.

This time around, I'm just organizing myself.

Here's the list of things I'm maintaining in some way that aren't the big services that I work on:

what is it: Bleach is an allowed-list-based HTML sanitizing Python library.
next step: more on this next year

what is it: Python configuration library.
next step: keep on keepin on

what is it: Python metrics library.
next step: keep on keepin on

what is it: Python library for scrubbing Sentry events.
next step: keep on keepin on

what is it: Fake Sentry server for local development.
next step: keep on keepin on, but would be happy to pass this off

what is it: Sphinx extension for documenting JavaScript and TypeScript.
next step: keep on keepin on

what is it: Command line utilities for interacting with Crash Stats.
next step: keep on keepin on

what is it: Utility for combining GitHub pull requests.
next step: keep on keepin on

what is it: Firefox addon for attaching GitHub pull requests to Bugzilla.
next step: keep on keepin on

what is it: Python library for symbolicating stacks and generating crash signatures.
next step: keep on keepin on for now, but figure out a better long term plan

what is it: Python library for generating crash signatures.
next step: keep on keepin on

what is it: Django OpenID Connect library. (I'm a contributor; I maintain docker-test-mozilla-django-oidc.)
next step: think about dropping this at some point

That's too many things. I need to pare the list down. There are a few I could probably sunset, but not any time soon.

I'm also thinking about a maintenance model where I'm squishing it all into a burst of activity for all the libraries around some predictable event like Python major releases.

I tried that out this fall and did a release of everything except Bleach (more on that next year) and rob-bugson which is a Firefox addon. I think I'll do that going forward. I need to document it somewhere so as to avoid the pestering of "Is this project active?" issues. I'll do that next year.

The Mozilla BlogMozilla to explore healthy social media alternative

In early 2023, Mozilla will stand up and test a publicly accessible instance in the Fediverse at Mozilla.Social. We’re eager to join the community in growing, experimenting, and learning how we can together solve the technical, experience, and trustworthiness challenges inherent in hyper-scale social systems.  Our intention is to contribute to the healthy and sustainable growth of a federated social space that doesn’t just operate but thrives on its own terms, independent of profit- and control-motivated tech firms.  An open, decentralized, and global social service that puts the needs of people first is not only possible, but it’s absolutely necessary.

Our Pledge for a Healthy Internet describes our hopes for the Internet, and what it can become: a powerful tool for promoting civil discourse and human dignity. One that elevates critical thinking and reasoned argument, that honors shared experience and individual expression and brings together diverse and global communities to work together for the common good. Today we see the rising tide of the Fediverse, through Mastodon, Matrix, Pixelfed, and many others as a promising next step in that direction. Together we have an opportunity to apply the lessons of the past to build a social experience for humanity that is healthy, sustainable, and sheltered from the centralized control of any one entity. 

Mozilla has a quarter-century track record of world-class open development, building products that champion individual agency and privacy in the age of surveillance capitalism. We hope to bring this experience to bear in service of the Fediverse, just as we hope to learn from those who are already working hard in this community. While we’re starting this exploration on Mastodon — as a mature, stable project, it’s an ideal first step into the Fediverse — we believe the potential of the Fediverse is bigger and broader than Mastodon alone. With a growing range of projects serving a growing range of creators and consumers, we’re looking forward to working on the challenges that crosscut the Fediverse, the shared problems that require shared solutions.

Now is the time, as we’re living through the consequences of 20 years of centralized, corporate-controlled social media, with a small oligopoly of large tech firms tightening their grip on the public square. In private hands our choice is limited, toxicity is rewarded, rage is called engagement, public trust is corroded, and basic human decency is often an afterthought. Getting from the internet we have to the internet we want will be a heavy lift, requiring significant investment in scalable, human-centred solutions for user and community safety, product experience, and sustainability. These are all big challenges, and there’s a lot we need to learn on the road ahead.

So let’s get moving. 

The post Mozilla to explore healthy social media alternative appeared first on The Mozilla Blog.

Spidermonkey Development BlogSpiderMonkey Newsletter (Firefox 108-109)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 108 and 109 Nightly release cycles.

The SpiderMonkey team is proud of everything we accomplished this year. Happy Holidays!

👷🏽‍♀️ New features

  • We’ve shipped Import Maps in Firefox 108.
  • We implemented Array.fromAsync (disabled by default).
  • We added support for more Wasm GC instructions (disabled by default).
  • We implemented more parts of the decorators proposal (disabled by default).

⚙️ Modernizing JS modules

We’re working on improving our implementation of modules. This includes supporting modules in Workers, adding support for Import Maps, and ESMification (replacing the JSM module system for Firefox internal JS code with standard ECMAScript modules).

  • See the AreWeESMifiedYet website for the status of ESMification.
  • We modernized the module implementation to use more native C++ data structures instead of JS objects.

💾 Robust Caching

We’re working on better (in-memory) caching of JS scripts based on the new Stencil format. This will let us integrate better with other resource caches used in Gecko and might also allow us to cache JIT-related hints.

The team is currently working on removing the dependency on JSContext for off-thread parsing. This will make it easier to integrate with browser background threads and will further simplify the JS engine.

  • We converted more code to use ErrorContext for allocations.
  • We changed some data structures to use global singletons instead of JSContext.

🚀 Performance

We continue to look for performance wins in a variety of areas to improve Speedometer and related benchmarks, as well as websites that are utilizing a lot of JavaScript code.

  • We inlined the megamorphic has-property cache lookup directly in JIT code.
  • We added an optimization to fold multiple IC stubs if they’re all identical except for a single shape guard. This improves performance for polymorphic property accesses.
  • We removed a lot of unnecessary C++ heap allocations in the IC code.
  • We changed the Cell header word to be non-atomic, to improve C++ code generation.
  • We added JIT inlining for new Object().
  • We added a fast path for plain objects to OrdinaryToPrimitive.
  • We optimized shape guards for objects on the prototype to have fewer memory loads.
  • We added JIT inlining for the .size getter on Map and Set.
  • We added an optimization to cache for-in iterators on the shape.
  • We improved charAt and charCodeAt JIT optimizations to support rope strings and out-of-bounds indexes.
  • We added JIT inlining for parseInt.
  • We improved string-to-atom performance by caching recently atomized strings.
  • We added JIT inlining for Number.prototype.toString when called with a base argument.
  • We eliminated redundant guards when adding multiple properties to an object.
  • We made a lot of changes to implement parallel marking in our GC (disabled by default).
  • We used signal handlers to optimize null checks in Wasm GC code.
  • We implemented support for FMA3 instructions for Wasm Relaxed SIMD.
  • We improved performance for growing Wasm tables by small amounts.

📚 Miscellaneous

  • We removed the Streams implementation from SpiderMonkey, now that it’s implemented outside the JS engine.
  • The fuzzing team landed some code to improve differential testing with the Fuzzilli JS fuzzer.
  • We simplified the profiler’s global JIT code table by reusing our AvlTree data structure instead of using a custom skip list implementation.
  • We improved our Shape data structures to use derived classes more to improve type safety and to simplify future changes.

Will Kahn-GreeneNormConf 2022 thoughts

I went to NormConf 2022, but didn't attend the whole thing. It was entirely online as a YouTube livestream for something like 14 hours split into three sessions. It had a very active Slack instance.

I like doing post-conference write-ups because then I have some record of what I was thinking at the time. Sometimes that's useful for other people. Often it's helpful for me.

I'm data engineer adjacent. I work on a data pipeline for crash reporting, but it's a streaming pipeline, entirely bespoke, and doesn't use any/many of the tools in the data engineer toolkit. There's no ML. There's no NLP. I don't have a data large-body-of-water. I'm not using SQL much. I'm not having Python packaging problems. Because of that, I kind of skipped over the data engineer related talks.

The conference was well done. Everyone did a great job. The Slack channels I lurked in were hopping. The way they did questions worked really well.

These are my thoughts on the talks I watched.

Read more… (7 min remaining to read)

Will Kahn-GreeneInstalling Windows (2022)

Installing Windows (2022)

I work at Mozilla. We get a laptop refresh periodically. I got a new laptop that I was going to replace my older laptop with. I'm a software engineer and I work on services that are built using Docker and tooling that runs on Linux.

This post covers my attempt at setting up a Windows laptop for software development for the projects I work on after having spent the last 20 years predominantly using Linux and Linux-like environments.

Spoiler: This is a failed attempt and I gave up and stuck with Linux.

Read more… (5 min remaining to read)

The Talospace ProjectFirefox 108 on POWER

Now that the Talos II is back in order and the Fedora 37 upgrade is largely behind me, it's now time to upgrade Firefox to version 108. There's some nice performance improvements here plus a hotkey for about:processes with Shift-Escape. Support for WebMIDI seems a little gratuitous, but what the hey (haven't tried it yet, the Macs mostly handle my music stuff), and there are also new CSS features. As before linking still requires Dan Horák's patch from bug 1775202 or the browser won't link on 64-bit Power ISA (alternatively put --disable-webrtc in your .mozconfig if you don't need WebRTC). Otherwise, we were able to eliminate one of our patches from the PGO-LTO diff, so use the new one for Firefox 108 and the .mozconfigs from Firefox 105.

Firefox NightlySearch persistence, a new migrator and more! – These Weeks in Firefox: Issue 129


  • James from the Search team has been working on persisting the search term in the address bar after you do a search in it. It is now enabled on Nightly (see bug 1802564). Try it out, file bugs if you see them and let us know if you have any feedback.
    • A Firefox window is shown with the search term "mozilla" in the URL bar. The search term is persisted there even though the results have been loaded already in the content area. The search input has some informational text below it saying: "Searching just got simpler. Try making your search more specific here in the address bar. To show the URL instead, visit Search, in settings."

      In Nightly, search terms in the URL bar persist after pressing Enter!

  • Evan, one of the students working with us from CalState LA, landed a patch that adds a new Opera GX migrator
    • It’s currently disabled by default, but can be enabled by setting `browser.migrate.opera-gx.enabled` to true in about:config
  • For WebExtension authors: thanks to Alex Ochameau’s work in Bug 1410932, starting from Firefox 110 the errors raised from extension content script contexts will be logged in the related tab’s web console
  • The screenshots component now respects the setting “Always ask you where to save files” when downloading screenshots. Set `screenshots.browser.component.enabled` to true to use this feature.

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug
  • Gregory Pappas [:gregp]
  • Itiel
  • Janvi Bajoria [:janvi01]
  • Jonas Jenwald [:Snuffleupagus]
New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Improvements to “all sites” optional permissions support in the about:addons permissions view – Bug 1778461
WebExtensions Framework
WebExtension APIs

Developer Tools

  • Zac Svoboda improved the color of errors in the JSON viewer (bug)
  • We added scrollend event in the Debugger Event Breakpoints panel (bug)
    • The Firefox debugger panel is open and is paused on an event breakpoint. Informational text is present saying: "Paused on event breakpoint. DOM 'scrollend' event". A list of Event Listener Breakpoints is also highlighted showing that the debugger can break anytime "scrollend" events fire.

      Want to hit a breakpoint as soon as scrolling ends? Now’s your chance!

WebDriver BiDi
  • Julian fixed a bug in webDriver where WebDriver:FindElements would not retrieve elements if the Firefox window was behind another app’s window (bug)
  • Sasha added full support for session.subscribe and session.unsubscribe commands (bug, spec)
  • Henrik added support for serialization of Node objects (bug)
  • James implemented the browsingContext.captureScreenshot command (bug, spec)

Desktop Integrations


ESMification status

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)

Performance Tools (aka Firefox Profiler)

  • Made searching in markers more generic: now any property can be searchable, and Gecko engineers can change this themselves without changing the frontend. (PR #4352, Bug 1803751)
  • Landed some changes in the category colors (more contrast, brown is now really brown, added a new magenta color).
  • Changed the treeherder performance regression template to output the before and after profile links for each regression (PR #7588)
    • A table is shown in a Bugzilla comment showing various regressions being reported for a patch in that bug. A new column is highlighted in that table. The header of that column is "Performance Profiles", and each row has a set of "Before" and "After" links linking the reader to useful performance profiles for debugging the regression.

      This will make it much easier to immediately jump in and start analyzing performance regressions.

Search and Navigation

  • [James] fixed context menu search terms to be displayed in the address bar for certain engines. Bug 1801602
  • [Drew] has refactored quick suggest telemetry tests. Bug 1804807
  • [Drew] did a bunch of refactors to support row buttons in all row types. Bug 1803873
  • [Dale] updated a few strings for the quick action buttons. Bug 1783153
  • [Dale] turned off quickactions on zero prefix by default. Bug 1803566
  • [Stephanie] fixed a bug so we can record search telemetry from different search engines that share the same underlying ad provider. Bug 1804739
  • [Mandy] fixed a bug where search engine order was modified. Bug 1800662
  • [Standard8] has refactored multiple files in search:
    • Bug 1804520 – removed the callback argument for SearchSuggestionController
    • Bug 1803911 – replace Cu.reportError calls in newtab

Storybook / Reusable components

  • Our Storybook is online
    • Currently a manual deployment process, will look into automating it soon
  • moz-button-group element landed in about:logins, standardising/simplifying a few modal buttons. See Bug 1792238 and Bug 1802377
    • Before:
      • A dialog box showing warning text that passwords are being exported in plaintext to a file. The buttons at the bottom of the dialog are spaced with "Cancel" on the left, and "Export..." on the right.

        This is how it has looked up until now, which is inconsistent with our other dialogs.

    • After:
      • A dialog box showing warning text that passwords are being exported in plaintext to a file. The buttons at the bottom of the dialog are right aligned.

        This is how it’s supposed to look. Thanks, moz-button-group!

  • Emilio updated stylesheet loading to be sync for our privileged shadow DOM, avoiding flashes of unstyled content (FOUC) Bug 1799200

Mozilla Add-ons Blog: New extensions available now on Firefox for Android Nightly

As we continue to develop extensions support on Firefox for Android, we’re pleased to announce new additions to our library of featured Android extensions. To access featured extensions on Firefox for Android, tap Settings -> Add-ons.

Based on currently available APIs, performance evaluations, and listening to requests from the Mozilla community, here are five new extensions now available to Firefox for Android users…

Firefox Relay

Mozilla’s own Firefox Relay is now available for mobile usage. The extension lets you easily generate email masks that will forward messages to your authentic email while hiding your address from unwanted spam, or worse, hackers.

Tampermonkey

One of the most popular userscript managers makes its way to mobile. Tampermonkey’s top features include automatic update checks, an intuitive display of running scripts, plus browser and cloud storage sync.

Read Aloud: A Text to Speech Voice Reader

Have you ever wanted your news or other web pages (even PDFs) read aloud so your hands and eyes are free to focus on other things? Read Aloud: A Text to Speech Voice Reader can now accommodate you on Android — in 40+ languages.


AdNauseam

More than just an effective ad blocker, AdNauseam punches back against privacy-invasive ad tech by clicking a bunch of blocked ads in the background so advertisers can’t build an accurate profile of your interests.


ClearURLs

A simple extension that provides a powerful privacy feature: ClearURLs automatically strips away tracking elements from web links you open.

Create extension collections on Firefox for Android Beta

By following these instructions you can now create your own custom extension collections on Firefox for Android Beta (previously, collections were only available on Nightly). Name the collection anything you like, so long as there aren’t any spaces in its title. When creating your collection, you’ll see a number in the Custom URL field; this is your user ID. You’ll need the collection name and user ID to configure Beta in the following way:

Once created, simply add extensions to your collection. Each collection generates a custom URL, so you’re able to share it with others.

Are more extensions coming to Firefox for Android?

Absolutely. Right now we’re focused on implementing Manifest version 3 (MV3) for Firefox desktop (i.e., wide-ranging foundational changes to the WebExtensions API). In 2023 we’ll begin work on the mobile adoption of MV3. Though we’re still early in planning, MV3 will certainly offer a number of advances for mobile extensions, such as elegant handling of process restarts and improved security by splitting extensions into their own processes, while also retaining critical MV2 features that support privacy and ad-blocking capabilities. Indeed, our goal is to design MV3 for mobile in such a way that we’re able to open up the discoverability of mobile extensions beyond the short list available today. As plans take shape, we’ll be sure to keep you informed. In the meantime, you’re welcome to join conversations about extensions development on Firefox Add-ons Discourse.


The post New extensions available now on Firefox for Android Nightly appeared first on Mozilla Add-ons Community Blog.

The Rust Programming Language Blog: Announcing Rust 1.66.0

The Rust team is happy to announce a new version of Rust, 1.66.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.66.0 with:

rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.66.0 on GitHub.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.66.0 stable

Explicit discriminants on enums with fields

Enums with integer representations can now use explicit discriminants, even when they have fields.

#[repr(u8)]
enum Foo {
    A,
    B,
    C(bool) = 42,
}

Previously, you could use explicit discriminants on enums with representations, but only if none of their variants had fields. Explicit discriminants are useful when passing values across language boundaries where the representation of the enum needs to match in both languages. For example,

#[repr(u8)]
enum Bar {
    A,
    B,
    C = 42,
    D,
}

Here the Bar enum is guaranteed to have the same layout as u8. In addition, the Bar::C variant is guaranteed to have a discriminant of 42. Variants without explicitly-specified values will have discriminants that are automatically assigned according to their order in the source code, so Bar::A will have a discriminant of 0, Bar::B will have a discriminant of 1, and Bar::D will have a discriminant of 43. Without this feature, the only way to set the explicit value of Bar::C would be to add 41 unnecessary variants before it!

Note: whereas for field-less enums it is possible to inspect a discriminant via as casting (e.g. Bar::C as u8), Rust provides no language-level way to access the raw discriminant of an enum with fields. Instead, currently unsafe code must be used to inspect the discriminant of an enum with fields. Since this feature is intended for use with cross-language FFI where unsafe code is already necessary, this should hopefully not be too much of an extra burden. In the meantime, if all you need is an opaque handle to the discriminant, please see the std::mem::discriminant function.
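As a small sketch of the two approaches described in the note above (the enum and function names here are invented for illustration), `as` casting works for the field-less case while `std::mem::discriminant` gives an opaque, comparable handle for the case with fields:

```rust
use std::mem;

// Field-less enum: the discriminant can be read with an `as` cast.
#[repr(u8)]
enum Plain {
    A,
    C = 42,
}

// Enum with fields: no `as` cast is available, but
// `std::mem::discriminant` returns an opaque handle
// that can be compared for equality.
enum WithFields {
    Int(i32),
    Text(String),
}

fn same_variant(a: &WithFields, b: &WithFields) -> bool {
    mem::discriminant(a) == mem::discriminant(b)
}
```

Note that `same_variant` only tells you whether two values are the same variant; it does not expose the raw discriminant value itself.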


std::hint::black_box

When benchmarking or examining the machine code produced by a compiler, it's often useful to prevent optimizations from occurring in certain places. In the following example, the function push_cap executes Vec::push 4 times in a loop:

use std::time::{Duration, Instant};

fn push_cap(v: &mut Vec<i32>) {
    for i in 0..4 {
        v.push(i);
    }
}

pub fn bench_push() -> Duration {
    let mut v = Vec::with_capacity(4);
    let now = Instant::now();
    push_cap(&mut v);
    now.elapsed()
}

If you inspect the optimized output of the compiler on x86_64, you'll notice that it looks rather short:

  sub rsp, 24
  call qword ptr [rip + std::time::Instant::now@GOTPCREL]
  lea rdi, [rsp + 8]
  mov qword ptr [rsp + 8], rax
  mov dword ptr [rsp + 16], edx
  call qword ptr [rip + std::time::Instant::elapsed@GOTPCREL]
  add rsp, 24

In fact, the entire function push_cap we wanted to benchmark has been optimized away!

We can work around this using the newly stabilized black_box function. Functionally, black_box is not very interesting: it takes the value you pass it and passes it right back. Internally, however, the compiler treats black_box as a function that could do anything with its input and return any value (as its name implies).

This is very useful for disabling optimizations like the one we see above. For example, we can hint to the compiler that the vector will actually be used for something after every iteration of the for loop.

use std::hint::black_box;

fn push_cap(v: &mut Vec<i32>) {
    for i in 0..4 {
        v.push(i);
        black_box(v.as_ptr());
    }
}

Now we can find the unrolled for loop in our optimized assembly output:

  mov dword ptr [rbx], 0
  mov qword ptr [rsp + 8], rbx
  mov dword ptr [rbx + 4], 1
  mov qword ptr [rsp + 8], rbx
  mov dword ptr [rbx + 8], 2
  mov qword ptr [rsp + 8], rbx
  mov dword ptr [rbx + 12], 3
  mov qword ptr [rsp + 8], rbx

You can also see a side effect of calling black_box in this assembly output. The instruction mov qword ptr [rsp + 8], rbx is uselessly repeated after every iteration. This instruction writes the address v.as_ptr() as the first argument of the function, which is never actually called.

Notice that the generated code is not at all concerned with the possibility of allocations introduced by the push call. This is because the compiler is still using the fact that we called Vec::with_capacity(4) in the bench_push function. You can play around with the placement of black_box, or try using it in multiple places, to see its effects on compiler optimizations.
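Assembled into one self-contained, runnable unit, the snippets above look like this:

```rust
use std::hint::black_box;
use std::time::{Duration, Instant};

fn push_cap(v: &mut Vec<i32>) {
    for i in 0..4 {
        v.push(i);
        // Hint that "something" might observe the vector's buffer,
        // so the compiler cannot optimize the pushes away.
        black_box(v.as_ptr());
    }
}

pub fn bench_push() -> Duration {
    let mut v = Vec::with_capacity(4);
    let now = Instant::now();
    push_cap(&mut v);
    now.elapsed()
}
```

Keep in mind this is a micro-benchmark sketch; a real benchmark would run many iterations and use a harness such as Criterion.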

cargo remove

In Rust 1.62.0 we introduced cargo add, a command line utility to add dependencies to your project. Now you can use cargo remove to remove dependencies.

Stabilized APIs
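
Among the APIs stabilized in 1.66 is Option::unzip, which splits an Option of a tuple into a tuple of Options. A minimal sketch (the helper function name is invented for illustration):

```rust
// `Option::unzip`, stabilized in 1.66, turns Option<(A, B)>
// into (Option<A>, Option<B>).
fn split_pair(pair: Option<(i32, char)>) -> (Option<i32>, Option<char>) {
    pair.unzip()
}
```

When the input is None, both halves of the resulting tuple are None.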

Other changes

There are other changes in the Rust 1.66 release, including:

  • You can now use ..=X ranges in patterns.
  • Linux builds now optimize the rustc frontend and LLVM backend with LTO and BOLT, respectively, improving both runtime performance and memory usage.
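A minimal sketch of the new ..=X range patterns (the function below is invented for illustration):

```rust
// `..=X` patterns (an inclusive range with no lower bound)
// are newly allowed in `match` arms as of 1.66.
fn describe(n: u32) -> &'static str {
    match n {
        0 => "zero",
        ..=9 => "single digit",
        ..=99 => "double digits",
        _ => "larger",
    }
}
```

Arms are tried in order, so the earlier `0` arm takes precedence over the overlapping `..=9` range.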

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.66.0

Many people came together to create Rust 1.66.0. We couldn't have done it without all of you. Thanks!

Frederik Braun: DOM Clobbering

This article first appeared on the HTMLHell Advent Calendar 2022.


When thinking of HTML-related security bugs, people often think of script injection attacks, also known as Cross-Site Scripting (XSS). If an attacker is able to submit, modify or store content on your web page, they might include …

Support.Mozilla.Org: What’s up with SUMO – December 2022

Hi everybody,

It’s been a while since our last monthly update. After our internal dashboard broke, we didn’t have an easy way to export the platform data. Now that we’ve got access to our data back, let’s talk about what we’ve missed.

Welcome note and shout-outs

  • Welcome to Daniel López, Spencer Peck, Rafael Oliver, and Edoardo Viola. Thanks for joining the Social & Mobile Store Support.
  • Thanks to every one of you for contributing to SUMO: those who replied to our users in the forum, on Twitter, or in Play Store reviews; all of you who helped us improve the Knowledge Base; and, last but not least, the many of you who helped translate the help articles into your local languages. Thank you all so much! I can’t stress enough that SUMO cannot exist without you. ❤️❤️❤️

If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Community news

  • Forum question detail is now limited only to these groups and the trusted contributors group.
  • Our new contribute page is finally released. Check out what we’ve changed and share the news to your network and local community.
  • A new Technical Writer joined the content team in late October. Please join me in welcoming Lucas.
  • Learn more about Hubs transition and how it impacts the support team in this blog post.
  • Learn more about Mozilla x Pulse acquisition.
  • Watch the community call in December to learn more about what we’ve accomplished throughout this year.

Catch up

  • Watch the monthly community call if you haven’t. Learn more about what’s new in August, September, October, November and December! Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you feel shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can answer them during the meeting.
  • If you’re an NDA’ed contributor, you can watch the recording of the Customer Experience weekly scrum meeting from AirMozilla to catch up with the latest product updates.
  • Consider subscribing to Firefox Daily Digest to get daily updates about Firefox from across different platforms.
  • Check out SUMO Engineering Board to see what the platform team is currently doing.
  • Check out the following release notes from Kitsune from the past months:

Community stats


KB pageviews (*)

* KB pageviews number is a total of KB pageviews for /en-US/ only
Month Page views Vs previous month
August 2022 7,419,744 1.29%
September 2022 7,258,663 -2.17%
October 2022 7,545,033 3.95%
November 2022 7,156,797 -5.15%

Top 5 KB contributors in the last 90 days: 

KB Localization

Top 10 locales based on total page views

Locale Aug 2022 Sep 2022 Oct 2022 Nov 2022 Localization progress
de 8.35% 8.58% 9.40% 9.94% 97%
zh-CN 7.39% 7.34% 6.83% 7.44% 100%
fr 5.96% 7.07% 7.22% 7.24% 89%
es 5.85% 6.11% 5.91% 5.89% 32%
pt-BR 5.04% 4.25% 3.89% 3.55% 56%
ru 3.98% 4.11% 4.06% 4.04% 86%
ja 3.81% 3.90% 4.03% 4.01% 52%
pl 2.00% 2.16% 2.17% 2.20% 87%
it 1.85% 2.26% 2.37% 2.20% 99%
zh-TW 1.47% 1.57% 1.69% 1.57% 4%
* Locale pageviews is an overall pageviews from the given locale (KB and other pages)

** Localization progress is the percentage of localized articles out of all KB articles per locale, as of Dec 8, 2022

Top 5 localization contributors in the last 90 days: 

Forum Support

Forum stats

Month Total questions Answer rate within 72 hrs Solved rate within 72 hrs Forum helpfulness
Aug 2022 3247 73.11% 9.52% 57.30%
Sep 2022 3337 70.99% 9.32% 58.25%
Oct 2022 3997 64.95% 9.06% 58.26%
Nov 2022* 1196 63.04% 7.19% 52.51%
* November data is updated only up to Nov 11th

Top 5 forum contributors in the last 90 days: 

Social Support

Month Total incoming conv Conv interacted Resolution rate
Aug 2022 381 409 77.39%
Sep 2022 197 183 83.53%
Oct 2022 254 275 70.68%
Nov 2022 201 175 48.73%

Top 5 Social Support contributors in the past 2 months: 

  1. Jens Hausdorf
  2. Tim Maks
  3. Christophe Villeneuve
  4. Bithiah K
  5. Magno Reis

Play Store Support

Channel (Aug – Nov 2022) Total reviews moderated Total reviews replied
Firefox for Android 3733 2187
Firefox Focus for Android 1680 554
Firefox Klar Android 2 0

Top 5 Play Store contributors in the past 4 months: 

Product updates

To catch up on product release updates, please watch the recording of the Customer Experience scrum meeting from AirMozilla. You can also subscribe to the AirMozilla folder by clicking on the Subscribe button at the top right corner of the page to get notifications each time we add a new recording.

Useful links:

Hacks.Mozilla.Org: How the Mozilla Community helps shape our products

A product is first an idea, then a project, and then a prototype. It is tested, refined, and localized so that it is accessible to users in different regions. When the product is released into the world, these users need to be supported. Of course, there are always going to be improvements, fixes, and new features, and those will also need to be planned, developed, tested…and so on, and so forth…

What do all these stages have in common?

Here at Mozilla, our awesome community is there every step of the way to support and contribute to our products. None of what we do would be possible without this multicultural, multilingual community of like-minded people working together to build a better internet.

Of course, contributions to our products are not everything that the community does. There is much more that our community creates, contributes, and discusses.

However, as a major release recently happened, we want to take the occasion to celebrate our community by giving you a peek at how their great contributions helped with version 106 (as well as all versions!) of Firefox.

Ideation (Mozilla Connect)

Ideas for new features and products come from many different sources. Research, data, internal ideas, and feature requests during Foxfooding…at Mozilla one of the sources of new ideas is Mozilla Connect.

Mozilla Connect is a collaborative space for ideas, feedback, and discussions that help shape future product releases.  Anyone can propose and vote for new ideas. The ideas that gain more support are brought to the appropriate team for review.

Firefox Picture in picture subtitles was a feature requested by the Mozilla Connect Community!

Connect is also a place where Mozilla Product Managers ask for feedback from the community when thinking about ways to improve our product, and where the community can interact directly with Product Managers and engineers.

In this way, the community contributes to continuous product improvement and introduces diverse perspectives and experiences to our product cycle.

Connect played a role in the latest Firefox Major release on both sides of the ideation cycle.

Are you enjoying the new PDF editor’s functionalities? Then you should know that the community discussed this idea in Connect. After many upvotes, the idea was officially brought to the product team.

After the release, the community joined discussions with Firefox Product Managers to give feedback and new suggestions on the new features.

Interested? Get started here.

Development (Code contribution and patches)

Mozilla developers work side by side with the Community.

Community members find and help solve product bugs and help with the development of different features.

Community is fundamental for the development of Firefox, as community members routinely add their code contributions to the Nightly version of Firefox!

You can check out how staff members and contributors work together to solve issues in the Nightly version of Firefox.

Interested? Check out how you can submit your first code contribution. You can also discover more about Nightly here.

Testing and reporting bugs 

There are many ways in which the Community helps find and report bugs. One of these is a Foxfooding campaign.  

Because we have yet to meet a Mozillian who doesn’t enjoy a good (and… less good) pun, Foxfooding is the Firefox version of Dogfooding.

This is where we make a feature or a product available to our community (and staff) before it is released to the public. Then we ask them to use it, test it, and submit bugs, product feedback, and feature requests.

This is an incredibly precious process, as it ensures that the product is tested by a very diverse (and enthusiastic) group of people, bringing unexpected feedback, and testing in much more diverse conditions than we could do internally.

Plus it is, you know, fun ;)

We ran a Foxfooding campaign for the last Major Release too! And the community all over the world submitted more than 60 bugs.

Foxfooding campaigns are published here. You can subscribe to our Community Newsletter to be notified when one is starting.

Furthermore, community members find, report, and help solve Firefox Nightly bugs, as well as bugs that appear in other Firefox versions.

Finding and reporting bugs is a great contribution, helping to continuously improve Mozilla Products.

In fact, simply using Firefox Nightly (or Beta) is an easy way to contribute to the Mozilla project: Nightly sends anonymous usage data and crash reports that help us discover issues before we ship to the general public.

Localization (l10n)

Currently, Firefox is localized in 98 languages (110 in the Nightly version) and that is entirely thanks to the effort of a determined international community.

Localization is important because we are committed to a Web that is open and accessible to all, where a person’s demographic characteristics do not determine their online access, opportunities, or quality of experience.

The Mozilla localization effort represents a commitment to advancing these aspirations: localizers work together with people everywhere who share the goal of making the internet an even better place for everyone.

The community worked really hard on the global launch for the major release! Thank you to all the localizers who took part in this global launch: more than 274 folks contributed approximately 67,094 translations!

Users Support (SuMo)

Once a product is out into the world, the work is far from done! There are always bugs that need reporting, users who need troubleshooting help, and new features that need explanation…

At Mozilla, the Mozilla Support (a.k.a. SUMO) community is the one supporting users all over the world, answering support questions through the forum, social media, or mobile app stores, creating helpdesk articles, and localizing those articles.

When it’s done right, providing high-quality support contributes to our users’ loyalty and retention. Plus, it can help improve the product: when we bring the data back to the product team, we can establish a feedback loop that turns into product improvements as well.

The SUMO community is actively helping users during the major release. Up until now:

  • 3975 forum responses were sent in reply to the 2344 questions submitted during the release.
  • 12 support articles were created, updated, and translated into Greek, French, Italian, Japanese, Russian, Portuguese, Simplified Chinese, Polish, and many more languages.
  • They posted 445 responses to Google Play Store reviews.
  • They answered 88 Twitter questions.

And they are still going strong!

Want to Join?

Would you also like to contribute? Our products are one of the ways in which we shape the web and protect the privacy of our users. Getting involved is a great way to contribute to the mission and get in touch with like-minded people.

Please check our /contribute page for more information, subscribe to our Community Newsletter, or join our #communityroom in Matrix.

The post How the Mozilla Community helps shape our products appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Thunderbird: Thunderbird For Android Preview: Modern Message Redesign

K-9 Mail becomes Thunderbird for Android

The road to bringing you a great Thunderbird email experience on Android devices begins with K-9 Mail, which joined our family earlier this year. And we’ve been busy improving K-9 Mail as we prepare its transition to Thunderbird for Android in Summer 2023. (Check out our roadmap for updates!)

Last week we showed you the new Swipe actions in K-9 Mail 6.400. Today, it’s something even more exciting: a completely redesigned message view! 

Preview: K-9 Mail’s Redesigned Message View

First, a short disclaimer: the redesigned message view is a work-in-progress. That means the mock-ups you’ll see in this post will inform the final design, and they’ll improve as development progresses. But if you have feedback, we’d love to see it! You can always join our Android Planning mailing list and contribute to the discussion.

OK, here is K-9 Mail’s current message view:

K-9 Mail Message View (Current, Light Mode)

It’s clean and readable, but we can do more to help you stay organized and to highlight key information at a glance.

Here is our direction for the updated message design:  

Redesigned message view for Android version of Thunderbird
Redesigned message view for Android version of Thunderbird (bottom sheet with additional message details)

There are several new UI elements to point out in the screenshots above. Let’s do a list outlining the new look, and then we’ll summarize everything with two annotated screenshots below.

Left Screenshot (Message View)

  • We recently introduced swiping gestures to navigate through next and previous messages, so the arrows in this screenshot were removed.
  • The name of the account this message was sent to is indicated by the oval blue “Thunderbird Ryan” chip. (You choose whatever color you like for this account indicator.) It will only be displayed if you have more than one account, and you’re in a view where the contents of multiple accounts have been aggregated (such as the Unified Inbox). 
  • A reply action button, with an overflow (three vertical dots) menu containing additional actions.
  • “Important / To-Do / Work” text: These are examples of organizational labels that can be added to the message. (This feature will be implemented after IMAP Label Support is implemented.)

Right Screenshot (Bottom Sheet / Detail Overlay)

  • Tapping any part of the grey area (the box that has the message’s labels, To/From names, etc) that isn’t the reply button or overflow menu will open a bottom sheet containing additional message details. You’ll see each recipient’s name, address and photo, alongside various action buttons for each contact.
  • You can drag up the bottom sheet to cover the entire screen; especially useful if there are many recipients. We intend to provide a way to search or filter the recipient list for situations like that.

Those Screenshots Again, With Notes:

Let’s bring it all together with another look at these two screenshots of the redesigned message view, but annotated to call out some of the new features and UI elements:

Redesigned message view (annotated)
Redesigned message view (annotated)

As cketti and the team continue to improve and polish K-9 Mail on the road to Thunderbird for Android, we’ll keep you posted with key updates.

Join The Beta, Experience Thunderbird on Android First

If you want to experience the newest features and visual improvements first, and help us test it all in the process, consider joining the ongoing K-9 Mail beta. You’ll see Thunderbird for Android taking shape!

Here’s where you can get the releases:

GitHub releases → We publish all of our releases there. Look for the “Pre-release label” that identifies a beta version.

Play Store → You should be able to join the beta program using the Google Play Store app on the device. Look out for the “Join the beta” section in K-9 Mail’s detail page.

F-Droid → Unlike stable versions, beta versions on F-Droid are not marked with a “Suggested” label. You have to manually select such a version to install it. To get update notifications for non-suggested versions, enable ‘Settings > Expert mode > Unstable updates’ in the F-Droid app.

The post Thunderbird For Android Preview: Modern Message Redesign appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: Launching the 2022 State of Rust Survey

The 2022 State of Rust Survey is here!

It's that time again! Time for us to take a look at who the Rust community is composed of, how the Rust project is doing, and how we can improve the Rust programming experience. The Rust Survey working group is pleased to announce our 2022 State of Rust Survey! Whether or not you use Rust today, we want to know your opinions. Your responses will help the project understand its strengths and weaknesses, and establish development priorities for the future.

Completing this survey should take about 5–20 minutes and is anonymous. We will be accepting submissions for the next two weeks (until the 19th of December), and we will share our findings sometime in early 2023. You can also check out last year’s results.

We're happy to be offering the survey in the following languages. If you speak multiple languages, please pick one.

Please help us spread the word by sharing the survey link on your social network feeds, at meetups, around your office, and in other communities.

If you have any questions, please see our frequently asked questions.

Finally, we wanted to thank everyone who helped develop, polish, and test the survey.

Firefox Nightly: WebExtensions Mv3, WebMIDI, OpenSearch, PiP updates and more! – These Weeks in Firefox: Issue 128


Image of an OpenSearch result appearing in Firefox’s URL bar.

Site-specific searches can be executed on the URL bar for sites like Wikipedia, for example.

Image of a Picture-in-Picture window with playback controls displayed, including a brand new video scrubber.

The video scrubber allows you to easily seek to a specific point in the video from the PiP window.

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  • Brian Pham
  • Itiel
  • Sebastian Zartner [:sebo]

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • As part of the work related to “Origin Controls” and “Unified Extensions UI”:
    • The Unified Extensions UI is also riding the 109 release train – Bug 1801129
    • Follow-ups and bug fixes
    • Fixed a regression that was preventing the “disabled” state of an extension action to be applied correctly. This bug was also affecting Beta (108) – Bug 1802411
    • Extensions actions can now be pinned/unpinned from the Unified Extensions panel – Bug 1782203
    • Default area of extension actions is now the unified extensions panel – Bug 1799947
  • Niklas fixed a regression on opening a link from the context menu options on an extension sidebar page (regressed in Firefox 107 by Bug 1790855); the fix landed in Nightly 109 and was uplifted to Beta 108 – Bug 1801360
  • Emilio fixed a regression related to the windows.screen properties in extension background pages returning physical screen dimensions (regressed in Firefox 103 by Bug 1773813) – Bug 1798213
WebExtension APIs
  • As part of the ongoing work on the declarativeNetRequest API: the initial implementation of the declarativeNetRequest rule engine is now hooked into the networking layer, so rules with already-supported actions and conditions are applied to actual intercepted network requests – Bug 1745761
Addon Manager & about:addons
  • SitePermsAddonProvider: a new Add-ons Manager provider used to provision virtual add-ons which unlock dangerous permissions for specific sites. We are experimenting with using an add-on install flow to gate site access to WebMIDI in order to convey to users that granting such access entails trusting the site.

Developer Tools

  • Zac Svoboda tweaked the JSON viewer so its toolbar matches the toolbox’s (bug)
  • Karntino Areros made watchpoint more legible (bug)
  • Clinton Adeleke fixed padding in the debugger “welcome box” (bug)
  • Sean Feng made a change so opening DevTools does not trigger PerformanceObserver callback (bug)
  • Julian fixed adding new rules in inspector on pages with CSP (bug)
WebDriver BiDi
  • Opening a new tab with WebDriver:NewWindow now properly sets the focus on the page (bug)
  • Column numbers in expectations and stacktraces are now 0-based (bug)

ESMification status

  • Please consider migrating your components if you haven’t already. Don’t forget actors as well.
  • ESMified status:
    • browser: 39.7%
    • toolkit: 29.8%
    • Total: 41.3% (up from 38.4%)
  • #esmification on Matrix
  • Migration Document (with a walkthrough!)

Lint, Docs and Workflow

Migration Improvements (CalState LA Project)


Performance Tools (aka Firefox Profiler)

  • Clicking a node in the activity graph selects the call tree tab now. (PR #4331)

    A gif showing the new clicking behavior on the activity graph for Firefox Profiler.

  • Added markers for browsertime visual metrics. (PR #4330) (Example profile)

    Image of Firefox Profiler showing off new markers for browsertime visual metrics.

    New markers in the Test category for various browsertime visual metrics.

  • Improved the vertical scrolling of our tree views, such as the call tree and marker table panels. (PR #4332)
  • Added a profiler marker for FirstContentfulPaint metric. (Bug 1691820) (Example profile)

    Image of the Firefox Profiler showing an added marker for a metric named FirstContentfulPaint.

    View all details relating to First Contentful Paint via a tooltip.

Search and Navigation

Support.Mozilla.OrgHubs transition

Hi SUMO folks,

I’m delighted to share this news with you. The Hubs team has recently transitioned into a new phase of the product. Where in the past you needed to figure out hosting and deployment on your own with Hubs Cloud, you now have the option to simply subscribe to unlock more capabilities to customize your Hubs room. To learn more about this transformation, you can read their blog post.

Along with this relaunch, Mozilla has also just acquired Active Replica, a team that shares Mozilla’s passion for 3D development. To learn more about this acquisition, you can read this announcement.

What does this mean for the community?

To support this change, the SUMO team has been collaborating with the Hubs team to update Hubs help articles that we host on our platform. We also recently removed Hubs AAQ (Ask a Question) from our forum, and replaced it with a contact form that is directly linked to our paid support infrastructure (similar to what we have for Mozilla VPN and Firefox Relay).

Paying customers of Hubs should be directed to file a support ticket via the Hubs contact form, which will be managed by our designated staff members. Though contributors can no longer help with the forum, you are definitely welcome to help with Hubs’ help articles. There’s also a Mozilla Hubs Discord server that contributors can pop into and participate in.

We are excited about the new direction that the Hubs team is taking and hope that you’ll support us along the way. If you have any questions or concerns, we’re always open to discussion.

Allen Wirfs-BrockHow Smalltalk Became an AI Language

A model pretending to use a Tektronix 4404

This post is based upon a Twitter thread that was originally published on December 2, 2018.

There is a story behind how Tektronix Smalltalk became branded as an AI language in 1984.

In the 1960s-70s, Tektronix Inc had grown to become an industry-leading electronics company competing head-to-head with Hewlett-Packard. In the early ’80s Tektronix was rapidly going digital, and money was being poured into establishing a Computer Research Lab (CRL) within Tek Labs. Two early successful CRL projects were my effort to create a Smalltalk virtual machine with viable performance that ran on Motorola 680xx family processors and Roger Bates/Tom Merrow’s effort to develop an Alto-like 680xx-based workstation for use in the lab.

The workstation was called the Magnolia, and eventually over 50 of them were built: one for everybody in the fully staffed CRL. Tom’s team ported Unix to Magnolia and started working on Unix window managers. I got Smalltalk-80 up on it using my virtual machine implementation.

CRL was rapidly staffing up with newly hired PhD-level CS researchers and each of them got a Magnolia. They were confronted with the choice of programming in a multi-window but basically shell-level Unix environment or a graphically rich Smalltalk live dev environment.  Most of them, including most of the AI group, chose to build their research prototypes using Smalltalk— particularly after a little evangelism from Ward Cunningham. Many cool projects were built and demonstrated to Tek executives at the annual Tek Labs research forums (internal “science fairs”) in ’81-’83.

During that time there was a lot of (well deserved) angst within Tek about its seeming inability to timely ship new products incorporating new technologies and addressing new markets. At the fall 1982 research forum Tom, myself, and Rick LeFaive, CRL’s director, (and perhaps Ward) sat down with some very senior Tek execs in front of a couple of Magnolias and ran through the latest demos. The parting words from the execs were: “We have to do something with this!”

Over the next couple of months Tom Merrow and I developed the concept for a “low-cost” ($10k) Smalltalk workstation. Rebecca Wirfs-Brock had been software lead of the recently successful 410x “low cost” graphics terminals and we thought we could leverage their mechanicals for our workstation. Over the first half of ’83 Roger Bates and Chip Schnarel prototyped a 68010-based processor and display that would fit inside a 4105 enclosure. It was code named “Pegasus”.

After much internal politics, in late summer of 1983 we got the go-ahead to turn Pegasus into a product. An intrapreneurial “special products unit” (SPU) was formed to take Pegasus to market. The SPU management was largely the team that had initially done the 410x terminals.

So, finally we get to the AI part of the story. Mike Taylor was the marketing manager of the Pegasus SPU. One day in late August of ’83 I was chatting with Mike in a CRL corridor. He said something like: Smalltalk is very cool, but to market it we have to tell people what they can use it for.

I initially muttered some words about exploratory programming, objects, software reuse, etc. Then I paused, as wheels turned in my mind. AI was in the news because of Japan’s Fifth Generation Computing Initiative, and I had just seen an issue of Time magazine that included coverage of it. I thought: objects, symbolic computing, garbage collection, LISP, and responded to Mike: Smalltalk is an AI language.

Mike said: What!?? You mean Pegasus is a $10K AI machine? That’s something I can sell!

Before I knew what happened the Pegasus SPU was rechristened as AIM (AI Machines) and we were trying to figure out how we were going to support Common Lisp and Prolog in addition to Smalltalk.

The Pegasus was announced as the Tektronix 4404 in August 1984 at that year’s AAAI conference. The first production units shipped in January 1985 at a list price of $14,950. Even at that price it was considered a bargain.

You can read more about the history and technology of Tektronix Smalltalk and the Tek AI machines at my Tektronix Smalltalk Document Archive.

Demo video of Tek Smalltalk on a Tektronix 4404

Tantek ÇelikRunning For The @W3C Advisory Board (@W3CAB) Special Election

Hi, I’m Tantek Çelik and I’m running for the W3C Advisory Board (AB) to help it reboot W3C as a community-led, values-driven, and more effective organization. I have been participating in and contributing to W3C groups and specifications for over 24 years.

I am Mozilla’s Advisory Committee (AC) representative and have previously served on the AB for several terms, starting in 2013. In the early years I helped lead the movement to offer open licensing of W3C standards and to make W3C more responsive to the needs of independent websites and open source implementers. In my most recent term I led the AB’s Priority Project for an updated W3C Vision. I set an example of a consensus-based work-mode by summarizing issues & providing granular proposed resolutions, presenting these to the AB at the August 2022 Berlin meeting, and making edits to the W3C Vision according to consensus.

I co-chaired the W3C Social Web Working Group that produced several widely and interoperably deployed Social Web standards, most notably the ActivityPub specification, which has received renewed attention as the technology behind Mastodon and other implementations growing an open, decentralized alternative to proprietary social media networks such as Twitter. ActivityPub was but one of seven W3C Recommendations produced by the Social Web Working Group; six of them are widely adopted by implementations & their users, and five of those still have functional test suites today, almost five years later.

Most recently, I’ve focused on the efforts to clarify and operationalize W3C’s core values, and campaigned to add Sustainability to W3C’s Horizontal Reviews in alignment with the TAG’s Ethical Web Principles. I established the Sustainability Community Group and helped organize interested participants at TPAC 2022 into asynchronous work areas.

The next 6-18 months of the Advisory Board are going to be a critical transition period, and will require experienced AB members to actively work in coordination with the TAG and the Board of Directors to establish new models and procedures for sustainable community-driven leadership and governance of W3C.

I have Mozilla’s financial support to spend my time pursuing these goals, and ask for your support to build the broad consensus required to achieve them.

You can follow my posts directly from my feed or from Mastodon with:

If you have any questions or want to chat about the W3C Advisory Board, Values & Vision, or anything else W3C related, please reach out by email: tantek at Thank you for your consideration.